
Tips for handling overlapping intents? (Lex v2)


For my use case, we have defined a Greeting intent and 2 domain-specific intents - let's call them IntentA and IntentB.

The Greeting intent is trained on obvious openers like 'Hi', 'Hello', 'Hello <botname>', 'Good morning' and so forth. The bot will respond by greeting the user and informing them what the bot can help with.

For intents A and B, we are building the utterance lists with the aid of pre-bot transcripts from real users.

The issue is, many users naturally ask for intentA and intentB with a greeting phrase followed by their main intent. For example, 'Good morning, can you please help me <complete intentA>' or 'Hi there, I need to <complete intentB>'.

Since Lex does not appear to offer multi-intent detection within a single utterance, what would be the recommended approach? Here are the strategies we're considering:

  1. Train the Greeting intent on the greeting strings, and train intents A + B using phrases both with and without greeting text. (Assumption: Lex would learn that isolated greetings should match the Greeting intent, otherwise the intent-specific content should take precedence)
  2. Train the Greeting intent as above, but train A + B with no greeting content. (Assumption: greeting phrases are shorter, therefore the salient portion of the user's utterance would likely be given more weight by Lex...but would this generalize if phrase lengths were equal across all 3 intents?)
  3. Remove the Greeting intent and do not include greeting words in A + B training utterances. (Assumption: Greeting words in an intent would be ignored, and on their own would trigger the FallbackIntent which could be programmed to greet the user the first time & reprompt / escalate after that)
  4. Explicitly treat Greeting words as stop words - would this trigger the FallbackIntent if the user says 'Hi' for example?
  5. Any other options we've overlooked?

Thanks in advance!

asked 11 days ago
1 Answer

Thanks for the good question. I think your option 1 is the most likely to achieve what you want. By extending the training utterances for intents A and B with examples both with and without carrier greeting phrases, you help the machine learning model recognize those intents robustly from the salient content portion, and fall back to the Greeting intent only when the utterance contains no other specific content.
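As a minimal sketch of option 1, you could generate the greeting-prefixed variants programmatically rather than writing them by hand. The greeting list and the base utterances below are illustrative placeholders, not from your bot; the final comment shows where the expanded list would feed into the real Lex V2 API (`lexv2-models` `create_intent`, which accepts a `sampleUtterances` list).

```python
# Illustrative greetings; substitute whatever your Greeting intent is trained on.
GREETINGS = ["Hi", "Hello", "Good morning", "Hi there"]

def expand_with_greetings(base_utterances):
    """Return the base utterances plus a greeting-prefixed variant of each,
    so the intent is trained both with and without carrier greeting phrases."""
    expanded = list(base_utterances)
    for greeting in GREETINGS:
        for utterance in base_utterances:
            expanded.append(f"{greeting}, {utterance}")
    return expanded

# Hypothetical base utterances for IntentA (yours would come from transcripts).
intent_a_utterances = expand_with_greetings([
    "I need to reset my password",
    "can you help me reset my password",
])

# The expanded list can then be supplied to Lex V2, e.g.:
# boto3.client("lexv2-models").create_intent(
#     botId=..., botVersion="DRAFT", localeId="en_US", intentName="IntentA",
#     sampleUtterances=[{"utterance": u} for u in intent_a_utterances],
# )
```

With 2 base utterances and 4 greetings this yields 10 sample utterances (2 plain + 8 prefixed), keeping the greeting-free forms in the list so the salient content still dominates.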

answered 11 days ago
