Tips for handling overlapping intents? (Lex v2)


For my use case, we have defined a Greeting intent and 2 domain-specific intents - let's call them IntentA and IntentB.

The Greeting intent is trained on obvious openers like 'Hi', 'Hello', 'Hello <botname>', 'Good morning' and so forth. The bot will respond by greeting the user and informing them what the bot can help with.

For intents A and B, we are building the utterance lists with the aid of pre-bot transcripts from real users.

The issue is that many users naturally preface IntentA and IntentB requests with a greeting phrase before their main intent. For example, 'Good morning, can you please help me <complete intentA>' or 'Hi there, I need to <complete intentB>'.

Since Lex does not appear to offer multi-intent detection within a single utterance, what would be the recommended approach? Here are the strategies we're considering:

  1. Train the Greeting intent on the greeting strings, and train intents A + B using phrases both with and without greeting text. (Assumption: Lex would learn that isolated greetings should match the Greeting intent, otherwise the intent-specific content should take precedence)
  2. Train the Greeting intent as above, but train A + B with no greeting content. (Assumption: greeting phrases are shorter, therefore the salient portion of the user's utterance would likely be given more weight by Lex...but would this generalize if phrase lengths were equal across all 3 intents?)
  3. Remove the Greeting intent and do not include greeting words in A + B training utterances. (Assumption: Greeting words in an intent would be ignored, and on their own would trigger the FallbackIntent which could be programmed to greet the user the first time & reprompt / escalate after that)
  4. Explicitly treat Greeting words as stop words - would this trigger the FallbackIntent if the user says 'Hi' for example?
  5. Any other options we've overlooked?

Thanks in advance!

AR3
Asked 2 years ago · Viewed 278 times
1 Answer

Thanks for the good question. I think your option 1 is most likely to achieve what you want. By extending the training examples of intents A + B with utterances both with and without carrier greeting phrases, you help the machine learning model robustly recognize those intents based on the salient content portion, falling back to the Greeting intent only when there is no other specific content.
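One way to apply option 1 without hand-writing every variant is to generate the greeting-prefixed utterances programmatically before loading them into the bot. The sketch below is a minimal, hypothetical example; the greeting list and the core IntentA utterance are placeholder assumptions, not your actual training data.

```python
# Hypothetical sketch for option 1: expand each core utterance for
# IntentA / IntentB into variants with and without greeting prefixes.
# GREETINGS and the core utterance below are illustrative assumptions.

GREETINGS = ["Hi", "Hello", "Good morning", "Hi there"]

def expand_with_greetings(core_utterances, greetings=GREETINGS):
    """Return the core utterances plus greeting-prefixed variants."""
    variants = list(core_utterances)
    for u in core_utterances:
        for g in greetings:
            # Lowercase the first letter of the core phrase when it
            # follows a greeting, e.g. "Hi, can you help me ..."
            variants.append(f"{g}, {u[0].lower()}{u[1:]}")
    return variants

# Placeholder core utterance standing in for real IntentA content
intent_a_utterances = expand_with_greetings(
    ["Can you help me reset my password"]
)
# 1 core utterance + 4 greeting variants = 5 sample utterances
```

The resulting list can then be supplied as the intent's sample utterances, e.g. via the Lex v2 console or the `lexv2-models` API, while the Greeting intent keeps only the bare greeting strings.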

AWS
yi
Answered 2 years ago
