Tips for handling overlapping intents? (Lex v2)


For my use case, we have defined a Greeting intent and 2 domain-specific intents - let's call them IntentA and IntentB.

The Greeting intent is trained on obvious openers like 'Hi', 'Hello', 'Hello <botname>', 'Good morning' and so forth. The bot will respond by greeting the user and informing them what the bot can help with.

For intents A and B, we are building the utterance lists with the aid of pre-bot transcripts from real users.

The issue is that many users naturally preface their IntentA or IntentB request with a greeting. For example, 'Good morning, can you please help me <complete intentA>' or 'Hi there, I need to <complete intentB>'.

Since Lex does not appear to offer multi-intent detection within a single utterance, what would be the recommended approach? Here are the strategies we're considering:

  1. Train the Greeting intent on the greeting strings, and train intents A + B using phrases both with and without greeting text. (Assumption: Lex would learn that isolated greetings should match the Greeting intent, otherwise the intent-specific content should take precedence)
  2. Train the Greeting intent as above, but train A + B with no greeting content. (Assumption: greeting phrases are shorter, therefore the salient portion of the user's utterance would likely be given more weight by Lex...but would this generalize if phrase lengths were equal across all 3 intents?)
  3. Remove the Greeting intent and do not include greeting words in A + B training utterances. (Assumption: Greeting words in an intent would be ignored, and on their own would trigger the FallbackIntent which could be programmed to greet the user the first time & reprompt / escalate after that)
  4. Explicitly treat Greeting words as stop words - would this trigger the FallbackIntent if the user says 'Hi' for example?
  5. Any other options we've overlooked?

Thanks in advance!

AR3
Asked 2 years ago · 290 views
1 answer

Thanks for the good question. I think your option 1 is most likely to achieve what you want. Extending the training examples of intents A + B with both greeting-prefixed and plain phrases helps the machine learning model robustly recognize those intents from the salient content portion, and fall back to the Greeting intent only when no other specific content is present.
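As a concrete sketch of option 1, you could programmatically expand each intent's base utterance list with greeting-prefixed variants before uploading them to Lex. The intent phrases and greeting list below are illustrative placeholders, not from your actual bot:

```python
# Sketch: expand base training utterances with greeting-prefixed variants
# (option 1 from the question). Phrases here are hypothetical examples.

GREETINGS = ["Hi", "Hello", "Good morning", "Hi there"]

def expand_with_greetings(base_utterances, greetings=GREETINGS):
    """Return the base utterances plus one greeting-prefixed copy per greeting."""
    expanded = list(base_utterances)
    for base in base_utterances:
        for greeting in greetings:
            # Lowercase the first letter of the base phrase so the
            # combined utterance reads naturally after the greeting.
            expanded.append(f"{greeting}, {base[0].lower() + base[1:]}")
    return expanded

# Hypothetical base utterances for IntentA:
intent_a_bases = [
    "Can you help me reset my password",
    "I need to reset my password",
]
intent_a_utterances = expand_with_greetings(intent_a_bases)
```

The resulting list can then be supplied as the `sampleUtterances` of the intent (each entry shaped as `{"utterance": ...}`) via the Lex V2 model-building APIs such as `CreateIntent`/`UpdateIntent`, while the Greeting intent keeps only the bare greeting strings.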

AWS
yi
Answered 2 years ago
