All, this one has really stumped me all week... I will do my best to describe the behavior as clearly as I possibly can...
We use Amazon Connect Contact Flows with the "Get Customer Input" block to route a caller through an automated Lex V2 bot experience. We are gathering information for our firm about a potential client's needs. In the Contact Flow, "Get Customer Input" exits on only two paths: Default and Error (from Default, we check session attributes and determine the next steps in the Flow). All of this is working as desired.
The part that bombs out on us is the situation outlined below:
- At the end of every matter intent a caller could trigger, we have a custom slot type that captures a lengthy user issue description (the caller describing their particular legal case: tell me about your injuries, what's the main issue you are looking to address, etc.)
- The point is, this custom slot type is a generic "catch-all" Speech-to-Text-ONLY use case. We aren't actually trying to extract meaningful attributes from this slot; it's essentially a bit like a voicemail recording. So the answers users give are very open-ended and extremely varied. We totally realize and appreciate this is not how a slot is usually built...
- So, it's very hard to provide training data for this custom slot type, because it's a bit like a mini-story the user tells, and it could be anything. We've trained with literally hundreds of slot utterances, and still we experience lots of fallback (which we expect and handle). We've spoken with AWS Lex SAs before, and they thought that if we trained on 200-300 phrases, we should be able to catch everything. But we still get a high rate of answers that fall through the cracks...
- So we DON'T want to validate the answer against any trained slot type utterance data--we just want to save the inputTranscript to the slot, and move on.
- We use a Lambda code hook on every conversation turn and check for fallback; if fallback occurs, we literally take the inputTranscript Speech-to-Text result and manually save it to the slot.
- This works great when the custom slot type question is NOT the last question in the slots list. It works exactly as expected: the hook senses the fallback, takes the inputTranscript, saves it, and moves on by explicitly returning DialogAction type = ElicitSlot for the next question.
- HOWEVER, on the last question (which is a custom slot type), when the invocation changes from DialogCodeHook to FulfillmentCodeHook, we again save the last value and return DialogAction type = Close. **If a fallback value occurs here, neither Lambda nor the Lex conversation logs record any error, BUT the Amazon Connect "Get Customer Input" block goes down the "Error" branch instead of the "Default" branch.** NOTE: If we give a "clean" answer (one of the trained test answers) to this last question, the FulfillmentCodeHook logic for DialogAction type = Close works just great, no problems...
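For anyone following along, the pattern above can be sketched roughly like this in a Python Lambda. This is a minimal illustration of the Lex V2 code hook response shapes, not our actual code: the slot name "IssueDescription", the intent name "PersonalInjury", the "NextSlot" placeholder, and the specific fallback check (slot came back empty but the caller said something) are all assumptions for the example.

```python
# Sketch of the "rescue the inputTranscript" code hook described above.
# Assumptions: slot/intent names are hypothetical; the fallback test is
# simplified to "elicited slot is empty but we have a transcript".

DESCRIPTION_SLOT = "IssueDescription"  # hypothetical open-ended slot


def make_slot_value(transcript):
    """Wrap raw Speech-to-Text in the Lex V2 Scalar slot value shape."""
    return {
        "shape": "Scalar",
        "value": {
            "originalValue": transcript,
            "interpretedValue": transcript,
            "resolvedValues": [transcript],
        },
    }


def lambda_handler(event, context):
    session_state = event["sessionState"]
    intent = session_state["intent"]
    slots = intent["slots"]

    # Fallback rescue: the description slot wasn't recognized, but the
    # caller did say something -- save the transcript to the slot anyway.
    if slots.get(DESCRIPTION_SLOT) is None and event.get("inputTranscript"):
        slots[DESCRIPTION_SLOT] = make_slot_value(event["inputTranscript"])

    if event["invocationSource"] == "DialogCodeHook":
        # Mid-conversation: explicitly elicit the next question.
        # "NextSlot" stands in for whatever slot follows in the flow.
        return {
            "sessionState": {
                "dialogAction": {"type": "ElicitSlot",
                                 "slotToElicit": "NextSlot"},
                "intent": intent,
            }
        }

    # FulfillmentCodeHook: last question answered, close the conversation
    # so Connect should continue down the Default branch.
    intent["state"] = "Fulfilled"
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": intent,
        }
    }
```

In our real hook the Close payload is built the same way for both the "clean" and the rescued-transcript cases, which is why the Error branch on the last question is so puzzling.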
The intent payload I pass to DialogAction Close is exactly the same in both the working and the breaking case -- only the last question's slot value differs, of course.
No error is showing up in my Lambda. No error in the Lex conversation log. But Amazon Connect fails hard on the "Error" path coming out of the Get Customer Input widget, and there's literally no further info in CloudWatch from the Connect side that shows why it hit the error branch.
I've tried everything I can think of. Has anyone else seen similar behavior, or have any ideas about last-question custom slot type fallback issues?
One last note: I get this behavior in both the pre-August 17 and post-August 17 releases of Lex V2 bots.
Thanks all,
Jeremy
Thank you Thomas!! We just heard about this release yesterday and we are VERY excited and hopeful to test this out now. Will update for everyone later.