Amazon QnA Intent unexpectedly fails - help


I am interacting with an Amazon Lex bot that uses the built-in AMAZON.QnAIntent with an Amazon Kendra index. I am testing it with a simple query, for example: What is <something here>?

The issue is that it sometimes fails and sometimes works WITH THE SAME QUESTION. Here are the JSON responses (I replaced some information with <> so no private data is shown).

Working JSON response:

{
    "messages": [
     {
      "content": "<This output is working as expected>",
      "contentType": "PlainText"
     }
    ],
    "sessionState": {
     "dialogAction": {
      "type": "ElicitIntent"
     },
     "sessionAttributes": {},
     "originatingRequestId": "<originatingRequestId>"
    },
    "interpretations": [
     {
      "intent": {
       "name": "FallbackIntent",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.74
      },
      "intent": {
       "name": "Greetings",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.72
      },
      "intent": {
       "name": "RequestAccessLink",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.68
      },
      "intent": {
       "name": "AccessToIntranet",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.64
      },
      "intent": {
       "name": "CreateCriticalTicket",
       "slots": {
        "message_ticket": null
       }
      },
      "interpretationSource": "Lex"
     }
    ],
    "requestAttributes": {
     "x-amz-lex:qnA-search-response": "<working>",
     "x-amz-lex:qnA-search-response-source": "<working>"
    },
    "sessionId": "<sessionID>"
   }

Non-working JSON response:

{
    "messages": [
     {
      "content": "<fallbackintent message>",
      "contentType": "PlainText"
     }
    ],
    "sessionState": {
     "dialogAction": {
      "type": "Close"
     },
     "intent": {
      "name": "FallbackIntent",
      "slots": {},
      "state": "ReadyForFulfillment",
      "confirmationState": "None"
     },
     "sessionAttributes": {},
     "originatingRequestId": "<originatingRequestId>"
    },
    "interpretations": [
     {
      "intent": {
       "name": "FallbackIntent",
       "slots": {},
       "state": "ReadyForFulfillment",
       "confirmationState": "None"
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.76
      },
      "intent": {
       "name": "Greetings",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.75
      },
      "intent": {
       "name": "RequestAccessLink",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.69
      },
      "intent": {
       "name": "AccessToIntranet",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.64
      },
      "intent": {
       "name": "CreateCriticalTicket",
       "slots": {
        "message_ticket": null
       }
      },
      "interpretationSource": "Lex"
     }
    ],
    "sessionId": "<sessionId>"
   } 
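Comparing the two payloads: the working response carries the Kendra result in requestAttributes (x-amz-lex:qnA-search-response) and elicits the next intent, while the failing one closes on FallbackIntent. A minimal sketch (an assumption based only on the two payloads above, parsed as Python dicts) that classifies a response as answered or fallen back:

```python
def qna_answered(response: dict) -> bool:
    """Return True when the QnA intent produced an answer.

    Heuristic from the two payloads above: a successful QnA turn
    includes the search result in requestAttributes, while a failed
    turn closes with FallbackIntent in sessionState.intent.
    """
    attrs = response.get("requestAttributes") or {}
    if "x-amz-lex:qnA-search-response" in attrs:
        return True
    intent = response.get("sessionState", {}).get("intent") or {}
    # No QnA attributes: treat a FallbackIntent close (or no intent) as a miss.
    return bool(intent) and intent.get("name") != "FallbackIntent"

# Trimmed-down versions of the two responses shown above.
working = {
    "requestAttributes": {"x-amz-lex:qnA-search-response": "<working>"},
    "sessionState": {"dialogAction": {"type": "ElicitIntent"}},
}
failing = {
    "sessionState": {
        "dialogAction": {"type": "Close"},
        "intent": {"name": "FallbackIntent", "slots": {}},
    },
}
print(qna_answered(working), qna_answered(failing))  # True False
```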

Any ideas?

asked 12 days ago · 91 views
1 Answer

The LLM is used to evaluate how confident it is that an answer is good enough to return. Because of some variability in that evaluation, the same question can occasionally produce this behavior.
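Given that variability, one pragmatic client-side mitigation (a sketch only, not an official workaround; the `send` callable, the retry count, and the fallback check are all assumptions of this example) is to retry the request when the bot closes on FallbackIntent:

```python
from typing import Callable

def ask_with_retry(send: Callable[[str], dict], text: str, retries: int = 2) -> dict:
    """Retry a Lex query when the response fell back.

    `send` is any callable that posts `text` to the bot and returns the
    parsed JSON response (e.g. a thin wrapper around the lexv2-runtime
    RecognizeText call); it is an assumption of this sketch.
    """
    response: dict = {}
    for _ in range(retries + 1):
        response = send(text)
        intent = response.get("sessionState", {}).get("intent") or {}
        if intent.get("name") != "FallbackIntent":
            return response  # the QnA intent (or another intent) answered
    return response  # still falling back after all retries

# Demo with a stub bot that fails once, then succeeds.
calls = {"n": 0}
def stub(text: str) -> dict:
    calls["n"] += 1
    if calls["n"] == 1:
        return {"sessionState": {"intent": {"name": "FallbackIntent"}}}
    return {"sessionState": {"dialogAction": {"type": "ElicitIntent"}}}

result = ask_with_retry(stub, "What is <something here>?")
print(calls["n"])  # 2
```

Retrying adds latency and cost per turn, so it is only worth it if the fallback rate is low and the answer is deterministic enough that a second attempt usually succeeds.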

If you could open a support ticket with the account number, region, bot name/alias, and session information, an engineer could look into it.

AWS
answered 11 days ago
