Kendra is available in EU-West-2. However, you also have the option of using Knowledge Bases for Amazon Bedrock. If you want to use retrieval-augmented generation (RAG) to generate an answer at runtime using your own documents as the source of truth, you can use the QnAIntent and index your documents in a Bedrock knowledge base. Details in this blog: https://aws.amazon.com/blogs/machine-learning/create-natural-conversations-with-amazon-lex-qnaintent-and-knowledge-bases-for-amazon-bedrock/ And in this video: https://www.youtube.com/watch?v=Z0hSkxTJB64
There are also some additional enhancements to the QnAIntent that were released recently: https://www.youtube.com/watch?v=KUk2s-SOfgs
If you want to return answers but want to be able to pre-approve the exact wording of those answers, you can create one JSON file for each answer and index those files in the knowledge base. You can then use the "Exact Response" option in the QnAIntent (with Knowledge Bases for Amazon Bedrock).
Each answer would be contained in its own JSON file together with a few examples of questions that would result in that answer:

```json
{
  "question1": "How do I enroll in a health insurance plan during open enrollment?",
  "question2": "What is the process for enrolling in health insurance?",
  "question3": "What steps do I need to take to enroll in a health plan?",
  "answer": "During open enrollment, you can enroll in a health insurance plan through the Health Insurance Marketplace (Healthcare.gov) or directly with a private insurance company. You'll need to provide personal and financial information, compare plans, and select the one that best fits your needs."
}
```
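If you have many answers, a short script can generate one file per answer. This is just a sketch: the Q&A entries and file names below are hypothetical, and the only real requirement is one JSON file per answer with a consistent answer field name:

```python
import json
from pathlib import Path

# Hypothetical Q&A data; each entry becomes one JSON file in the
# knowledge base data source (one answer per file).
qa_entries = [
    {
        "question1": "How do I enroll in a health insurance plan during open enrollment?",
        "question2": "What is the process for enrolling in health insurance?",
        "question3": "What steps do I need to take to enroll in a health plan?",
        "answer": "During open enrollment, you can enroll through the "
                  "Health Insurance Marketplace or directly with a private insurer.",
    },
]

out_dir = Path("kb-answers")
out_dir.mkdir(exist_ok=True)
for i, entry in enumerate(qa_entries, start=1):
    # One answer per file, so "No chunking" keeps each answer intact.
    (out_dir / f"answer-{i:03d}.json").write_text(json.dumps(entry, indent=2))
```

You would then upload the resulting files to the S3 bucket that backs your knowledge base data source.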
When you create the data source for your knowledge base, under "Content chunking and parsing" select "Custom", and under "Chunking strategy" select "No chunking".
In your Lex bot, you create the QnAIntent, reference the ID of your knowledge base, select "Exact response", and in "Answer Field" put the name of the answer field from your JSON file. In my example file above, that would be "answer".
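For reference, the same settings can also be expressed when creating the intent programmatically. The fragment below is my sketch of the shape of the Lex V2 `CreateIntent` `qnAIntentConfiguration`; the field names and the placeholder knowledge base ARN are assumptions on my part and should be checked against the current API documentation:

```json
"qnAIntentConfiguration": {
  "dataSourceConfiguration": {
    "bedrockKnowledgeStoreConfiguration": {
      "bedrockKnowledgeBaseArn": "arn:aws:bedrock:eu-west-2:123456789012:knowledge-base/EXAMPLEKB",
      "exactResponse": true,
      "exactResponseFields": { "answerField": "answer" }
    }
  }
}
```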
Your bot will now:
- Take the customer's utterance and calculate an embedding (a numerical representation of the meaning of the utterance).
- Search the knowledge base for JSON files that are similar in meaning to the customer's question, and return the best matches.
- Use the LLM to review the conversation history and identify the answer that fits best in the context of the conversation.
- Return exactly what is in the "answer" field of the JSON file.
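The retrieval step above can be illustrated with a toy example. In the real system the embeddings come from an embedding model and the search happens inside the knowledge base's vector store; here, hand-made vectors and file names stand in for them just to show how cosine similarity picks the closest answer file:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = closer in meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "embeddings" of indexed answer files (a real system would call an
# embedding model to produce high-dimensional vectors).
kb = {
    "answer-enrollment.json": [0.9, 0.1, 0.0],
    "answer-claims.json":     [0.1, 0.9, 0.1],
}

# Toy embedding of the customer's utterance.
query_embedding = [0.8, 0.2, 0.1]

# The best match is the file whose embedding is most similar to the query.
best = max(kb, key=lambda name: cosine_similarity(query_embedding, kb[name]))
print(best)  # → answer-enrollment.json
```

With "Exact response" enabled, the bot then returns the "answer" field of the winning file verbatim instead of letting the LLM rephrase it.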
I will make a video on this as well. Look for it on the AWS YouTube channel in a week or two, or follow me on LinkedIn and I will post it there when it is published.