Using Kendra as the RAG technique for a Q&A GenAI app, but the generated answers seem limited to only the indexed documents


Hi, I am experimenting with Kendra as the RAG technique for a Q&A GenAI app as described in this blog.

After playing with the prompts, it does seem able to generate relevant answers based on the indexed documents (most of the time). However, it doesn't seem able to leverage information that is NOT in the indexed documents. For example, I asked 'what is pokemon?', and since pokemon isn't in any of the indexed docs, it generated garbage responses.

My question is: does this Kendra-RAG technique respond with information from the indexed documents ONLY? My understanding was that Kendra is there to supplement the model's external knowledge. What do I need to create a Q&A bot that will answer questions with internal AND external intel?

Should I use a vector store DB technique like this instead:

Meaning, not use Kendra at all? Thanks for the advice.

Asked a year ago · 2,787 views
5 answers


So this seems to be a prompt engineering problem. In the back end of this architecture, the workflow is orchestrated by an open-source Python library called LangChain. If you take a look at the Python code and the way LangChain orchestrates the RAG architecture, you will see a section called 'Prompt Template'. In this prompt template you can see the background prompt and where the "context" (excerpts and documents from Kendra) and the "question" (user input) go. It can look something like this:

*prompt_template = """ Human: This is a friendly conversation between a human and an AI. The AI is talkative and provides specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Assistant: OK, got it, I'll be a talkative truthful AI assistant.

Human: Here are a few documents in <documents> tags: <documents> {context} </documents> Based on the above documents, provide a detailed answer for, {question} Answer "don't know" if not present in the document. Assistant:""" *

As you can see, the line "Answer "don't know" if not present in the document." is there as a safeguard to ensure the model does not hallucinate. BUT what this also does is ensure that any query that does not pertain to the information stored in your vector store is met with the answer "don't know". If you would like more general answers to questions, you can remove that line, along with anywhere else in the prompt that instructs the LLM to answer questions ONLY based on the documents provided. This will increase your model's hallucinations, but it will also allow you to ask general questions.
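As a minimal sketch of that change: the safeguard line is removed and the model is explicitly told it may fall back on its own general knowledge when Kendra returns nothing relevant. (Plain `str.format` is used here for illustration; in the sample code this template string is what gets passed to LangChain's PromptTemplate with `context` and `question` as input variables.)

```python
# Relaxed template: the 'Answer "don't know" ...' safeguard is gone, so
# questions outside the indexed documents can still be answered.
prompt_template = """Human: Here are a few documents in <documents> tags:
<documents>
{context}
</documents>
Use the documents above when they are relevant. If they do not contain
the answer, answer the question from your own general knowledge.

Question: {question}

Assistant:"""

# Example: Kendra found nothing, but the question still reaches the LLM.
prompt = prompt_template.format(
    context="(no matching excerpts)",
    question="What is Pokemon?",
)
print(prompt)
```

The trade-off mentioned above still applies: without the safeguard, the model is free to hallucinate on questions it half-knows.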

I hope this helps! -Moh

Answered a year ago
Reviewed 4 months ago

I was exploring this topic recently and was recommended the following blog post for further reading.

Link to blog post

Answered a year ago
  • Thank you Vijay for the pointer. I ran into errors when trying out the sample notebook. Were you able to import the sagemakerEmbedding library?


I was able to work around this problem by increasing the temperature. As its creativity (temperature) increases, the model is more likely to draw on its own general knowledge instead of answering strictly from the retrieved documents.
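For illustration, here is a sketch of where temperature would be set when calling a Claude model on Bedrock (the model ID and parameter values are assumptions; the request body is only constructed here, not sent):

```python
import json

# Higher temperature makes sampling more random, so answers are less
# tightly bound to the retrieved excerpts.
body = json.dumps({
    "prompt": "\n\nHuman: What is Pokemon?\n\nAssistant:",
    "temperature": 0.9,          # raised from a low default such as 0.0
    "max_tokens_to_sample": 300,
})

# The body would then be sent with something like:
#   bedrock = boto3.client("bedrock-runtime")
#   bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
print(body)
```

Note that temperature changes how freely the model samples; it does not by itself remove prompt instructions that restrict answers to the documents.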

Answered a year ago
  • I'm glad it is working.


You are right. Kendra is limited to the knowledge in the documents it has indexed. This is why you will often see different chatbots or Q&A systems leverage the power of an LLM to provide more human-like answers. Please see this example. It describes a sample chat that uses both Kendra and an LLM. The LLM is from SageMaker JumpStart, but you can modify the code to work with Bedrock.
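As a minimal sketch (not the linked sample) of combining Kendra retrieval with an LLM: if Kendra returns no excerpts, the question is sent to the LLM alone, so general questions still get an answer. The `retrieve` and `ask_llm` functions here are hypothetical stand-ins for the Kendra and model calls.

```python
def answer(question, retrieve, ask_llm):
    """retrieve() and ask_llm() are caller-supplied functions."""
    excerpts = retrieve(question)  # e.g. results from Kendra's Retrieve API
    if excerpts:
        context = "\n".join(excerpts)
        return ask_llm(f"Based on this context:\n{context}\nAnswer: {question}")
    # No indexed document matched: fall back to the LLM's own knowledge.
    return ask_llm(question)

# Toy stand-ins to show the control flow:
kb = {"vacation policy": ["Employees get 20 vacation days."]}
retrieve = lambda q: kb.get(q, [])
ask_llm = lambda prompt: f"LLM answer to: {prompt.splitlines()[-1]}"

print(answer("what is pokemon?", retrieve, ask_llm))  # falls back to the LLM
```

The same routing idea works with the real Kendra client and a Bedrock or SageMaker endpoint swapped in for the stand-ins.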

Answered a year ago

If you want to get answers based on the context, your prompt (in Python) might be something like this:

Answer the following question based on the context above:

However, if you want to ask a question without Retrieval Augmented Generation, then your prompt would just be the question itself:
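Side by side, the two prompt styles might look like this in Python (the context string and variable names are hypothetical):

```python
context = "Kendra excerpt: Amazon Kendra is an intelligent search service."
question = "What is Amazon Kendra?"

# With Retrieval Augmented Generation: retrieved context comes first.
rag_prompt = (
    f"{context}\n\n"
    f"Answer the following question based on the context above:\n{question}"
)

# Without RAG: the prompt is just the question itself.
plain_prompt = question

print(rag_prompt)
print(plain_prompt)
```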

Answered a year ago
