SageMaker Llama 2 model responding 'don't know' with Kendra index


I've built a document-based chatbot with the SageMaker Llama 2-7B model and a Kendra index. It returns the answer, but appends 'don't know' at the end. Using the same SageMaker model with FAISS, it answers correctly. What could be the reason the model answers the question but keeps saying 'don't know' with the Kendra index?

prompt_template = """ <s>[INST] <<SYS>> The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. {context} <</SYS>> Instruction: Based on the above documents, provide a detailed answer for, {question} Answer "don't know" if not present in the document. Solution: [/INST]""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"], ) condense_qa_template = """ <s>[INST] <<SYS>> Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History: {chat_history} Follow Up Input: {question} <</SYS>> Standalone question: [/INST]"""

Above is the prompt template I've used for both models, i.e. with Kendra and with FAISS.
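For context, here is roughly how the two pipelines are wired together. This is a minimal sketch only: the content handler payload format, endpoint name, Kendra index ID, FAISS path, embedding model and top-k values are placeholders/assumptions rather than my exact code, and it reuses PROMPT and condense_qa_template from the snippet above.

import json

from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from langchain.prompts import PromptTemplate
from langchain.retrievers import AmazonKendraRetriever
from langchain.vectorstores import FAISS


class ContentHandler(LLMContentHandler):
    # Payload/response format is an assumption; it depends on how the Llama 2 endpoint was deployed.
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt, model_kwargs):
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output):
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]


llm = SagemakerEndpoint(
    endpoint_name="llama-2-7b-endpoint",  # placeholder endpoint name
    region_name="us-east-1",
    content_handler=ContentHandler(),
    model_kwargs={"max_new_tokens": 512, "temperature": 0.1},
)

# Retriever option 1: Amazon Kendra index
kendra_retriever = AmazonKendraRetriever(index_id="my-kendra-index-id", top_k=3)

# Retriever option 2: FAISS over my own chunks and embeddings
faiss_store = FAISS.load_local("faiss_index", HuggingFaceEmbeddings())
faiss_retriever = faiss_store.as_retriever(search_kwargs={"k": 3})

standalone_question_prompt = PromptTemplate.from_template(condense_qa_template)


def build_chain(retriever):
    # Same LLM and same prompts; only the retriever differs between the two runs.
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=retriever,
        condense_question_prompt=standalone_question_prompt,
        combine_docs_chain_kwargs={"prompt": PROMPT},
        return_source_documents=True,
    )


kendra_chain = build_chain(kendra_retriever)
faiss_chain = build_chain(faiss_retriever)

Everything except the retriever is identical between the run that answers correctly (FAISS) and the run that appends 'don't know' (Kendra).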

Asked 4 months ago · Viewed 282 times
No answers
