Can you try this PR: https://github.com/aws-samples/amazon-kendra-langchain-extensions/pull/6/files

Or make the changes manually in `samples/kendra_chat_flan_xxl.py`. Change this:

```python
input_str = json.dumps({"inputs": prompt, "parameters": model_kwargs})
```

to this:

```python
input_str = json.dumps({"text_inputs": prompt, **model_kwargs})
```

And change this:

```python
return response_json[0]["generated_text"]
```

to this:

```python
return response_json["generated_texts"][0]
```

Make the corresponding changes in `samples/kendra_retriever_flan_xxl.py` as well.
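Taken together, the two edits above give the content handler roughly the following shape. This is a minimal standalone sketch (a plain class rather than the actual LangChain `ContentHandlerBase`, so the transforms can be exercised without a live endpoint), assuming the JumpStart FLAN-T5 request/response format:

```python
import json


class FlanXXLContentHandler:
    """Sketch of a content handler for a FLAN-T5-XXL SageMaker endpoint.

    Field names ("text_inputs", "generated_texts") assume the JumpStart
    FLAN-T5 payload format; other models use different fields.
    """

    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt, model_kwargs):
        # FLAN-T5 JumpStart endpoints expect "text_inputs", with the
        # generation parameters flattened into the same payload.
        input_str = json.dumps({"text_inputs": prompt, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output_bytes):
        # The response wraps completions in a "generated_texts" list.
        response_json = json.loads(output_bytes)
        return response_json["generated_texts"][0]
```

The real handler in the sample subclasses LangChain's content-handler base class, but the payload logic is the same.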
Thank you Wael_AWS for the suggestions. I made those two exact changes and still ran into the same error. I then started examining the environment variables used in the scripts. Re-issuing all of them (including AWS_REGION, KENDRA_INDEX_ID, and FLAN_XXL_ENDPOINT) and restarting Streamlit manually resolved the issue. We previously set the env variables and launched Streamlit from a script, and we are still trying to understand the root cause. Thank you for your suggestion!
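For reference, the fix on our side boiled down to exporting the variables in the same shell that launches Streamlit. A sketch of what that looks like (all values below are placeholders; substitute your own region, index ID, and endpoint name):

```shell
# Placeholder values -- replace with your own:
export AWS_REGION="us-east-1"
export KENDRA_INDEX_ID="00000000-0000-0000-0000-000000000000"
export FLAN_XXL_ENDPOINT="jumpstart-flan-t5-xxl-endpoint"

# Then restart the app from this same shell so it inherits the variables:
#   streamlit run kendra_chat_flan_xxl.py
echo "Configured Kendra chat env for region ${AWS_REGION}"
```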
Please change the text "inputs" on this line [https://github.com/aws-samples/amazon-kendra-langchain-extensions/blob/main/kendra_retriever_samples/kendra_chat_flan_xxl.py#L33] to "text_inputs". The ContentHandler class in the code differs for different LLMs and their variants. Refer to the input and output formats expected by the model provider and adjust the ContentHandler accordingly.
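To illustrate why the handler is model-specific, here are two request/response shapes commonly seen with SageMaker text-generation endpoints (prompt and completion strings below are made-up examples; always confirm the exact field names in your model provider's documentation):

```python
import json

# FLAN-T5 JumpStart endpoints flatten generation parameters into the
# request and wrap completions in a "generated_texts" list:
flan_request = json.dumps({"text_inputs": "What is Kendra?", "temperature": 0.1})
flan_reply = json.loads('{"generated_texts": ["Amazon Kendra is a search service."]}')
flan_answer = flan_reply["generated_texts"][0]

# Many other text-generation models instead nest parameters under
# "parameters" and return a list of objects with "generated_text":
other_request = json.dumps({"inputs": "What is Kendra?",
                            "parameters": {"temperature": 0.1}})
other_reply = json.loads('[{"generated_text": "Amazon Kendra is a search service."}]')
other_answer = other_reply[0]["generated_text"]
```

Using the wrong shape typically produces a model error or a KeyError when parsing the response, which is why each sample script carries its own ContentHandler.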
Thank you AWS-User-Nitin. I made the changes as suggested but was still getting the same error. However, resetting some environment variables seems to have solved it, and I was able to run inference with FLAN-XXL. Thank you for your suggestion.
Hi Clara, can you please show how your llm and ContentHandler are defined? The issue is with the code there.