Hello,
It seems the issue might be due to differences in hidden system prompts or configurations between Lambda and the Bedrock Playground. Double-check that both environments are using identical prompt setups, including any system prompts or hidden parameters that might be influencing the output. For detailed guidance on how system prompts can impact results, you can refer to this article: https://repost.aws/articles/AR-LV1HoR_S0m-qy89wXwHmw/the-leverage-of-llm-system-prompt-by-knowledge-bases-for-bedrock-in-rag-workflows
Hi,
Are you sure that you are prompting in exactly the same way from Lambda and from the Meta website? Are you sure, for example, that Meta doesn't include a system prompt that you don't see but that provides guidance to the LLM?
See my article to measure how such a system prompt, via very detailed guidance, can impact results: https://repost.aws/articles/AR-LV1HoR_S0m-qy89wXwHmw/the-leverage-of-llm-system-prompt-by-knowledge-bases-for-bedrock-in-rag-workflows
Best,
Didier
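As an illustration of this point, here is a minimal sketch of how a hidden system prompt would be embedded in Llama 3's chat template alongside the user prompt (the system text and user prompt below are hypothetical placeholders, not anything Meta or AWS actually injects):

# Hypothetical illustration: a hidden system block prepended in Llama 3's chat template.
system_text = "You are a concise assistant. Answer in one sentence."  # hypothetical hidden guidance
user_prompt = "What is Amazon Bedrock?"

formatted_prompt = f"""
<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>
{system_text}
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{user_prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""

Even when both environments receive the same user prompt, any system text injected this way can noticeably steer the model's output.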
Yes, I'm sure my prompt is the same across the places where I'm executing the model. I figured it out; hopefully this helps the next person who runs into it. Simply put, for this model to work properly when executed from a Lambda function, the prompt needs to be wrapped in Llama 3's instruction-format text, as in the Python example below. Without this, the model can produce erratic results. A full code example can be found here (https://docs.aws.amazon.com/bedrock/latest/userguide/bedrock-runtime_example_bedrock-runtime_InvokeModel_MetaLlama3_section.html).
AWS folks - It's worth noting that part of my confusion stemmed from the fact that the Bedrock documentation in the AWS console has an "API Request" section at the bottom of each foundation model's page. In the Meta Llama 3 8B case, at least, that section is somewhat misleading: to run the model successfully, you need more than the set of parameters listed there for the FM.
Parting thoughts: I'm guessing that both the AWS Playground and the website I linked programmatically format user prompts as shown below, which would explain the discrepancy.
# Embed the prompt in Llama 3's instruction format.
formatted_prompt = f"""
<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
{prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""