In Amazon Bedrock, prompts are typically passed to the model as a single, consolidated request rather than as a series of separate prompts across different chats. When using the Converse API or InvokeModel API, you send one request (or reference a managed prompt by its identifier) that includes all the information the model needs.
This consolidated prompt can include various elements such as:
- The foundation model to be used
- Model configuration settings
- System instructions
- User and assistant messages
- Tool configurations
- Any specified guardrails
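As a rough sketch of how the elements above come together, the following builds a single Converse API request payload. The field names follow the Converse API, but the model ID, guardrail ID, and tool name are placeholders, not values from the original answer:

```python
# One consolidated request: model, settings, system instructions,
# messages, tools, and guardrails all travel together.
request = {
    "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",  # foundation model (placeholder)
    "system": [{"text": "You are a concise assistant."}],  # system instructions
    "messages": [                                          # user/assistant messages
        {"role": "user", "content": [{"text": "What is Amazon Bedrock?"}]}
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},  # model configuration
    "toolConfig": {                                        # tool configurations
        "tools": [{
            "toolSpec": {
                "name": "get_weather",  # hypothetical tool
                "inputSchema": {"json": {"type": "object", "properties": {}}},
            }
        }]
    },
    "guardrailConfig": {                                   # guardrail, if specified
        "guardrailIdentifier": "my-guardrail-id",          # placeholder ID
        "guardrailVersion": "1",
    },
}

# With boto3, the entire payload is sent in a single call:
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)
```

Everything the model needs is in that one payload; there is no separate per-element exchange.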
For Bedrock agents, the process is similar. The instructions, guardrails, and your specific prompt are combined into a single, comprehensive input for the model. This approach ensures that all relevant information and constraints are considered together when the model generates its response.
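For agents, the caller supplies only the user input and a session; Bedrock assembles the agent's instructions and guardrails into the consolidated model prompt server-side. A minimal sketch of the parameters for an InvokeAgent call (all IDs are placeholders):

```python
# Hypothetical IDs. Bedrock combines the agent's configured instructions,
# guardrails, and this input text into one consolidated prompt for the model.
invoke_params = {
    "agentId": "AGENT123456",     # placeholder agent ID
    "agentAliasId": "ALIAS1234",  # placeholder alias ID
    "sessionId": "session-001",   # groups turns into one conversation
    "inputText": "Summarize my last order.",
}

# client = boto3.client("bedrock-agent-runtime")
# response = client.invoke_agent(**invoke_params)  # response streams back in chunks
```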
It's important to note that while the prompt is consolidated, it can still include structured elements like system messages, user messages, and assistant messages, especially for models like Anthropic's Claude that use a specific conversational format.
When using the Converse API, you can include additional messages to extend or modify the prompt, allowing for some flexibility in how the information is structured and presented to the model. However, these are still part of a single interaction rather than separate chat sessions.
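To illustrate, extending a conversation with the Converse API means appending prior turns to the same messages list and sending the whole list again in one request. The message contents below are illustrative:

```python
# A multi-turn conversation is still one request per turn: previous user and
# assistant messages are carried in the same list, not in separate sessions.
messages = [
    {"role": "user", "content": [{"text": "Name one AWS region."}]},
]

# Suppose the model replied; append its message to the history...
messages.append(
    {"role": "assistant", "content": [{"text": "us-east-1"}]}
)
# ...then add the follow-up user turn and resend the full list:
messages.append(
    {"role": "user", "content": [{"text": "Name another."}]}
)

# client.converse(modelId=..., messages=messages)  # still a single interaction
```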
This consolidated approach helps maintain context and ensures that all specified parameters, including guardrails and instructions, are consistently applied throughout the model's response generation process.
Sources
- Prompt Management for Amazon Bedrock - AWS
- Converse - Amazon Bedrock
- Prompt engineering concepts - Amazon Bedrock