
Using Prompt Management Prompts with Agents


Hi All

When a Bedrock Prompt Management prompt is created, the variants property gives you the option to set a Generative AI resource; in this case I have set a Bedrock Agent that I'd like the prompt to be used with.

When I test the prompt in the prompt builder, I can see the agent appear as the generative AI resource. I set the test variables required by the prompt and click Run, and everything works as expected: it synthesises the initial agent message correctly.

I've also noticed that when you go to the agent directly in the console and use the test window, the chat input lets you select the tool button (three vertical dots) to use a prompt, which then allows you to set the required variables. Again this works as expected, synthesising the initial agent chat message correctly.

But...

How do I programmatically (i.e. with boto3) invoke the prompt as an input to the configured agent, or vice versa? I can't get anything to work with either the Bedrock Runtime or Bedrock Agent Runtime APIs (invoke_agent, invoke_model, etc.). What is going on?

Thanks

Adrian

asked 8 months ago · 1.1K views
1 Answer

Thank you for your question about using Prompt Management Prompts with Bedrock Agents programmatically. There are a few important points to clarify regarding this functionality.

Currently, there is a limitation when it comes to using prompts with agents programmatically. While you can create prompts that include a Generative AI Resource (such as a Bedrock Agent) and test them in the console, this functionality is not directly available through the API.

When invoking an agent through the API, you need to take a different approach. Instead of referencing the prompt directly, you pass the text of the prompt in the 'inputText' field of the InvokeAgent request; you can't use the prompt ARN or ID when making API calls to invoke the agent.
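
As a rough illustration, a minimal sketch of that InvokeAgent call with boto3 is shown below. The agent ID, alias ID, and input text are placeholders, and note that the completion comes back as an event stream:

```python
import uuid
import boto3

# Bedrock Agent Runtime client handles InvokeAgent.
agent_runtime = boto3.client("bedrock-agent-runtime")

response = agent_runtime.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId=str(uuid.uuid4()),
    # The prompt goes in as plain text; a prompt ARN is not accepted here.
    inputText="Summarise the latest sales report for the EMEA region.",
)

# The completion is an event stream of chunks; concatenate the bytes.
completion = ""
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        completion += chunk["bytes"].decode("utf-8")

print(completion)
```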

To use a prompt with an agent programmatically, you would need to (a sketch of the first two steps follows the list):

  1. Retrieve the prompt text from your Prompt Management system.
  2. Insert any necessary variable values into the prompt text.
  3. Use this prepared text as the input for the InvokeAgent API call.
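
The sketch below covers steps 1 and 2, assuming a text prompt variant whose template uses {{variable}} placeholders; the prompt identifier and variable names are placeholders, and the exact response shape may vary with your prompt configuration:

```python
import boto3

# The Bedrock Agent (build-time) client exposes GetPrompt for Prompt Management.
bedrock_agent = boto3.client("bedrock-agent")

prompt = bedrock_agent.get_prompt(promptIdentifier="PROMPT_ID")  # optionally promptVersion="1"

# Take the first variant's text template (adjust if you use multiple variants).
variant = prompt["variants"][0]
template = variant["templateConfiguration"]["text"]["text"]

# Substitute the template's {{variables}} with concrete values.
variables = {"region": "EMEA", "quarter": "Q3"}
prompt_text = template
for name, value in variables.items():
    prompt_text = prompt_text.replace("{{" + name + "}}", value)

# prompt_text is then passed as inputText in the InvokeAgent call shown above.
```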

It's important to note that this workflow differs from how prompts can be used with foundation models, where you can specify the prompt ARN directly in API calls.
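
For comparison, a sketch of that foundation-model path is below: at the time of writing, the Converse API accepts a prompt ARN in the modelId field, with variable values supplied via promptVariables. The ARN, account ID, and variable names are placeholders:

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

response = bedrock_runtime.converse(
    # Prompt resource ARN instead of a model ID (placeholder values).
    modelId="arn:aws:bedrock:us-east-1:123456789012:prompt/PROMPT_ID",
    # Values for the {{variables}} defined in the managed prompt.
    promptVariables={
        "region": {"text": "EMEA"},
        "quarter": {"text": "Q3"},
    },
)

print(response["output"]["message"]["content"][0]["text"])
```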

For now, the ability to select and use prompts with agents through the console (as you described in the test window) is a feature that simplifies the process for manual testing and interaction, but it doesn't have a direct equivalent in the programmatic API calls.

If you need to frequently use specific prompts with your agents in a programmatic workflow, you may want to consider storing the prompt templates in your application code and preparing them with the necessary variables before making the InvokeAgent call.
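
A minimal sketch of that alternative, with an illustrative template name and variables, might look like this:

```python
# Prompt templates kept in application code rather than Prompt Management.
PROMPT_TEMPLATES = {
    "sales_summary": "Summarise the latest sales report for {region} in {quarter}.",
}

# Fill in the variables just before calling InvokeAgent (as shown earlier).
prompt_text = PROMPT_TEMPLATES["sales_summary"].format(region="EMEA", quarter="Q3")
```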

AWS is continuously improving its services, so it's worth keeping an eye on future updates that might introduce more seamless integration between Prompt Management and Agent invocation via API.
Sources
Create a prompt using Prompt management - Amazon Bedrock
Test a prompt using Prompt management - Amazon Bedrock

answered 8 months ago
