Using Bedrock invokeModel API with prompt created in prompt management

0

I've created a prompt in Prompt Management using Claude 3.5 Haiku with two variables, {{word}} and {{sentence}}. I'm using the boto3 SDK to invoke the model from a Lambda function with the following code:

import json
import boto3

# create boto3 client for the Bedrock Runtime API
bedrock = boto3.client(service_name='bedrock-runtime', region_name='us-west-2')

def lambda_handler(event, context):
    
    to_translate = event['selected_text']
    containing_sentence = event['containing_sentence']

    request_body = {
        "promptVariables": {
            "word": to_translate,
            "sentence": containing_sentence
        }
    }

    body = json.dumps(request_body)

    response = bedrock.invoke_model(
        modelId="<my_prompt_ARN>",
        contentType="application/json",
        accept="application/json",
        body=body
    )

I am getting an error "An error occurred (ValidationException) when calling the InvokeModel operation: Malformed request for promptVariables received"

What do I have wrong/missing in my request body? The only example I can find in AWS documentation is in the 'Prompt variables' section on this page, which seems to match up.

2 Answers
1
Accepted Answer

Hello Anthony,

Looking at your Lambda code, the issue has two parts:

  1. You're using the wrong API method (invoke_model instead of converse or converse_stream). For prompts created in Prompt Management, you need to use the Converse API, as documented in the boto3 documentation.

  2. The promptVariables structure needs a nested "text" object for each variable.

Try this:

import json
import boto3

bedrock = boto3.client(service_name='bedrock-runtime', region_name='us-west-2')

def lambda_handler(event, context):
    try:
        to_translate = event['selected_text']
        containing_sentence = event['containing_sentence']

        # Pass the prompt ARN as modelId; promptVariables fills each {{variable}}
        response = bedrock.converse(
            modelId="<prompt_ARN>",
            promptVariables={
                "word": {
                    "text": to_translate
                },
                "sentence": {
                    "text": containing_sentence
                }
            }
        )

        # The Converse API returns the generated text under output.message.content
        response_text = response['output']['message']['content'][0]['text']

        return {
            'statusCode': 200,
            'body': json.dumps({
                'response': response_text
            })
        }
        
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps({
                'error': str(e)
            })
        }
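
As a quick local check, you can call the handler with an event shaped like the one in the question (the keys match what lambda_handler reads; the values below are just placeholders):

# Placeholder event for a quick test; the keys come from the original code
test_event = {
    "selected_text": "ephemeral",
    "containing_sentence": "The beauty of cherry blossoms is ephemeral."
}

print(lambda_handler(test_event, None))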
AWS
answered 2 months ago
EXPERT
reviewed 2 months ago
  • That fixed it, thank you! As an aside, there are a couple of places in the docs that should really be updated to cover this: the invokeModel page in the API reference still states "If you use a prompt created through Prompt management, specify the ARN of the prompt version. For more information, see Test a prompt using Prompt management." Additionally, the Using the Converse API page in the docs shows the promptVariables property without the needed text keys.

  • Is there a C# equivalent of this code in the latest .NET SDK?

-2

To correctly use the prompt variables with a prompt created in Prompt Management, you should structure your request body as follows:

  1. Remove the "promptVariables" key from the top level of your request body.
  2. Include your variables directly in the top level of the request body.

Here's how your modified code should look:

request_body = {
    "word": to_translate,
    "sentence": containing_sentence
}

body = json.dumps(request_body)

response = bedrock.invoke_model(
    modelId="<your_prompt_ARN>",
    contentType="application/json",
    accept="application/json",
    body=body
)

This structure aligns with how Bedrock expects the variables for a managed prompt to be passed. Each variable you defined in your prompt ({{word}} and {{sentence}}) should be a top-level key in your request body JSON.

Also, ensure that your prompt ARN is correct and that you have the necessary permissions to invoke the model with this prompt.

If you're still encountering issues after making these changes, you may want to double-check the following:

  1. Verify that the variable names in your request exactly match those in your prompt template (one way to check is sketched after this list).
  2. Ensure that you're providing values for all required variables defined in your prompt.
  3. Check that your AWS credentials and region settings are correct.
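
For point 1, here is a minimal sketch of how you might list the variables a managed prompt actually defines, assuming the bedrock-agent GetPrompt API and a TEXT prompt template; "<my_prompt_id>" is a placeholder, and the response shape should be verified against the SDK docs:

import boto3

# Prompt Management's control-plane APIs live on the bedrock-agent client
agent = boto3.client(service_name='bedrock-agent', region_name='us-west-2')

# promptIdentifier accepts the prompt's ID or ARN (placeholder below)
prompt = agent.get_prompt(promptIdentifier="<my_prompt_id>")

# For TEXT templates, each variant lists its {{variables}} under inputVariables
# (response shape assumed; check the boto3 documentation for your SDK version)
for variant in prompt['variants']:
    variables = [v['name'] for v in variant['templateConfiguration']['text']['inputVariables']]
    print(variant['name'], variables)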

By making these adjustments, you should be able to successfully invoke your managed prompt using the InvokeModel API.

Sources
Prompt Management for Amazon Bedrock - AWS
Code examples for Amazon Bedrock Runtime using AWS SDKs - Amazon Bedrock

answered 2 months ago
  • Forming the request body as suggested here results in an error "An error occurred (ValidationException) when calling the InvokeModel operation: Providing attributes other than 'promptVariables' during runtime is not yet supported"

