
How do I resolve validation exceptions in Amazon Bedrock?


I want to troubleshoot validation exceptions for inference parameters in foundation models.

Short description

Inference parameters let you adjust the behavior of a large language model on Amazon Bedrock so that it produces the output that you expect. Validation errors occur when you call the InvokeModel or InvokeModelWithResponseStream API on a foundation model with an incorrect inference parameter name or value. These errors also occur when you pass an inference parameter that belongs to one model to a model that doesn't support the same API parameter.
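When you're not sure which parameter caused the failure, the exception message usually names the invalid or missing key. The following is a minimal sketch, assuming the AWS SDK for Python (Boto3) and an AWS Region configured in your environment, of how you can catch the exception and inspect that message; the deliberately incomplete request body is only an illustration:

import json

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client; the Region comes from your environment (assumption)
client = boto3.client("bedrock-runtime")

try:
    client.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        # Intentionally malformed body: the required "messages" key is missing
        body=json.dumps({"anthropic_version": "bedrock-2023-05-31", "max_tokens": 1024}),
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ValidationException":
        # The message names the invalid or missing inference parameter
        print(err.response["Error"]["Message"])
    else:
        raise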

Resolution

Incorrect inference parameter name for the Anthropic Claude 3 model

You get a validation exception when you include the top_n parameter in a request to an Anthropic Claude 3 model, because top_n isn't a valid inference parameter name for this model:

Error: "ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation:
Malformed input request: #: subject must not be valid against schema {"required":["messages"]}#:
extraneous key [top_n] is not permitted, please reformat your input and try again."

Example code with the incorrect inference parameter:


import json

import boto3
from botocore.exceptions import ClientError

client = boto3.client("bedrock-runtime")

# Invoke Claude 3 with the text prompt
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
try:
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps(
            {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 1024,
                "top_n": 1,  # Incorrect parameter name: top_n isn't valid for Claude 3
                "messages": [
                    {
                        "role": "user",
                        "content": [{"type": "text", "text": "<prompt>"}],
                    }
                ],
            }
        ),
    )
except ClientError as err:
    print(err.response["Error"]["Message"])

To avoid this error, make sure that you choose from the following valid inference parameters, as shown in the corrected request after this list:

  • max_tokens: The maximum number of tokens to generate before stopping.
  • temperature: The amount of randomness injected into the response.
  • top_p: The diversity of the generated text, set as the percentage of most-likely candidates that the model considers for the next token.
  • top_k: The number of most-likely candidates that the model considers for the next token.
  • stop_sequences: Custom text sequences that cause the model to stop generating.
  • system: The system prompt that provides context and instructions to the model.

For more information, see Anthropic Claude Messages API.
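For comparison, the following is a minimal sketch of the same request with the invalid top_n key replaced by the valid top_k parameter. The temperature and top_k values, the way the response body is read, and the reliance on a default AWS Region from your environment are illustrative assumptions; the prompt placeholder is carried over from the example above.

import json

import boto3

# Create a Bedrock Runtime client; the Region comes from your environment (assumption)
client = boto3.client("bedrock-runtime")

model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

# Same request as before, but with the valid top_k parameter instead of top_n
response = client.invoke_model(
    modelId=model_id,
    body=json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "temperature": 0.5,
            "top_k": 250,
            "messages": [
                {
                    "role": "user",
                    "content": [{"type": "text", "text": "<prompt>"}],
                }
            ],
        }
    ),
)

# Read and print the generated text from the Messages API response
result = json.loads(response["body"].read())
print(result["content"][0]["text"])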

Incorrect temperature inference value for the mistral-7b model

You get a validation exception error when the mistral-7b model receives an incorrect value for the temperature inference parameter:

Error: "ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation:
Malformed input request: #/temperature: 2.0 is not less or equal to 1.0, please reformat your input
and try again"

Example code with the incorrect parameter value:

import json

import boto3

client = boto3.client("bedrock-runtime")

# Invoke mistral-7b with the text prompt
model_id = "mistral.mistral-7b-instruct-v0:2"
body = {
    "prompt": "<prompt>",
    "max_tokens": 200,
    "temperature": 2.0,  # Invalid value: temperature must be between 0 and 1
}
response = client.invoke_model(modelId=model_id, body=json.dumps(body))

The temperature inference parameter controls the randomness of the model's predictions. Its valid range for this model is 0-1. Because the value 2.0 falls outside this range, the request fails with an error. For more information, see Mistral AI models.
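As a quick check, the request succeeds when the temperature value stays within the 0-1 range. The following is a minimal sketch; the 0.7 value, the way the response body is read, and the reliance on a default AWS Region from your environment are illustrative assumptions, and the prompt placeholder is carried over from the example above.

import json

import boto3

# Create a Bedrock Runtime client; the Region comes from your environment (assumption)
client = boto3.client("bedrock-runtime")

model_id = "mistral.mistral-7b-instruct-v0:2"
body = {
    "prompt": "<prompt>",
    "max_tokens": 200,
    "temperature": 0.7,  # Valid: within the 0-1 range for this model
}

response = client.invoke_model(modelId=model_id, body=json.dumps(body))

# Read and print the generated text from the response
result = json.loads(response["body"].read())
print(result["outputs"][0]["text"])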

Other models

For more information on inference parameters for models on Amazon Bedrock, see Inference parameters for foundation models.
Note: Embeddings models don't have tunable inference parameters.
