ModelErrorException: An error occurred (ModelErrorException) when calling the InvokeModel operation: The system encountered an unexpected error during processing. Try your request again.


Hello, I got this error while using the Sonnet model. When I switched the model to Haiku, there was no model error, and I don't know why. I tried again, and the error seems to be intermittent. The InvokeModel error occurs intermittently when using boto3.

import json

import boto3
from tqdm import tqdm

# bedrock_client and model_id are defined elsewhere, e.g.:
# bedrock_client = boto3.client('bedrock-runtime')

for i, docs in tqdm(enumerate(df['text'])):
    query = "I will ask you a question: tell me about the document."
    prompt_template = f"""
    <documents>
    <document index="1">
    <document_content>
    {docs}
    </document_content>
    </document>
    </documents>
    You are an AI assistant that answers based on the documents above.
    Do not say anything else, and be sure to answer like the example.
    e.g., <sentence1> Example sentence </sentence1>
    {query}
    """

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 4096,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": prompt_template
                    }
                ]
            }
        ]
    })
    print(prompt_template)
    response = bedrock_client.invoke_model(
        body=body,
        contentType='application/json',
        accept='application/json',
        modelId=model_id
    )
    
    response_body = json.loads(response.get('body').read())
    response = response_body['content'][0]['text']
    df1.at[i, 'sentence'] = response
2 Answers

This is probably a temporary error. You can implement a retry mechanism in your code to handle the ModelErrorException. A library like backoff can manage retries with exponential backoff, allowing the system to recover from transient errors. If a ModelErrorException still occurs after retries, you can catch and handle it gracefully by logging the error, skipping the affected document, or falling back to a different model if one is available.

Example of implementing retries with exponential backoff:

import backoff
import boto3
from botocore.exceptions import ClientError

@backoff.on_exception(backoff.expo, ClientError, max_tries=5)
def invoke_model(body, model_id):
    bedrock_client = boto3.client('bedrock-runtime')  # InvokeModel is on the runtime client, not 'bedrock'
    response = bedrock_client.invoke_model(
        body=body,
        contentType='application/json',
        accept='application/json',
        modelId=model_id
    )
    return response

Catch and handle errors:

import json

import boto3
from tqdm import tqdm
from botocore.exceptions import ClientError

model_id = 'your_model_id_here'

for i, doc in tqdm(enumerate(df['text'])):
    query = "Please answer the following question based on the document"
    prompt_template = f"""
    <documents>
    <document index="1">
    <document_content>
    {doc}
    </document_content>
    </document>
    </documents>
    You are an AI assistant answering questions based on the document.
    Please answer in the format shown in the example.
    e.g., <sentence1> Example sentence </sentence1>
    {query}
    """

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 4096,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": prompt_template
                    }
                ]
            }
        ]
    })

    try:
        response = invoke_model(body, model_id)
        response_body = json.loads(response.get('body').read())
        response_text = response_body['content'][0]['text']
        df1.at[i, 'sentence'] = response_text
    except ClientError as e:
        if e.response['Error']['Code'] == 'ModelErrorException':
            print(f"ModelErrorException occurred for document {i}. Skipping this document.")
            continue
        else:
            raise
answered 21 days ago

Hello,

The error ModelErrorException suggests that the issue may be related to the availability or the state of the model you're trying to use.

A few things you can try to troubleshoot the problem:

  1. Check the model status: Ensure that the model you're trying to use (in this case, "sonnet") is in a healthy and available state.

  2. Retry the request: As the error message suggests, try your request again. The issue may be temporary, and retrying the request may resolve the problem.

  3. Check for resource limits: Ensure that you haven't reached any service limits or quotas that might be causing the issue.

  4. Check for network issues: Intermittent errors can sometimes be caused by network connectivity issues. Ensure that your network connection is stable and reliable.

  5. Add retry logic: Implement a retry mechanism in your code to handle intermittent errors. When such an error occurs, the logic waits for 30-60 seconds and then retries.

  6. Contact AWS Support: If the issue persists, you may want to consider contacting AWS Support to investigate the problem further.

[1] https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html

AWS
answered 21 days ago
