When encountering a ValidationException with the message "Malformed request" for the Cohere Command model, especially when using tools in your request, there are a few things you can check:
- Inference Parameters: Ensure that you're using the correct inference parameters for the Cohere Command model. Each model accepts its own set of parameters, and passing parameters the model doesn't support can lead to validation errors.
- Tool Configuration: If you're using tools in your request, make sure the tool configuration follows the required conventions and JSON schema. Incorrect formatting or structure in the tool definition can trigger validation exceptions.
- Parameter Values: Double-check that the values you're providing for each parameter are within the acceptable range or format. For example, some parameters have specific constraints on their values (e.g., temperature usually needs to be between 0 and 1).
- Request Structure: Ensure that your overall request structure is correct. The error message suggests checking your parameters and values, so review the entire request payload to make sure it's properly formatted.
- Model Version: Verify that you're using the correct and most up-to-date model version. Changes between model versions can affect the accepted parameters or request structure.
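The checks above can be run locally before the request ever reaches the service. Below is a minimal pre-flight sketch: `validate_request` is a hypothetical helper (not part of any AWS SDK), the field names follow the Converse API shape (`messages`, `inferenceConfig`, `toolConfig`/`toolSpec`/`inputSchema`), and the 0–1 temperature range is an assumption you should confirm against the Cohere Command model card.

```python
import json

def validate_request(body):
    """Hypothetical pre-flight checks for a Converse-style request body.

    Returns a list of human-readable problems; an empty list means the
    payload passed these (assumed) checks.
    """
    errors = []
    if not body.get("messages"):
        errors.append("messages must be a non-empty list")
    cfg = body.get("inferenceConfig", {})
    temp = cfg.get("temperature")
    if temp is not None and not (0 <= temp <= 1):
        # Assumed range; check the model card for the exact limits.
        errors.append(f"temperature {temp} outside assumed 0-1 range")
    for tool in body.get("toolConfig", {}).get("tools", []):
        spec = tool.get("toolSpec", {})
        if not spec.get("name"):
            errors.append("toolSpec.name is required")
        schema = spec.get("inputSchema", {}).get("json")
        if schema is None:
            errors.append(f"tool '{spec.get('name')}' missing inputSchema.json")
        else:
            try:
                json.dumps(schema)  # tool schema must be JSON-serializable
            except TypeError:
                errors.append(f"tool '{spec.get('name')}' schema not serializable")
    return errors

request = {
    "messages": [{"role": "user", "content": [{"text": "What is the weather?"}]}],
    "inferenceConfig": {"temperature": 0.7},
    "toolConfig": {"tools": [{"toolSpec": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "inputSchema": {"json": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        }},
    }}]},
}
print(validate_request(request))  # [] when the payload passes the checks
```

Running a checker like this on every request makes malformed tool schemas or out-of-range values fail fast on your side, with a clearer message than the service's generic "Malformed request".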
If you've checked all these aspects and are still encountering the error, you might want to review any recent changes in your code or the model's documentation. It's also possible that there could be an issue with the service itself, in which case you may need to contact AWS support for further assistance.
Remember to validate your inputs carefully and follow the model-specific guidelines in the Amazon Bedrock documentation to avoid validation exceptions.
Sources
- Resolve validation exception errors | AWS re:Post
- Troubleshooting tool calls - Amazon Nova

I was able to receive a correct output response from the model several times, but I occasionally got this error for the same request.