Hi Jay,
There isn’t a specific charge for using the Bedrock knowledge bases themselves, but you will incur costs for the AI models and vector databases associated with them. Here’s a breakdown:
1. AI Models: You'll be charged based on the AI models you use. Pricing details are available here: https://aws.amazon.com/bedrock/pricing/?nc1=h_ls
2. Vector Databases: If you're using OpenSearch Serverless as part of your setup, you'll need to consider its costs. Pricing details can be found here: https://aws.amazon.com/opensearch-service/pricing/?nc1=h_ls#Amazon_OpenSearch_Serverless
The overall cost will be the sum of the vector store and the model inferences. You can get a more precise estimate by looking at the input and output tokens during model inference, which will help you calculate costs based on usage.
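The token-based estimate described above can be sketched in a few lines. The per-1,000-token prices below are placeholders, not actual Bedrock rates; substitute the current numbers for your chosen model from the Bedrock pricing page.

```python
# Hypothetical per-1,000-token prices in USD -- replace with the
# current rates for your model from the Bedrock pricing page.
INPUT_PRICE_PER_1K = 0.003
OUTPUT_PRICE_PER_1K = 0.015

def inference_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single model invocation from its token counts."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a query that consumed 1,200 input and 350 output tokens.
print(f"${inference_cost(1200, 350):.6f}")  # $0.008850
```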
Hello.
There is no charge for the Bedrock knowledge bases feature itself, but there is a charge for the AI models and vector databases used.
AI model pricing is listed in the document below.
https://aws.amazon.com/bedrock/pricing/?nc1=h_ls
In your case, I believe you are using OpenSearch Serverless, which is created together with the knowledge base, so check the pricing in the document below.
https://aws.amazon.com/opensearch-service/pricing/?nc1=h_ls#Amazon_OpenSearch_Serverless
Hi,
Documentation says clearly: "When using Agents for Amazon Bedrock and Knowledge Bases for Amazon Bedrock, you are only charged for the models and the vector databases you use with these capabilities."
See https://aws.amazon.com/bedrock/pricing/?nc1=h_ls to confirm.
So, you have to select a vector store based on the list you see at https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html
The usual cost of a vector store has two components: the size of the vector data stored plus the number of requests made to the store.
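As a generic illustration of those two components, the sketch below uses invented unit prices; they are not any provider's actual rates (OpenSearch Serverless, for instance, bills by compute units and storage, so check its pricing page for the real billing dimensions).

```python
# Invented unit prices, for illustration only -- not real rates.
STORAGE_PRICE_PER_GB_MONTH = 0.10   # hypothetical $/GB-month of vector data
REQUEST_PRICE_PER_10K = 0.02        # hypothetical $ per 10,000 requests

def monthly_store_cost(data_gb: float, requests: int) -> float:
    """Generic monthly vector-store cost: storage plus requests."""
    storage = data_gb * STORAGE_PRICE_PER_GB_MONTH
    queries = (requests / 10_000) * REQUEST_PRICE_PER_10K
    return storage + queries

# Example: 5 GB of vectors and 200,000 queries in a month.
print(monthly_store_cost(5.0, 200_000))  # 0.9
```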
Then, you also have to select an LLM among the ones supported on Bedrock: see https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-supported.html Each of these models has a different cost: see https://aws.amazon.com/bedrock/pricing/?nc1=h_ls
The cost of your setup will be the sum of the vector store cost and the cost of LLM inferences.
To estimate the cost of inferences, you can obtain the number of input and output tokens from the response metadata returned by the LLM: run some trials with queries representative of your future workload to compute how much they will cost in aggregate.
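The trial-and-aggregate approach above can be sketched as follows. The response entries mimic the `usage` metadata block a Bedrock model response carries (field names as in the Converse API; verify them for your model and invocation method), and the prices are placeholders to be replaced with current rates.

```python
# Sample trial responses, shaped like the "usage" metadata a Bedrock
# response carries (Converse API field names; verify for your model).
trial_responses = [
    {"usage": {"inputTokens": 1500, "outputTokens": 400}},
    {"usage": {"inputTokens": 900,  "outputTokens": 250}},
    {"usage": {"inputTokens": 2100, "outputTokens": 600}},
]

# Placeholder per-1,000-token prices in USD; substitute current rates.
INPUT_PRICE_PER_1K = 0.003
OUTPUT_PRICE_PER_1K = 0.015

def aggregate_cost(responses) -> float:
    """Sum the inference cost over a batch of trial responses."""
    total = 0.0
    for r in responses:
        usage = r["usage"]
        total += (usage["inputTokens"] / 1000) * INPUT_PRICE_PER_1K
        total += (usage["outputTokens"] / 1000) * OUTPUT_PRICE_PER_1K
    return total

print(f"Aggregate trial cost: ${aggregate_cost(trial_responses):.5f}")
```

Scale the aggregate by the ratio of your expected monthly query volume to the number of trial queries to project a monthly inference bill.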
Best,
Didier