All Content tagged with Generative AI on AWS

Innovate faster to reinvent customer experiences and applications

I am using Llama 3 70B through the AWS Bedrock SDK. I observe that 90%+ of responses are not in JSON, even though the prompt asks for it. Instead, I'm getting blank responses or plain string responses. Can anyone help me...
1 answer · 0 votes · 1153 views · asked 3 months ago
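For the JSON-output question above, here is a minimal sketch, assuming the boto3 `bedrock-runtime` client and the `meta.llama3-70b-instruct-v1:0` model ID; the prompt, the requested schema, and the defensive parsing are illustrative, not the asker's code. Llama instruct models on Bedrock take a raw prompt string, so one frequent cause of ignored formatting instructions is sending the text without the Llama 3 chat template.

```python
import json
import boto3

# Minimal sketch: invoke Llama 3 70B Instruct on Bedrock and parse JSON defensively.
# Model ID, prompt, and schema are assumptions for illustration, not the asker's code.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Wrap the instruction in the Llama 3 chat template; without it the model often
# ignores formatting instructions and returns free-form text.
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n"
    'Return ONLY a JSON object with keys "summary" and "sentiment" for this review: '
    '"Great product, fast delivery."<|eot_id|>'
    "<|start_header_id|>assistant<|end_header_id|>\n"
)

response = bedrock.invoke_model(
    modelId="meta.llama3-70b-instruct-v1:0",
    body=json.dumps({"prompt": prompt, "max_gen_len": 512, "temperature": 0.0}),
)
generation = json.loads(response["body"].read())["generation"]

# Parse defensively: the model may still wrap the JSON in extra text or code fences.
start, end = generation.find("{"), generation.rfind("}")
try:
    result = json.loads(generation[start : end + 1]) if start != -1 else None
except json.JSONDecodeError:
    result = None
print(result if result else f"Non-JSON output: {generation!r}")
```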
Is there any limit on the context length of an imported custom model in Bedrock? I'm trying to import a model from the Llama-3 family with a long context length, but context lengths beyond 32K don't seem to be...
1 answer · 0 votes · 1174 views · asked 3 months ago
Hi, I need to create a MongoDB vector store through AWS Bedrock's default chunking. I was able to create it, yet when I use metadata filters through the API's "Retrieve_and_generate"...
0 answers · 0 votes · 934 views · asked 3 months ago
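For the metadata-filter question above, a minimal sketch of a RetrieveAndGenerate call with a filter, assuming the boto3 `bedrock-agent-runtime` client; the knowledge base ID, model ARN, and the `department` metadata key are placeholders. Filter keys generally have to match attributes supplied in the `<document>.metadata.json` sidecar files ingested alongside the S3 documents.

```python
import boto3

# Minimal sketch of RetrieveAndGenerate with a metadata filter.
# Knowledge base ID, model ARN, and the metadata key/value are placeholders.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What is the refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBID123456",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
            "retrievalConfiguration": {
                "vectorSearchConfiguration": {
                    "numberOfResults": 5,
                    # Filter keys must match attributes provided in the
                    # <document>.metadata.json files ingested with the data source.
                    "filter": {"equals": {"key": "department", "value": "billing"}},
                }
            },
        },
    },
)
print(response["output"]["text"])
```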
I have a Lambda that queries a DynamoDB table for available times once the date is entered by the user; the Lambda then adds those available times to the session attributes and will elicit...
2 answers · 0 votes · 1122 views · asked 3 months ago
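The "session attribute" and "elicit slot" wording in the question above suggests an Amazon Lex V2 fulfillment Lambda; the sketch below assumes that, with placeholder table, intent, and slot names (`AvailableTimes`, `AppointmentDate`, `AppointmentTime`), not the asker's actual setup.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Minimal sketch of a Lex V2 Lambda: query DynamoDB for the times available on
# the entered date, stash them in session attributes, and elicit the next slot.
# Table, key, intent, and slot names are placeholders.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("AvailableTimes")

def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    date = intent["slots"]["AppointmentDate"]["value"]["interpretedValue"]

    # Assumes the table's partition key is "date" and each item has a "time" attribute.
    items = table.query(KeyConditionExpression=Key("date").eq(date))["Items"]
    times = ", ".join(item["time"] for item in items)

    session_attributes = event["sessionState"].get("sessionAttributes") or {}
    session_attributes["availableTimes"] = times

    return {
        "sessionState": {
            "sessionAttributes": session_attributes,
            "dialogAction": {"type": "ElicitSlot", "slotToElicit": "AppointmentTime"},
            "intent": intent,
        },
        "messages": [
            {
                "contentType": "PlainText",
                "content": f"Available times: {times}. Which one works for you?",
            }
        ],
    }
```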
I am curious to understand why Llama 3 70B is restricted to an output length of only 2048 tokens. Is there a way to increase the limit for me? Also, I get an exception about the number of calls I make...
1 answer · 0 votes · 1860 views · asked 4 months ago
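For the output-length and throttling question above, a minimal sketch assuming the boto3 `bedrock-runtime` client: it sets `max_gen_len` explicitly (2048 is the cap the question describes) and enables botocore's adaptive retry mode so throttling errors are retried with backoff. Raising the request-rate quota itself is usually a Service Quotas or support request rather than an SDK setting.

```python
import json
import boto3
from botocore.config import Config

# Minimal sketch: request the full 2048-token output cap described in the
# question and let botocore retry throttling errors with adaptive backoff
# instead of failing on the first ThrottlingException.
bedrock = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

body = json.dumps({
    "prompt": "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n"
              "Summarize the history of cloud computing.<|eot_id|>"
              "<|start_header_id|>assistant<|end_header_id|>\n",
    "max_gen_len": 2048,   # requesting more than the cap raises a validation error
    "temperature": 0.5,
})

response = bedrock.invoke_model(modelId="meta.llama3-70b-instruct-v1:0", body=body)
print(json.loads(response["body"].read())["generation"])
```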
Hello, I need help deleting a knowledge base. Here is the setup: I have a knowledge base with a data source pointing to Amazon S3 and the vector database in Amazon OpenSearch Serverless. My mistake...
3 answers · 0 votes · 1723 views · Raphael · asked 4 months ago
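For the knowledge-base deletion question above, a minimal teardown sketch assuming the boto3 `bedrock-agent` client; the IDs are placeholders. Deleting the knowledge base does not remove the Amazon OpenSearch Serverless collection, so that is cleaned up separately if it is no longer needed.

```python
import boto3

# Minimal teardown sketch: delete the knowledge base's data source(s) first,
# then the knowledge base itself. IDs are placeholders. The OpenSearch
# Serverless collection is a separate resource and is not removed by these calls.
agent = boto3.client("bedrock-agent", region_name="us-east-1")

kb_id = "KBID123456"
for ds in agent.list_data_sources(knowledgeBaseId=kb_id)["dataSourceSummaries"]:
    agent.delete_data_source(knowledgeBaseId=kb_id, dataSourceId=ds["dataSourceId"])

agent.delete_knowledge_base(knowledgeBaseId=kb_id)

# Remove the vector store separately if nothing else uses it.
aoss = boto3.client("opensearchserverless", region_name="us-east-1")
aoss.delete_collection(id="collection-id-placeholder")
```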
Hi, I could not find information about the numerical precision of models such as Mistral and Llama that are accessible on Amazon Bedrock. Could you please provide this information?
2 answers · 0 votes · 1294 views · cem · asked 4 months ago