Thank you for your questions about Amazon Bedrock and its Knowledge Bases feature. I'll address each of your points:
- Regarding setting the system prompt so the LLM can draw on its pretrained data without hallucinating:
When using Amazon Bedrock Knowledge Bases, the primary goal is to ground the LLM's responses in the information contained within your knowledge base, rather than relying solely on the model's pretrained data. The knowledge base is designed to provide context and factual information to the LLM, which helps reduce hallucinations.
However, it's important to note that you don't have direct control over a "system prompt" in the same way you might with other LLM implementations. Instead, Bedrock manages the interaction between the knowledge base and the LLM internally. The focus is on using the Retrieval Augmented Generation (RAG) technique to enhance the model's responses with relevant information from your data sources.
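As a concrete illustration of the RAG flow described above, here is a minimal sketch of calling the RetrieveAndGenerate API with boto3. The knowledge base ID and model ARN are placeholders you would substitute with your own values:

```python
def build_rag_request(kb_id: str, model_arn: str, question: str) -> dict:
    """Build the payload for Bedrock's RetrieveAndGenerate API.

    kb_id and model_arn are placeholders -- substitute your own
    knowledge base ID and foundation-model ARN.
    """
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }


# To actually call the API (requires AWS credentials and Bedrock access):
#   import boto3
#   client = boto3.client("bedrock-agent-runtime")
#   request = build_rag_request("YOUR_KB_ID", "YOUR_MODEL_ARN",
#                               "What does our refund policy say?")
#   response = client.retrieve_and_generate(**request)
#   print(response["output"]["text"])
```

Bedrock retrieves relevant chunks from the knowledge base and grounds the model's answer in them; you shape the request, not the internal prompt assembly.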
- Regarding the use of conversation summary with LangChain and Bedrock:
While Bedrock does offer some conversation context management features, the specific functionality you're asking about (summarizing the user's query after each request) is not a built-in feature of Bedrock itself. However, you may be able to implement this kind of functionality using LangChain in combination with Bedrock.
LangChain is a flexible framework that allows you to build custom chains and agents. You could potentially create a chain that includes a summarization step after each user query. This would be separate from Bedrock's internal context management but could be integrated into your overall application flow.
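Independent of LangChain's specific classes, the pattern is simple: after each exchange, feed the running history through a summarization step. In this sketch, `summarize` is a stand-in for whatever you wire up (for example, a LangChain chain that prompts a Bedrock model to condense text):

```python
def run_turn(history: list, query: str, answer: str, summarize) -> str:
    """Record one user/assistant exchange, then return an updated
    running summary of the whole conversation.

    `summarize` stands in for your summarization step, e.g. a
    LangChain chain backed by a Bedrock model.
    """
    history.append(f"User: {query}\nAssistant: {answer}")
    return summarize("\n".join(history))


# Stub summarizer for illustration; replace it with an LLM-backed chain.
stub = lambda text: text[:80]
summary = run_turn([], "What is RAG?", "Retrieval-augmented generation.", stub)
```

The summary produced each turn can then be prepended to the next request, giving you explicit control over conversational memory outside of Bedrock.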
It's worth noting that Bedrock does provide some conversation context management. When using the RetrieveAndGenerate API, Bedrock manages short-term memory of the conversation to provide more contextual results. However, this is handled internally by the service and is not the same as explicitly summarizing each query.
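To use that built-in short-term memory, you echo the `sessionId` returned in each RetrieveAndGenerate response back on the next call. A small helper (my own sketch, not part of any SDK) makes the pattern explicit:

```python
def with_session(request: dict, session_id=None) -> dict:
    """Return a copy of a RetrieveAndGenerate payload that continues an
    existing session when session_id is set.

    The service includes a sessionId in every response; passing it back
    on the next call lets Bedrock reuse its short-term conversation
    context for that session.
    """
    request = dict(request)  # shallow copy so the original is untouched
    if session_id:
        request["sessionId"] = session_id
    return request
```

On the first turn you omit `sessionId`; on later turns you pass `response["sessionId"]` from the previous call.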
For more advanced conversation management or custom summarization, you would likely need to implement additional logic in your application, possibly using LangChain's capabilities in conjunction with Bedrock's APIs.
Sources
- Knowledge bases for Amazon Bedrock - AWS Prescriptive Guidance
- How Amazon Bedrock knowledge bases work - Amazon Bedrock
- Amazon Knowledge Bases for Bedrock Conversation Context | AWS re:Post
So am I not able to edit the system prompt and send it using the API? I ask because when I'm in the Test Knowledge Base stage, I see that I can.