Hi,
Your use case is fine-grained access control for application users accessing the KB+LLM solution. Since this is specific to the custom application you are developing, AWS offers building blocks for achieving it.
Since you are already using Bedrock KB, Agents for Amazon Bedrock could be a natural fit here. It can execute multi-step tasks based on natural language input and an organisation's data and policies. These managed agents orchestrate interactions between different components, including language models, API integrations, user conversations, and knowledge bases loaded with the organisation's data. You could also use the trace capability to follow the chain of thought reasoning used as the plan is carried out, to view the intermediate steps in the orchestration process and troubleshoot issues.
Please check this video to get an idea. For your use case, if you include an employee id (instead of a customer id, as in the video) in the request and maintain a mapping between employee ids and knowledge bases (e.g. in a DynamoDB table), you should be able to restrict each employee to the right KB. Your approach of segregating KBs is correct.
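As a rough sketch of that mapping lookup (the table name `EmployeeKbMapping` and the attribute names are assumptions for illustration, not part of any AWS API), the check could look like this:

```python
# Hypothetical sketch of the employee-id -> knowledge-base mapping described
# above. Table and attribute names are assumptions for illustration.

def knowledge_base_for(table, employee_id):
    """Return the knowledge base id the employee may query, or None.

    `table` is any object exposing DynamoDB's Table.get_item interface,
    e.g. boto3.resource("dynamodb").Table("EmployeeKbMapping").
    """
    resp = table.get_item(Key={"employee_id": employee_id})
    item = resp.get("Item")
    return item.get("knowledge_base_id") if item else None
```

The agent's action-group Lambda (or your API layer) would run this lookup before querying, and refuse the request when it returns `None`.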
Alternatively, you could create separate API Gateway routes backed by Lambda functions and consider using [Amazon Verified Permissions](https://docs.aws.amazon.com/verifiedpermissions/latest/userguide/what-is-avp.html) (AVP), a scalable, fine-grained, centralised permissions management and authorisation service for custom applications. AVP integrates seamlessly with API Gateway and Cognito.
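As a hedged sketch of how a Lambda behind API Gateway might consult AVP (the policy store id, entity types, and action id below are placeholders, not values from your application), the check boils down to an IsAuthorized call:

```python
# Hypothetical sketch: build the parameters for Amazon Verified Permissions'
# IsAuthorized API. Entity types and the action id are placeholders.

def build_is_authorized_params(policy_store_id, employee_id, kb_id):
    """Shape the IsAuthorized request for an employee querying a KB."""
    return {
        "policyStoreId": policy_store_id,
        "principal": {"entityType": "App::Employee", "entityId": employee_id},
        "action": {"actionType": "App::Action", "actionId": "QueryKnowledgeBase"},
        "resource": {"entityType": "App::KnowledgeBase", "entityId": kb_id},
    }

# The Lambda would then call, roughly:
#   avp = boto3.client("verifiedpermissions")
#   decision = avp.is_authorized(**build_is_authorized_params(...))["decision"]
# and allow the request only when decision == "ALLOW".
```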
Thanks, Rama
In addition to this, you can optionally add metadata to the files in your data source. Metadata allows your data to be filtered during a knowledge base query. Please refer to this. It could also be helpful in your use case.
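As an illustrative sketch of that filtering (the metadata key `department` is an assumption; you would use whatever metadata you attach to your source files), a filtered Retrieve request could be shaped like this:

```python
# Hypothetical sketch: shape a Bedrock knowledge base Retrieve request with a
# metadata filter so results are restricted to documents tagged for the
# caller. The metadata key "department" is an assumption for illustration.

def build_retrieve_request(kb_id, query_text, department):
    """Build Retrieve parameters with an equals-filter on document metadata."""
    return {
        "knowledgeBaseId": kb_id,
        "retrievalQuery": {"text": query_text},
        "retrievalConfiguration": {
            "vectorSearchConfiguration": {
                "filter": {"equals": {"key": "department", "value": department}}
            }
        },
    }

# Sent via the runtime client, roughly:
#   boto3.client("bedrock-agent-runtime").retrieve(**build_retrieve_request(...))
```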
Thanks Rama, I would assume that solution #2 still needs a mapping between employee ids and metadata (e.g. in a DynamoDB table) so that filtering can happen during query execution. I am interested in what the response from the LLM would be if the employee id didn't match any metadata item (i.e. the employee did not have access): would it just return "no information found" or "you do not have authorization for the requested data"?
Based on the default prompt template, there is this condition: "..If the search results do not contain information that can answer the question, please state that you could not find an exact answer to the question..". So if the search results are empty, i.e. fully filtered out, you will most likely get "I could not find an exact answer to the question". Best to test this, though!