
Discrepancy in Guardrail Enforcement between AWS Console and API for ChatBedrock


Hi, I'm currently using AWS Bedrock with LangChain to implement guardrails for prompt-injection prevention in my chatbot application. However, I've encountered an issue where the guardrails are enforced correctly in the AWS Console but not when I make API calls from my code.

3 Answers

Did you try this example? https://python.langchain.com/v0.2/docs/integrations/llms/bedrock/

# Guardrails for Amazon Bedrock with trace
from langchain_aws import BedrockLLM

# BedrockAsyncCallbackHandler is the user-defined async callback from the
# linked example; it logs the guardrail trace when a policy intervenes.
llm = BedrockLLM(
    credentials_profile_name="bedrock-admin",
    model_id="<Model_ID>",
    model_kwargs={},
    guardrails={"id": "<Guardrail_ID>", "version": "<Version>", "trace": True},
    callbacks=[BedrockAsyncCallbackHandler()],
)

You can enable the trace and see exactly what happened.

AWS
answered 2 years ago
  • Hi, I tried using ChatBedrock instead of BedrockLLM since BedrockLLM doesn't support Claude v3 models. However, prompt injection is still possible even after providing my guardrail details. I also tried using input tags like <amazon-bedrock-guardrails-guardContent_xyz>, but it remains prone to prompt injection.
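  • For anyone hitting the same issue with ChatBedrock: here is a minimal sketch of attaching a guardrail. Note that recent langchain-aws releases expect guardrailIdentifier/guardrailVersion key names (rather than BedrockLLM's older id/version); the model ID and guardrail values below are placeholders, so check your installed version.

```python
def guardrail_config(guardrail_id: str, version: str) -> dict:
    # Build the guardrails mapping for ChatBedrock (key names per recent
    # langchain-aws releases; older releases used "id"/"version" instead).
    # "trace": "enabled" asks Bedrock to report which policy intervened.
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
        "trace": "enabled",
    }

# Live usage (requires AWS credentials and an existing guardrail):
# from langchain_aws import ChatBedrock
# llm = ChatBedrock(
#     model_id="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model
#     guardrails=guardrail_config("<Guardrail_ID>", "<Version>"),
# )
```

If the guardrail still doesn't fire, the trace in the response is the quickest way to confirm whether the guardrail was attached to the request at all.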


Since LangChain is a third-party tool rather than an AWS one, it may be hard to address this issue here. If you don't have a specific reason to use LangChain, I suggest checking out the new unified AWS Converse API, which also supports guardrails:

https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-use-converse-api.html
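As a sketch, a Converse request with a guardrail attached looks like the following (the model ID and guardrail values are placeholders; the request shape follows the linked Guardrails-with-Converse documentation):

```python
def build_converse_request(guardrail_id: str, version: str, user_text: str) -> dict:
    """Build the kwargs for bedrock-runtime's converse() call with a guardrail."""
    return {
        "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": version,
            "trace": "enabled",  # include the guardrail trace in the response
        },
    }

# Live call (requires AWS credentials and an existing guardrail):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(
#     **build_converse_request("<Guardrail_ID>", "<Version>", "Hello")
# )
# print(response["output"]["message"]["content"][0]["text"])
```

Because the guardrail is part of the request itself, this rules out the "works in Console but not via API" class of problems: if guardrailConfig is present, the service evaluates it.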

AWS
answered 2 years ago

You may also want to check Amazon Bedrock Knowledge Bases. It provides a very easy RAG implementation with a friendly API that also supports guardrails and preserves context. https://aws.amazon.com/bedrock/knowledge-bases/ https://docs.aws.amazon.com/bedrock/latest/userguide/kb-test-query.html
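As I understand the Knowledge Bases docs, a guardrail is attached inside the RetrieveAndGenerate request's generation configuration. A sketch of that request shape (the knowledge base ID, model ARN, and guardrail values are placeholders, and the key names should be verified against the current API reference):

```python
def build_rag_request(kb_id: str, model_arn: str,
                      guardrail_id: str, version: str, query: str) -> dict:
    """Build kwargs for bedrock-agent-runtime's retrieve_and_generate() call."""
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
                "generationConfiguration": {
                    # Guardrail applied to the generation step of RAG
                    "guardrailConfiguration": {
                        "guardrailId": guardrail_id,
                        "guardrailVersion": version,
                    },
                },
            },
        },
    }

# Live call (requires AWS credentials, a knowledge base, and a guardrail):
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.retrieve_and_generate(
#     **build_rag_request("<KB_ID>", "<Model_ARN>",
#                         "<Guardrail_ID>", "<Version>", "my question")
# )
```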

AWS
answered 2 years ago
