Bedrock AWS HIPAA compliance with third-party (OpenAI) LLM API calls


I came across claims that Bedrock calls are HIPAA compliant even with OpenAI traffic, and I'm seeking clarification. Generally, medical documents must be sterilized before being sent to publicly accessible models such as GPT-4. If AWS offers some sort of "do not record" option unique to its Bedrock offerings, I'd very much like to know about it. I've seen Bedrock claims along the lines of "all endpoints are HIPAA compliant," but obviously if there is a third-party service along the information custody chain, that claim is irrelevant. Please provide clarity.

2 Answers

Hi Russel, you can find further information on Amazon Bedrock compliance validation in the user guide.

In general, the Security section of the user guide and the security section of the FAQ are the recommended starting points for any compliance-related questions.

To download security and compliance documents you can use AWS Artifact.
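If you prefer to pull those documents programmatically rather than through the console, here is a minimal sketch using the AWS Artifact API via boto3. This assumes your SDK version includes the artifact client; the report ID and term token below are placeholders for illustration only.

```python
import boto3

# AWS Artifact API client -- requires a recent SDK version plus
# artifact:ListReports / artifact:GetReport IAM permissions.
artifact = boto3.client("artifact", region_name="us-east-1")

# List the compliance reports available to the account and surface
# HIPAA-related documents (the BAA itself is managed separately under
# AWS Artifact agreements).
for report in artifact.list_reports().get("reports", []):
    if "HIPAA" in report.get("name", ""):
        print(report["id"], report["name"])

# Download a specific report: the term token comes from get_term_for_report,
# and both values below are placeholders for illustration.
download = artifact.get_report(
    reportId="report-abc123EXAMPLE",
    reportVersion=1,
    termToken="term-token-EXAMPLE",
)
print(download["documentPresignedUrl"])
```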

AWS
answered 2 months ago

Sterilizing PHI before sending it to OpenAI is a safe minimum practice. According to its public documentation, ChatGPT does not offer a BAA (so assume any PHI is handled insecurely and is likely used to train the model in question). AWS will sign a BAA with our covered-entity customers, and Amazon Bedrock is a HIPAA eligible service -- meaning that, if configured and used appropriately, it can be used in HIPAA-regulated solutions (or be part of a solution that is HIPAA compliant). While Bedrock encrypts the data it handles at rest and in transit, HIPAA compliance as a whole is a bigger conversation, so the links in the previous answer should be read and understood fully.
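To make the custody chain concrete: inference requests to models hosted on Bedrock (including the OpenAI open-weight models) go to the Bedrock runtime endpoint in your own AWS account and Region over TLS, and per the Bedrock security documentation they are not shared with third-party model providers or used for model training. Below is a minimal sketch using boto3's Converse API; the model ID and prompt are placeholders, so substitute the exact ID enabled in your Region.

```python
import boto3

# Inference goes to the Bedrock runtime endpoint in your own account/Region
# over TLS; prompts are not forwarded to the model provider's own API.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder model ID -- substitute the exact ID shown in the Bedrock
# console for the model enabled in your Region.
MODEL_ID = "openai.gpt-oss-120b-1:0"

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            # Only send PHI once your BAA with AWS is in place and the
            # workload is configured per the HIPAA eligibility guidance.
            "content": [{"text": "Summarize this de-identified clinical note: ..."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

Model invocation logging and CloudTrail auditing can also be enabled as part of the broader compliance controls discussed above.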

As an aside, Amazon Q, Q Business, and CodeWhisperer (Q Developer/Builder) should not be fed PHI -- these services are specialized implementations of the same models that Bedrock uses, but they are not intended for medical record purposes.

AWS
answered 23 days ago
