2 Answers
Hello Imaran,
I would caution you to really think through this model.
Your ability to protect your IP rests entirely on obfuscating the source and protecting your model. However, both of these will reside in your client's account, and based on the description above, the model will likely be decrypted in the memory of a host somewhere in order to run.
The real way to architect this would be to run the software and model in your own AWS account and provide an API that your customers can use to interact with the model you host. You can even use services like PrivateLink to ensure that customer access to the API is completely private and secure.
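A minimal sketch of that pattern, using a tiny local HTTP server as a stand-in for the vendor-hosted endpoint. In a real deployment the service would run in the vendor's AWS account behind API Gateway or a load balancer, reached through a PrivateLink interface endpoint; the `/invoke` path, payload shape, and `predict` function here are illustrative assumptions, not a real API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Toy stand-in for the proprietary model. In the real architecture this
# code and the model weights stay in the vendor's account and never
# reach the customer.
def predict(features):
    return {"score": sum(features)}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = json.dumps(predict(body["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Customer side: only the request and response cross the account
# boundary; the model itself is never shipped to the client.
url = f"http://127.0.0.1:{server.server_port}/invoke"
req = Request(url, data=json.dumps({"features": [1, 2, 3]}).encode(),
              headers={"Content-Type": "application/json"})
with urlopen(req) as resp:
    out = json.loads(resp.read())
print(out)  # {'score': 6}

server.shutdown()
```

The key property is that the model never leaves the vendor's environment; the customer only ever sees request/response traffic, which PrivateLink can keep off the public internet.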
Please let me know if I can answer any more questions. Best, Craig
answered a year ago
Hi Craig, thank you very much for your feedback; we have a few questions about the approach you suggested. The clients we are dealing with do not wish to share their data. If we expose the functionality as API endpoints, they would have to send us their data for processing. In my opinion the approach you suggested breaks this requirement, but please correct me if I'm wrong. That is the only reason we decided to run our code on the client side. Is there a better way to achieve this in AWS?
We found this article and tried to architect the system using FHE (Fully Homomorphic Encryption), but the approach is not yet mature: current FHE libraries do not support even the most popular AI models, let alone our own. https://aws.amazon.com/blogs/machine-learning/enable-fully-homomorphic-encryption-with-amazon-sagemaker-endpoints-for-secure-real-time-inferencing/
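For readers unfamiliar with the idea behind that article: homomorphic encryption lets a server compute on ciphertexts without ever seeing the plaintext. The sketch below is a toy Paillier cryptosystem, which is only *additively* homomorphic (not FHE) and uses absurdly small demo primes, so it is purely an illustration of the concept, not a secure or production scheme:

```python
import math
import random

# Toy Paillier keypair. Real deployments use ~1536-bit primes;
# these tiny values exist only to make the arithmetic visible.
p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1                       # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)            # valid because g = n + 1

def encrypt(m: int) -> int:
    # r must be coprime to n for decryption to work
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

def add_encrypted(c1: int, c2: int) -> int:
    # Multiplying ciphertexts adds the underlying plaintexts (mod n):
    # the server never learns either operand.
    return (c1 * c2) % n2

total = decrypt(add_encrypted(encrypt(20), encrypt(22)))
print(total)  # 42
```

The gap the comment above points at is exactly this: adding encrypted numbers is easy, but a modern AI model also needs multiplications and non-linear activations under encryption, which is where current FHE libraries still fall short.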