Deploying a large-scale ML model


Hi, I am deploying an ML model with a retrieval component on AWS, and it has two parts:

  1. ML model: deployed using SageMaker. The model isn't big, so this part is simple.
  2. Retrieval: the ML model first retrieves information from a database using an approximate nearest neighbor (ANN) algorithm (like Annoy or ScaNN). The database needs to be loaded in memory at all times for really fast inference. However, the database is big (around 500 GB). What is the best way to deploy this database? Is SageMaker the best bet? (A rough sketch of the retrieval step follows below.)
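For what it's worth, here is a minimal sketch of the retrieval step using Annoy; the dimensionality, item count, and file path are illustrative assumptions, not details from the question. One point relevant to the 500 GB concern: Annoy memory-maps the saved index on load, so it does not have to pull the whole file into RAM up front.

```python
# A minimal sketch of ANN retrieval with Annoy, assuming 128-dim embeddings,
# random placeholder vectors, and a local file path (all illustrative).
import numpy as np
from annoy import AnnoyIndex

DIM = 128        # embedding dimensionality (assumption)
N_ITEMS = 1000   # tiny stand-in for the real corpus

# --- offline: build the index once and persist it to disk ---
index = AnnoyIndex(DIM, "angular")  # angular = cosine-like distance
rng = np.random.default_rng(0)
for item_id in range(N_ITEMS):
    index.add_item(item_id, rng.standard_normal(DIM))  # placeholder vectors
index.build(50)          # more trees -> better recall, bigger index file
index.save("items.ann")

# --- online: load via mmap instead of reading everything into RAM ---
# Annoy memory-maps the saved file, so the OS pages in only the parts
# of the index that queries actually touch.
query_index = AnnoyIndex(DIM, "angular")
query_index.load("items.ann")

ids, distances = query_index.get_nns_by_vector(
    rng.standard_normal(DIM), 10, include_distances=True
)
print(ids, distances)
```

If the mmap behavior is acceptable latency-wise, an instance with fast local NVMe storage may be a better fit than trying to hold all 500 GB in memory; whether it is acceptable depends on the query pattern raised in the comments below.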
  • Can you please clarify: what database are you using to store the data, and what kind of data is stored in it?

  • Also, I wonder if you could give an idea of how many times the DB would be queried for one model inference. Once? Many? The acceptable latency here might guide whether it's better/more practical to have the endpoint call out to a separate DB service, or necessary to try to wedge everything into the endpoint container's RAM.

asked 2 years ago · 90 views
No Answers
