Could you please clarify: what database are you using to store the data, and what kind of data is stored in it?
Also, could you give an idea of how many times the DB would be queried for one model inference? Once? Many? The acceptable latency here might guide whether it's better (and practical) to have the endpoint call out to a separate DB service, or necessary to try to fit everything into the endpoint container's RAM.
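To make the trade-off concrete, here is a minimal sketch of the two options being contrasted. It is purely illustrative: it uses Python's built-in `sqlite3` as a stand-in for whatever database actually backs the endpoint, and the table/column names (`features`, `user_id`, `score`) are hypothetical.

```python
import sqlite3

def build_demo_db(path=":memory:"):
    """Create a tiny stand-in table simulating the external data store."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE features (user_id TEXT PRIMARY KEY, score REAL)")
    conn.executemany("INSERT INTO features VALUES (?, ?)",
                     [("u1", 0.9), ("u2", 0.1)])
    conn.commit()
    return conn

# Option A: call out to the database on every inference request.
# Adds a network/query round trip per lookup, but the dataset can be
# arbitrarily large and updated independently of the endpoint.
def predict_with_db_call(conn, user_id):
    row = conn.execute(
        "SELECT score FROM features WHERE user_id = ?", (user_id,)
    ).fetchone()
    return row[0] if row else 0.0

# Option B: load everything into the container's RAM once (e.g. at
# model-load time), then serve lookups from an in-process dict.
# Lowest latency per request, but the whole table must fit in memory.
def load_cache(conn):
    return dict(conn.execute("SELECT user_id, score FROM features"))

def predict_from_cache(cache, user_id):
    return cache.get(user_id, 0.0)
```

If the model needs many lookups per inference, Option B's per-lookup cost advantage compounds quickly; if it needs only one lookup and latency budgets are loose, Option A keeps the container small and the data fresh.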