Best way to do low-latency lookups for ML model prediction caching?


I'd like to improve our AWS setup for ML model prediction caching. Let's say we want to serve a lookup table of past predictions upstream of a REST API, and consult it on every prediction request so we can skip the model whenever a given input-output pair has already been computed. It's OK to wait several minutes between cache updates, and typical TTLs could range from minutes to days. What is the best way to do that in AWS?

  1. Use the caching functionality of API GW?
  2. Use a warm Lambda@Edge that locally has a lookup file with past predictions?
  3. Use DAX or ElastiCache upstream of the model call (rough sketch of what I mean after this list)? Is there a way to have DAX or ElastiCache live at the edge?
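
To make option 3 concrete, this is roughly the read-through lookup pattern I have in mind. It's only a minimal sketch: the ElastiCache endpoint, the SageMaker endpoint name, and the TTL value are placeholders, not an existing setup.

```python
import hashlib
import json

import boto3
import redis

# Placeholder ElastiCache for Redis endpoint; replace with your cluster's address.
CACHE = redis.Redis(host="my-predictions.xxxxxx.cache.amazonaws.com", port=6379)
SAGEMAKER = boto3.client("sagemaker-runtime")

CACHE_TTL_SECONDS = 3600  # anywhere from minutes to days, per the requirements above


def predict(features: dict) -> dict:
    # Key the cache on a stable hash of the input payload.
    payload = json.dumps(features, sort_keys=True)
    key = "pred:" + hashlib.sha256(payload.encode()).hexdigest()

    cached = CACHE.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the model entirely

    # Cache miss: call the model (here via a hypothetical SageMaker endpoint).
    response = SAGEMAKER.invoke_endpoint(
        EndpointName="my-model-endpoint",
        ContentType="application/json",
        Body=payload,
    )
    prediction = json.loads(response["Body"].read())

    # Store the result so identical inputs within the TTL are served from cache.
    CACHE.set(key, json.dumps(prediction), ex=CACHE_TTL_SECONDS)
    return prediction
```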
1 Answer
Accepted Answer

You can use CloudFront in front of your API to cache HTTP responses according to the Cache-Control header sent by your origin. Please note that this would only work for GET/HEAD HTTP requests.
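
For example, an API Gateway Lambda proxy integration behind CloudFront could set the Cache-Control header like this. This is a minimal sketch under assumptions: the max-age value and the get_prediction helper are illustrative, not part of your existing stack.

```python
import json


def handler(event, context):
    """Lambda proxy handler behind API Gateway + CloudFront.

    The Cache-Control header tells CloudFront how long it may serve this
    response from its edge cache (here one hour; tune to your TTL needs).
    """
    # Hypothetical helper that runs the model or reads a precomputed lookup table.
    prediction = get_prediction(event.get("queryStringParameters") or {})

    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            "Cache-Control": "public, max-age=3600",
        },
        "body": json.dumps(prediction),
    }


def get_prediction(params):
    # Placeholder: in practice this would invoke the model or a lookup table.
    return {"input": params, "prediction": None}
```

Also make sure the CloudFront cache policy includes the query string (and any headers the prediction depends on) in the cache key; otherwise different inputs would collide on the same cached response.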

achraf (AWS Expert)
answered 4 years ago
