I'd like to improve our AWS setup for caching ML model predictions. The idea is to keep a lookup table of past predictions in front of the REST API that serves the model, and check it on every request so we avoid re-running the model when a given input has already been scored. It's fine if cache updates lag by several minutes, and a typical TTL would be anywhere from minutes to days (a rough sketch of the flow I have in mind is below the list). What is the best way to do this in AWS?
- Use the built-in response caching of API Gateway?
- Use a warm Lambda@Edge function that keeps a local lookup file of past predictions?
- Use DAX or ElastiCache in front of the model call? Is there a way to run DAX or ElastiCache at the edge?
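
To make the pattern concrete, here is a minimal sketch of the lookup-before-model flow using a DynamoDB table (which DAX could front) and a SageMaker endpoint. The table name, endpoint name, key schema, and TTL value are placeholders I made up for illustration, not an existing setup:

```python
import hashlib
import json
import time

import boto3

# Placeholder resources -- table name, endpoint name, and TTL are assumptions.
dynamodb = boto3.resource("dynamodb")
cache_table = dynamodb.Table("prediction-cache")  # PK: input_hash, TTL attribute: expires_at
sagemaker = boto3.client("sagemaker-runtime")
CACHE_TTL_SECONDS = 3600  # anywhere from minutes to days per the requirements


def cache_key(features: dict) -> str:
    """Deterministic hash of the input so identical requests map to the same cache item."""
    canonical = json.dumps(features, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()


def predict(features: dict) -> dict:
    key = cache_key(features)

    # 1. Look up a previously computed prediction.
    item = cache_table.get_item(Key={"input_hash": key}).get("Item")
    if item and item["expires_at"] > int(time.time()):
        return json.loads(item["prediction"])

    # 2. Cache miss (or expired): run the model.
    response = sagemaker.invoke_endpoint(
        EndpointName="my-model-endpoint",  # placeholder
        ContentType="application/json",
        Body=json.dumps(features),
    )
    prediction = json.loads(response["Body"].read())

    # 3. Store the result; DynamoDB's TTL feature expires it after expires_at.
    cache_table.put_item(
        Item={
            "input_hash": key,
            "prediction": json.dumps(prediction),
            "expires_at": int(time.time()) + CACHE_TTL_SECONDS,
        }
    )
    return prediction
```

The same check-then-write logic could sit behind any of the three options above; the open question is where that cache should live (API Gateway, at the edge, or in a dedicated cache service) given the latency and TTL requirements.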