Scaling the number of nodes could help optimize cost, but there is no native way to manage this, so it would require custom logic and management. In general, DAX is more efficient with fewer, larger nodes rather than many smaller ones. DAX cluster size is typically driven not by dataset size but by read/write throughput (object size relative to cache hits and misses). Each node is a full replica, so the dataset is not sharded across the cluster.
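To illustrate why throughput, not dataset size, drives sizing, here is a back-of-envelope sketch. The function name and all numbers are hypothetical, not AWS guidance; the key point it encodes is that reads spread across replicas while writes land on every replica, so adding nodes does not reduce per-node write load:

```python
def required_node_throughput_mbps(read_rps: float, write_rps: float,
                                  avg_item_kb: float, node_count: int) -> float:
    """Rough per-node network throughput estimate for a DAX cluster.

    Every node holds a full replica, so writes are applied on every
    node, while reads load-balance across the nodes.
    """
    reads_per_node = read_rps / node_count   # reads spread across replicas
    writes_per_node = write_rps              # writes hit every replica
    kb_per_sec = (reads_per_node + writes_per_node) * avg_item_kb
    return kb_per_sec * 8 / 1000             # KB/s -> Mbit/s

# Example: 50k reads/s, 2k writes/s, 4 KB average items
for nodes in (3, 6):
    print(nodes, round(required_node_throughput_mbps(50_000, 2_000, 4.0, nodes), 1))
```

Doubling the node count here roughly halves the read share per node but leaves the write share untouched, which is one reason fewer, larger nodes are often the more efficient shape.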
It would be helpful to understand the purpose of DAX in their design. If it is for latency, what is the latency requirement for the read calls? Direct calls to a DynamoDB table deliver consistent single-digit-millisecond latency, which may already meet their performance requirement and could be more cost-effective than running a DAX cluster.
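One way to frame that question is a simple latency-budget check. This is a sketch under stated assumptions: the function name, the default latency figures, and the hit-ratio blend are all illustrative placeholders, not AWS published numbers:

```python
def dax_likely_needed(p99_requirement_ms: float,
                      cache_hit_ratio: float,
                      ddb_p99_ms: float = 9.0,    # assumed direct-DynamoDB p99
                      dax_hit_p99_ms: float = 1.0  # assumed DAX cache-hit p99
                      ) -> bool:
    """Return True only if direct DynamoDB reads miss the latency
    target AND the blended DAX estimate would meet it.

    A cache miss still pays the DynamoDB round trip on top of the
    DAX hop, so the estimate blends hits and misses by hit ratio.
    """
    if ddb_p99_ms <= p99_requirement_ms:
        return False  # direct table reads already meet the target
    dax_est = (cache_hit_ratio * dax_hit_p99_ms
               + (1 - cache_hit_ratio) * (ddb_p99_ms + dax_hit_p99_ms))
    return dax_est <= p99_requirement_ms

print(dax_likely_needed(15.0, 0.9))  # loose target: direct reads suffice
print(dax_likely_needed(5.0, 0.9))   # tight target: caching layer helps
```

If the check returns False for their workload, the cost of the DAX cluster is buying latency headroom they may not need.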