There is no correlation between the amount of memory and network bandwidth: all functions get the same bandwidth. It may seem like smaller functions get less bandwidth, but that is only because they do not have enough CPU to actually drive the maximum bandwidth.
That said, I am not sure network bandwidth is your limiting factor.
What occurs to me is that each thread might be creating a new TLS connection to DynamoDB. Each connection takes a little time to set up (TCP handshake, TLS negotiation, authentication), so you might try experimenting with thread counts and batch sizes.
Hi,
Since you mention the upper limit of 10,240 MB, I understand that you already know there is a correlation between allocated memory and available CPU power: see https://docs.aws.amazon.com/lambda/latest/operatorguide/computing-power.html
The amount of memory also determines the amount of virtual CPU available to a function. Adding more memory proportionally increases the amount of CPU, increasing the overall computational power available. If a function is CPU-, network- or memory-bound, then changing the memory setting can dramatically improve its performance.
But since you mention similar performance at 2,048 MB, I would look at DynamoDB RCUs: see https://aws.amazon.com/dynamodb/pricing/provisioned/
Read capacity unit (RCU): Each API call to read data from your table is a read request. Read requests can be strongly consistent, eventually consistent, or transactional. For items up to 4 KB in size, one RCU can perform one strongly consistent read request per second. Items larger than 4 KB require additional RCUs. For items up to 4 KB in size, one RCU can perform two eventually consistent read requests per second. Transactional read requests require two RCUs to perform one read per second for items up to 4 KB. For example, a strongly consistent read of an 8 KB item would require two RCUs, an eventually consistent read of an 8 KB item would require one RCU, and a transactional read of an 8 KB item would require four RCUs. See Read Consistency for more details.
You should try a higher provisioned read capacity (RCUs) and see if throughput improves.
Finally, you should compare your performance under strongly consistent and eventually consistent reads, and get rid of strongly consistent reads if that level of consistency is not needed (or is guaranteed by other means in your app). See https://medium.com/expedia-group-tech/dynamodb-guidelines-for-faster-reads-and-writes-3b172b4c2120
Best.
Didier
The documentation states that the default read throughput for on-demand mode is 40,000 read request units, which is 80,000 eventually consistent reads per second for items up to 4 KB. Since my items are 40 KB, I should be able to read 8,000 items per second. Why don't I get this throughput?
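The arithmetic behind the 8,000 figure checks out against the RCU rules quoted earlier; it can be verified with a short script (a sketch, rounding item sizes up to 4 KB blocks as the pricing rules describe):

```python
import math

def rcus_per_read(item_size_kb, consistency="eventual"):
    """RCUs consumed by a single read, per the DynamoDB capacity rules."""
    blocks = math.ceil(item_size_kb / 4)  # item size in 4 KB blocks, rounded up
    if consistency == "strong":
        return blocks          # 1 RCU per 4 KB block
    if consistency == "eventual":
        return blocks / 2      # half the strongly consistent cost
    if consistency == "transactional":
        return blocks * 2      # double the strongly consistent cost
    raise ValueError(f"unknown consistency: {consistency}")

def reads_per_second(read_units, item_size_kb, consistency="eventual"):
    return read_units / rcus_per_read(item_size_kb, consistency)

# 40 KB items, eventually consistent, 40,000 read units:
# 10 blocks -> 5 RCUs per read -> 8,000 reads per second.
print(reads_per_second(40_000, 40))  # -> 8000.0
```

So the expected ceiling really is 8,000 reads per second; if the observed rate is lower, the bottleneck is elsewhere (client-side parallelism, connection setup, or per-partition limits) rather than the table-level quota.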