Hi,
I created a Kinesis Data Stream with 50 pre-defined shards and set up a Lambda function triggered by this KDS. I configured the trigger options as follows: batch size = 1, batch window = 0, and Concurrent batches per shard = 10. Then, on a high-performance EC2 instance, I used Python's multiprocessing module to run 2,000 processes concurrently, with each process writing to the KDS in a loop 10 times, for a total of about 20,000 records written in roughly 10 seconds.
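The producer setup described above could be sketched roughly as follows (an illustrative sketch, not the author's actual script; the stream name and record payload are hypothetical, and running it requires AWS credentials and an existing stream):

```python
import json
import multiprocessing
import random
import string

STREAM_NAME = "my-stream"  # hypothetical -- substitute your own stream name


def random_partition_key(length=16):
    """Random key so records spread roughly evenly across the shards."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=length))


def write_records(process_index, records_per_process=10):
    """One worker process: write `records_per_process` records in a loop."""
    import boto3  # imported inside the worker so each process gets its own client

    kinesis = boto3.client("kinesis")
    for i in range(records_per_process):
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps({"process": process_index, "seq": i}).encode("utf-8"),
            PartitionKey=random_partition_key(),
        )


def write_all(total_processes=2000):
    """Fan out to 2,000 worker processes: 2000 * 10 = 20,000 records."""
    with multiprocessing.Pool() as pool:
        pool.map(write_records, range(total_processes))
```

Calling `write_all()` reproduces the described load (2,000 processes × 10 records each).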
Based on my trigger settings, I expected 50 (shards) * 10 (Concurrent batches per shard) = 500 Lambda instances to run simultaneously (this does not exceed the default limit of 1,000 concurrently executing Lambdas). With 20,000 records distributed across 500 Lambda instances, each instance would handle about 40 records; at an estimated 0.7 seconds per invocation, the total execution time should be around 30 seconds. However, according to the CloudWatch log streams, processing took about 30 minutes in total.
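The estimate above works out as follows (a quick arithmetic sketch; the 0.7 s per-invocation figure is the author's own estimate):

```python
shards = 50
concurrent_batches_per_shard = 10   # the "Concurrent batches per shard" trigger setting
records = 20_000
seconds_per_invocation = 0.7        # estimated Lambda execution time per record

expected_instances = shards * concurrent_batches_per_shard        # 500 concurrent instances
records_per_instance = records / expected_instances               # 40 records each
expected_total_seconds = records_per_instance * seconds_per_invocation  # about 28 s

print(expected_instances, records_per_instance, expected_total_seconds)
```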
In CloudWatch Logs, I can see only 9 Log Streams, which leads me to believe that only 9 Lambda instances were launched to process the data. Is there any mistake in my configuration?
Thank you for the reply! I am using random strings as partition keys, so I have 20,000 distinct partition keys. My Lambda function connects through RDS Proxy to RDS SQL Server (Express edition), and according to the Microsoft docs this edition only allows 10 DB connections. So, are the DB connections limiting the number of Lambda instances? And if so, why? ... Best Regards.
I do not think the number of connections is limiting Lambda. First, if it were, you would see errors in the function; it would not reduce concurrency. Second, you are using RDS Proxy, whose role is exactly this: to multiplex a large number of connections (from Lambda) down to a small number of connections to the database.
I am not sure what is limiting your concurrency. I would check the following:
- the function's reserved concurrency and the account's unreserved concurrency (Lambda console, Configuration > Concurrency);
- the CloudWatch ConcurrentExecutions metric for the function while the test runs;
- the IteratorAge metric on the Kinesis event source, which grows when the function cannot keep up with the stream.
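One concrete check, assuming boto3 and AWS credentials are available: query the account-level Lambda concurrency limits and compare them with what the Kinesis trigger could theoretically use. The `expected_concurrency` helper is illustrative, not an AWS API:

```python
def expected_concurrency(shards, parallelization_factor, unreserved_limit):
    """Effective concurrency for a Kinesis event source: shards times the
    parallelization factor ("Concurrent batches per shard"), capped by the
    account's unreserved concurrency."""
    return min(shards * parallelization_factor, unreserved_limit)


def check_account_concurrency():
    """Query the account-level Lambda concurrency limits via boto3."""
    import boto3  # imported here so the pure helper above works without AWS

    limits = boto3.client("lambda").get_account_settings()["AccountLimit"]
    return {
        "total": limits["ConcurrentExecutions"],
        "unreserved": limits["UnreservedConcurrentExecutions"],
    }
```

For example, with 50 shards and a parallelization factor of 10, an unreserved limit of 1,000 allows 500 concurrent instances, but an unreserved limit of 10 caps the function at 10.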
Hi Uri, Thank you very much! I finally found out that the unreserved account concurrency of my Lambda was set to 10... Thanks again! Best Regards.