Connection pool is full, discarding connection


I've got a lambda that is giving me the warning from the subject line in my logs. Specifically, I'm getting this message over and over:

[WARNING]	2022-09-26T20:27:57.948Z	7994926c-f98a-4501-8aca-76c9d5b8aa34	Connection pool is full, discarding connection: canvas.instructure.com. Connection pool size: 10
[WARNING]	2022-09-26T20:27:57.958Z	7994926c-f98a-4501-8aca-76c9d5b8aa34	Connection pool is full, discarding connection: canvas.instructure.com. Connection pool size: 10
.....

I'm trying to collect grade data for students in a class. Say there are 24 students in a class. I'll get this warning 14 times, since the pool size is capped at 10.

It seems like it should be simple enough to increase the pool size, but I've tried the approach from this post to no avail, i.e. I've set:

import botocore.config

client_config = botocore.config.Config(
    max_pool_connections=50,
)

and passed that in for all my clients. Hasn't fixed anything.

What can be done if setting the config for all my clients isn't dealing with the warnings? Could the fact that my concurrent calls invoke a function that lives in a separate Lambda layer be to blame?

asked 2 months ago · 38 views
1 Answer

That warning originates from urllib3. The goal of the client_config strategy is to get the desired configuration passed through your code down to the urllib3 level. We would need to know more about what you're doing in Python to provide any additional help.
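One thing worth checking: the hostname in the warning (canvas.instructure.com) is not an AWS endpoint, so if those calls go through a requests.Session rather than a boto3 client, the botocore config never reaches that pool. In that case the knob lives on the requests side. A minimal sketch, assuming you control the session:

```python
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# pool_maxsize is the size of the per-host urllib3 pool that the
# "Connection pool is full, discarding connection" warning refers to.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=50)
session.mount("https://", adapter)
```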

As mentioned in many of the comments on the GitHub issue you cited:

  • are you using any concurrency techniques in Python that might "blow out the pool"?
  • do you have any global variables that could be persistent across sessions?
  • which AWS service clients are you using?
  • how often is your Lambda being triggered?
  • how long does your Lambda run for?

Subsequent Lambda invocations can retain references to global objects in memory from the previous run if you're not careful to re-initialize your variables at the start of each invocation. I have been bitten by this in the past, and it could be what's happening here.
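A toy sketch of that warm-container pitfall (all names here are hypothetical, not from your code):

```python
# Module-scope state survives between invocations on a warm container.
results_cache = []

def handler_buggy(event, context=None):
    # BUG: keeps appending to the shared module-level list,
    # so state leaks from one invocation into the next.
    results_cache.append(event)
    return len(results_cache)

def handler_fixed(event, context=None):
    # Per-invocation state is created fresh inside the handler.
    results = [event]
    return len(results)

# Simulating two warm invocations of the same container:
print(handler_buggy("a"), handler_buggy("b"))  # 1 2  (state leaked)
print(handler_fixed("a"), handler_fixed("b"))  # 1 1  (state fresh)
```

The same leak applies to cached HTTP sessions or clients held at module scope, which is why re-initializing (or deliberately reusing) them matters here.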

If you have concurrent Python in your Lambda (threading, multiprocessing, concurrent.futures, etc.), you might try removing it and using Step Functions to manage the concurrency for you.

It is difficult to make any solid suggestions without more understanding of your current process.

answered 2 months ago
