Connection pool is full, discarding connection


I've got a lambda that is giving me the warning from the subject line in my logs. Specifically, I'm getting this message over and over:

[WARNING]	2022-09-26T20:27:57.948Z	7994926c-f98a-4501-8aca-76c9d5b8aa34	Connection pool is full, discarding connection: canvas.instructure.com. Connection pool size: 10
[WARNING]	2022-09-26T20:27:57.958Z	7994926c-f98a-4501-8aca-76c9d5b8aa34	Connection pool is full, discarding connection: canvas.instructure.com. Connection pool size: 10
.....

I'm trying to collect grade data for students in a class. Say there are 24 students in a class. I'll get this warning 14 times, since the pool size is capped at 10.

It seems like it should be simple enough to increase the pool size, but consulting this post hasn't helped. I.e., I've set:

import botocore.config

client_config = botocore.config.Config(
    max_pool_connections=50,
)

and passed that in for all my clients. Hasn't fixed anything.
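
For context, this is roughly how I'm passing it in (the service clients below are placeholders, not my actual ones):

import boto3

# client_config as defined above; the specific services are just examples
s3 = boto3.client("s3", config=client_config)
dynamodb = boto3.client("dynamodb", config=client_config)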

What can I do if setting the config on all my clients isn't dealing with the warnings? Could the fact that my concurrent calls invoke a function that lives in a separate Lambda layer be to blame?

Asked 2 years ago · 294 views
1 Answer

That error originates from urllib3. The goal of the client_config strategy is to get the desired configuration passed through your code down to the urllib3 level. We would need to know more about what you're doing in your Python code to provide any additional help.
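
One thing worth checking: botocore's Config only applies to boto3/botocore clients. If the calls to canvas.instructure.com are made through the requests library rather than a boto3 client, that pool is sized separately, and its default is 10 connections per host, which matches the number in your warning. A minimal sketch of raising it at that level, assuming requests is what's making the Canvas calls:

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()

# The default adapter keeps at most 10 connections per host, which lines up
# with "Connection pool size: 10" in the warning. Mount a larger one for HTTPS.
adapter = HTTPAdapter(pool_connections=50, pool_maxsize=50)
session.mount("https://", adapter)

# Hypothetical call; the real endpoint depends on how you query the Canvas API
response = session.get("https://canvas.instructure.com/api/v1/courses")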

As mentioned in many of the comments on the GitHub issue you cited...

  • are you using any concurrency techniques in Python that might "blow out the pool"?
  • do you have any global variables that could be persistent across sessions?
  • which AWS service clients are you using?
  • how often is your Lambda being triggered?
  • how long does your Lambda run for?

Subsequent Lambda runs can retain references to global objects in memory from the previous run if you're not careful to re-initialize your variables at the start of each run. I have been bitten by this in the past, and it could be what's happening here.
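
As a rough illustration of the difference (the handler body is made up):

import requests

# Module scope: objects created here survive across "warm" invocations,
# so a session created here (and its connection pool) carries over between runs.
shared_session = requests.Session()

def lambda_handler(event, context):
    # If that carry-over is the problem, create and close the session inside
    # the handler instead, so every invocation starts from a clean state.
    with requests.Session() as session:
        # ... fetch the grade data with `session` here ...
        pass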

If you have concurrent Python in your Lambda (threading, multiprocessing, concurrent.futures, etc.), you might try removing it and using Step Functions to manage the concurrency for you.
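
To make the "blow out the pool" point concrete: if the grade collection looks anything like the sketch below (the endpoint and helper are purely illustrative), every worker beyond the pool size will trigger exactly this warning; 24 students against a 10-connection pool would line up with the 14 warnings you're counting. Shrinking max_workers to the pool size, or moving the fan-out into Step Functions, avoids it.

from concurrent.futures import ThreadPoolExecutor

import requests

session = requests.Session()  # default pool: 10 connections per host

def fetch_grades(student_id):
    # Stand-in for whatever call actually fetches one student's grades
    return session.get(
        f"https://canvas.instructure.com/api/v1/users/{student_id}/grades"
    )

student_ids = range(24)

# 24 workers sharing one 10-connection pool: the extra connections are
# discarded when returned, producing the repeated warning.
with ThreadPoolExecutor(max_workers=24) as pool:
    results = list(pool.map(fetch_grades, student_ids))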

It is difficult to make any solid suggestions without more understanding of your current process.

Answered 2 years ago
