Dealing With LimitExceededExceptions For Rekognition Video


Hello everyone, I am currently hitting a snag when running Rekognition on some videos. I have an S3 bucket containing several videos that I would like to run through AWS Rekognition. To speed up the process I am using a ProcessPoolExecutor to feed the videos to Rekognition concurrently, but when I do so I receive a LimitExceededException. I have 40 cores available on my machine, so I was running 40 processes at a time. I know the limit for Rekognition video jobs is 20 concurrent jobs, so I tried capping the number of workers in my process pool at 20, but I still receive the LimitExceededException. Is there a way to check how many jobs are currently running so I can build a backoff mechanism into my code and wait until the number of jobs drops below 20 before starting a new one? Thanks in advance
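
For reference, this is a minimal sketch of the retry/backoff approach I have been considering inside each worker process (bucket and key names are placeholders; it assumes boto3's start_label_detection and the client's LimitExceededException):

    import time
    import boto3

    rekognition = boto3.client("rekognition")

    def start_job_with_backoff(bucket, key, max_retries=8):
        """Start a label-detection job, backing off when the concurrent-job limit is hit."""
        delay = 5  # starting delay in seconds (placeholder value)
        for _ in range(max_retries):
            try:
                response = rekognition.start_label_detection(
                    Video={"S3Object": {"Bucket": bucket, "Name": key}}
                )
                return response["JobId"]
            except rekognition.exceptions.LimitExceededException:
                # Too many concurrent jobs; wait and retry with exponential backoff
                time.sleep(delay)
                delay = min(delay * 2, 300)
        raise RuntimeError(f"Gave up starting job for {key} after {max_retries} retries")

This works, but blindly sleeping and retrying feels wasteful, which is why I am asking whether the number of in-progress jobs can be queried directly.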

Edited by: brussell152 on Dec 23, 2019 9:35 AM

Asked 4 years ago · 211 views
1 answer

Hi,

If you have many videos that you would like to process through Rekognition Video, we recommend an architecture similar to the one described in https://github.com/aws-samples/amazon-textract-serverless-large-scale-document-processing/blob/master/README.md, specifically the async section of the architecture.

In this architecture, you submit messages to an SQS queue, and a Lambda function schedules jobs to Rekognition Video as existing jobs complete. You can reuse the same code as in the GitHub link above; just change the calls from Textract to Rekognition and it should work.
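
As a rough sketch, the job-submission Lambda might look like the following (the environment variable names, the SQS message format, and the SNS topic/role used for completion notifications are assumptions for illustration, not part of the sample):

    import json
    import os
    import boto3

    rekognition = boto3.client("rekognition")

    # Assumed configuration: an SNS topic that Rekognition publishes job-completion
    # notifications to, and an IAM role that allows Rekognition to publish to it.
    SNS_TOPIC_ARN = os.environ["REKOGNITION_SNS_TOPIC_ARN"]
    ROLE_ARN = os.environ["REKOGNITION_ROLE_ARN"]

    def handler(event, context):
        """Triggered by SQS; each message names one video in S3 to process."""
        for record in event["Records"]:
            body = json.loads(record["body"])  # e.g. {"bucket": "...", "key": "..."}
            rekognition.start_label_detection(
                Video={"S3Object": {"Bucket": body["bucket"], "Name": body["key"]}},
                NotificationChannel={
                    "SNSTopicArn": SNS_TOPIC_ARN,
                    "RoleArn": ROLE_ARN,
                },
            )

The completion notifications on the SNS topic are what let the architecture throttle itself: new messages are only pulled from the queue as running jobs finish, so you stay under the concurrent-job limit.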

As of now, there is no way to check the number of Rekognition Video jobs currently in progress, so we recommend the above architecture for large-scale job processing.

AWS
Answered 4 years ago
