Rekognition Bulk Analysis Info


Hello,

I would like to run Bulk Analysis on my entire bucket, meaning more than 10,000 documents, but the tasks are failing due to a limit. How can I split the bulk process into smaller batches? Can't I process the entire bucket in chunks of 10,000 at a time?

I should be able to select the entire bucket, and if there is a limit error, it should be raised early in the process. And if I can select the entire bucket but can't process more than 10,000 at once, then I should be able to split it.

How can I solve this?

Thanks!

Alpcan
Asked 5 months ago, 147 views
2 Answers

Hi Alpcan,

the 10,000 limit is the "Maximum number of images per Amazon Rekognition Media Analysis job" quota, so you can run multiple parallel jobs to handle more than 10k images.

You can shard by folder or by file name prefix. This sample function may be a useful starting point: https://github.com/aws-samples/amazon-rekognition-batch-processing-label-and-face-detection/blob/master/s3_detect_label_and_then_face.py

Note that you can have at most 20 "Concurrent Amazon Rekognition Media Analysis jobs per account"; you can request an increase in the Service Quotas console of your AWS account: https://us-east-1.console.aws.amazon.com/servicequotas/home/services/rekognition/quotas
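The sharding idea above can be sketched in a few lines. This is a minimal, hypothetical example (not the linked AWS sample): it splits a bucket listing into batches of at most 10,000 keys, one batch per media-analysis job. In a real script the keys would come from boto3's `list_objects_v2` paginator; here a stand-in list is used so the sketch is self-contained.

```python
from typing import Iterable, Iterator, List

# Rekognition quota: "Maximum number of images per Amazon Rekognition
# Media Analysis job"
MAX_IMAGES_PER_JOB = 10_000


def chunk_keys(keys: Iterable[str],
               batch_size: int = MAX_IMAGES_PER_JOB) -> Iterator[List[str]]:
    """Yield successive batches of at most batch_size S3 keys."""
    batch: List[str] = []
    for key in keys:
        batch.append(key)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final partial batch
        yield batch


# Stand-in for a real bucket listing of 25,000 objects; with boto3 you
# would iterate s3.get_paginator("list_objects_v2").paginate(Bucket=...)
keys = [f"images/img_{i:06d}.jpg" for i in range(25_000)]
batches = list(chunk_keys(keys))
print([len(b) for b in batches])  # [10000, 10000, 5000]
```

Each batch would then be submitted as its own job, keeping no more than 20 jobs in flight at once to stay under the default concurrency quota.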

Please accept the answer if it helped

AWS
Answered 5 months ago
Reviewed by an Expert 5 months ago

I think I should be able to select the entire bucket with a 'bulk' option that automatically splits it into 10k parts.

Alpcan
Answered 5 months ago
