Rekognition Bulk Analysis Info


Hello,

I would like to run Bulk Analysis on my entire bucket, which contains more than 10,000 documents, but the tasks fail because of a limit. How can I split the bulk process into smaller batches? Can't I process the entire bucket 10,000 at a time?

I should be able to select the entire bucket, and if there is a limit error it should be reported early in the process. If I can select the entire bucket but can't process more than 10,000 at once, then I should be able to split the work.

How can I solve this?

Thanks!

Alpcan
Asked 5 months ago · 147 views
2 Answers

Hi Alpcan,

The 10,000 limit corresponds to the "Maximum number of images per Amazon Rekognition Media Analysis job" quota, so you can run multiple parallel jobs to handle more than 10k images.

You can try to shard by folder or by file name. Please look at this function, which you might build upon: https://github.com/aws-samples/amazon-rekognition-batch-processing-label-and-face-detection/blob/master/s3_detect_label_and_then_face.py
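A minimal sketch of the sharding idea: split a full key listing into batches of at most 10,000 objects, one batch per job. `chunk_keys` is an illustrative helper (not from the linked sample); in practice the key list would come from boto3's `list_objects_v2` paginator on the bucket.

```python
# Sketch: split an S3 key listing into batches no larger than the
# "Maximum number of images per Media Analysis job" quota (10,000).
# The keys themselves would normally come from an S3 list_objects_v2
# paginator; any iterable of strings works here.
from itertools import islice
from typing import Iterable, Iterator, List

MAX_IMAGES_PER_JOB = 10_000  # per-job image limit discussed above


def chunk_keys(keys: Iterable[str],
               size: int = MAX_IMAGES_PER_JOB) -> Iterator[List[str]]:
    """Yield lists of at most `size` keys from any iterable of object keys."""
    it = iter(keys)
    # islice pulls the next `size` keys; the loop ends when the iterator is empty.
    while batch := list(islice(it, size)):
        yield batch
```

For a bucket of 25,000 objects this yields three batches (10,000 + 10,000 + 5,000), each of which can be submitted as its own job.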

Note that you can have at most 20 "Concurrent Amazon Rekognition Media Analysis jobs per account"; you can request an increase in the Service Quotas console of your AWS account: https://us-east-1.console.aws.amazon.com/servicequotas/home/services/rekognition/quotas
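To stay under that concurrency quota, submissions can be throttled with a bounded worker pool. This is only a sketch: `submit_job` is a stand-in for a real call such as `rekognition.start_media_analysis_job` (boto3), and a real version would also wait for running jobs to finish before starting the next wave, since jobs run asynchronously on the service side.

```python
# Sketch: cap parallel job submissions at the default 20-concurrent-jobs
# account quota. submit_job is a placeholder; here it just echoes the
# batch size instead of calling Rekognition.
from concurrent.futures import ThreadPoolExecutor
from typing import List

MAX_CONCURRENT_JOBS = 20  # default "Concurrent ... jobs per account" quota


def submit_job(batch: List[str]) -> int:
    # Placeholder: a real implementation would start one Media Analysis
    # job for this batch of keys and return its job id.
    return len(batch)


def run_all(batches: List[List[str]]) -> List[int]:
    # The pool ensures no more than MAX_CONCURRENT_JOBS submissions
    # are in flight at once; results come back in batch order.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_JOBS) as pool:
        return list(pool.map(submit_job, batches))
```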

Please accept the answer if it helped

AWS
Answered 5 months ago
Expert
Reviewed 5 months ago

I think I should be able to select the entire bucket; a 'bulk' option should automatically divide the work into 10k parts.

Alpcan
Answered 5 months ago
