Browse through the questions and answers listed below or filter and sort to narrow down your results.
0 answers · 0 votes · 4 views · asked 2 months ago
Provisioned Rate exceeded on DeleteFaces in Rekognition
I am getting a "Provisioned rate exceeded" error when calling the DeleteFaces API of Amazon Rekognition, using the current version of the .NET SDK.
I think what triggered the error in the first place is that I was running several batch jobs in parallel; even though they don't make many calls to Amazon Rekognition, their calls occasionally coincided and exceeded the limit of 5 per second (I handled retry attempts myself, but had not realised the SDK has its own retry policy by default).
However, it now seems that any single call I make to DeleteFaces returns a "Provisioned rate exceeded" exception, no matter how long I wait before retrying (more than an hour), how many face IDs I try to delete per call (50 to 4096), or which SDK retry policy I use (legacy or standard). Other Rekognition API operations seem to work fine. Is there a rate limit other than the default 5 calls per second advertised for Rekognition, and once that limit is hit, how long does it take before new calls are allowed? Or is it some sort of shadow ban after a certain number of provisioned-rate-exceeded errors, and if so, how long should I wait before it is lifted? Does it matter how many faces I delete in one call? I already group face IDs into batches of 4096; does the .NET SDK split these further into multiple calls, resulting in some sort of API call amplification?
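For reference, a minimal sketch of this kind of batching with client-side backoff, written in Python/boto3 rather than the .NET SDK, and using a hypothetical collection ID and face ID list:

```python
import time
import boto3
from botocore.exceptions import ClientError

rekognition = boto3.client("rekognition")

def delete_faces_batched(collection_id, face_ids, chunk_size=4096, max_retries=5):
    """Delete face IDs in chunks, backing off when the API throttles us."""
    for start in range(0, len(face_ids), chunk_size):
        chunk = face_ids[start:start + chunk_size]
        for attempt in range(max_retries):
            try:
                rekognition.delete_faces(CollectionId=collection_id, FaceIds=chunk)
                break
            except ClientError as err:
                code = err.response["Error"]["Code"]
                if code in ("ProvisionedThroughputExceededException", "ThrottlingException"):
                    time.sleep(2 ** attempt)  # exponential backoff before retrying this chunk
                else:
                    raise
        else:
            raise RuntimeError("Exceeded retry budget while deleting faces")

# Hypothetical usage:
# delete_faces_batched("my-collection", ["face-id-1", "face-id-2"])
```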
Accepted Answer · Amazon Rekognition
2 answers · 0 votes · 4 views · asked 3 months ago
MaxLabels for Amazon Rekognition Video not working
The `MaxLabels` argument works with Amazon Rekognition Image, but not with Video. This is the PHP payload I use:
```
'ClientRequestToken' => (string)Str::uuid(),
'JobTag' => VideoOperations::REKOGNITION_LABEL_DETECTION,
'NotificationChannel' => [
    'RoleArn' => config('rekognition.notification_channel.role_arn'),
    'SNSTopicArn' => config('rekognition.notification_channel.sns_arn'),
],
'MinConfidence' => config('rekognition.min_confidence'),
'MaxLabels' => config('rekognition.max_labels'),
'Video' => [
    'S3Object' => [
        'Bucket' => config('rekognition.bucket'),
        'Name' => $video->filename,
    ],
],
```
I checked the official awsdocs GitHub samples for Rekognition and could not find any use of `MaxLabels` with Rekognition Video. Is it not supported?
https://github.com/awsdocs/amazon-rekognition-developer-guide/search?q=MaxLabels
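If `StartLabelDetection` really does not accept `MaxLabels`, a possible workaround is to cap the labels client-side after fetching the results. A minimal sketch in Python/boto3 (the job ID is hypothetical):

```python
import boto3

rekognition = boto3.client("rekognition")

def top_labels(job_id, max_labels=10):
    """Fetch video label detections and keep only the most confident labels."""
    labels = []
    next_token = None
    while True:
        kwargs = {"JobId": job_id, "SortBy": "TIMESTAMP"}
        if next_token:
            kwargs["NextToken"] = next_token
        response = rekognition.get_label_detection(**kwargs)
        labels.extend(response["Labels"])
        next_token = response.get("NextToken")
        if not next_token:
            break
    # Client-side equivalent of MaxLabels: keep the highest-confidence entries.
    return sorted(labels, key=lambda item: item["Label"]["Confidence"], reverse=True)[:max_labels]
```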
Accepted Answer · Amazon Rekognition
1 answer · 0 votes · 13 views · asked 4 months ago
Costs for Batch Rekognition OCR
I need to use "text in image" detection from Amazon Rekognition to process more than 300k images per month.
Does Amazon Rekognition provide a way to process images in batch, making only one request to analyze multiple images at once?
And the main question: is batch processing cheaper than processing images one at a time? For example, if I have 10 images to extract text from, is the cost of 10 requests to Rekognition (one per image) the same as the cost of one batch request sending all 10 images at once?
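For context, `DetectText` takes a single image per request, so processing a set of images means one call each; a minimal sketch in Python/boto3 with hypothetical bucket and key names:

```python
import boto3

rekognition = boto3.client("rekognition")

def detect_text_for_keys(bucket, keys):
    """Call DetectText once per image; the API accepts a single image per request."""
    results = {}
    for key in keys:
        response = rekognition.detect_text(
            Image={"S3Object": {"Bucket": bucket, "Name": key}}
        )
        results[key] = [d["DetectedText"] for d in response["TextDetections"]]
    return results

# Hypothetical usage:
# detect_text_for_keys("my-bucket", ["scans/img-001.jpg", "scans/img-002.jpg"])
```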
Accepted Answer · Amazon Rekognition
3 answers · 0 votes · 0 views · asked 7 months ago
Can Amazon Rekognition recognize a person if they go out of the camera frame and come back in?
If a person that Amazon Rekognition Video recognizes leaves the camera frame and then comes back into the frame, does Amazon Rekognition recognize them as the same person?
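A minimal sketch (Python/boto3, with a hypothetical person-tracking job ID) of where this shows up in the API: `GetPersonTracking` results carry a `Person.Index`, and whether a returning person keeps the same index is exactly what the question is about:

```python
import boto3
from collections import defaultdict

rekognition = boto3.client("rekognition")

def timestamps_by_person(job_id):
    """Group person-tracking detections by the index Rekognition assigns to each person."""
    seen = defaultdict(list)
    next_token = None
    while True:
        kwargs = {"JobId": job_id}
        if next_token:
            kwargs["NextToken"] = next_token
        response = rekognition.get_person_tracking(**kwargs)
        for entry in response["Persons"]:
            seen[entry["Person"]["Index"]].append(entry["Timestamp"])
        next_token = response.get("NextToken")
        if not next_token:
            break
    return seen  # one key per tracked person index
```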
Accepted Answer · Amazon Rekognition
2 answers · 0 votes · 0 views · asked a year ago
Rekognition Jobs FaceDetection and FaceSearch return different findings...
Hi guys,
I was experimenting with the FaceSearch and FaceDetection jobs and realized that, even though they run against the same video in S3, they return different values for the same finding. For instance, both jobs run against the same bucket object, and I get differences in the BoundingBox and Landmarks, which I think should be identical; I also remember that when I tested this 4 months ago I got the expected (matching) results.
Example: Video with just 1 Person.
- aws rekognition start-face-detection --video "S3Object={Bucket=facejobs,Name=head-pose-face-detection-female.mp4}"
- aws rekognition start-face-search --video "S3Object={Bucket=facejobs,Name=head-pose-face-detection-female.mp4}" --collection-id test-collection
Getting the results:
aws rekognition get-face-detection --job-id "3661ab79c711bbd530ca5a910003d..."
aws rekognition get-face-search --job-id "cc3555290d2093bec860519bf53403e8..."
Comparing:
FaceDetection Results
-------------------------
```json
{
"JobStatus": "SUCCEEDED",
"VideoMetadata": {
"Codec": "h264",
"DurationMillis": 134584,
"Format": "QuickTime / MOV",
"FrameRate": 12.0,
"FrameHeight": 432,
"FrameWidth": 768
},
"Faces": \[
{
"Timestamp": 0,
"Face": {
"BoundingBox": {
"Width": 0.13135695457458496,
"Height": 0.36436259746551514,
"Left": 0.4154500961303711,
"Top": 0.22901538014411926
},
"Landmarks": \[
{
"Type": "eyeLeft",
"X": 0.4518287479877472,
"Y": 0.3687707185745239
},
{
"Type": "eyeRight",
"X": 0.5152483582496643,
"Y": 0.3756844997406006
},
{
"Type": "mouthLeft",
"X": 0.451990008354187,
"Y": 0.5045619010925293
},
{
"Type": "mouthRight",
"X": 0.5046293139457703,
"Y": 0.5103421807289124
},
{
"Type": "nose",
"X": 0.47848179936408997,
"Y": 0.4353737533092499
}
],
"Pose": {
"Roll": 2.838758707046509,
"Yaw": -1.3927381038665771,
"Pitch": 10.166311264038086
},
"Quality": {
"Brightness": 79.76757049560547,
"Sharpness": 26.1773681640625
},
"Confidence": 99.99970245361328
}
},
{
"Timestamp": 499,
```
FaceSearch
--------------------
```json
{
"JobStatus": "SUCCEEDED",
"VideoMetadata": {
"Codec": "h264",
"DurationMillis": 134584,
"Format": "QuickTime / MOV",
"FrameRate": 12.0,
"FrameHeight": 432,
"FrameWidth": 768
},
"Persons": \[
{
"Timestamp": 0,
"Person": {
"Index": 0,
"Face": {
"BoundingBox": {
"Width": 0.13410408794879913,
"Height": 0.365193247795105,
"Left": 0.4145432412624359,
"Top": 0.2288028597831726
},
"Landmarks": \[
{
"Type": "eyeLeft",
"X": 0.4514598548412323,
"Y": 0.3685579001903534
},
{
"Type": "eyeRight",
"X": 0.5149661898612976,
"Y": 0.37557920813560486
},
{
"Type": "mouthLeft",
"X": 0.4519285261631012,
"Y": 0.5038205981254578
},
{
"Type": "mouthRight",
"X": 0.5038713216781616,
"Y": 0.5095799565315247
},
{
"Type": "nose",
"X": 0.47897493839263916,
"Y": 0.43512672185897827
}
],
"Pose": {
"Roll": 1.5608868598937988,
"Yaw": -18.46771240234375,
"Pitch": 8.22950553894043
},
"Quality": {
"Brightness": 81.00172424316406,
"Sharpness": 53.330047607421875
},
"Confidence": 99.99984741210938
}
},
"FaceMatches": \[]
},
{
"Timestamp": 499,
```
So basically this makes it impossible to combine both calls to get insights. Think of the situation where you want to detect emotions and also search for a face. What is weirder, though I can't confirm it since it was 3 months ago, is that when I tested this back then the results were the same. I even saved the JSON from that time, but I can't be certain how I obtained it, as I don't remember my implementation or whether I manipulated it. :(
But I got this:
FaceDetection
----------------
```json
{
"JobStatus": "SUCCEEDED",
"VideoMetadata": {
"Codec": "h264",
"DurationMillis": 1500,
"Format": "QuickTime / MOV",
"FrameRate": 30,
"FrameHeight": 720,
"FrameWidth": 1280
},
"Faces": \[{
"Timestamp": 499,
"Face": {
"BoundingBox": {
"Width": 0.16875895857810974,
"Height": 0.4913144111633301,
"Left": 0.4124282896518707,
"Top": 0.2672847807407379
```
FaceSearch
---------------
```json
{
"JobStatus": "SUCCEEDED",
"VideoMetadata": {
"Codec": "h264",
"DurationMillis": 1500,
"Format": "QuickTime / MOV",
"FrameRate": 30,
"FrameHeight": 720,
"FrameWidth": 1280
},
"Persons": \[{
"Timestamp": 499,
"Person": {
"Index": 0,
"Face": {
"BoundingBox": {
"Width": 0.16875895857810974,
"Height": 0.4913144111633301,
"Left": 0.4124282896518707,
"Top": 0.2672847807407379
},
"Landmarks": \[{
```
As you can see, the BoundingBox values are equal across both API jobs.
Thanks for any insight you can provide.
Again, the use case is to be able to use emotions together with the Search API, since the latter does not return Emotions. That also seems like a strange choice.
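One possible workaround, not an official pattern: match the face-detection results (which can include Emotions when the detection job is started with `FaceAttributes='ALL'`) to the face-search results by timestamp and bounding-box overlap. A minimal sketch, assuming the two JSON responses above have already been parsed into Python structures:

```python
def overlap(box_a, box_b):
    """Intersection-over-union of two Rekognition bounding boxes (relative coordinates)."""
    left = max(box_a["Left"], box_b["Left"])
    top = max(box_a["Top"], box_b["Top"])
    right = min(box_a["Left"] + box_a["Width"], box_b["Left"] + box_b["Width"])
    bottom = min(box_a["Top"] + box_a["Height"], box_b["Top"] + box_b["Height"])
    inter = max(0.0, right - left) * max(0.0, bottom - top)
    union = box_a["Width"] * box_a["Height"] + box_b["Width"] * box_b["Height"] - inter
    return inter / union if union else 0.0

def merge_emotions(detection_faces, search_persons, min_iou=0.5, max_ts_diff=100):
    """Attach Emotions from GetFaceDetection entries to GetFaceSearch entries."""
    merged = []
    for person in search_persons:
        best = None
        for face in detection_faces:
            if abs(face["Timestamp"] - person["Timestamp"]) > max_ts_diff:
                continue
            iou = overlap(face["Face"]["BoundingBox"], person["Person"]["Face"]["BoundingBox"])
            if iou >= min_iou and (best is None or iou > best[0]):
                best = (iou, face)
        merged.append({
            "person": person,
            "emotions": best[1]["Face"].get("Emotions") if best else None,
        })
    return merged
```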
Accepted Answer · Amazon Rekognition
7 answers · 0 votes · 7 views · asked 2 years ago
Unable to create Rekognition project in the console or the CLI
I have followed the tutorial in <https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/cp-create-project.html>.
- In the console, I am unable to see steps 2, 3 and 4 as described there:
2. In the left pane, choose Use Custom Labels. The Amazon Rekognition Custom Labels landing page is shown.
3. Choose Get started.
4. Choose Create Project.
- In the CLI, I am able to call the API and get a prediction on an image in my bucket.
But if I try `aws rekognition create-project --project-name my-project`, I get an error "An error occurred (AccessDeniedException) when calling the CreateProject operation".
I have tried with both the root user and a newly created user with the "AmazonRekognitionFullAccess" and "AdministratorAccess" permissions, and my credentials are set as well.
I used "aws configure" and correctly passed my access key, secret key and region. I checked it in "~/.aws/credentials" and "~/.aws/config".
I don't know what I am missing here; any help would be appreciated.
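One thing worth ruling out, since `CreateProject` is a Custom Labels operation and Custom Labels is only offered in some regions: try the call with an explicit region. A minimal sketch in Python/boto3 with a hypothetical region and project name:

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical region and project name; Custom Labels is not available in every region.
client = boto3.client("rekognition", region_name="us-east-1")

try:
    response = client.create_project(ProjectName="my-project")
    print("Created:", response["ProjectArn"])
except ClientError as err:
    # An AccessDeniedException here points at IAM or region issues rather than syntax.
    print(err.response["Error"]["Code"], err.response["Error"]["Message"])
```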
Accepted Answer · Amazon Rekognition
2 answers · 0 votes · 1 view · asked 2 years ago
Head Pose (unit of Yaw, Pitch, Roll) (Point of View)
Hello,
My question is about Head Pose in AWS Rekognition.
What is the unit of Yaw, Pitch and Roll (degrees or radians)? And what is their range?
Also, are they from the camera's point of view?
For example, if the value of Yaw is -10, what is the rotation direction (right or left)? And is it relative to the camera or the frame?
Thank you.
Mohammad
Accepted Answer · Amazon Rekognition
1 answer · 0 votes · 4 views · asked 2 years ago
How to create a custom label dataset by feeding a manifest programmatically
Hello,
From https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/cd-create-dataset.html
I see that I can create a manifest without using SageMaker, as long as it conforms to the format specified here: https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/cd-required-fields.html
But the Custom Labels Guide only shows that I can supply my manifest by clicking on "Import image Labeled by SageMaker Ground Truth".
Is there a way to create or modify a dataset and supply my manifest programmatically?
Thanks.
Edited by: mymingle on Mar 2, 2020 5:48 PM
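For what it's worth, the manifest is plain JSON Lines, so it can be generated by a short script and uploaded to S3 for import. A minimal sketch with hypothetical S3 paths and an arbitrary label attribute name (the exact fields should be checked against the required-fields page linked above):

```python
import json

# Hypothetical images and class names; the label attribute name ("my-label") is arbitrary
# but must match between the value field and the "<name>-metadata" field.
images = [
    ("s3://my-bucket/images/dog-01.jpg", "dog"),
    ("s3://my-bucket/images/cat-01.jpg", "cat"),
]

with open("output.manifest", "w") as manifest:
    for source_ref, class_name in images:
        line = {
            "source-ref": source_ref,
            "my-label": 0,
            "my-label-metadata": {
                "class-name": class_name,
                "confidence": 1.0,
                "type": "groundtruth/image-classification",
                "job-name": "my-labeling-job",
                "human-annotated": "yes",
                "creation-date": "2020-03-01T00:00:00",
            },
        }
        manifest.write(json.dumps(line) + "\n")

# The resulting output.manifest is then uploaded to S3 and referenced when importing the dataset.
```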
Accepted Answer · Amazon Rekognition
7 answers · 0 votes · 3 views · asked 2 years ago
Custom labels programmatically
Hi
Is there a way to create a dataset with custom-label images programmatically? I have a large number of images which have already been labeled, so I don't want to do this again via the console.
All of the tutorials and demos use the console, and I can't find one.
Any help appreciated
Thanks
Edited by: chunt on Dec 30, 2019 5:14 AM
Accepted Answer · Amazon Rekognition
4 answers · 0 votes · 1 view · asked 2 years ago
Custom labels detection slow
Hi,
I am detecting labels in images using a custom label model and the .NET API, but it is really slow. The images are hosted on S3 and are about 150 KB in size. Each call takes about 3 seconds to complete. Is this normal? What could be affecting the speed?
Thanks,
Tomas
Accepted Answer · Amazon Rekognition
1 answer · 0 votes · 3 views · asked 2 years ago
DetectLabels doesn't return instance information
Hi,
I'm using DetectLabels on an ID card and it only returns label names and confidence. The documentation says it also returns Instances and Parents, with the bounding box of the label. When I use the same image on the demo page it returns all the info, but not when I call the API method.
Could you help me please? I tested it with many images.
Regards
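For reference, a minimal sketch in Python/boto3 (bucket and key are hypothetical) that prints the `Instances` field per label; note that bounding-box instances are only returned for certain object labels, so for many labels the list is legitimately empty:

```python
import boto3

rekognition = boto3.client("rekognition")

# Hypothetical bucket and key.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "id-card.jpg"}},
    MaxLabels=20,
    MinConfidence=70,
)

for label in response["Labels"]:
    # Instances is an empty list for labels without object-level bounding boxes.
    print(label["Name"], label["Confidence"], label.get("Instances", []))
```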
Accepted Answer · Amazon Rekognition