Questions tagged with Amazon Rekognition
[AWS Rekognition] "Amazon Rekognition experienced a service issue" Internal Failure and can't train a model.
I want to train a model using these pictures as training data: https://www.dropbox.com/s/n35jne68hjws2d4/runes.zip?dl=0 The test data will be this: https://imgur.com/4LQIUSX.jpg and this: https://imgur.com/LOKzEOp.jpg I have succeeded in labeling everything I need, and my understanding is that this should be viable (or does anybody have feedback on how my data should look for this to work? Maybe I'm doing something wrong?). However, I've tried to train my Rekognition model five times, and it has failed every time with "Amazon Rekognition experienced a service issue." Since this appears to be an internal failure in the service, can I get some feedback on what I can do?
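For reference, Custom Labels training data of this kind is described by a SageMaker Ground Truth manifest: one JSON object per line, one line per image. A minimal object-detection entry might look like the sketch below (the bucket, image size, coordinates, and the `fehu-rune` label name are placeholders, not taken from the dataset above):

```json
{
  "source-ref": "s3://my-bucket/runes/img001.jpg",
  "bounding-box": {
    "image_size": [{"width": 640, "height": 480, "depth": 3}],
    "annotations": [{"class_id": 0, "left": 45, "top": 30, "width": 120, "height": 160}]
  },
  "bounding-box-metadata": {
    "objects": [{"confidence": 1}],
    "class-map": {"0": "fehu-rune"},
    "type": "groundtruth/object-detection",
    "human-annotated": "yes",
    "creation-date": "2022-01-27T00:00:00",
    "job-name": "labeling-runes"
  }
}
```

A malformed line in this manifest (for example, a box that falls outside the image bounds) is one common cause of otherwise opaque training failures, so it can be worth validating each line before retrying.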
MaxLabels for Amazon Rekognition Video not working
The `MaxLabels` argument works with Amazon Rekognition Image but not with Video. This is the PHP payload I use:

```
'ClientRequestToken' => (string)Str::uuid(),
'JobTag' => VideoOperations::REKOGNITION_LABEL_DETECTION,
'NotificationChannel' => [
    'RoleArn' => config('rekognition.notification_channel.role_arn'),
    'SNSTopicArn' => config('rekognition.notification_channel.sns_arn'),
],
'MinConfidence' => config('rekognition.min_confidence'),
'MaxLabels' => config('rekognition.max_labels'),
'Video' => [
    'S3Object' => [
        'Bucket' => config('rekognition.bucket'),
        'Name' => $video->filename
    ],
]
```

I checked the official awsdocs Rekognition samples on GitHub and could not find an implementation of `MaxLabels` for Video Rekognition. Is it not supported? https://github.com/awsdocs/amazon-rekognition-developer-guide/search?q=MaxLabels
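Indeed, `StartLabelDetection` accepts `MinConfidence` but has no `MaxLabels` parameter, so one workaround is truncating the `GetLabelDetection` results client-side. A minimal sketch in Python; the `sample` response is made up, but the field names follow the `GetLabelDetection` response shape (`Labels` entries with `Timestamp` and a nested `Label`):

```python
# Client-side MaxLabels for video label detection: StartLabelDetection has no
# MaxLabels parameter, so truncate the GetLabelDetection results yourself.

def top_labels(get_label_detection_response, max_labels):
    """Return the max_labels highest-confidence unique label names."""
    best = {}
    for entry in get_label_detection_response["Labels"]:
        name = entry["Label"]["Name"]
        conf = entry["Label"]["Confidence"]
        if conf > best.get(name, 0.0):
            best[name] = conf
    ranked = sorted(best, key=best.get, reverse=True)
    return ranked[:max_labels]

# Hypothetical response fragment for illustration.
sample = {
    "Labels": [
        {"Timestamp": 0, "Label": {"Name": "Car", "Confidence": 98.1}},
        {"Timestamp": 0, "Label": {"Name": "Road", "Confidence": 91.4}},
        {"Timestamp": 500, "Label": {"Name": "Car", "Confidence": 97.0}},
        {"Timestamp": 500, "Label": {"Name": "Tree", "Confidence": 80.2}},
    ],
}
print(top_labels(sample, 2))  # ['Car', 'Road']
```

Because video labels repeat per timestamp, the sketch first keeps the best confidence per unique name before ranking.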
Line over text in Rekognition
Cheers! I am using Rekognition to detect text in a picture. At one point I have to detect whether text has a line over it or not. By line, I mean whether text is crossed out (like in supermarket catalogues when you have an original price crossed out and the discounted price not crossed out). Is something like that possible with Rekognition? Thanks!
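Rekognition's `DetectText` returns the detected words and their bounding boxes but no strikethrough attribute, so one possible approach is to crop each word's box and check for a dark horizontal run across its vertical middle. A rough sketch on a plain 2D grayscale list (0 = black, 255 = white); in practice you would crop the region described by each detection's `Geometry.BoundingBox`, and the thresholds are assumptions to tune:

```python
# Heuristic strikethrough check on a grayscale crop of one detected word.
# dark: pixel values below this count as ink; coverage: fraction of a row
# that must be dark for it to look like a strike-through line.

def looks_struck_through(gray, dark=128, coverage=0.8):
    """True if some row in the middle third of the crop is mostly dark."""
    h = len(gray)
    w = len(gray[0])
    for row in gray[h // 3 : 2 * h // 3 + 1]:
        dark_pixels = sum(1 for px in row if px < dark)
        if dark_pixels / w >= coverage:
            return True
    return False

# Toy 6x8 crops: one with a solid dark line across the middle row, one clean.
struck = [[255] * 8 for _ in range(6)]
struck[3] = [0] * 8
clean = [[255] * 8 for _ in range(6)]
print(looks_struck_through(struck), looks_struck_through(clean))  # True False
```

Real text also has dark pixels mid-row (the glyphs themselves), so the coverage threshold is what separates a continuous line from ordinary letter strokes; it would need tuning on real crops.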
Rekognition and National Institute of Standards and Technology (NIST) verification
Hi, I wanted to know if there is any plan for Rekognition to be tested with NIST in the near future? I've tested top 5 NIST evaluated algorithms and Rekognition blows them out of the water. Having NIST scores would be huge for us end users.
"Amazon Rekognition experienced a service issue" while training with Custom Labels
I receive the following error when training a model with Custom Labels on a small dataset (around 500 images): Model Status: "TRAINING_FAILED" Status Message: "Amazon Rekognition experienced a service issue." I have received this error 6 times over the past 3 days. The training process runs for a few hours before failing. Any help on this issue will be appreciated. Thanks
Quota increase for Rekognition Stored Video concurrent jobs
Hello, The quotas and guidelines for Rekognition mention that "Amazon Rekognition Video supports a maximum of 20 concurrent jobs per account". This number is too low for my use case. Is there any way it can be increased by raising a request? If there cannot be more than 20 concurrent jobs, are additional jobs stored in some sort of internal queue, or are they lost?
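As far as I know, jobs beyond the limit are not queued by the service; the `Start*` calls raise a `LimitExceededException` instead, so the overflow has to be queued on the caller's side. A minimal client-side queue sketch, assuming the completion notification arrives via your SNS/SQS handler; `start_job` is a stand-in for the real boto3 call:

```python
from collections import deque

class JobQueue:
    """Keep at most max_concurrent Rekognition Video jobs in flight."""

    def __init__(self, start_job, max_concurrent=20):
        self.start_job = start_job          # e.g. wraps start_label_detection
        self.max_concurrent = max_concurrent
        self.pending = deque()
        self.running = set()

    def submit(self, video):
        self.pending.append(video)
        self._drain()

    def job_finished(self, job_id):
        # Call this from the SNS/SQS completion handler.
        self.running.discard(job_id)
        self._drain()

    def _drain(self):
        # Start queued jobs while there is concurrency headroom.
        while self.pending and len(self.running) < self.max_concurrent:
            job_id = self.start_job(self.pending.popleft())
            self.running.add(job_id)

# Demo with a fake starter that records what was started.
started = []
q = JobQueue(lambda v: (started.append(v), v)[1], max_concurrent=2)
for name in ["a.mp4", "b.mp4", "c.mp4"]:
    q.submit(name)
print(started)          # ['a.mp4', 'b.mp4'] -- 'c.mp4' waits
q.job_finished("a.mp4")
print(started)          # ['a.mp4', 'b.mp4', 'c.mp4']
```

A quota increase request through Service Quotas is still worth trying, but a queue like this also guards against transient throttling.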
AWS Rekognition: Search Faces in a Collection issue
Hi, We have been using AWS Rekognition to search for faces in our collection. However, today (27/01/2022) we observed that faces which earlier matched with a certain similarity no longer match. Has there been a recent update to the face matching model? Also, when I take the matches and similarity scores I get from the search faces function and run the same faces through compare faces, I get a different similarity. Looking forward to understanding what the issue is here. Regards, Swarup
How do I interpret the results from Custom Labels?
I am using Custom Labels to see if a model can recognize which quail laid a given egg. When I examine the project, I see that the correct option did not have the highest confidence, but the model was still listed as getting the correct result for the egg. Why is this happening?
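One likely explanation: in Custom Labels evaluation each label is judged against its own assumed threshold (computed during training), not against the other labels, so an image can count as correct for a label that is not the single highest-confidence prediction. A sketch contrasting the two readings of a `DetectCustomLabels`-style response; the label names, confidences, and thresholds below are hypothetical:

```python
def argmax_label(labels):
    """The single highest-confidence prediction."""
    return max(labels, key=lambda l: l["Confidence"])["Name"]

def labels_above_threshold(labels, thresholds):
    """All labels that clear their own per-label assumed threshold."""
    return [l["Name"] for l in labels if l["Confidence"] >= thresholds[l["Name"]]]

response = [
    {"Name": "quail-a", "Confidence": 62.0},
    {"Name": "quail-b", "Confidence": 71.0},
]
# Per-label assumed thresholds from training (hypothetical values).
thresholds = {"quail-a": 55.0, "quail-b": 75.0}

print(argmax_label(response))                        # quail-b
print(labels_above_threshold(response, thresholds))  # ['quail-a']
```

Here `quail-b` has the higher raw confidence, yet only `quail-a` clears its threshold, so the evaluation can mark `quail-a` as the (correct) prediction.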
[On-Premise] Face comparison on premise?
Hi, I'm currently learning about AWS Rekognition. I am quite interested in the AWS Rekognition SDK, especially Face Comparison. Is there a flow diagram for this service? From what I understand, the face comparison is done in the cloud, because I need to upload the image to run the comparison. Is the image that I upload saved in temporary storage or not? And is it possible to do the comparison without uploading the image, so the whole process runs on-premise? Thank you
Kinesis Video Stream with Content moderation (AWS Rekognition)
I would like to perform content moderation with AWS Rekognition on a Kinesis Video Stream in real time. I see there is documentation for using AWS Rekognition to detect faces on a live Kinesis Video Stream, but is it possible to call the content moderation API to perform the same kind of task? If yes, where will the results be stored? Also, can we extract just the part of the video where the content was detected and store it in an S3 bucket?
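To my knowledge, Rekognition stream processors only support face search on Kinesis Video Streams; content moderation runs on stored video (`StartContentModeration` on an S3 object, results retrieved with `GetContentModeration`). To extract just the flagged parts afterwards (for example with ffmpeg before re-uploading to S3), the label timestamps can be merged into clip ranges. A sketch, assuming millisecond timestamps as in the `GetContentModeration` response; the gap and padding values are arbitrary choices:

```python
def flagged_ranges(moderation_labels, gap_ms=2000, pad_ms=1000):
    """Merge label timestamps closer than gap_ms into padded (start, end) ranges."""
    times = sorted(l["Timestamp"] for l in moderation_labels)
    ranges = []
    for t in times:
        if ranges and t - ranges[-1][1] <= gap_ms:
            ranges[-1][1] = t          # extend the current clip
        else:
            ranges.append([t, t])      # start a new clip
    return [(max(0, s - pad_ms), e + pad_ms) for s, e in ranges]

# Hypothetical moderation hits at 1.0s, 1.5s, and 9.0s.
sample = [{"Timestamp": 1000}, {"Timestamp": 1500}, {"Timestamp": 9000}]
print(flagged_ranges(sample))  # [(0, 2500), (8000, 10000)]
```

Each `(start, end)` pair could then drive a cut command over the source video.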
Locate the JSON file containing the changes made to a newly created Rekognition dataset
After a dataset is created and shown in the Rekognition interface, we make a few changes to the labels' height and width. Before we start training, we need to download the manifest/JSON file containing the changes we made in the interface. The issue is that we cannot locate where on S3 the manifest file with these changes is saved. The initial manifest used to generate the dataset when creating the project does not contain any of the updates to the records we changed.
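As far as I can tell, console edits are stored in the project's dataset rather than written back to the original S3 manifest. The up-to-date entries can be exported with `ListDatasetEntries` (each entry is one manifest JSON line) and written out as your own manifest. A sketch; the boto3 call is shown only in comments (the dataset ARN is yours to fill in), and the sample entries below are hypothetical:

```python
import json

def entries_to_manifest(dataset_entries):
    """Join ListDatasetEntries strings into a JSON Lines manifest."""
    for line in dataset_entries:
        json.loads(line)  # validate each line parses before writing
    return "\n".join(dataset_entries)

# In practice the entries would come from the API, roughly:
#   rek = boto3.client("rekognition")
#   entries = []
#   kwargs = {"DatasetArn": dataset_arn}
#   while True:
#       page = rek.list_dataset_entries(**kwargs)
#       entries += page["DatasetEntries"]
#       if "NextToken" not in page:
#           break
#       kwargs["NextToken"] = page["NextToken"]
entries = [
    '{"source-ref": "s3://bucket/img1.jpg", "label": 1}',
    '{"source-ref": "s3://bucket/img2.jpg", "label": 0}',
]
manifest = entries_to_manifest(entries)
print(manifest.count("\n") + 1)  # 2
```

The resulting string can be saved locally or uploaded back to S3 as the manifest reflecting the interface edits.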
AI used for recognizing emojis on wallpaper
I want to build an app that will recognize which emojis have been used in a wallpaper. So, for instance, the app will receive an input image like [this](https://i.stack.imgur.com/5wRQm.png) and on output should return an array of names of the recognized emojis:

```
[
  "Smiling Face with Sunglasses",
  "Grinning Face with Smiling Eyes",
  "Kissing Face with Closed Eyes"
]
```

Of course, the names of these emojis will come from the filenames of the training images. For example, [this](https://i.stack.imgur.com/BaEGG.png) file will be called `Grinning_Face_with_Smiling_Eyes.jpg` I would like to use `AWS Rekognition Label`, but it requires a minimum of 10 images of each emoji for training. As you know, I can only provide one image of each emoji, because there is no more to give; they are 2D ;) Now my question is: What should I do? How can I work around these requirements? Which service should I choose? On Stack Overflow I have read that I should, for instance, rotate each image 12 times by 30°, or crop the emoji in half. I have done that, but precision is very low (around `0.3`). PS. In the real business case, instead of emojis there are book covers which the AI has to recognize, and there is likewise only one 2D photo per book cover.
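One common workaround for the 10-image minimum is generating synthetic variants of the single source image with mild transforms (small rotations and scales rather than 30° steps, which distort a flat 2D asset heavily and may explain the low precision). A sketch that only plans the variants from one source file; the actual image generation would use a library such as Pillow, and the filename scheme is a made-up convention:

```python
import itertools
import os

def augmentation_plan(source_path, minimum=10):
    """List (output_name, angle_deg, scale) variants covering the minimum."""
    label = os.path.splitext(os.path.basename(source_path))[0]
    angles = [-10, -5, 0, 5, 10]      # mild rotations only
    scales = [0.9, 1.0, 1.1]          # mild zoom in/out
    plan = []
    for angle, scale in itertools.product(angles, scales):
        plan.append((f"{label}_r{angle}_s{scale}.jpg", angle, scale))
        if len(plan) >= minimum:
            break
    return plan

plan = augmentation_plan("Grinning_Face_with_Smiling_Eyes.jpg")
print(len(plan))   # 10
print(plan[0][0])  # Grinning_Face_with_Smiling_Eyes_r-10_s0.9.jpg
```

For exact 2D assets like emojis (and flat book-cover photos), classic template matching may also be worth evaluating before a trained model, since there is no real intra-class variation to learn.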