
Questions tagged with Amazon Rekognition



Rekognition: error when trying to detect faces with an S3 object name containing a colon (:)

Rekognition generally works fine, but when I use a filename containing a colon (:) for the S3Object, I get an error. This is very problematic for me because all my files already have colons in their names and I can't rename them.

If I use this, it works fine:

```
{
  "Image": {
    "S3Object": {
      "Bucket": "console-sample-images",
      "Name": "skateboard.jpg"
    }
  }
}
```

But if I use a name containing a colon, like this, it gives me an error:

```
{
  "Image": {
    "S3Object": {
      "Bucket": "console-sample-images",
      "Name": "skate:board.jpg"
    }
  }
}
```

Error output:

```
{"name":"Error","content":"{\"__type\":\"InvalidS3ObjectException\",\"Code\":\"InvalidS3ObjectException\",\"Message\":\"Unable to get object metadata from S3. Check object key, region and/or access permissions.\"}","message":"faultCode:Server.Error.Request faultString:'null' faultDetail:'null'","rootCause":{"errorID":2032,"target":{"bytesLoaded":174,"dataFormat":"text","bytesTotal":174,"data":"{\"__type\":\"InvalidS3ObjectException\",\"Code\":\"InvalidS3ObjectException\",\"Message\":\"Unable to get object metadata from S3. Check object key, region and/or access permissions.\"}"},"text":"Error #2032: Stream Error. URL: https://rekognition.eu-west-1.amazonaws.com","currentTarget":{"bytesLoaded":174,"dataFormat":"text","bytesTotal":174,"data":"{\"__type\":\"InvalidS3ObjectException\",\"Code\":\"InvalidS3ObjectException\",\"Message\":\"Unable to get object metadata from S3. Check object key, region and/or access permissions.\"}"},"type":"ioError","bubbles":false,"eventPhase":2,"cancelable":false},"errorID":0,"faultCode":"Server.Error.Request","faultDetail":null,"faultString":""}
```

Is there a workaround for this problem (for example, encoding the ':' a certain way)? Thank you for your help.
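For illustration, here is a minimal sketch of how this request could be reproduced and debugged from Python with boto3, using the bucket and key from the question (region and the debugging steps are assumptions, not a confirmed fix). The idea is to first confirm the object is reachable with the exact, unencoded key, and then pass that same raw key to DetectFaces, since S3Object.Name is an object key rather than a URL and should not be percent-encoded.

```python
import boto3

# Bucket and key taken from the question; region assumed from the error URL.
BUCKET = "console-sample-images"
KEY = "skate:board.jpg"  # raw key, no URL encoding

s3 = boto3.client("s3", region_name="eu-west-1")
rekognition = boto3.client("rekognition", region_name="eu-west-1")

# 1) Confirm the object is reachable with the exact key Rekognition will use.
s3.head_object(Bucket=BUCKET, Key=KEY)

# 2) Call DetectFaces with the same raw key.
response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": BUCKET, "Name": KEY}},
    Attributes=["DEFAULT"],
)
print(len(response["FaceDetails"]), "face(s) detected")
```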
1 answer · 0 votes · 13 views · asked 4 days ago

Rekognition search faces API endpoint

Hi everyone! I'm currently detecting all the faces in a collection and then generating sub-galleries for each subject, with all of that subject's photos, using the Ruby SDK ('~> 1.65'). To do this, I index the faces of all photos into a collection, list all the faces (https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/Rekognition/Client.html#list_faces-instance_method), then take each recognized face_id and search for the faces related to it (https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/Rekognition/Client.html#search_faces-instance_method), and finally delete the face_id used for the call along with all the returned ones, so I can tell when one detected subject ends and the next begins.

My issue is that the SearchFaces API returns different results depending on which face_id you pass. For example, if 10 face_ids (1, 2, 3, ..., 10) belong to the same person, a SearchFaces call with face_id = 1 should return face_ids (2, 3, 4, ..., 10); but if you repeat this with the other face_ids, that is not always the case: in some scenarios the call with face_id = 3 returns only a subset, such as (4, 5, 6).

Is there another way to achieve this that avoids this kind of "error"? If not, it is a real concern for us, because the result depends on the order in which we call SearchFaces with the different face_ids, and sometimes it looks like more than one subject was detected with almost the same photos when in reality it is the same person. Thanks in advance!
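For reference, a minimal sketch of the clustering loop described above, written with boto3 rather than the Ruby SDK the question uses; the collection name, the threshold, and the grouping scaffolding are assumptions for illustration. It lists every face, then repeatedly picks an unassigned face_id, searches for its matches, and groups them into one subject.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")
COLLECTION_ID = "my-collection"  # hypothetical collection name

# Collect every FaceId in the collection (handling pagination).
face_ids = []
token = None
while True:
    kwargs = {"CollectionId": COLLECTION_ID, "MaxResults": 1000}
    if token:
        kwargs["NextToken"] = token
    page = rekognition.list_faces(**kwargs)
    face_ids.extend(f["FaceId"] for f in page["Faces"])
    token = page.get("NextToken")
    if not token:
        break

# Greedy grouping: each unassigned face seeds a "subject" made of its matches.
unassigned = set(face_ids)
subjects = []
while unassigned:
    seed = unassigned.pop()
    matches = rekognition.search_faces(
        CollectionId=COLLECTION_ID,
        FaceId=seed,
        FaceMatchThreshold=90,
        MaxFaces=4096,
    )
    group = {seed} | {m["Face"]["FaceId"] for m in matches["FaceMatches"]}
    unassigned -= group
    subjects.append(group)

print(f"{len(subjects)} subject(s) found")
```

As the question points out, this greedy grouping is order-dependent; a more robust variant would merge groups that share any face_id after the loop.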
3 answers · 0 votes · 5 views · asked 18 days ago

FaceIds when indexing multiple face images of the same person

Hi there, I don't understand how Rekognition deals with multiple images of the same face, and I need help!

Scenario:
- I have 3 years' worth of school classroom photos and need to create a yearbook for 5 of the students.
- I need to identify which photos contain only those 5 students, and ignore all others.

To improve accuracy, I was reading the [documentation](https://docs.aws.amazon.com/rekognition/latest/dg/recommendations-facial-input-images.html), which recommends using multiple images of the same face with IndexFaces:

> When creating a collection using IndexFaces, use multiple face images of an individual with different pitches and yaws (within the recommended range of angles). We recommend that at least five images of the person are indexed—straight on, face turned left with a yaw of 45 degrees or less, face turned right with a yaw of 45 degrees or less, face tilted down with a pitch of 30 degrees or less, and face tilted up with a pitch of 45 degrees or less

I think I get this by default, since I will be indexing all the images, which contain many photos of the students from all angles. Is that correct? Or do I need to do something specific to tie the photos of the same student to one FaceId?

I noticed that if I index 5 images of the same person with IndexFaces, I get 5 different FaceIds back. I thought (perhaps incorrectly) that Rekognition would recognise that it is the same face, simply update its internal data about that FaceId, and return the same FaceId for each of the 4 subsequent images of the same face.

So, can anyone help me with these questions?

1. Should indexing multiple images of the same person result in multiple FaceIds?
2. Internally, does Rekognition group all those FaceIds together, so that it does not matter which FaceId I search on later and I still benefit from the multiple images I uploaded for that face?
3. Is there any difference in performance between searching with `SearchFaces` and `SearchFacesByImage`?
4. Is my approach of indexing all the images I have and then searching on a single known FaceId of the student the way to go? (One way of tying faces to a student is sketched below.)

Sorry for the multiple questions :) Hope you can help, hive mind!
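A minimal sketch, with boto3 and hypothetical collection, bucket, and key names, of one common way to tie indexed faces back to a student: pass ExternalImageId when calling IndexFaces so every returned FaceId carries a label you control, then group search results by that label. This is an illustration only, not a statement about how Rekognition links FaceIds internally.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
COLLECTION_ID = "yearbook-faces"   # hypothetical collection
BUCKET = "classroom-photos"        # hypothetical bucket

# Index several photos of the same student, labelling each face with the
# student's identifier via ExternalImageId. Each call returns new FaceIds,
# but they all share the same ExternalImageId.
for key in ["alice/front.jpg", "alice/left.jpg", "alice/right.jpg"]:
    resp = rekognition.index_faces(
        CollectionId=COLLECTION_ID,
        Image={"S3Object": {"Bucket": BUCKET, "Name": key}},
        ExternalImageId="student-alice",
        MaxFaces=1,
        QualityFilter="AUTO",
    )
    for record in resp["FaceRecords"]:
        print(key, "->", record["Face"]["FaceId"])

# Later, search with any one of those FaceIds; matches can be grouped by
# ExternalImageId instead of by individual FaceId.
matches = rekognition.search_faces(
    CollectionId=COLLECTION_ID,
    FaceId="00000000-0000-0000-0000-000000000000",  # placeholder FaceId
    FaceMatchThreshold=90,
)
students = {m["Face"]["ExternalImageId"] for m in matches["FaceMatches"]}
print(students)
```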
1 answer · 0 votes · 5 views · asked 2 months ago

Old bug: CompareFaces throws an error if the target JPG has no face; when will it be fixed?

Hello. There is a **very old bug** (first reported on this forum in 2017 and again in 2018) that throws an **Invalid Parameter Exception** when **CompareFaces** is called and the target JPEG to search does not contain any face. In other words, the source image (S3 object) does have a face to look for, but the target JPEG has no face on it, or no recognizable face (for example, the person is facing backwards), or the target shows a bottle or some large object that obscures the face. CompareFaces simply throws an Invalid Parameter Exception, which is 100% wrong. This is akin to throwing an invalid-parameter error when End Of File is reached on a read: that is not an error but an expected condition of reading a file.

This bug makes Rekognition **too slow to use and unproductive**, because the error is bogus. We are forced to make two API calls (costing both time and money): first detect whether there is any face in the target JPG, and only then call CompareFaces if a face was found. Why? The CompareFaces call should simply return a 0.0-confidence result with API success when there is no face in the target JPEG; it should not return spurious errors, because no-face-found is a perfectly acceptable result.

When will this bug be fixed? I tried Rekognition today (9 Nov 2021) and I am still getting the Invalid Parameter Exception. Thank you.
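A minimal sketch of the two-call workaround the question describes, using boto3 (bucket and key names are hypothetical): call DetectFaces on the target image first, and only call CompareFaces when at least one face was found, treating the no-face case as a 0.0-similarity result.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
BUCKET = "my-images"  # hypothetical bucket

def compare_or_zero(source_key: str, target_key: str) -> float:
    """Return the best similarity, or 0.0 if the target image has no face."""
    target = {"S3Object": {"Bucket": BUCKET, "Name": target_key}}

    # Extra call (time + cost) purely to avoid the InvalidParameterException.
    detected = rekognition.detect_faces(Image=target, Attributes=["DEFAULT"])
    if not detected["FaceDetails"]:
        return 0.0

    result = rekognition.compare_faces(
        SourceImage={"S3Object": {"Bucket": BUCKET, "Name": source_key}},
        TargetImage=target,
        SimilarityThreshold=0,
    )
    return max((m["Similarity"] for m in result["FaceMatches"]), default=0.0)

print(compare_or_zero("person.jpg", "maybe-no-face.jpg"))
```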
2 answers · 0 votes · 3 views · asked 6 months ago

JSON format error in Manifest file

Hi, I'm trying to set up a training dataset for object localization using the manifest method. I'm getting the following error:

**401: The S3 input manifest file s3://custom-labels-console-us-east-1-833b01ea2f/datasets/TestTrainingSet/manifests/output/output.manifest does not have the correct JSON format.**

I couldn't figure out what's wrong with the JSON, and I was hoping somebody could help me resolve the issue. The following is the JSON for one of the image files:

```json
{"source-ref": "s3://custom-labels-console-us-east-1-833b01ea2f/DentalCEPH/TrainingData/001.png", "bounding-box": {"image_size": [{"width": 1935, "height": 2400, "depth": 3}], "annotations": [{"class_id": 0, "top": 929, "left": 768, "width": 128, "height": 128}, {"class_id": 1, "top": 968, "left": 1405, "width": 128, "height": 128}]}, "bounding-box-metadata": {"objects": [{"confidence": 1}, {"confidence": 1}], "class-map": {"0": "sella turcica", "1": "nasion", "2": "orbitale", "3": "porion", "4": "subspinale", "5": "supramentale", "6": "pogonion", "7": "menton", "8": "gnathion", "9": "gonion", "10": "lower incisal incision", "11": "upper incisal incision", "12": "upper lip", "13": "lower lip", "14": "subnasale", "15": "soft tissue pogonion", "16": "posterior nasal spine", "17": "anterior nasal spine", "18": "articulate"}, "type": "groundtruth/object-detection", "human-annotated": "yes", "creation-date": "2020-11-18T02:53:27", "job-name": "custom labels"}}
```

Thank you,
V.Vamsi Krishna

Edited by: vkrishna on Mar 7, 2021 3:53 AM
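Since the manifest is a JSON Lines file (one self-contained JSON object per line), a quick way to locate a formatting problem is to parse it line by line. A minimal sketch, assuming the file has been downloaded locally as output.manifest; this only checks JSON syntax, not the Custom Labels schema.

```python
import json

# Validate each line of a downloaded manifest as standalone JSON.
with open("output.manifest", encoding="utf-8") as fh:
    for lineno, line in enumerate(fh, start=1):
        line = line.strip()
        if not line:
            print(f"line {lineno}: empty line")
            continue
        try:
            json.loads(line)
        except json.JSONDecodeError as err:
            print(f"line {lineno}: invalid JSON -> {err}")
```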
4 answers · 0 votes · 6 views · asked a year ago

Rekognition Jobs FaceDetection and FaceSearch return different findings...

Hi guys, I was playing with jobs, FaceSearch, and FaceDetection when I realized that even though they run against the same video in S3, they return different values for the same finding. Both jobs run against the same bucket object, yet I get differences in the BoundingBox and Landmarks values, which I think should be identical. I also remember that when I tested this 4 months ago, I got the expected (matching) results.

Example: a video with just one person.

- aws rekognition start-face-detection --video "S3Object={Bucket=facejobs,Name=head-pose-face-detection-female.mp4}"
- aws rekognition start-face-search --video "S3Object={Bucket=facejobs,Name=head-pose-face-detection-female.mp4}" --collection-id test-collection

Getting the results:

- aws rekognition get-face-detection --job-id "3661ab79c711bbd530ca5a910003d..."
- aws rekognition get-face-search --job-id "cc3555290d2093bec860519bf53403e8..."

Comparing:

FaceDetection Results
-------------------------

```json
{ "JobStatus": "SUCCEEDED", "VideoMetadata": { "Codec": "h264", "DurationMillis": 134584, "Format": "QuickTime / MOV", "FrameRate": 12.0, "FrameHeight": 432, "FrameWidth": 768 }, "Faces": [ { "Timestamp": 0, "Face": { "BoundingBox": { "Width": 0.13135695457458496, "Height": 0.36436259746551514, "Left": 0.4154500961303711, "Top": 0.22901538014411926 }, "Landmarks": [ { "Type": "eyeLeft", "X": 0.4518287479877472, "Y": 0.3687707185745239 }, { "Type": "eyeRight", "X": 0.5152483582496643, "Y": 0.3756844997406006 }, { "Type": "mouthLeft", "X": 0.451990008354187, "Y": 0.5045619010925293 }, { "Type": "mouthRight", "X": 0.5046293139457703, "Y": 0.5103421807289124 }, { "Type": "nose", "X": 0.47848179936408997, "Y": 0.4353737533092499 } ], "Pose": { "Roll": 2.838758707046509, "Yaw": -1.3927381038665771, "Pitch": 10.166311264038086 }, "Quality": { "Brightness": 79.76757049560547, "Sharpness": 26.1773681640625 }, "Confidence": 99.99970245361328 } }, { "Timestamp": 499,
```

FaceSearch
--------------------

```json
{ "JobStatus": "SUCCEEDED", "VideoMetadata": { "Codec": "h264", "DurationMillis": 134584, "Format": "QuickTime / MOV", "FrameRate": 12.0, "FrameHeight": 432, "FrameWidth": 768 }, "Persons": [ { "Timestamp": 0, "Person": { "Index": 0, "Face": { "BoundingBox": { "Width": 0.13410408794879913, "Height": 0.365193247795105, "Left": 0.4145432412624359, "Top": 0.2288028597831726 }, "Landmarks": [ { "Type": "eyeLeft", "X": 0.4514598548412323, "Y": 0.3685579001903534 }, { "Type": "eyeRight", "X": 0.5149661898612976, "Y": 0.37557920813560486 }, { "Type": "mouthLeft", "X": 0.4519285261631012, "Y": 0.5038205981254578 }, { "Type": "mouthRight", "X": 0.5038713216781616, "Y": 0.5095799565315247 }, { "Type": "nose", "X": 0.47897493839263916, "Y": 0.43512672185897827 } ], "Pose": { "Roll": 1.5608868598937988, "Yaw": -18.46771240234375, "Pitch": 8.22950553894043 }, "Quality": { "Brightness": 81.00172424316406, "Sharpness": 53.330047607421875 }, "Confidence": 99.99984741210938 } }, "FaceMatches": [] }, { "Timestamp": 499,
```

So basically this makes it impossible to combine both calls to get insights. Think of the situation where you want to detect emotions and also search for a face. What is weirder (although I cannot confirm it, as it was 3 months ago) is that when I tested this back then, the results were the same. I even saved the JSON from that time, but I cannot be certain how I obtained it, as I don't remember my implementation or whether I manipulated it :( But I got this:

FaceDetection
----------------

```json
{ "JobStatus": "SUCCEEDED", "VideoMetadata": { "Codec": "h264", "DurationMillis": 1500, "Format": "QuickTime / MOV", "FrameRate": 30, "FrameHeight": 720, "FrameWidth": 1280 }, "Faces": [{ "Timestamp": 499, "Face": { "BoundingBox": { "Width": 0.16875895857810974, "Height": 0.4913144111633301, "Left": 0.4124282896518707, "Top": 0.2672847807407379
```

FaceSearch
---------------

```json
{ "JobStatus": "SUCCEEDED", "VideoMetadata": { "Codec": "h264", "DurationMillis": 1500, "Format": "QuickTime / MOV", "FrameRate": 30, "FrameHeight": 720, "FrameWidth": 1280 }, "Persons": [{ "Timestamp": 499, "Person": { "Index": 0, "Face": { "BoundingBox": { "Width": 0.16875895857810974, "Height": 0.4913144111633301, "Left": 0.4124282896518707, "Top": 0.2672847807407379 }, "Landmarks": [{
```

As you can see, the BoundingBox values were equal across both jobs back then. Thanks for any insight you can provide. Again, the use case is to use Emotions together with the search API, since the latter does not return Emotions (which is also an odd choice).
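For reference, a minimal boto3 sketch of the comparison workflow described above; the bucket, video key, and collection name come from the question's CLI commands, while the region, polling loop, and field diff are my own scaffolding. It starts both jobs on the same video, waits for them to finish, and compares the first BoundingBox from each.

```python
import time
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")  # region assumed
video = {"S3Object": {"Bucket": "facejobs", "Name": "head-pose-face-detection-female.mp4"}}

det_job = rekognition.start_face_detection(Video=video)["JobId"]
search_job = rekognition.start_face_search(Video=video, CollectionId="test-collection")["JobId"]

def wait(getter, job_id):
    # Poll until the asynchronous job leaves IN_PROGRESS.
    while True:
        resp = getter(JobId=job_id)
        if resp["JobStatus"] != "IN_PROGRESS":
            return resp
        time.sleep(5)

detection = wait(rekognition.get_face_detection, det_job)
search = wait(rekognition.get_face_search, search_job)

# Compare the first detected face from each job at the same timestamp.
box_detection = detection["Faces"][0]["Face"]["BoundingBox"]
box_search = search["Persons"][0]["Person"]["Face"]["BoundingBox"]
for key in ("Width", "Height", "Left", "Top"):
    print(key, box_detection[key], box_search[key])
```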
2 answers · 0 votes · 2 views · asked 2 years ago

Amazon.Rekognition.AmazonRekognitionException: Missing Authentication Token

Hello guys, I've integrated my app with Amazon Rekognition last year but now I'm getting this exception. I've updated my project with the latest version of the AWSSDK.Rekognition nuget package but it didn't solve it. Here is the code: ``` public async Task<CompareFacesResponse> CompareAsync(byte[] photo1, byte[] photo2) { var client = new AmazonRekognitionClient(awsAccessKeyId, awsSecretAccessKey, Amazon.RegionEndpoint.EUWest1); return await client.CompareFacesAsync(new CompareFacesRequest { SimilarityThreshold = 80.0f, SourceImage = new Image { Bytes = new MemoryStream(photo1) }, TargetImage = new Image { Bytes = new MemoryStream(photo2) } }); } ``` Then I got this exception: ``` Amazon.Rekognition.AmazonRekognitionException: Missing Authentication Token ---> Amazon.Runtime.Internal.HttpErrorResponseException: Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown. at Amazon.Runtime.HttpWebRequestMessage.GetResponseAsync(CancellationToken cancellationToken) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\_mobile\HttpRequestMessageFactory.cs:line 539 at Amazon.Runtime.Internal.HttpHandler`1.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\HttpHandler.cs:line 175 at Amazon.Runtime.Internal.Unmarshaller.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext) --- End of inner exception stack trace --- at Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleException(IExecutionContext executionContext, HttpErrorResponseException exception) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\ErrorHandler\HttpErrorResponseExceptionHandler.cs:line 60 at Amazon.Runtime.Internal.ErrorHandler.ProcessException(IExecutionContext executionContext, Exception exception) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\ErrorHandler\ErrorHandler.cs:line 212 at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\ErrorHandler\ErrorHandler.cs:line 104 at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.EndpointDiscoveryHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.EndpointDiscoveryHandler.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\Handlers\EndpointDiscoveryHandler.cs:line 79 at Amazon.Runtime.Internal.CredentialsRetriever.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\Handlers\CredentialsRetriever.cs:line 98 at Amazon.Runtime.Internal.RetryHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.RetryHandler.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\RetryHandler\RetryHandler.cs:line 137 at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext) at 
Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.ErrorCallbackHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.MetricsHandler.InvokeAsync[T](IExecutionContext executionContext) at Test.Main.CompareAsync(Byte[] photo1, Byte[] photo2)... ``` **UPDATE** After enabling the logs I could find more details below... which raises a question: Is the latest nuget package compatible with .NET Core? Because I'm running this code in a Mac and I plan to deploy it in a docker container. ``` UserCrypto 1|2019-03-18T16:33:47.043Z|INFO|UserCrypto is not supported. This may be due to use of a non-Windows operating system or Windows Nano Server, or the current user account may not have its profile loaded. Unable to load shared library 'Crypt32.dll' or one of its dependencies. In order to help diagnose loading problems, consider setting the DYLD_PRINT_LIBRARIES environment variable: dlopen(libCrypt32.dll, 1): image not found AmazonRekognitionClient 2|2019-03-18T16:33:47.820Z|ERROR|An exception of type HttpErrorResponseException was handled in ErrorHandler. --> Amazon.Runtime.Internal.HttpErrorResponseException: Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown. at Amazon.Runtime.HttpWebRequestMessage.GetResponseAsync(CancellationToken cancellationToken) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\_mobile\HttpRequestMessageFactory.cs:line 539 at Amazon.Runtime.Internal.HttpHandler`1.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\HttpHandler.cs:line 175 at Amazon.Runtime.Internal.Unmarshaller.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext) AmazonRekognitionClient 3|2019-03-18T16:33:47.875Z|ERROR|AmazonRekognitionException making request CompareFacesRequest to https://rekognition.eu-west-1.amazonaws.com/. Attempt 1. --> Amazon.Rekognition.AmazonRekognitionException: Missing Authentication Token ---> Amazon.Runtime.Internal.HttpErrorResponseException: Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown. 
at Amazon.Runtime.HttpWebRequestMessage.GetResponseAsync(CancellationToken cancellationToken) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\_mobile\HttpRequestMessageFactory.cs:line 539 at Amazon.Runtime.Internal.HttpHandler`1.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\HttpHandler\HttpHandler.cs:line 175 at Amazon.Runtime.Internal.Unmarshaller.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext) --- End of inner exception stack trace --- at Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleException(IExecutionContext executionContext, HttpErrorResponseException exception) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\ErrorHandler\HttpErrorResponseExceptionHandler.cs:line 60 at Amazon.Runtime.Internal.ErrorHandler.ProcessException(IExecutionContext executionContext, Exception exception) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\ErrorHandler\ErrorHandler.cs:line 212 at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\ErrorHandler\ErrorHandler.cs:line 104 at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.EndpointDiscoveryHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.EndpointDiscoveryHandler.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\Handlers\EndpointDiscoveryHandler.cs:line 79 at Amazon.Runtime.Internal.CredentialsRetriever.InvokeAsync[T](IExecutionContext executionContext) in D:\JenkinsWorkspaces\trebuchet-stage-release\AWSDotNetPublic\sdk\src\Core\Amazon.Runtime\Pipeline\Handlers\CredentialsRetriever.cs:line 98 at Amazon.Runtime.Internal.RetryHandler.InvokeAsync[T](IExecutionContext executionContext) ``` Edited by: cleytonT on Mar 18, 2019 8:35 AM
1 answer · 0 votes · 7 views · asked 3 years ago