Rekognition Jobs FaceDetection and FaceSearch return different findings...


Hi guys,

I was playing with the FaceSearch and FaceDetection jobs when I realized that, even though they run against the same video in S3, they return different values for the same finding. For instance, both jobs run against the same bucket object, yet I get differences in BoundingBox and Landmarks, which I think should be identical. I also remember that when I tested this 4 months ago I got the expected (matching) results.

Example: Video with just 1 Person.

  • aws rekognition start-face-detection --video "S3Object={Bucket=facejobs,Name=head-pose-face-detection-female.mp4}"
  • aws rekognition start-face-search --video "S3Object={Bucket=facejobs,Name=head-pose-face-detection-female.mp4}" --collection-id test-collection

Getting the results:
aws rekognition get-face-detection --job-id "3661ab79c711bbd530ca5a910003d..."
aws rekognition get-face-search --job-id "cc3555290d2093bec860519bf53403e8..."

Comparing:

FaceDetection Results

{  
    "JobStatus": "SUCCEEDED",  
    "VideoMetadata": {  
        "Codec": "h264",  
        "DurationMillis": 134584,  
        "Format": "QuickTime / MOV",  
        "FrameRate": 12.0,  
        "FrameHeight": 432,  
        "FrameWidth": 768  
    },  
    "Faces": \[  
        {  
            "Timestamp": 0,  
            "Face": {  
                "BoundingBox": {  
                    "Width": 0.13135695457458496,  
                    "Height": 0.36436259746551514,  
                    "Left": 0.4154500961303711,  
                    "Top": 0.22901538014411926  
                },  
                "Landmarks": \[  
                    {  
                        "Type": "eyeLeft",  
                        "X": 0.4518287479877472,  
                        "Y": 0.3687707185745239  
                    },  
                    {  
                        "Type": "eyeRight",  
                        "X": 0.5152483582496643,  
                        "Y": 0.3756844997406006  
                    },  
                    {  
                        "Type": "mouthLeft",  
                        "X": 0.451990008354187,  
                        "Y": 0.5045619010925293  
                    },  
                    {  
                        "Type": "mouthRight",  
                        "X": 0.5046293139457703,  
                        "Y": 0.5103421807289124  
                    },  
                    {  
                        "Type": "nose",  
                        "X": 0.47848179936408997,  
                        "Y": 0.4353737533092499  
                    }  
                ],  
                "Pose": {  
                    "Roll": 2.838758707046509,  
                    "Yaw": -1.3927381038665771,  
                    "Pitch": 10.166311264038086  
                },  
                "Quality": {  
                    "Brightness": 79.76757049560547,  
                    "Sharpness": 26.1773681640625  
                },  
                "Confidence": 99.99970245361328  
            }  
        },  
        {  
            "Timestamp": 499,  

FaceSearch Results

{  
    "JobStatus": "SUCCEEDED",  
    "VideoMetadata": {  
        "Codec": "h264",  
        "DurationMillis": 134584,  
        "Format": "QuickTime / MOV",  
        "FrameRate": 12.0,  
        "FrameHeight": 432,  
        "FrameWidth": 768  
    },  
    "Persons": \[  
        {  
            "Timestamp": 0,  
            "Person": {  
                "Index": 0,  
                "Face": {  
                    "BoundingBox": {  
                        "Width": 0.13410408794879913,  
                        "Height": 0.365193247795105,  
                        "Left": 0.4145432412624359,  
                        "Top": 0.2288028597831726  
                    },  
                    "Landmarks": \[  
                        {  
                            "Type": "eyeLeft",  
                            "X": 0.4514598548412323,  
                            "Y": 0.3685579001903534  
                        },  
                        {  
                            "Type": "eyeRight",  
                            "X": 0.5149661898612976,  
                            "Y": 0.37557920813560486  
                        },  
                        {  
                            "Type": "mouthLeft",  
                            "X": 0.4519285261631012,  
                            "Y": 0.5038205981254578  
                        },  
                        {  
                            "Type": "mouthRight",  
                            "X": 0.5038713216781616,  
                            "Y": 0.5095799565315247  
                        },  
                        {  
                            "Type": "nose",  
                            "X": 0.47897493839263916,  
                            "Y": 0.43512672185897827  
                        }  
                    ],  
                    "Pose": {  
                        "Roll": 1.5608868598937988,  
                        "Yaw": -18.46771240234375,  
                        "Pitch": 8.22950553894043  
                    },  
                    "Quality": {  
                        "Brightness": 81.00172424316406,  
                        "Sharpness": 53.330047607421875  
                    },  
                    "Confidence": 99.99984741210938  
                }  
            },  
            "FaceMatches": \[]  
        },  
        {  
            "Timestamp": 499,  

So basically this makes it impossible to combine both calls to get insights. Think of the situation where you want to detect emotions and also search for a face. And what is weirder, which I cannot confirm since it was 3 months ago, is that when I tested this back then the results were the same. I even saved the JSON from that time, but I can't be certain how I obtained it, as I don't remember my implementation or whether I manipulated it :(
But I got this:

FaceDetection

{  
    "JobStatus": "SUCCEEDED",  
    "VideoMetadata": {  
        "Codec": "h264",  
        "DurationMillis": 1500,  
        "Format": "QuickTime / MOV",  
        "FrameRate": 30,  
        "FrameHeight": 720,  
        "FrameWidth": 1280  
    },  
    "Faces": \[{  
            "Timestamp": 499,  
            "Face": {  
                "BoundingBox": {  
                    "Width": 0.16875895857810974,  
                    "Height": 0.4913144111633301,  
                    "Left": 0.4124282896518707,  
                    "Top": 0.2672847807407379  

FaceSearch

{  
    "JobStatus": "SUCCEEDED",  
    "VideoMetadata": {  
        "Codec": "h264",  
        "DurationMillis": 1500,  
        "Format": "QuickTime / MOV",  
        "FrameRate": 30,  
        "FrameHeight": 720,  
        "FrameWidth": 1280  
    },  
    "Persons": \[{  
            "Timestamp": 499,  
            "Person": {  
                "Index": 0,  
                "Face": {  
                    "BoundingBox": {  
                        "Width": 0.16875895857810974,  
                        "Height": 0.4913144111633301,  
                        "Left": 0.4124282896518707,  
                        "Top": 0.2672847807407379  
                    },  
                    "Landmarks": \[{  

As you can see, the BoundingBox values are equal across both jobs.

Thanks for any insight you can provide.

Again, the use case is to be able to use Emotions together with the Search API, since the latter does not return Emotions. That also seems like a strange design choice.
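To illustrate the kind of workaround I mean: once the timestamps from both jobs line up, something like the following sketch (untested; it assumes one face per frame as in this video, that face detection was started with --face-attributes ALL so Emotions are present, and that everything fits in a single page of results) could join the two outputs. The job IDs and file names are placeholders.

aws rekognition get-face-detection --job-id "$DETECTION_JOB_ID" > detection.json
aws rekognition get-face-search --job-id "$SEARCH_JOB_ID" > search.json

# Build a Timestamp -> Emotions lookup from the detection output and
# attach it to each person entry from the search output
jq -n --slurpfile det detection.json --slurpfile srch search.json '
  ($det[0].Faces
     | map({key: (.Timestamp | tostring), value: .Face.Emotions})
     | from_entries) as $emotions
  | [ $srch[0].Persons[]
      | {Timestamp, FaceMatches, Emotions: $emotions[(.Timestamp | tostring)]} ]
' > combined.json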

asked 4 years ago · 223 views
2 Answers
Accepted Answer

Dear brahama80,

Thanks for the feedback. You are likely seeing the difference due to the recent face model update to version 5.0. When models are updated, existing collections stay on the same version for backward compatibility, whereas face detection and new collections automatically start using the latest version. You can check your collection's model version (likely 4.0) using this API: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/rekognition/describe-collection.html. If you are able to create a new face collection and re-index the faces, we suggest doing that so the two workflows use the same model version. We have also taken note of your request to include emotions in the face search output and added it to our backlog.
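In CLI terms, the check and the fix would look roughly like this (a sketch; test-collection-v5 and reference-face.jpg are placeholder names, and you would index whichever reference faces your application needs):

aws rekognition describe-collection --collection-id test-collection
# "FaceModelVersion" in the output shows the model version the collection is pinned to, e.g. 4.0

# Create a fresh collection (it picks up the latest model) and re-index the reference faces
aws rekognition create-collection --collection-id test-collection-v5
aws rekognition index-faces --collection-id test-collection-v5 --image "S3Object={Bucket=facejobs,Name=reference-face.jpg}"

# Then run the face search against the new collection
aws rekognition start-face-search --video "S3Object={Bucket=facejobs,Name=head-pose-face-detection-female.mp4}" --collection-id test-collection-v5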

AWS
answered 4 years ago

Awesome, JingAtAWS! Nailed it.

Just re-indexed, and now I can see matching bounding boxes between the jobs again. Thanks!

And it would be great to be able to get emotions on both, because otherwise you have to come up with strange hacks to combine the data... hehe

Cheers!

BTW, is there a place where I can follow the roadmap?

answered 4 years ago
