Amazon Rekognition can indeed perform emotion detection in both images and videos. However, there are some important points to consider regarding emotion detection in videos:
- For stored video analysis, you need to use the asynchronous API operations: you start the analysis with a "Start" operation (StartFaceDetection) and then retrieve the results with the corresponding "Get" operation (GetFaceDetection).
- When analyzing faces in videos, Rekognition can detect various facial attributes, including emotions. The emotions that can be detected include Happy, Sad, Angry, Surprised, Disgusted, Calm, Confused, and Fear.
- If you're not getting any emotion data in your results, there could be a few reasons for this:
  - The job might not have been started with the full attribute set. StartFaceDetection only returns emotions when it is called with FaceAttributes='ALL'; the default attribute set omits them (see the sketch after this list).
  - The face detection confidence threshold might be set too high.
  - The quality of the video, or of the faces in it, might not be sufficient for emotion detection.
  - You might be using an outdated AWS SDK version that doesn't expose emotion detection for videos.
- To ensure you're getting emotion data, use an up-to-date AWS SDK and confirm you're calling the video operations (StartFaceDetection / GetFaceDetection) rather than the image operation (DetectFaces).
- Also check that the faces in your video are clear and well lit, as this can affect the accuracy of emotion detection.
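For reference, here is a minimal sketch of starting the job so that emotions are included in the results; the bucket name and object key below are placeholders for your own video, and the key point is passing FaceAttributes='ALL' to StartFaceDetection.

```python
import boto3

rekognition_client = boto3.client('rekognition')

# Start asynchronous face detection on a stored video.
# FaceAttributes='ALL' is required for emotions to appear in the results.
# 'my-video-bucket' and 'videos/sample.mp4' are placeholders.
response = rekognition_client.start_face_detection(
    Video={'S3Object': {'Bucket': 'my-video-bucket', 'Name': 'videos/sample.mp4'}},
    FaceAttributes='ALL'
)
job_id = response['JobId']
print(f"Started face detection job: {job_id}")
```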
If you've confirmed all these points and are still not getting emotion data, you may want to contact AWS support for further assistance. They can help diagnose if there's an issue with your specific use case or if there might be a problem with the service.
Remember, while Rekognition is powerful, emotion detection can be challenging and may not always produce results, especially if the facial expressions in the video are subtle or if the video quality is not optimal.
Sources
Understanding Rekognition's image and video operations - Amazon Rekognition
Amazon Rekognition improves Face Analysis
Amazon Rekognition – frequently asked questions - AWS
Understanding Rekognition's types of analysis - Amazon Rekognition
Code snippet below:
```python
import time
import boto3

rekognition_client = boto3.client('rekognition')

# Function to check the status of the job and get the results when it's complete
def get_emotion_results(job_id):
    while True:
        response = rekognition_client.get_face_detection(JobId=job_id)
        status = response['JobStatus']
        if status == 'SUCCEEDED':
            print('Job completed successfully.')
            # Results may be paginated; follow response['NextToken'] for long videos.
            return response['Faces']
        elif status == 'FAILED':
            print('Job failed.')
            return None
        else:
            print('Job in progress. Checking again in 10 seconds...')
            time.sleep(10)
```
```python
# Function to extract emotions from the job results
def extract_emotions(faces):
    emotion_data = []
    for face_detail in faces:
        timestamp = face_detail.get('Timestamp')
        face = face_detail.get('Face', {})
        emotions = face.get('Emotions', None)
        if emotions:
            print(f"Emotions detected at timestamp {timestamp}ms:")
            for emotion in emotions:
                emotion_data.append({
                    'Timestamp': timestamp,
                    'Emotion': emotion['Type'],
                    'Score': emotion['Confidence']
                })
                print(f"  {emotion['Type']}: {emotion['Confidence']:.2f}%")
        else:
            print(f"No emotions detected for face at timestamp {timestamp}")
    return emotion_data
```
```python
# Function to save emotion results to a file
def save_emotions_to_file(emotion_data, output_file):
    with open(output_file, 'w') as file:
        file.write("Timestamp (ms), Emotion, Confidence Score\n")
        for data in emotion_data:
            file.write(f"{data['Timestamp']}, {data['Emotion']}, {data['Score']:.2f}%\n")
    print(f"Emotion results written to {output_file}")
```