How to improve Rekognition Content Moderation model?


Hi. I am using the Rekognition Content moderation api on stored videos for my use case. There are second level categories that are not being detected properly - there are some false positives and false negatives. Could you please tell me how I could pass my own videos/ images extracted from frames with labels to improve the AWS Rekognition's Content Moderation model? Thanks!

asked 2 years ago · 346 views
1 Answer
Accepted Answer

Hi!

Unfortunately, Amazon Rekognition's Content Moderation model is a general-purpose model; this means the Service Team is in charge of maintaining and updating it, and customers cannot retrain it with their own data.

If you have an AWS representative assigned, I would suggest contacting them so they can notify the Service Team about this, or opening an AWS Support ticket!

Another option would be to create your own content moderation model; Amazon Rekognition Custom Labels can help you with this training.

For frame extraction from videos, here is a code example which may help you:
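The original linked example is not shown here, but a minimal sketch of the idea looks like this. It assumes the `ffmpeg` binary is installed and on your PATH; the function names (`ffmpeg_frame_cmd`, `extract_frames`) and the one-frame-per-second default are my own choices for illustration:

```python
import subprocess

def ffmpeg_frame_cmd(video_path, out_dir, fps=1):
    # Build an ffmpeg command that writes `fps` JPEG frames per second
    # of video into out_dir as frame_000001.jpg, frame_000002.jpg, ...
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}",
            f"{out_dir}/frame_%06d.jpg"]

def extract_frames(video_path, out_dir, fps=1):
    # Requires ffmpeg on PATH; out_dir must already exist.
    subprocess.run(ffmpeg_frame_cmd(video_path, out_dir, fps), check=True)
```

Sampling at 1 frame per second keeps the number of API calls (and the cost) manageable; increase `fps` if the content you need to detect appears only briefly.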

Hope this helps!

AWS
Dani M
answered 2 years ago
  • Hi, Oh, okay. Could you then help me with these questions:

    1. If we are not directly uploading videos for training, could I instead use an image dataset rather than extracted frames? Could you please explain the difference?
    2. I see the Custom Labels console uses image classification or object detection on custom labels. Can you tell me more about what to consider when training on a custom content moderation dataset built from videos? If the models are trained on images, how will that apply to my videos?
  • Hey! So Custom Labels helps you create an image classification, multi-label, or object detection model inside Amazon Rekognition. You train the model on an image dataset. Once the model has been trained, the service manages the model endpoint for you, and you can call Amazon Rekognition to detect labels/objects in your images using the model you have trained. Since Custom Labels does not support videos as input (other Rekognition APIs do), you have to split your videos into frames and call the service with each frame as an image; the code example provided above helps you with this workaround! Hope this helps, and remember to accept the answer if it has helped you (^_^)
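The frame-by-frame workaround described in the comment above can be sketched as follows. This is a minimal illustration, not the linked example: the function names and the aggregation rule are my own, the `ProjectVersionArn` is a placeholder you must replace with your running Custom Labels model version, and the client would be created with `boto3.client("rekognition")`:

```python
def moderate_frame(rekognition_client, frame_path, project_version_arn,
                   min_confidence=70.0):
    # Call a trained Custom Labels model on one extracted frame.
    # rekognition_client: e.g. boto3.client("rekognition") (credentials assumed).
    # project_version_arn: ARN of your *running* Custom Labels model version.
    with open(frame_path, "rb") as f:
        response = rekognition_client.detect_custom_labels(
            ProjectVersionArn=project_version_arn,
            Image={"Bytes": f.read()},
            MinConfidence=min_confidence,
        )
    return [label["Name"] for label in response["CustomLabels"]]

def aggregate_video_labels(per_frame_labels, min_frames=2):
    # Report a label for the whole video only if it appears in at least
    # min_frames frames, which smooths out single-frame false positives.
    counts = {}
    for labels in per_frame_labels:
        for name in set(labels):
            counts[name] = counts.get(name, 0) + 1
    return sorted(name for name, count in counts.items() if count >= min_frames)
```

Requiring a label in multiple frames before flagging the video is one simple way to address the false-positive problem from the original question; tune `min_frames` and `min_confidence` against a labeled validation set.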
