Lookout for Vision - Model resilience to rotated input images


Dear experts,

I was looking into starting to use AWS Lookout for Vision for some use cases, and I have one doubt that I could not resolve from the documentation - so forgive me if this is specified somewhere, and kindly point me to the source in that case.

As written in the subject, I am wondering whether, at inference time, the model is resilient by design to transformations such as image rotation, or whether there is some trick to take into account for such a use case.

Thank you!

asked a year ago · 250 views
1 Answer

The model is not guaranteed to be rotation-invariant by design, so it is worth handling rotation explicitly. A few options:

1. During the training phase, augment your dataset with rotated versions of your images. You can include rotations such as 90, 180, and 270 degrees, or smaller increments if needed. This helps the model learn to recognize the objects of interest regardless of their orientation in the input image.

2. If you expect images with varying rotations during inference, apply pre-processing to detect each image's orientation and correct it before passing it to the Lookout for Vision model. Image-processing libraries such as OpenCV or PIL in Python can detect and correct orientation.

3. Another approach is to train multiple models, each on images with a different orientation. During inference, run the input image through each model and aggregate their predictions to determine the final output. This can improve the overall resilience of your solution to rotated images.
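A minimal sketch of the augmentation and orientation-correction steps above, using Pillow (PIL). The directory layout, file extension, and rotation angles are illustrative assumptions, not part of any Lookout for Vision API:

```python
# Sketch only: augment a local image folder with rotated copies, and
# normalize EXIF orientation before sending an image for inference.
from pathlib import Path

from PIL import Image, ImageOps

ROTATIONS = [90, 180, 270]  # degrees; add smaller increments if needed


def augment_with_rotations(src_dir: str, dst_dir: str) -> None:
    """Copy each PNG in src_dir to dst_dir alongside rotated variants."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path)
        img.save(dst / path.name)  # keep the original
        for angle in ROTATIONS:
            # expand=True grows the canvas so the rotated image is not cropped
            img.rotate(angle, expand=True).save(
                dst / f"{path.stem}_rot{angle}{path.suffix}"
            )


def correct_orientation(path: str) -> Image.Image:
    """Apply the EXIF orientation tag, if present, before inference."""
    return ImageOps.exif_transpose(Image.open(path))
```

Note that EXIF-based correction only undoes camera-recorded orientation; if your images arrive physically rotated with no metadata, you would need a detection step (e.g. based on known scene features) before correcting.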

answered a year ago
