The Amazon Lookout for Vision documentation recommends consistent image capture conditions, such as camera positioning, lighting, and object pose. In an ideal scenario you can control the environment and ensure consistent conditions.
If that's not possible, then providing more data that reflects the variations (object position and rotation, lighting conditions) can increase your model's performance. To what extent depends a lot on the situation (variability in conditions, types and sizes of anomalies, etc.), so it's difficult to give a general answer. I would recommend running a PoC, evaluating your performance metrics (such as F1 score), and then iterating to improve your model.
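To make the metrics step concrete, here is a minimal sketch of computing precision, recall, and F1 from confusion counts. The counts below are made-up illustration values, not from any real evaluation:

```python
def f1_score(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical counts: 90 true positives, 10 false positives, 30 false negatives
precision, recall, f1 = f1_score(tp=90, fp=10, fn=30)
print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.9 0.75 0.818
```

Tracking these numbers across PoC iterations shows whether added data or preprocessing is actually helping.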
In addition you can try to preprocess your raw images to reduce variations. For example, you could use an object detection model to locate the object (and its orientation) in the larger image and then crop/rotate the image so it only contains your object (ideally rotated to a consistent orientation).
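The crop/rotate step above could be sketched as follows. This is a simplified illustration: the bounding box and orientation are assumed to come from a separate object detection model, and rotation is restricted to 90-degree steps via NumPy; arbitrary-angle rotation would typically use OpenCV or SciPy instead:

```python
import numpy as np

def normalize_view(image, box, quarter_turns=0):
    """Crop an image to a detected bounding box, then rotate it
    in 90-degree steps toward a consistent orientation.

    `box` is (x_min, y_min, x_max, y_max), as an object detection
    model might return; `quarter_turns` is a stand-in for the
    orientation estimate.
    """
    x0, y0, x1, y1 = box
    crop = image[y0:y1, x0:x1]
    return np.rot90(crop, k=quarter_turns)

# Toy 10x10 "image": crop a 4x6 region, rotate once counter-clockwise
img = np.arange(100).reshape(10, 10)
out = normalize_view(img, box=(2, 3, 8, 7), quarter_turns=1)
print(out.shape)  # (6, 4)
```

Normalizing every image this way before training and inference reduces the variation the model has to absorb.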
In your camera setup you could also evaluate bandpass filters as a way to improve lighting conditions. An experienced partner would be a great asset in such a discussion.