Understanding why Amazon Rekognition training ignores images with WARNING_UNANNOTATED_RECORD


Based on the Amazon documentation on Custom Labels and how Rekognition trains, https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/tm-debugging-json-line-errors.html#tm-warning-WARNING_UNANNOTATED_RECORD, it appears that images with intentionally unassigned bounding boxes are not used in training. I want to know why that is. From my humble understanding of standard ML object detection models, we ought to include information both from images containing at least one label/bounding box and from images with no labels at all (the model needs information about those data representations as negative feedback, i.e. cases where there is nothing to label). Can anyone please shed light on why Amazon Rekognition chooses to train only on images with at least one bounding box and ignores (as in, does not use in training or in computing the loss) human-curated images with intentionally no bounding boxes? Thanks!
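To illustrate what I mean by negative feedback: in a YOLO-style detector, an image with zero ground-truth boxes still contributes to the objectness loss, because every anchor is supervised as "background". A minimal sketch (this is not Rekognition internals; the anchor-matching helper is a hypothetical stand-in):

```python
import numpy as np

def match_anchors(num_anchors: int, gt_boxes: list) -> np.ndarray:
    # Hypothetical stand-in: mark the first len(gt_boxes) anchors as positives.
    # A real detector would match anchors to boxes by IoU.
    targets = np.zeros(num_anchors)
    targets[: len(gt_boxes)] = 1.0
    return targets

def objectness_loss(pred_scores: np.ndarray, gt_boxes: list) -> float:
    # Binary cross-entropy over per-anchor objectness probabilities.
    targets = match_anchors(pred_scores.shape[0], gt_boxes)  # 1 = object, 0 = background
    eps = 1e-7
    p = np.clip(pred_scores, eps, 1 - eps)
    return float(-np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p)))

scores = np.random.uniform(0.01, 0.99, size=100)
# An unannotated image (gt_boxes == []) still produces a loss: every anchor
# is pushed toward "background", i.e. the negative feedback I am asking about.
print(objectness_loss(scores, gt_boxes=[]))
print(objectness_loss(scores, gt_boxes=["box1", "box2", "box3"]))
```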

MdotO
Asked 2 years ago · 367 views
1 Answer

Hello MdotO,

Thanks for your question about why an image added to a dataset without any labels currently doesn't get used for training in Rekognition Custom Labels. For supervised object detection, many commonly used algorithms do skip unannotated images, and that is indeed how ours is currently implemented. At inference time, the results of detect-custom-labels include a Confidence score that can be used to evaluate how confident each prediction is; predictions whose Confidence is too low can be treated as the absence of a label, depending on the model.
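As a brief sketch of that pattern, here is a boto3 call to detect-custom-labels that filters on Confidence client-side (the project version ARN, bucket, key, and threshold below are placeholders, not values specific to your project):

```python
import boto3

client = boto3.client("rekognition")

response = client.detect_custom_labels(
    ProjectVersionArn="arn:aws:rekognition:...:project/.../version/...",  # placeholder
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "images/frame-001.jpg"}},
    MinConfidence=0,  # return everything; apply our own threshold below
)

THRESHOLD = 80.0  # placeholder; tune per model
hits = [l for l in response["CustomLabels"] if l["Confidence"] >= THRESHOLD]
if not hits:
    print("No confident detections -- effectively an empty image.")
else:
    for label in hits:
        print(label["Name"], label["Confidence"])
```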

I have also relayed the feedback in your question to our Product Manager.

Thanks, aws-cdunn

AWS
Answered 2 years ago
  • Hi. Thank you very much for the information and for relaying it to the PM. My understanding of how the images would be used was inclined towards a YOLO-style approach, which incorporates the non-bounding-box parts of an image as negative feedback. However, my concern is that AWS currently applies the same rule to both the training set and the test set. For training, as you mentioned, at least one label/bounding box is required, so unannotated images are discarded; but AWS does the same for test images, from which the model does not learn. In my case, the real-life distribution and the test set distribution would be very different, because AWS removes unannotated images from the test set. This can produce misleading F1 scores on the test set, while in the real-life data distribution there may be many images for which the model should predict no annotations at all, and we have no way of judging that, since the test set never covers this case. (Note: in our case, we label 8 types of vehicles, which occur in around 40% of the images. The remaining 60% are empty images (no vehicles) or irrelevant vehicle types. My problem is that if the model produces false positives on these images, we can only observe that on real-life data rather than on the test set, as the test set never includes them, even though catching this should, I believe, be part of the test set's purpose. Please correct me if I am wrong about the purpose of the test set here.)
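To make the concern concrete, one workaround is to measure false positives on known-empty images outside of the Rekognition test set. A rough sketch (the bucket, prefix, model ARN, and threshold are placeholders):

```python
import boto3

client = boto3.client("rekognition")
s3 = boto3.client("s3")

BUCKET = "my-bucket"
EMPTY_PREFIX = "eval/empty-images/"  # images known to contain no target vehicles
MODEL_ARN = "arn:aws:rekognition:...:project/.../version/..."  # placeholder
THRESHOLD = 80.0  # placeholder confidence threshold

false_positives = 0
total = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=EMPTY_PREFIX):
    for obj in page.get("Contents", []):
        total += 1
        resp = client.detect_custom_labels(
            ProjectVersionArn=MODEL_ARN,
            Image={"S3Object": {"Bucket": BUCKET, "Name": obj["Key"]}},
            MinConfidence=THRESHOLD,
        )
        # Any detection on a known-empty image is a false positive.
        if resp["CustomLabels"]:
            false_positives += 1

print(f"{false_positives}/{total} empty images triggered a detection")
```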
