1 Answer
Hello Graham,
Good question. This happens because Rekognition's model defines slates as essentially blank screens containing text/metadata, not as a clapperboard appearing in the shot. Sorry for the confusion: the segment detection models aren't trained to recognize objects in the shot, and the documentation could perhaps be more explicit about this.
As a workaround, you could identify clapperboards with the StartLabelDetection API.
const labelDetectionResponse = await rekClient.send(new StartLabelDetectionCommand({
  ...
  Settings: {
    GeneralLabels: {
      LabelInclusionFilters: [
        "Clapperboard",
      ],
    },
  },
  ...
}));
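StartLabelDetection is asynchronous: it only returns a JobId, and you retrieve the detections afterwards with GetLabelDetection (same package, @aws-sdk/client-rekognition). Here is a minimal polling sketch; the helper name pollUntilDone and the 5-second delay are my own choices, not part of any SDK.

```javascript
// Hypothetical helper: repeatedly fetch an async Rekognition job's response
// until its JobStatus leaves IN_PROGRESS. `getStatus` is a callback so the
// polling logic itself carries no SDK dependency.
async function pollUntilDone(getStatus, delayMs = 5000) {
  for (;;) {
    const res = await getStatus();
    if (res.JobStatus !== "IN_PROGRESS") return res;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}

// Usage with the snippet above (GetLabelDetectionCommand is imported from
// @aws-sdk/client-rekognition alongside StartLabelDetectionCommand):
// const result = await pollUntilDone(() =>
//   rekClient.send(new GetLabelDetectionCommand({ JobId: labelDetectionResponse.JobId }))
// );
```

Note that GetLabelDetection results are paginated, so for longer videos you would also follow the NextToken field.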
Once the job completes, GetLabelDetection will give you a result that looks something like this:
{
  "JobStatus": "SUCCEEDED",
  "VideoMetadata": {
    ...
  },
  "Labels": [
    {
      "Timestamp": 0,
      "Label": {
        "Name": "Clapperboard",
        "Confidence": 90.0569076538086,
        "Instances": [
          {
            "BoundingBox": {
              "Width": 0.3286278247833252,
              "Height": 0.7593286037445068,
              "Left": 0.262031614780426,
              "Top": 0.21159084141254425
            },
            "Confidence": 90.02198028564453
          }
        ],
        "Parents": [],
        "Aliases": [],
        "Categories": [
          {
            "Name": "Hobbies and Interests"
          }
        ]
      }
    },
    ... // more detections for other timestamps
  ]
}
These are frame-based detections, not segment-based, so you will get a separate detection for every frame in which the clapperboard is detected.
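If you do want segment-style output, one approach is to merge consecutive frame detections yourself. A small sketch, assuming the Labels array is sorted by Timestamp (the default sort order); the 1000 ms gap threshold is an assumption you would tune to your video's frame sampling interval.

```javascript
// Collapse per-frame detections (each with a Timestamp in milliseconds) into
// rough segments by merging timestamps that are close together.
// `maxGapMs` is a hypothetical tuning parameter, not an API setting.
function framesToSegments(labels, maxGapMs = 1000) {
  const segments = [];
  for (const { Timestamp } of labels) {
    const last = segments[segments.length - 1];
    if (last && Timestamp - last.EndMs <= maxGapMs) {
      last.EndMs = Timestamp; // close enough: extend the current segment
    } else {
      segments.push({ StartMs: Timestamp, EndMs: Timestamp }); // start a new one
    }
  }
  return segments;
}
```

For example, detections at 0, 500, 1000, and 5000 ms would collapse into two segments: 0–1000 ms and 5000–5000 ms.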
Best, Lucas Jarman, Rekognition Video
Answered 9 months ago