1 Answer
Hello Graham,
Good question. This happens because Rekognition's segment detection model defines slates as essentially blank screens containing text/metadata, not a clapperboard appearing in the shot. Sorry for the confusion; the segment detection models aren't trained to recognize objects in the shot. Perhaps the documentation could be more explicit about this.
As a workaround, you could identify clapperboards with the StartLabelDetection API.
const labelDetectionResponse = await rekClient.send(new StartLabelDetectionCommand({
  ...
  Settings: {
    GeneralLabels: {
      LabelInclusionFilters: [
        "Clapperboard",
      ],
    },
  },
  ...
}));
This will give you a result that looks something like this:
{
  "JobStatus": "SUCCEEDED",
  "VideoMetadata": {
    ...
  },
  "Labels": [
    {
      "Timestamp": 0,
      "Label": {
        "Name": "Clapperboard",
        "Confidence": 90.0569076538086,
        "Instances": [
          {
            "BoundingBox": {
              "Width": 0.3286278247833252,
              "Height": 0.7593286037445068,
              "Left": 0.262031614780426,
              "Top": 0.21159084141254425
            },
            "Confidence": 90.02198028564453
          }
        ],
        "Parents": [],
        "Aliases": [],
        "Categories": [
          {
            "Name": "Hobbies and Interests"
          }
        ]
      }
    },
    ... // more detections for other timestamps
  ]
}
These are frame-based detections, not segment-based, so you will get one detection for every sampled frame in which the clapperboard appears.
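If you need segment-like output, one option is to post-process those frame-level results yourself. Here is a minimal sketch (the helper name `labelsToSegments`, the `gapMs` threshold, and the sample timestamps are all my own illustrative choices, not part of the Rekognition API): it groups consecutive Clapperboard detections whose timestamps are close together into start/end ranges.

```javascript
// Sketch: group frame-level Clapperboard detections into time segments.
// Timestamps are in milliseconds; two detections closer than `gapMs`
// are merged into the same segment. `labels` has the shape of the
// "Labels" array in a GetLabelDetection response.
function labelsToSegments(labels, gapMs = 1000) {
  const timestamps = labels
    .filter((l) => l.Label.Name === "Clapperboard")
    .map((l) => l.Timestamp)
    .sort((a, b) => a - b);

  const segments = [];
  for (const t of timestamps) {
    const last = segments[segments.length - 1];
    if (last && t - last.endMs <= gapMs) {
      last.endMs = t; // close enough: extend the current segment
    } else {
      segments.push({ startMs: t, endMs: t }); // start a new segment
    }
  }
  return segments;
}

// Example with made-up timestamps (ms), mimicking the response shape above:
const sample = [0, 200, 400, 5000, 5200].map((t) => ({
  Timestamp: t,
  Label: { Name: "Clapperboard" },
}));
console.log(labelsToSegments(sample));
// → [{ startMs: 0, endMs: 400 }, { startMs: 5000, endMs: 5200 }]
```

You may want to tune `gapMs` to match the frame sampling rate of your label detection job so that adjacent sampled frames always merge.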
Best, Lucas Jarman, Rekognition Video
Answered 9 months ago