1 Answer
Hello Graham,
Good question. This happens because Rekognition's segment detection model defines slates as essentially blank screens containing text/metadata, not as a physical clapperboard in the shot. Sorry for the confusion; the segment detection models aren't trained to recognize objects in the shot, and the documentation could be more explicit about this.
As a workaround, you could identify clapperboards with the StartLabelDetection API.
const labelDetectionResponse = await rekClient.send(new StartLabelDetectionCommand({
  // ...
  Settings: {
    GeneralLabels: {
      LabelInclusionFilters: [
        "Clapperboard",
      ],
    },
  },
  // ...
}));
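Note that StartLabelDetection only kicks off an asynchronous job and returns a JobId; the labels themselves come from GetLabelDetection, which you poll until the job succeeds and then page through via NextToken. A minimal sketch of that loop (the `collectLabels` helper and the injected `fetchPage` function are hypothetical conveniences, not part of the SDK):

```javascript
// Hypothetical helper: drain all labels from an async Rekognition job.
// `fetchPage(nextToken)` is assumed to wrap something like
//   rekClient.send(new GetLabelDetectionCommand({ JobId, NextToken: nextToken }))
// and resolve to a { JobStatus, Labels, NextToken } shaped response.
async function collectLabels(fetchPage, { delayMs = 5000 } = {}) {
  const labels = [];
  let nextToken;
  for (;;) {
    const resp = await fetchPage(nextToken);
    if (resp.JobStatus === "IN_PROGRESS") {
      // Job still running: wait, then poll again.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      continue;
    }
    if (resp.JobStatus === "FAILED") {
      throw new Error(resp.StatusMessage ?? "Label detection job failed");
    }
    // SUCCEEDED: accumulate this page and follow NextToken, if any.
    labels.push(...(resp.Labels ?? []));
    if (!resp.NextToken) return labels;
    nextToken = resp.NextToken;
  }
}
```

In practice you would also pass MaxResults and handle throttling, but the shape of the loop is the same.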
Calling GetLabelDetection on the completed job will give you a result that looks something like this:
{
  "JobStatus": "SUCCEEDED",
  "VideoMetadata": {
    ...
  },
  "Labels": [
    {
      "Timestamp": 0,
      "Label": {
        "Name": "Clapperboard",
        "Confidence": 90.0569076538086,
        "Instances": [
          {
            "BoundingBox": {
              "Width": 0.3286278247833252,
              "Height": 0.7593286037445068,
              "Left": 0.262031614780426,
              "Top": 0.21159084141254425
            },
            "Confidence": 90.02198028564453
          }
        ],
        "Parents": [],
        "Aliases": [],
        "Categories": [
          {
            "Name": "Hobbies and Interests"
          }
        ]
      }
    },
    ... // more detections for other timestamps
  ]
}
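The BoundingBox values are ratios of the overall frame dimensions, so to draw or crop the box you scale them by the frame size reported in VideoMetadata. A quick sketch (the 1920x1080 frame size below is a made-up example value):

```javascript
// Convert a Rekognition ratio-based BoundingBox into pixel coordinates.
// Rekognition reports Width/Height/Left/Top as fractions of the frame size.
function toPixelBox(box, frameWidth, frameHeight) {
  return {
    left: Math.round(box.Left * frameWidth),
    top: Math.round(box.Top * frameHeight),
    width: Math.round(box.Width * frameWidth),
    height: Math.round(box.Height * frameHeight),
  };
}

// Example using the detection above and a hypothetical 1920x1080 frame:
const pixelBox = toPixelBox(
  {
    Width: 0.3286278247833252,
    Height: 0.7593286037445068,
    Left: 0.262031614780426,
    Top: 0.21159084141254425,
  },
  1920,
  1080
);
```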
These are frame-based detections, not segment-based, so you will get a separate detection for every sampled frame in which the clapperboard appears.
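If you do want segment-style output, you can post-process the frame detections yourself by collapsing nearby timestamps into contiguous ranges. A rough sketch (the `toSegments` helper is hypothetical, and the gap threshold is an assumption you'd tune to the job's frame sampling rate):

```javascript
// Hypothetical post-processing: collapse frame-level detections into segments.
// Timestamps are in milliseconds; two detections closer together than
// `maxGapMs` are treated as part of the same continuous appearance.
function toSegments(labels, maxGapMs = 1000) {
  const timestamps = labels.map((l) => l.Timestamp).sort((a, b) => a - b);
  const segments = [];
  for (const ts of timestamps) {
    const last = segments[segments.length - 1];
    if (last && ts - last.endMs <= maxGapMs) {
      last.endMs = ts; // close enough: extend the current segment
    } else {
      segments.push({ startMs: ts, endMs: ts }); // gap too big: start a new one
    }
  }
  return segments;
}
```

Feeding it the Labels array from GetLabelDetection yields one { startMs, endMs } range per continuous appearance of the clapperboard.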
Best, Lucas Jarman, Rekognition Video
Answered 9 months ago