Hi Koltunski,
Thank you for asking this question on the MediaLive forums. Unfortunately, there is not currently a way to send two different image overlays to different outputs of a single MediaLive channel; to do this you will need to create two channels, one per overlay. To avoid the timecode and GOP alignment issues you mention, I would suggest using MediaConnect to send your source to the two MediaLive channels and setting Epoch Locking as your output locking mode in MediaLive, which will attempt to synchronize each output to the Unix Epoch.
Also, there are two things to note when using Epoch Locking: your source must carry timecode for this to work, and SCTE-35 messaging is not supported.
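For reference, Epoch Locking is selected in the channel's encoder settings; a minimal fragment of the channel JSON (field names as in the MediaLive API; the rest of the channel configuration is assumed) looks like:

```json
{
  "EncoderSettings": {
    "GlobalConfiguration": {
      "OutputLockingMode": "EPOCH_LOCKING"
    }
  }
}
```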
Zach
Edited by: Zacharyb-AWS on Jul 28, 2020 1:12 PM
Thank you Zachary for your in-depth answer.
One more question: you say that for 'Epoch Locking' to work the input video must have a 'timecode'.
What kind of 'timecode' are you referring to?
Hi Koltunski,
Thank you for the follow-up question. Video timecode is time metadata assigned to a specific frame or a specific point in a video, used for reference points and editing.
Zach
Koltunski
Please see the reference to embedded timecode in https://docs.aws.amazon.com/medialive/latest/ug/timecode.html
Hello,
I came back and I am trying to implement the pipeline described by Zachary in the second message of this thread.
So, again, what I am trying to achieve is: a single video input, two transcoding pipelines, each one overlaying an image on top of the video, and then weaving the two resulting HLS streams together by taking every other segment from each. For this to work, the two pipelines must obviously be GOP-aligned.
So far I've got two MediaLive pipelines and two inputs. Each input pulls the same HLS video source. Each MediaLive pipeline overlays a different image on top of the video; both send their output to a single MediaStore container. I stream like that for a while, stop, go to the container and edit the M3U manifests:
(...)
#EXTINF:10,
overlay_a_720p30_00326.ts
#EXTINF:10,
overlay_b_720p30_00327.ts
#EXTINF:10,
overlay_a_720p30_00328.ts
#EXTINF:10,
overlay_b_720p30_00329.ts
(...)
so that the two streams, 'overlay_a' and 'overlay_b', are woven together. I use Epoch Locking, and I inject embedded timecodes during the transcode.
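The manual manifest rewrite described above can be sketched as a small script (a hypothetical helper, assuming both streams are GOP-aligned with identical segment counts and durations; it only rewrites the playlist, it does not re-mux anything):

```python
def weave_segments(a_segments, b_segments):
    """Alternate segments: even-indexed from stream A, odd-indexed from B."""
    assert len(a_segments) == len(b_segments)
    woven = []
    for i, (a, b) in enumerate(zip(a_segments, b_segments)):
        woven.append(a if i % 2 == 0 else b)
    return woven

def to_media_playlist(segments, duration=10):
    """Emit the segment portion of an HLS media playlist."""
    lines = []
    for uri in segments:
        lines.append(f"#EXTINF:{duration},")
        lines.append(uri)
    return "\n".join(lines)

a = [f"overlay_a_720p30_{n:05d}.ts" for n in (326, 327, 328, 329)]
b = [f"overlay_b_720p30_{n:05d}.ts" for n in (326, 327, 328, 329)]
print(to_media_playlist(weave_segments(a, b)))
```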
When I then watch this 'woven' stream, it is clear that the two pipelines are many seconds apart. Thinking about it, that is not surprising: they are probably apart already at the input, and all the timecode and locking later on cannot rectify the initial desynchronization.
Zachary says that one has to 'use MediaConnect to send your source to the two MediaLive channels'.
The problem is, I cannot see any way to create one MediaLive input of type 'MediaConnect' and then connect that one input to two MediaLive channels. The UI only lets me use each input once.
Hi LKoltunski,
Your description indicates that Epoch locking is not presently working. There are two additional requirements for Epoch locking to work successfully:
- The input timecode must be in UTC format.
- There cannot be any "complex" frame rate conversions happening in the channel. In other words, you can't take a 29.97 input and output 30.00 fps. Even multiples are OK.
When Epoch locking is working properly you will see segment numbers that are based on Epoch time. In other words, the segment number will be the Epoch timestamp divided by the exact segment duration.
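To illustrate the numbering rule as plain arithmetic (the example timestamp is illustrative, not from the AWS API):

```python
def epoch_segment_number(epoch_seconds, segment_duration):
    """With Epoch Locking active, the HLS segment number is the Unix
    epoch timestamp divided by the exact segment duration."""
    return epoch_seconds // segment_duration

# e.g. for 10-second segments in mid-February 2021:
print(epoch_segment_number(1_613_396_520, 10))  # -> 161339652
```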
With respect to the MediaConnect question, you need to create additional inputs with the same MediaConnect flow ARNs.
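A hedged sketch of what that looks like with boto3 (the flow ARN and role ARN below are placeholders; the request shape follows MediaLive's CreateInput API, which accepts a MediaConnectFlows list):

```python
# Build CreateInput requests for two MediaLive inputs that both reference
# the same MediaConnect flow ARN. Placeholder ARNs -- substitute your own.
FLOW_ARN = "arn:aws:mediaconnect:eu-west-1:111122223333:flow:EXAMPLE:source"
ROLE_ARN = "arn:aws:iam::111122223333:role/MediaLiveAccessRole"

def mediaconnect_input_request(name, flow_arn, role_arn):
    return {
        "Name": name,
        "Type": "MEDIACONNECT",
        "MediaConnectFlows": [{"FlowArn": flow_arn}],
        "RoleArn": role_arn,
    }

requests = [
    mediaconnect_input_request("overlay-a-input", FLOW_ARN, ROLE_ARN),
    mediaconnect_input_request("overlay-b-input", FLOW_ARN, ROLE_ARN),
]

# With boto3 these would be submitted as:
#   medialive = boto3.client("medialive")
#   for req in requests:
#       medialive.create_input(**req)
```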
Please advise if you have any other questions,
Regards,
Steve
Edited by: AWSsteve on Feb 11, 2021 11:04 AM
Thanks Steve!
So you're saying that even without MediaConnect, the following pipeline
---> MLive input -> MLive transcode (insert timecodes & Epoch) -> MStore container
HLS ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| --> weave into one HLS
---> MLive input -> MLive transcode (insert timecodes & Epoch) -> MStore container
is supposed to stay synchronized, if Epoch locking works?
Edited by: LKoltunski on Feb 11, 2021 12:01 PM
Edited by: LKoltunski on Feb 11, 2021 12:02 PM
Edited by: LKoltunski on Feb 11, 2021 12:03 PM
If you can supply the required number of streams to feed both MediaLive channels, yes you can do it that way. But if you only have a single source, MediaConnect can distribute the single source to the two MediaLive channels by creating two MediaLive inputs referencing the same MediaConnect flow(s).
The key for Epoch locking to work is that every MediaLive pipeline is receiving the same source with the same embedded timecode (in UTC).
Note that another way to confirm what timecode is being received is that MediaLive emits a log entry in the as-run log stream for each channel showing the initial timecode.
Edited by: AWSsteve on Feb 11, 2021 12:05 PM
I came back to report mixed results.
About two weeks ago, I followed what Steve suggested and I managed to create this:
https://4n42nbhrddyctj.data.mediastore.eu-west-1.amazonaws.com/tears_abab.m3u8
(the above is a manually rewritten manifest: basically 'tears_a.m3u8' and 'tears_b.m3u8' woven together to see whether they are time-synchronized).
If you watch this with VLC, you'll see 10-second-long segments, the odd ones with a visible, large 'A' overlaid, the even ones with a large 'B'. The transition between 'A' and 'B' is seamless. Epoch Locking works; I can also tell this by the way the segments are named ('tears_a_720p30_161339652.ts', where '161339652' is the Epoch divided by the segment duration, just like Steve was saying).
So success!
However, today I was trying to re-create this pipeline, to no avail. Whatever I try, the segments which land in MediaStore are named like this: 'tears_a_720p30_00012.ts' - '00012' instead of the Epoch - and by this alone, I can tell that the synchronization is not going to work. Indeed, when I manually rewrite the manifest into an 'abab' one just like above, I can see that the 'a_000N' and 'b_000N' segments are a few seconds apart. So Epoch Locking is not working at all.
I am at a loss trying to figure out why.
Here's what I do:
- create an input of type 'HLS' , pull https://ntv1.akamaized.net/hls/live/2014075/NASA-NTV1-HLS/master_2000.m3u8 (NASA TV)
- Notice that the above source is at 29.97 fps = 30000/1001. It also (as far as I can tell) does not include timecode info.
- create a MediaLive channel, set the above as an input, configure it to add timecode from SYSTEMCLOCK ( channel->General Settings-> Timecode Configuration-> SYSTEMCLOCK ) The only point of this MediaLive is to add the timecode.
- configure this channel to output to rtp://IP:PORT, where IP and PORT are taken from the MediaConnect flow created in the next step.
- Create a MediaConnect flow, take note of its input IP and PORT, and configure its whitelist CIDR block so that the RTP from the previous MediaLive channel gets through.
- create two more Inputs from the MediaConnect flow.
- create two more MediaLive channels. Configure each one to take its input from the just created two inputs of type MediaConnect.
- configure the 2 channels to send their output to appropriate mediastoressl:// ... location
- configure their template to be 'Live Event (HLS)', which creates the HLS output with 4 renditions.
- Set their class to be ‘SINGLE_PIPELINE'
- Channel → General Settings → Global Configuration → Output Locking Mode -> set this to 'Epoch Locking'.
- In each output rendition's settings pages, make sure that the ‘Frame Rate’ is set to ‘SPECIFIED’ and Num/Denom is 30000/1001
- in each of the two Channels 'General Settings' -> 'Timecode Configuration' set the timecode to 'EMBEDDED' (meaning: take the timecode from the stream, because it's supposed to be there -> it's supposed to be added by the first MediaLive instance)
- again for each of the two MediaLive channels, create a Schedule of type 'Static image activate' and overlay 'A.png' (first) and 'B.png' (second)
- create a MediaStore container (referenced above as the location of the output)
The above is, AFAIK, exactly how I was creating this before. Only now it doesn't work.
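One quick way to sanity-check the result is to look at the trailing number in the segment filenames (a hypothetical heuristic: epoch-derived numbers for 10-second segments are currently nine-plus digits, while the fallback counter starts from a small zero-padded index):

```python
import re

def locking_mode_from_segment(filename, threshold=100_000_000):
    """Guess whether a segment name came from Epoch Locking.

    Epoch-derived numbers (epoch seconds / segment duration) are large;
    the fallback counter starts at a small zero-padded value like 00012.
    """
    match = re.search(r"_(\d+)\.ts$", filename)
    if not match:
        return "unknown"
    return "epoch" if int(match.group(1)) >= threshold else "counter"

print(locking_mode_from_segment("tears_a_720p30_161339652.ts"))  # -> epoch
print(locking_mode_from_segment("tears_a_720p30_00012.ts"))      # -> counter
```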
Any advice?
LKoltunski
Given that the workflow doesn't seem to work, the issue is most likely that the source does not contain timecode. In your procedure, step 3 doesn't state that you enabled timecode insertion in the output. In the UDP output group, set Video Settings - Codec Settings - Timecode - Timecode Insertion to PIC_TIMING_SEI.
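In the channel JSON this corresponds to a field on the video codec settings (fragment only; the surrounding output group and video description settings are assumed):

```json
{
  "VideoDescriptions": [
    {
      "CodecSettings": {
        "H264Settings": {
          "TimecodeInsertion": "PIC_TIMING_SEI"
        }
      }
    }
  ]
}
```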
If with this change you still have issues then please either open a ticket in Support Center, providing the Channel ARNs for all three channels, (fastest response), or private mail the info to me, and we will investigate the issue further.
Regards
Is there any more update on this topic? I am having the same issue. Only on the first try was the segment sequence number EPOCH/segment_duration. After I edited the MediaLive channel (added another resolution to the output group), the sequence numbers were named from 00001.ts again.