Kinesis H264+AAC streaming


[Image: GStreamer pipeline graph]

In our application, we are trying to stream H264+AAC to a Kinesis Video Streams server. We started from the sample at https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp/blob/master/samples/kvs_gstreamer_audio_video_sample.cpp; the only change is that we take input from an RTSP source and link the pads dynamically. When I run it, I am not able to get video+audio on Kinesis.

I have read the docs and found the following:

The basic abstract Frame structure and StreamInfo.StreamCaps can be used to model different types of media pipelines. Below are some examples of different selections.

  1. H264 elementary stream produced by a camera in real time, needed for HLS/MPEG-DASH playback. In this case the encoder produces Annex-B format NALUs for the elementary H264 stream, while HLS/MPEG-DASH playback requires AVCC format. Set StreamInfo.streamingMode to Realtime and set the encoder to produce an I-frame every 2-4 seconds. Set StreamInfo.StreamCaps.frameTimestamps = TRUE, keyFrameFragmentation = TRUE, nalAdaptationMode = ADAPT_FRAME_NAL | ADAPT_CPD_ANNEXB. These settings let the CPD be extracted from the I-frame and adapted from Annex-B to AVCC format; the frames are adapted as well. The resulting stream can be consumed in real time or played back using MPEG-DASH.
  2. AAC audio streaming. In the case of AAC, every frame is self-contained, so every frame can be an I-frame (key frame). Set StreamInfo.streamingMode to Realtime. Set StreamInfo.StreamCaps.frameTimestamps = FALSE, keyFrameFragmentation = FALSE, fragmentDuration = 4 * HUNDREDS_OF_NANOS_IN_A_SECOND, nalAdaptationMode = NAL_ADAPTATION_MODE_NONE. Set the KEY_FRAME_FLAG on every frame. This lets the SDK use the system clock to timestamp the frames as they are produced; each frame is a key frame, but the fragments will have the fragmentDuration length.

So we have set "keyFrameFragmentation = FALSE", and then we get the error shown below. [Image: error screenshot]

If I don't add the audio-related elements, video alone works fine. Please help me figure out what the issue could be here.

1 Answer

Based on the information provided, it appears you're encountering issues when trying to stream both H264 video and AAC audio to Amazon Kinesis Video Streams. The error message suggests that there's a problem with key frame fragmentation. Here are some points to consider:

  1. For H264 video streaming, key frame fragmentation is typically required. Setting "keyFrameFragmentation = FALSE" might be causing issues with the video stream.

  2. For AAC audio streaming, as you correctly noted, every frame can be considered a key frame. However, the settings for audio and video streams need to be different.

  3. When combining both audio and video streams, you need to ensure that the settings are appropriate for each track. You can't apply the same settings to both.

Here's a suggested approach:

  1. For the video track (H264):

    • Set keyFrameFragmentation = TRUE
    • Set nalAdaptationFlags = ADAPT_FRAME_NAL | ADAPT_CPD_ANNEXB
    • Ensure that I-frames are produced every 2-4 seconds
  2. For the audio track (AAC):

    • Set keyFrameFragmentation = FALSE
    • Set nalAdaptationFlags = NAL_ADAPTATION_FLAG_NONE
    • Set the KEY_FRAME_FLAG on every audio frame
  3. Make sure you're defining a separate track configuration for the audio track and the video track within the stream.

  4. When adding tracks to your Kinesis Video stream, use the appropriate configuration for each track.

  5. Ensure that your RTSP source is correctly providing both audio and video streams.

  6. Check that your GStreamer pipeline is correctly demuxing and processing both the audio and video streams from your RTSP source.

If you're still encountering issues after making these adjustments, you may need to examine your GStreamer pipeline more closely to ensure it's correctly handling both the audio and video streams from your RTSP source. Also, make sure that the Kinesis Video Streams SDK is properly configured to handle multi-track streams.

Remember, when working with Kinesis Video Streams, it's crucial to adhere to the requirements for providing data through protocols like HLS or MPEG-DASH, which include specific codec private data formats for both video and audio tracks.
Sources
Example: Kinesis Video Streams producer SDK GStreamer Plugin - kvssink - Amazon Kinesis Video Streams
GetHLSStreamingSessionURL - Amazon Kinesis Video Streams
GetDASHStreamingSessionURL - Amazon Kinesis Video Streams

answered a month ago
  • Thanks for the quick response. I have gone through these property details ("Set keyFrameFragmentation = TRUE, set nalAdaptationFlags = ADAPT_FRAME_NAL | ADAPT_CPD_ANNEXB") and found that they apply to the stream caps. Here I am using a single stream with two tracks, so I wonder how I can set different properties for different tracks in the same stream.
