
How to send live video stream from clients to EC2 instance for processing and distribution?


I want to build a real-time video enhancement service on AWS. Clients (edge devices) will connect to the service and stream live video to it; the service applies an enhancement algorithm running on an EC2 instance and finally delivers the processed live stream through a CDN for consumption. I went through some of the AWS live-streaming services such as MediaLive and Kinesis Video Streams and was able to send a live video feed from a client to the AWS service. But I am stuck on how to route that live stream from MediaLive or Kinesis Video Streams to the EC2 instance where my algorithm will run. I have gone through the MediaLive and Kinesis Video Streams documentation, but didn't find anything useful there.

3 Answers
Accepted Answer

Hello Ayan,

First off, kudos for developing your own video enhancement solution! Now, if you are not going to utilize the features of MediaLive to process and distribute the video, you might just run all of your processes within EC2. I am going to post an example from GitHub, which has some of the features I am recommending. The workflow is for live subtitling, so it isn't completely on point, and it is deprecated because some underlying elements were written in an old version of Python. However, I think if you reverse engineer the process, you can modify the workflow to run your enhancement software on the video and stream it live. Here is what the diagram looks like as published:

[Architecture diagram from the GitHub repo]

You could use OBS to contribute video to NGINX, which would work as your ingest server. In the above workflow, this is how the custom application is run to modify the video and insert subtitles: "When a video publish begins, the Nginx RTMP module will execute a shell script that has been pre-installed on the EC2 instance. This script kicks off a video production workflow to inject subtitles, using ffmpeg, NodeJS, and a modified version of libcaption that is hosted in this repo." (verbiage from the Github README)
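To make that hook concrete, here is a minimal sketch of an nginx-rtmp configuration that runs a script when a publish begins. The paths, port, and application name are illustrative, not taken from the repo:

```nginx
# /etc/nginx/nginx.conf (fragment) -- illustrative values only
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # When a client starts publishing, run the processing script,
            # passing the stream name so it can pull the right stream.
            exec_publish /opt/enhance/process.sh $name;
        }
    }
}
```

`exec_publish` is the nginx-rtmp directive that fires on publish start; your script would then pull the stream back from the local RTMP endpoint and run your enhancement step on it.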

You could use a workflow like this to have your video enhancement algorithm run in NodeJS, then send the enhanced live video via CloudFront, or whatever CDN you choose. You could also modify the workflow to save a copy of the source video to S3, so you retain the original mezzanine-level video for future processing and use in VOD workflows. You could also keep the subtitle insertion functionality, if you wish.
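As a rough sketch of that variation, a single ffmpeg invocation can write an untouched archive copy while feeding decoded frames to your enhancement process. All URLs, paths, and the `enhance.js` entry point are hypothetical, and the command needs a live source publishing to the local RTMP endpoint:

```shell
# Hypothetical sketch: archive the source stream in 60-second segments
# while piping raw decoded frames to an enhancement process.
ffmpeg -i rtmp://127.0.0.1:1935/live/stream1 \
  -c copy -f segment -segment_time 60 /var/archive/src_%04d.ts \
  -an -f rawvideo -pix_fmt yuv420p - | node enhance.js

# Periodically sync the mezzanine segments to S3 (bucket name is an example):
aws s3 sync /var/archive s3://example-mezzanine-archive/stream1/
```

The `-c copy` output preserves the original encode for your VOD archive, while the second output decodes for processing; you would schedule the `aws s3 sync` with cron or similar.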

Also, how much latency can you tolerate in your workflow? Is it a simple broadcast with no interactive timed metadata? If so, no problem. But running this workflow on live video streams will introduce some level of latency. You will have to test and see what that latency amounts to in real life.

I hope this brainstorming session helps you with your project. Please feel free to tell us more of your parameters, and maybe we can tighten up our suggestions. I'll keep looking for more examples of relevant workflows and I will update this thread when I find more content.

Good luck in your building efforts!

AWS EXPERT
answered 5 months ago
reviewed 4 months ago
  • Hey, thanks. This approach really seems feasible. Thanks for the detailed explanation and the link to the repo. I am looking for latency of less than 5-6 seconds, at least for the first iteration, as the algorithm is intended for real-time security surveillance. I will try the above-mentioned approach and update the outcome here. Thanks!!

  • Hi @BCole2019, thanks for your suggestion. I followed a similar approach for my use case. Finally, I am able to process the live stream on an EC2 instance and then distribute it!! I set up an NGINX RTMP server for ingestion, which triggers a script to process the video frames and finally sends the output to AWS IVS for playback. However, I am currently seeing significant latency of up to 1 minute from source to playback. It may be due to the ffmpeg configuration, which I believe I need to tune for my use case. I would also welcome any suggestions from you on improving the latency.
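    For reference, the kind of latency-oriented encoder flags I am experimenting with look roughly like this. The ingest URL and IVS endpoint are placeholders, and these values are a starting point, not a tuned configuration:

```shell
# Untuned low-latency sketch: disable input buffering, use a fast x264
# preset tuned for zero latency, and keep a short, fixed GOP.
ffmpeg -fflags nobuffer -flags low_delay \
  -i rtmp://127.0.0.1:1935/live/cam1 \
  -c:v libx264 -preset veryfast -tune zerolatency \
  -g 60 -sc_threshold 0 \
  -c:a aac -b:a 128k \
  -f flv "rtmps://<ivs-ingest-endpoint>:443/app/<stream-key>"
```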


Hi,

Before going into proprietary developments, I'd suggest you study AWS Elemental MediaConvert. See https://aws.amazon.com/mediaconvert/

It seems to do a lot of the things that you want to achieve in terms of automated video processing and conversion.

Best,

Didier

AWS EXPERT
answered 5 months ago
reviewed 5 months ago
  • I have just gone through the MediaConvert documentation at a high level. As far as I understand, it is for transcoding video. I didn't find any reference on how to connect EC2 with this service to trigger my video enhancement algorithm on the live stream. Any reference on that would be helpful.

    Thanks, Ayan

  • MediaConvert can ingest live HLS sources but outputs only VODs. Jobs are time-clipped to the most recent segment advertised in the source manifest at the start of the job. Jobs run asynchronously. Not a live stream processing solution.


The simplest answer is to send either a packet stream (RTMP or RTP with FEC) or an HLS file group from MediaLive to the EC2 instance. A listener process of some kind will need to run on the instance: ffmpeg or your own code if ingesting a bitstream, or an HTTP server if receiving a file group. For live content processing you shouldn't need to transiently store more than a few minutes of content at a time, so the allocated disk should be fast but need not be large.
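A sketch of the bitstream-listener option, assuming ffmpeg is installed on the instance; the address, port, and output path are examples, and the command blocks until MediaLive begins pushing:

```shell
# Have ffmpeg itself act as the RTMP server and accept the MediaLive
# push, remuxing the incoming stream without re-encoding.
ffmpeg -listen 1 -i rtmp://0.0.0.0:1935/live/app \
  -c copy /var/ingest/incoming.ts
```

For the HLS file-group option, note that MediaLive delivers HLS over HTTP PUT (or WebDAV), so the receiving HTTP server on the instance must be configured to accept uploads, not just serve files.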

AWS
answered 4 months ago
