Transfer files (1 GB to 2 GB) from a web URL to S3 on a schedule


My customer has a use case to fetch a file from a web URL periodically and upload it to S3. This is currently implemented as a Lambda function with a scheduled trigger, and it works well for smaller files, but larger files require higher memory settings in Lambda. Is there a way to upload directly to S3 without holding the whole file in memory, or are there better serverless implementation options for this use case?
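One pattern that can keep memory usage roughly constant regardless of file size is to stream the HTTP response straight into a multipart upload instead of buffering the whole object. Below is a minimal sketch of what that could look like in a Python Lambda handler; it assumes the requests library is packaged with the function, and the URL, bucket, and key are placeholders.

```python
import boto3
import requests

s3 = boto3.client("s3")

# Placeholder values -- replace with your own source URL and destination bucket/key.
SOURCE_URL = "https://example.com/large-file.bin"
BUCKET = "my-destination-bucket"
KEY = "downloads/large-file.bin"


def handler(event, context):
    # stream=True keeps the response body on the wire instead of loading it into memory.
    with requests.get(SOURCE_URL, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        # upload_fileobj reads the file-like body in chunks and performs a multipart
        # upload, so memory use is bounded by the part size rather than the file size.
        s3.upload_fileobj(resp.raw, BUCKET, KEY)
    return {"status": "uploaded", "key": KEY}
```

Note that Lambda's 15-minute maximum execution time still bounds how long a single transfer can take, which is one reason a container-based approach can be attractive for very large files.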

AWS
Asked 4 years ago · Viewed 310 times
1 Answer
Accepted Answer

I've previously used a very small container running in ECS for this. ECS has scheduling built in, so you simply set the schedule appropriately and the task runs regularly thereafter.

The container I used was built on Alpine, and used wget to pull the files I needed.
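For reference, the built-in scheduling is ECS Scheduled Tasks, which is backed by an EventBridge rule that launches the task on a rate or cron expression. A rough sketch of wiring that up with boto3 is shown below; the cluster, task definition, role ARNs, and subnet ID are placeholders for your own resources.

```python
import boto3

events = boto3.client("events")

# Placeholder ARNs and IDs -- substitute your own cluster, task definition, role, and subnet.
CLUSTER_ARN = "arn:aws:ecs:us-east-1:123456789012:cluster/fetch-cluster"
TASK_DEF_ARN = "arn:aws:ecs:us-east-1:123456789012:task-definition/fetch-to-s3:1"
EVENTS_ROLE_ARN = "arn:aws:iam::123456789012:role/ecsEventsRole"
SUBNET_ID = "subnet-0123456789abcdef0"

# A rule that fires once a day; any rate() or cron() expression works here.
events.put_rule(
    Name="fetch-to-s3-schedule",
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)

# Point the rule at the ECS task so each trigger runs one Fargate task.
events.put_targets(
    Rule="fetch-to-s3-schedule",
    Targets=[
        {
            "Id": "fetch-to-s3-task",
            "Arn": CLUSTER_ARN,
            "RoleArn": EVENTS_ROLE_ARN,
            "EcsParameters": {
                "TaskDefinitionArn": TASK_DEF_ARN,
                "TaskCount": 1,
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": [SUBNET_ID],
                        "AssignPublicIp": "ENABLED",
                    }
                },
            },
        }
    ],
)
```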

AWS
EXPERT
Answered 4 years ago
