You could consider AWS Fargate with Amazon Elastic Container Service or Amazon Elastic Kubernetes Service for a serverless compute option without the 15-minute duration constraint. Alas, Fargate won't let you request GPU support either.
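For example, here's a minimal sketch of kicking off a long-running container as a Fargate task with the AWS SDK for JavaScript v3; the cluster, task definition, and subnet IDs are placeholders for resources you'd create first:

```typescript
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({ region: "us-east-1" });

async function runFargateTask() {
  // Runs an already-registered task definition on Fargate.
  // Unlike Lambda, the task keeps running until the container exits.
  const { tasks } = await ecs.send(
    new RunTaskCommand({
      cluster: "my-cluster",              // placeholder cluster name
      taskDefinition: "puppeteer-task:1", // placeholder task definition
      launchType: "FARGATE",
      networkConfiguration: {
        awsvpcConfiguration: {
          subnets: ["subnet-0123456789abcdef0"], // placeholder subnet
          assignPublicIp: "ENABLED",
        },
      },
    })
  );

  console.log("Started task:", tasks?.[0]?.taskArn);
}

runFargateTask().catch(console.error);
```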
You can't specify the underlying compute environment for AWS Lambda functions, so no, you can't require the presence of a GPU. You could, however, run GPU workloads on AWS Batch by using an AMI with GPU support: https://docs.aws.amazon.com/batch/latest/userguide/batch-gpu-ami.html AWS Batch automatically provisions compute resources, so there's no need to install or manage batch computing software, and Batch jobs are not limited to a 15-minute runtime. Hope this helps.
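For reference, here's a rough sketch of registering a GPU job definition and submitting a job with the AWS SDK for JavaScript v3. The names, queue, image URI, and resource values are placeholders/assumptions, and the compute environment behind the queue still needs GPU instance types with a GPU-enabled AMI like the one in the linked doc:

```typescript
import {
  BatchClient,
  RegisterJobDefinitionCommand,
  SubmitJobCommand,
} from "@aws-sdk/client-batch";

const batch = new BatchClient({ region: "us-east-1" });

async function main() {
  // Register a job definition that reserves one GPU for the container.
  const { jobDefinitionArn } = await batch.send(
    new RegisterJobDefinitionCommand({
      jobDefinitionName: "puppeteer-gpu-job", // placeholder name
      type: "container",
      containerProperties: {
        // placeholder image URI for your Node/Puppeteer container
        image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/puppeteer:latest",
        command: ["node", "index.js"],
        resourceRequirements: [
          { type: "VCPU", value: "4" },
          { type: "MEMORY", value: "16384" }, // MiB
          { type: "GPU", value: "1" },        // request one GPU
        ],
      },
    })
  );

  // Submit a job to an existing queue; no 15-minute cap applies here.
  const { jobId } = await batch.send(
    new SubmitJobCommand({
      jobName: "puppeteer-render-1",   // placeholder name
      jobQueue: "my-gpu-queue",        // placeholder queue
      jobDefinition: jobDefinitionArn!,
    })
  );

  console.log("Submitted Batch job:", jobId);
}

main().catch(console.error);
```

Note that the GPU resource requirement only tells Batch what to reserve; the actual driver support comes from the GPU-enabled AMI used by the compute environment.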
Yeah, I read about AWS Batch; I'm looking for tutorials on how to use it with a Node script at the moment. Any clue? Thanks!
@dennis_a I'm struggling to run Puppeteer on EC2, and the instructions I found don't seem to work: https://www.cloudsavvyit.com/13461/how-to-run-puppeteer-and-headless-chrome-in-a-docker-container/
Looks like I could get it to work by following the settings used in this repo: https://github.com/beemi/puppeteer-headful
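In case it helps others, here's a minimal sketch of the kind of launch flags usually needed to start Puppeteer's Chromium inside a container; the exact flag set is an assumption on my part, not copied from that repo:

```typescript
import puppeteer from "puppeteer";

async function run() {
  // Flags commonly needed when Chromium runs inside a Docker container.
  const browser = await puppeteer.launch({
    headless: true,
    args: [
      "--no-sandbox",
      "--disable-setuid-sandbox",
      "--disable-dev-shm-usage",
      "--disable-gpu", // drop this flag if GPU acceleration is actually available
    ],
  });

  const page = await browser.newPage();
  await page.goto("https://example.com");
  console.log(await page.title());
  await browser.close();
}

run().catch(console.error);
```

`--disable-dev-shm-usage` works around the small default `/dev/shm` in Docker containers, which otherwise tends to crash Chromium on heavier pages.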
It looks like once you get a bit more specific about the hardware requirements (i.e. a GPU), it can take many minutes for a job to start. So I can run processes longer than 15 minutes, but I can't really use hardware acceleration.
GPU presence is important.
It looks like once I start requiring a GPU in AWS Batch jobs, the startup time becomes ridiculous. What's the cost/complexity difference between the option you proposed and AWS Batch? Thanks!