Thanks for your response. The use case is deep learning model inference. Our models use only 1 GB to 4 GB of GPU RAM.
The AWS calculator has no way to filter by GPU RAM, so I've been looking at: https://instances.vantage.sh/?cost_duration=annually&reserved_term=yrTerm1Standard.allUpfront&selected=a1.2xlarge,g4ad.2xlarge,g4dn.8xlarge,g4dn.4xlarge
The g4ad.xlarge, with 8 GiB of GPU RAM, has an annual on-demand cost of $3,315.92, which appears to be the cheapest GPU VM option AWS offers, based on my review.
Is this correct, or is there a cheaper option? I think that by using quantization or smaller models we can get GPU RAM utilization below 4 GB. Are there no machines available at a cheaper price point, given that we only use up to 4 GB of GPU RAM?
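As a rough sanity check on whether quantization can get you under 4 GB, weight memory scales linearly with bytes per parameter. The sketch below uses a hypothetical 1B-parameter model (an assumption, not from the thread); real usage also includes activations, the CUDA context, and framework overhead, so treat these numbers as a lower bound:

```python
# Rough GPU-RAM estimate for model weights at different precisions.
# The 1B parameter count is a hypothetical example; actual GPU memory
# also includes activations and framework overhead (lower bound only).
def weight_memory_gb(n_params: int, bytes_per_param: int) -> float:
    """Memory needed to hold the weights alone, in GiB."""
    return n_params * bytes_per_param / 1024**3

n = 1_000_000_000  # assumed 1B-parameter model
print(f"fp32: {weight_memory_gb(n, 4):.2f} GB")  # ~3.73 GB
print(f"fp16: {weight_memory_gb(n, 2):.2f} GB")  # ~1.86 GB
print(f"int8: {weight_memory_gb(n, 1):.2f} GB")  # ~0.93 GB
```

So even a model of this size fits in 4 GB at fp16 or int8, consistent with the plan to quantize.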
Thank you,
Aaron
Can you share the use case you have in mind? Is this for gaming, video editing, or ML inference?
You can refer to the GPU instances documentation for an overview of EC2 instances with GPUs. For pricing, check the On-Demand pricing page or use the AWS Pricing Calculator. There are also different pricing options, such as Savings Plans.
Some of the most cost-effective instance types include g5g, g4dn, and g4ad. They start at 4 vCPUs and 16 GB of RAM.
EDIT: You can go to the EC2 console, open Instance Types, and filter for instance types with GPUs (screenshot below).
Note that g4ad uses AMD GPUs. The g5g is on the ARM64 architecture, comes with an NVIDIA T4G, and has 8 GB of RAM.
Do you need to run your inference 24/7? If not, on-demand may be more cost effective: stop and start the instance as needed.
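To see why part-time on-demand usage can beat an always-on instance, here is a back-of-the-envelope comparison. The $3,315.92 annual on-demand figure for g4ad.xlarge comes from this thread; the hourly rate is derived from it (assumption: annual = hourly × 8,760 hours):

```python
# Compare running on demand only part of the day vs. all year.
# Annual on-demand figure for g4ad.xlarge taken from the thread;
# hourly rate derived from it (assumes annual = hourly * 8760).
ANNUAL_ON_DEMAND = 3315.9228
HOURS_PER_YEAR = 365 * 24
hourly_rate = ANNUAL_ON_DEMAND / HOURS_PER_YEAR  # ~$0.38/hr

def part_time_annual_cost(hours_per_day: float) -> float:
    """Annual cost if the instance runs only hours_per_day each day."""
    return hours_per_day * 365 * hourly_rate

for h in (4, 8, 12, 24):
    print(f"{h:>2} h/day -> ${part_time_annual_cost(h):,.2f}/yr")
```

For example, running 8 hours a day costs roughly a third of the always-on figure, before even considering Savings Plans or reserved pricing.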
You may want to consider SageMaker, especially Serverless Inference. Refer to its Pricing page for the pricing model.
Thanks for the details. I have updated my post with other options. SageMaker Serverless Inference may be suitable for your needs.