Low GPU RAM VM options and pricing


Hi,

I am trying to get pricing for some of the low GPU RAM AWS VM options, with around 4 GB or 8 GB of GPU RAM. The pricing options I've seen so far either list only the CPU specs or start at 24 GB of GPU RAM and above.

What are the names of some low GPU RAM VMs on AWS I should consider, and what is their associated pricing?

Thank you.

2 Answers

Thanks for your response. The use case is deep learning model inference. Our models only use 1 GB to 4 GB of GPU RAM.

The AWS calculator has no way to filter by GPU RAM, so I've been looking at: https://instances.vantage.sh/?cost_duration=annually&reserved_term=yrTerm1Standard.allUpfront&selected=a1.2xlarge,g4ad.2xlarge,g4dn.8xlarge,g4dn.4xlarge
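For anyone searching the same way: the EC2 DescribeInstanceTypes API does expose GPU memory, even though the calculator can't filter on it. Below is a minimal boto3 sketch, assuming AWS credentials are configured; the region and the 8 GiB cutoff are just examples.

```python
# Sketch: list EC2 instance types whose total GPU memory is at most 8 GiB.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate():
    for itype in page["InstanceTypes"]:
        gpu_info = itype.get("GpuInfo")
        if not gpu_info:
            continue  # skip CPU-only instance types
        total_gpu_mib = gpu_info.get("TotalGpuMemoryInMiB", 0)
        if total_gpu_mib <= 8 * 1024:  # 8 GiB or less of GPU memory
            gpus = ", ".join(
                f'{g["Manufacturer"]} {g["Name"]} x{g["Count"]}'
                for g in gpu_info["Gpus"]
            )
            print(f'{itype["InstanceType"]}: {total_gpu_mib} MiB GPU memory ({gpus})')
```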

The g4ad.xlarge, with 8 GiB of GPU RAM, has an on-demand cost of $3,315.9228 annually, which appears to be the cheapest GPU VM option AWS provides, based on my review.

Is this correct, or is there a cheaper option? I think that by using quantization or smaller models we can get GPU RAM utilization below 4 GB. Are there no machines available at a cheaper price point, given that we only use up to 4 GB of GPU RAM?
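As a rough illustration of the memory side: casting a model to half precision (simpler than, and distinct from, full quantization) roughly halves GPU memory. A minimal PyTorch sketch, with a placeholder torchvision model standing in for the actual models:

```python
# Sketch: run a model in float16 on the GPU and report peak memory use.
import torch
import torchvision.models as models

model = models.resnet50(weights=None)       # placeholder model, untrained
model = model.half().eval().to("cuda")      # fp16 weights on the GPU

with torch.inference_mode():
    x = torch.randn(1, 3, 224, 224, dtype=torch.float16, device="cuda")
    out = model(x)

print(torch.cuda.max_memory_allocated() / 1024**2, "MiB peak GPU memory")
```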

Thank you,

Aaron

Aaron
answered 2 months ago
  • Thanks for the details. I have updated my post with other options. SageMaker Serverless Inference may be suitable for your needs.


Can you share the use case you have in mind? Is this for gaming, video editing or ML inference?

You can refer to the GPU instances documentation for an overview of EC2 instances with GPUs. For pricing, you can check On-Demand pricing or use the AWS Pricing Calculator. There are also other pricing options, such as Savings Plans.
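If you prefer to script it, the AWS Price List API can return on-demand rates programmatically. A minimal boto3 sketch; the instance type, location string, and filter values are only examples, and the Pricing API endpoint is in us-east-1:

```python
# Sketch: fetch the Linux on-demand hourly rate for one instance type.
import json
import boto3

pricing = boto3.client("pricing", region_name="us-east-1")

resp = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "g4dn.xlarge"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
    ],
    MaxResults=1,
)

for item in resp["PriceList"]:
    product = json.loads(item)                 # each entry is a JSON string
    for term in product["terms"]["OnDemand"].values():
        for dim in term["priceDimensions"].values():
            print(dim["description"], dim["pricePerUnit"]["USD"], "USD/hr")
```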

Some of the cost-effective instance types include g5g, g4dn, and g4ad. They start at 4 vCPUs and 16 GB of RAM.

EDIT: You can go to the EC2 console, open Instance Types, and filter for instance types with GPUs. [Screenshot of the Instance Types filter omitted]

Note that g4ad instances use AMD GPUs. The g5g instances are ARM64-based, come with an NVIDIA T4G GPU, and have 8 GB of RAM.

Do you need to run your inference 24/7? If not, on-demand may be more cost effective; stop and start the instance as required.
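For example, a minimal boto3 sketch of stopping an instance when idle and starting it again before the next inference window; the instance ID and region are placeholders:

```python
# Sketch: stop/start an instance so on-demand billing covers only the hours used.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

# Stop when idle.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# Later, start again before running inference.
ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
```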

You may want to consider SageMaker, especially Serverless Inference. Refer to the SageMaker pricing page for the pricing model.
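A minimal boto3 sketch of a Serverless Inference endpoint configuration, assuming a model named "my-model" is already registered in SageMaker; all names and the memory/concurrency values are just examples:

```python
# Sketch: create a SageMaker serverless endpoint for an existing model.
import boto3

sm = boto3.client("sagemaker")

sm.create_endpoint_config(
    EndpointConfigName="my-serverless-config",   # example name
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",             # must already exist in SageMaker
            "ServerlessConfig": {
                "MemorySizeInMB": 4096,          # 1024-6144, in 1 GB increments
                "MaxConcurrency": 5,
            },
        }
    ],
)

sm.create_endpoint(
    EndpointName="my-serverless-endpoint",
    EndpointConfigName="my-serverless-config",
)
```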

AWS
EXPERT
Mike_L
answered 2 months ago
EXPERT
reviewed 2 months ago
