Which EC2 instance should I get to run fuzzy logic?


I have fuzzy logic code that was ported from an Excel workbook of about 270,000 lines. I need a recommendation for the instance type that will run it as fast as possible.

2 Answers

I am not familiar with machine learning or inference, but I would guess that processing a fairly large Excel-derived workload like this uses quite a bit of CPU.
Therefore, the C-series (Compute Optimized) instance family may be a better choice.
https://aws.amazon.com/ec2/instance-types/?nc1=h_ls

Compute Optimized instances are ideal for compute bound applications that benefit from high performance processors. Instances belonging to this category are well suited for batch processing workloads, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference and other compute intensive applications.
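If the workload really is CPU-bound, one way to take advantage of a Compute Optimized instance's many vCPUs is to split the row-level calculations across processes. Below is a minimal sketch, assuming the Excel sheet has been exported to a rows.csv file and that evaluate_fuzzy_rules() stands in for your own ported fuzzy logic; both names are placeholders, not anything from the original question.

```python
# Minimal sketch: spread row-level fuzzy evaluation across all vCPUs of a
# compute-optimized instance. Assumes the ~270,000-row sheet was exported
# to rows.csv; evaluate_fuzzy_rules() is a hypothetical placeholder for
# the logic ported from the Excel formulas.
import csv
from multiprocessing import Pool, cpu_count

def evaluate_fuzzy_rules(row):
    # Placeholder for the real fuzzy logic; dummy computation shown here.
    return float(row["input"]) * 0.5

def main():
    with open("rows.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # One worker per vCPU; chunksize keeps inter-process overhead low.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(evaluate_fuzzy_rules, rows, chunksize=1000)

    print(f"Processed {len(results)} rows")

if __name__ == "__main__":
    main()
```

With this structure, a larger C-series instance translates fairly directly into more parallel workers, which is usually the main lever for "as fast as possible" on a CPU-bound batch job.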

answered a year ago by an Expert
reviewed a year ago by iBehr (AWS Expert)

The choice of the best Amazon SageMaker instance type for training depends on various factors such as the size of your dataset, complexity of the model, and your budget. SageMaker offers a range of instance types optimized for different use cases. Here are some popular options:

ml.m5.xlarge: This instance type provides a balance between cost and performance. It offers a good starting point for many training tasks and is suitable for small to medium-sized datasets.

ml.m5.4xlarge: If you have larger datasets or more computationally intensive models, this instance type can provide more processing power and memory compared to the m5.xlarge instance.

ml.p3.2xlarge: If you are working with deep learning models that benefit from GPU acceleration, the P3 instances with NVIDIA V100 GPUs are a good choice. They offer significant computational power for training deep learning models and are particularly effective when dealing with large-scale image or text datasets. https://aws.amazon.com/ec2/instance-types/p3/

ml.g4dn.xlarge: Similar to the P3 instances, the G4 instances are optimized for GPU acceleration. However, they are more cost-effective and are suitable for training smaller deep learning models or running inference tasks.

ml.c5.18xlarge: When working with large datasets that require substantial compute, the C5 instances can be a good option. They offer high-performance CPUs with a large number of vCPUs and are suitable for distributed training and large-scale processing.

You may want to take a look at this: https://pages.awscloud.com/rs/112-TZM-766/images/AL-ML%20for%20Startups%20-%20Select%20the%20Right%20ML%20Instance.pdf
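As a rough illustration, the instance type is just a parameter when you launch a SageMaker training job. The sketch below uses the SageMaker Python SDK; the role ARN, training image URI, and S3 paths are placeholders you would replace with your own values.

```python
# Minimal sketch: choosing the instance type for a SageMaker training job.
# The role ARN, image URI, and S3 locations below are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<your-training-image-uri>",        # prebuilt or custom container
    role="arn:aws:iam::111111111111:role/YourSageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",                 # swap for ml.c5.18xlarge, ml.p3.2xlarge, etc.
    sagemaker_session=session,
)

# Start training against data staged in S3.
estimator.fit({"training": "s3://your-bucket/training-data/"})
```

Changing instance_type (and instance_count for distributed training) is how you move between the options listed above without touching the rest of the job definition.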

answered a year ago by AWS
