Which EC2 instance should I get to run fuzzy logic?


I have fuzzy logic code built in an Excel workbook of about 270,000 rows. I need a recommendation for which instance type would be best for running it as fast as possible.

2 Answers

I am not familiar with machine learning or inference, but I would guess that processing fairly large Excel files like this is mostly CPU-bound.
If so, the C-series instance family may be the better fit.

Compute Optimized instances are ideal for compute bound applications that benefit from high performance processors. Instances belonging to this category are well suited for batch processing workloads, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference and other compute intensive applications.
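Whatever instance you pick, rewriting the spreadsheet logic in vectorized code usually speeds things up far more than hardware alone, since Excel recalculates cell by cell. As a minimal sketch (the triangular membership function and the sample values here are hypothetical stand-ins for whatever your workbook actually computes), evaluating fuzzy memberships over a whole column with NumPy looks like this:

```python
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a)
    right = (c - x) / (c - b)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# Fuzzify a whole batch of readings in one vectorized call,
# instead of one Excel formula per row
temps = np.array([15.0, 20.0, 25.0, 30.0])
warm = tri_membership(temps, 15.0, 25.0, 35.0)  # peak membership at 25
```

A vectorized pass like this over 270,000 rows typically finishes in milliseconds on any modern instance, which may change which instance size you actually need.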

answered 6 months ago
reviewed by AWS 6 months ago

The choice of the best Amazon SageMaker instance type for training depends on various factors such as the size of your dataset, complexity of the model, and your budget. SageMaker offers a range of instance types optimized for different use cases. Here are some popular options:

ml.m5.xlarge: This instance type provides a balance between cost and performance. It offers a good starting point for many training tasks and is suitable for small to medium-sized datasets.

ml.m5.4xlarge: If you have larger datasets or more computationally intensive models, this instance type can provide more processing power and memory compared to the m5.xlarge instance.

ml.p3.2xlarge: If you are working with deep learning models that benefit from GPU acceleration, the P3 instances with NVIDIA V100 GPUs are a good choice. They offer significant computational power for training deep learning models and are particularly effective when dealing with large-scale image or text datasets. https://aws.amazon.com/ec2/instance-types/p3/

ml.g4dn.xlarge: Similar to the P3 instances, the G4 instances are optimized for GPU acceleration. However, they are more cost-effective and are suitable for training smaller deep learning models or running inference tasks.

ml.c5.18xlarge: When working with large datasets that require substantial computational resources, the C5 instances can be a good option. They offer a high-performance CPU with excellent memory capacity and are suitable for distributed training and large-scale processing.

You may want to take a look at this: https://pages.awscloud.com/rs/112-TZM-766/images/AL-ML%20for%20Startups%20-%20Select%20the%20Right%20ML%20Instance.pdf
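Before committing to one of the larger instances above, it is worth measuring the actual workload on a cheaper instance first and only scaling up if it proves CPU-bound. A rough timing harness might look like this (the 270,000-element array and the fuzzy operation inside the lambda are hypothetical placeholders for your real computation):

```python
import time
import numpy as np

def benchmark(fn, *args, repeats=5):
    """Run a workload several times and report the best wall-clock time."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in for the 270,000-row fuzzy evaluation
data = np.random.rand(270_000)
elapsed = benchmark(lambda d: np.minimum(d, 1.0 - d).sum(), data)
```

Run the same script on, say, an m5.xlarge and a c5.4xlarge; if the times barely differ, the workload is not compute-bound and the cheaper instance is the better choice.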

answered 6 months ago
