How to load a large amount of data from S3 onto SageMaker?


I have a notebook on SageMaker Studio and I want to read data from S3. I am using the code below:

import io
import boto3
import pandas as pd

s3_client = boto3.client('s3')
bucket = 'bucket_name'
data_key = 'file_key.csv'
obj = s3_client.get_object(Bucket=bucket, Key=data_key)
df = pd.read_csv(io.BytesIO(obj['Body'].read()))
df.head()

It works for small datasets but fails partway through with the dataset I'm trying to load, which is 15 GB. I changed the instance to ml.g4dn.xlarge (accelerated computing, 4 vCPU + 16 GiB + 1 GPU), but it still fails. What am I missing here? Is it about the instance type, or about the code? What is the best way to import large datasets from S3 into SageMaker?

Thank you

Asked 3 years ago · 4,384 views
3 Answers

What is the need to load a large dataset onto the notebook? If you are pre-processing, there are better ways to do this: a SageMaker Spark processing job, your own Spark cluster, or possibly AWS Glue. If you are exploring the data, you should just use a smaller sample. If you are loading the data for training, SageMaker supports different input modes for reading the data, and the data doesn't have to be downloaded onto the notebook.
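For example, here is a minimal sketch of launching a training job with the SageMaker Python SDK's FastFile input mode, which streams objects from S3 on demand instead of copying them to the notebook or the training instance first. The image URI, role ARN, bucket prefix, and instance type below are placeholders, not values from the original question:

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = 'arn:aws:iam::123456789012:role/MySageMakerRole'  # placeholder role ARN

# FastFile mode streams data from S3 as the training code reads it,
# so the 15 GB file never has to be downloaded in full up front.
estimator = Estimator(
    image_uri='<your-training-image-uri>',  # placeholder training image
    role=role,
    instance_count=1,
    instance_type='ml.m5.2xlarge',
    input_mode='FastFile',
    sagemaker_session=session,
)

train_input = TrainingInput(
    s3_data='s3://bucket_name/train/',      # placeholder S3 prefix
    content_type='text/csv',
    input_mode='FastFile',
)

estimator.fit({'train': train_input})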

AWS
Answered 3 years ago
Reviewed by an AWS Expert 3 years ago
  • @AWS-cm-1462515 @Hrushi_G can somebody please point to a resource on AWS (or elsewhere) with instructions on how to read data in SageMaker for training without loading it into the notebook?


If you want to download the data onto the notebook so that you don't have to load it from S3 each time you want it in Pandas, you should confirm that the volume size is sufficient for the data. It is set to 5 GB by default, which lines up with your scenario.

To change this, you'll need to edit your notebook instance, expand the "Additional configuration" drop-down, and look for the "Volume size in GB" field.
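If you prefer to do this programmatically, here is a sketch using boto3, assuming a classic notebook instance named 'my-notebook' (a placeholder); the instance has to be stopped before its volume can be resized:

import boto3

sm = boto3.client('sagemaker')

# The notebook instance must be in the 'Stopped' state before it can be updated.
sm.stop_notebook_instance(NotebookInstanceName='my-notebook')
sm.get_waiter('notebook_instance_stopped').wait(NotebookInstanceName='my-notebook')

# Grow the attached EBS volume from the 5 GB default to something that fits a 15 GB CSV.
sm.update_notebook_instance(
    NotebookInstanceName='my-notebook',
    VolumeSizeInGB=50,
)

sm.start_notebook_instance(NotebookInstanceName='my-notebook')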

AWS
Ben_F
Answered 3 years ago

I have used a much simpler approach to reading a single data file from S3 using pandas (pandas can read s3:// URLs directly when the s3fs package is installed). For example:

import pandas as pd

bucket = 'bucket_name'
data_key = 'file_key.csv'

df = pd.read_csv('s3://{}/{}'.format(bucket, data_key))
df.head()

Maybe this will perform better?
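That said, a single read_csv call still has to hold the entire 15 GB DataFrame in memory at once. A minimal sketch of reading the same file in chunks so memory stays bounded (the chunk size and the row-count aggregation are arbitrary examples):

import pandas as pd

bucket = 'bucket_name'
data_key = 'file_key.csv'

# Iterate over the file one chunk at a time instead of materializing all 15 GB in RAM.
chunks = pd.read_csv(
    's3://{}/{}'.format(bucket, data_key),
    chunksize=1_000_000,  # rows per chunk; tune to the instance's memory
)

row_count = 0
for chunk in chunks:
    # Replace this with whatever per-chunk processing or aggregation you need.
    row_count += len(chunk)

print(row_count)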

Answered 3 years ago
