After successfully loading CSV files from S3 into a SageMaker notebook instance, I am stuck on doing the reverse.
I have a dataframe and want to upload it to an S3 bucket as CSV or JSON. The code I have so far is below:
import pandas as pd

# df is an existing DataFrame
bucket = 'bucketname'
data_key = 'test.csv'
data_location = 's3://{}/{}'.format(bucket, data_key)
df.to_csv(data_location)
I assumed that since pd.read_csv() worked for loading, df.to_csv() would also work for writing, but it didn't. My guess is that it fails because this approach gives me no way to set the privacy (ACL) options that I would normally pick when uploading a file to S3 manually. Is there a way to upload the data to S3 from SageMaker?
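One possible approach, sketched below under the assumption that boto3 is available (it is preinstalled on SageMaker notebook instances): serialize the DataFrame to CSV in memory with io.StringIO, then upload the resulting string with boto3's put_object, which accepts an explicit ACL argument for the privacy setting. The function name and bucket/key values here are illustrative placeholders, not a confirmed fix.

```python
import io
import pandas as pd

def upload_df_to_s3(df, bucket, key, acl="private"):
    """Serialize a DataFrame to CSV in memory and upload it via boto3.

    The acl argument (e.g. "private", "public-read") lets you choose
    the privacy setting explicitly, which a bare df.to_csv() cannot do.
    """
    # Imported here so the serialization part can be exercised
    # without S3 access or credentials.
    import boto3

    buffer = io.StringIO()
    df.to_csv(buffer, index=False)  # write CSV text into the buffer
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=buffer.getvalue(), ACL=acl)

# Usage (placeholder bucket/key):
# upload_df_to_s3(df, "bucketname", "test.csv")
```

The same pattern works for JSON by swapping df.to_csv(buffer, ...) for df.to_json(buffer). Separately, recent pandas versions can sometimes write directly to an s3:// path if the s3fs package is installed, but that route does not let you set the ACL.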