
PySpark DataFrame as RecordIO protobuf


I want to save my PySpark DataFrame in RecordIO protobuf format. I am using Amazon EMR to run my PySpark scripts, and I want to use AWS SageMaker to train a machine learning model. SageMaker pipe mode only accepts RecordIO protobuf as input, hence my question.

I have tried to save my PySpark DataFrame as RecordIO protobuf as follows:

# Requires the SageMaker Spark (sagemaker-pyspark) library on the EMR cluster
output_path = "s3://my_path/output_processed"
df_transformed.write.format("sagemaker").mode("overwrite").save(output_path)

But when I run the SageMaker model I get a missing-values error, even though my DataFrame does not have missing values. Any idea what might help?

asked a year ago · 339 views
1 Answer

Hi,

For the missing-values error, a few things to check:

- Validate your data: make sure the data types of the columns match what the algorithm expects, and look for nulls, unexpected values, or outliers.
- Review the serialization step. The RecordIO protobuf writer in the SageMaker Spark library expects, by default, a DoubleType column named "label" and a Vector column named "features". If your DataFrame has a different schema, records can be serialized incompletely, which shows up as a missing-values error at training time.
- Start with a small sample and run data-integrity checks on it before writing the full dataset.

For more guidance on data preparation, you can refer to the Prepare data with advanced transformations documentation.
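As a rough illustration, here is a minimal sketch of those checks in PySpark. It assumes the sagemaker-pyspark (SageMaker Spark) library is on the cluster classpath, as it is on recent EMR releases, and it uses a hypothetical input path plus a placeholder label column named "target", so adapt the names to your data:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, when
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("recordio-export").getOrCreate()

# Hypothetical input location; replace with your own data source.
df = spark.read.parquet("s3://my_path/input")

# 1. Count nulls per column to rule out genuinely missing values.
df.select([count(when(col(c).isNull(), c)).alias(c) for c in df.columns]).show()

# 2. Assemble the schema the SageMaker Spark writer expects by default:
#    a DoubleType "label" column and a Vector "features" column.
#    VectorAssembler fails fast on nulls (handleInvalid="error" is the
#    default), which also helps surface hidden missing values.
feature_cols = [c for c in df.columns if c != "target"]  # "target" is a placeholder
assembled = (
    VectorAssembler(inputCols=feature_cols, outputCol="features")
    .transform(df)
    .withColumn("label", col("target").cast("double"))
    .select("label", "features")
)

# 3. Validate a small sample end to end before writing the full dataset.
assembled.limit(100).write.format("sagemaker").mode("overwrite").save(
    "s3://my_path/output_sample"
)

If the sample trains cleanly in SageMaker, write the full DataFrame the same way.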

I hope this helps.

AWS
answered a year ago


