_$folder$ while writing to S3 from a Glue PySpark job

0

Hi

I have written a few Glue jobs and never faced this situation, but it has suddenly started appearing for a new job I wrote. I am using the code below to write data to S3. The S3 path is "s3://...."

unionData_df.repartition(1).write.mode("overwrite").parquet(test_path)

In my test environment, when I first ran the Glue job, it created an empty file with the suffix _$folder$. The same happened in Prod as well. My other jobs do not have this problem.

Why is it creating this file? How can I avoid it? Any pointers on why it happens for this job but not for the others? What should I be checking? Note: I think the file only gets created the first time the prefix/folder is created. Some blog posts suggest changing the S3 path to s3a://, but I am not sure if that is the right thing to do.

asked a year ago · 1250 views
2 Answers
1
Accepted Answer

This is done by Hadoop's S3 connector when the destination folder does not exist. The _$folder$ object is just a placeholder created by mkdir-style calls; since S3 has no real directories, the "folder" only comes into existence when the first object is written under that prefix. The other jobs where this is not happening are probably writing to prefixes that already exist. These placeholder files should not cause any problems.
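
If the empty markers bother you, one option is to delete them after the job finishes. Here is a minimal sketch using boto3; the bucket and prefix names are hypothetical placeholders for whatever your test_path points at:

import boto3

# Hypothetical values -- derive these from your own test_path.
bucket = "my-bucket"
prefix = "path/to/output"

s3 = boto3.client("s3")

# List objects under the prefix and remove any *_$folder$ placeholder keys.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("_$folder$"):
            s3.delete_object(Bucket=bucket, Key=obj["Key"])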

AWS
venky81
answered a year ago
AWS
EXPERT
reviewed a year ago
0

This happens because of the S3 path you use during writing.

s3:// vs s3a://

s3:// will create the _$folder$ marker; s3a:// will not.

They both have their ups and downs, and it is generally recommended to stick with s3:// in Glue.
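
Concretely, the only difference is the URI scheme in the path you pass to the writer; the bucket and prefix below are hypothetical:

# Default scheme: may leave a _$folder$ marker the first time the prefix is created
unionData_df.repartition(1).write.mode("overwrite").parquet("s3://my-bucket/path/to/output/")

# Hadoop S3A connector: does not create the _$folder$ marker
unionData_df.repartition(1).write.mode("overwrite").parquet("s3a://my-bucket/path/to/output/")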

answered a year ago
