_$folder$ while writing to S3 from a Glue PySpark job


Hi

I have written a few Glue jobs and have not run into this before, but it has suddenly started happening with a new job I wrote. I am using the code below to write data to S3. The S3 path is "s3://...."

unionData_df.repartition(1).write.mode("overwrite").parquet(test_path)

In my test environment, the first run of the Glue job created an empty object with the suffix _$folder$. The same happened in Prod. My other jobs do not have this problem.

Why is it creating this file? How can I avoid it? Any pointers on why it happens for this job but not the others? What should I be checking? Note: I think the file only gets created the first time the prefix/folder is created. Some blog posts suggest changing the S3 path to s3a://, but I am not sure that is the right thing to do.

Asked 1 year ago · 1,436 views
2 Answers
Accepted Answer

This is done by Hadoop when the target folder does not exist. The _$folder$ object is just a placeholder created by mkdir-style calls; the actual folder only comes into existence when the first file is written under it. The other jobs where this is not happening are probably writing to prefixes that already exist. These placeholder files should not cause a problem.
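
If the markers are still unwanted (for example, a downstream tool treats them as data), one option is to delete them after the job finishes. Here is a minimal boto3 sketch, assuming hypothetical bucket and prefix values standing in for the real test_path:

import boto3

# Hypothetical values: replace with the bucket and prefix from your own test_path.
bucket = "my-bucket"
prefix = "path/to/output"

s3 = boto3.client("s3")

# Walk the prefix and remove any "_$folder$" placeholder objects left by mkdir calls.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("_$folder$"):
            s3.delete_object(Bucket=bucket, Key=obj["Key"])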

venky81 (AWS)
Answered 1 year ago
Reviewed by an AWS Expert 1 year ago

This happens because of the S3 path scheme you use when writing.

s3:// vs s3a://

s3:// will create the _$folder$ marker; s3a:// will not.

They both have their ups and downs, but it is generally recommended to stick with s3://.
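
For illustration, a minimal sketch of the two schemes using the write from the question, with a hypothetical bucket and prefix in place of the real test_path:

# Default EMRFS scheme in Glue: creating a new prefix leaves a "_$folder$" marker.
unionData_df.repartition(1).write.mode("overwrite").parquet("s3://my-bucket/path/to/output/")

# Hadoop S3A scheme: no "_$folder$" marker, but the connector behaves slightly
# differently from EMRFS, so test before switching an existing job.
unionData_df.repartition(1).write.mode("overwrite").parquet("s3a://my-bucket/path/to/output/")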

Answered 1 year ago
