_$folder$ while writing to S3 from a Glue PySpark job


Hi

I have written a few Glue jobs before and never faced this situation, but it has suddenly started appearing for a new job that I wrote. I am using the code below to write data to S3. The S3 path is "s3://....".

unionData_df.repartition(1).write.mode("overwrite").parquet(test_path)

In my test env, when I first ran the Glue job, it created an empty file with the suffix _$folder$. The same happened in Prod as well. My other jobs do not have this problem.

Why is it creating this file? How can I avoid it? Any pointers on why it happens for this job but not for the others? What should I be checking? Note: I think the file only gets created the first time the prefix/folder is created. Some blog posts suggest changing the S3 path to s3a, but I am not sure if that is the right thing to do.
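For reference, a minimal sketch of how that write sits inside the job. This is only an illustration: unionData_df and test_path come from the question, while the GlueContext setup, the upstream read, and the example bucket/prefix are assumptions added here.

from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Placeholder for however unionData_df is actually produced upstream
unionData_df = spark.read.parquet("s3://example-bucket/input/")  # hypothetical input

test_path = "s3://example-bucket/output/table/"  # hypothetical output prefix

# The write from the question: repartition to a single parquet file and overwrite the prefix.
# When the prefix does not exist yet, the committer's mkdir step can leave an
# empty "<prefix>_$folder$" marker object next to the data.
unionData_df.repartition(1).write.mode("overwrite").parquet(test_path)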

Asked 1 year ago · 1,436 views
2 Answers
Accepted Answer

This is done by Hadoop when the target folder does not exist. The _$folder$ object is just a placeholder created by the mkdir step; in S3 the actual folder (prefix) only comes into existence when the first object is written under it. The other jobs where this is not happening are most likely writing to prefixes that already exist. These placeholder files should not cause any problems.
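If you do want to get rid of the markers, a small cleanup sketch along these lines could work. This is an assumption added for illustration, not part of the answer above; the bucket and prefix names are placeholders.

import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"   # hypothetical bucket
prefix = "output/"          # hypothetical prefix to scan

# Walk the prefix and delete only the empty "_$folder$" marker objects,
# leaving the actual data files untouched.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("_$folder$"):
            s3.delete_object(Bucket=bucket, Key=obj["Key"])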

AWS
venky81
Answered 1 year ago
AWS Expert, reviewed 1 year ago

This happens because of the S3 path scheme you use when writing.

s3:// vs s3a://

Writing with s3:// will create the _$folder$ marker; writing with s3a:// will not.

They both have their ups and downs, and it is generally recommended to stick with s3://.
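As a rough illustration of the difference, assuming unionData_df is the DataFrame from the question and the paths below are placeholders:

# Same write, two URI schemes. On Glue, "s3://" (the Glue/EMR S3 connector)
# can leave a "<prefix>_$folder$" marker when it creates the prefix;
# "s3a://" (the Hadoop S3A connector) typically does not.
test_path_s3 = "s3://example-bucket/output/table/"    # recommended scheme on Glue
test_path_s3a = "s3a://example-bucket/output/table/"  # alternative scheme

unionData_df.repartition(1).write.mode("overwrite").parquet(test_path_s3)
# unionData_df.repartition(1).write.mode("overwrite").parquet(test_path_s3a)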

Answered 1 year ago
