Writing from Athena PySpark to S3 is a lot slower today than yesterday. How is that possible?


I have a 10GB dataset loaded in a PySpark dataframe.

df.coalesce(1).write.mode('overwrite').parquet("s3://xxxxxxxxxx-eu-west-1-athena-results-bucket-h1snx89wnc/output-data-parquet2") 
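# note: coalesce(1) collapses the output to a single partition, so one task writes the entire 10 GB file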

Yesterday, the parquet file was created in 6 to 7 minutes. Today, it won't even finish: I am disconnected from the AWS console before it completes, so it takes at least 45+ minutes.

Is that possible, or did I do something wrong? (The source file hasn't changed.)

lalvaro
asked a year ago · 219 views
1 Answer

Hello @lalvaro,

It looks like you are referring to an issue you faced during a specific Athena Spark session. It is difficult to make a recommendation without additional information; we would need the Calculation ID to investigate the Parquet write issue further. If your account has a Premium Support subscription, please submit a support case with the Session ID and Calculation ID, and an engineer will work with you on this issue.

On the other hand, if the calculation has already been submitted, it will keep running even after your AWS console session disconnects, and you will be able to find its result in the calculation history.
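If you have the Calculation ID, you can also poll the calculation's state programmatically. A minimal sketch with boto3, assuming the eu-west-1 region from your bucket name; the calculation ID below is a placeholder to replace with your own:

import boto3

athena = boto3.client("athena", region_name="eu-west-1")

# placeholder ID; substitute the Calculation ID from your session's calculation history
resp = athena.get_calculation_execution(
    CalculationExecutionId="00000000-0000-0000-0000-000000000000"
)

print(resp["Status"]["State"])                    # e.g. RUNNING, COMPLETED, FAILED
print(resp.get("Result", {}).get("ResultS3Uri"))  # S3 location of the result, once available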

As for the console/role timeout, please work with your AWS account administrator to have your IAM role's session duration extended.
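If the disconnects come from the role's maximum session duration, an administrator could raise it, for example as below. This is a sketch only: the role name is hypothetical, and the duration must be between 3600 and 43200 seconds (1 to 12 hours):

import boto3

iam = boto3.client("iam")

# "AthenaSparkConsoleRole" is a hypothetical role name; MaxSessionDuration is in seconds
iam.update_role(RoleName="AthenaSparkConsoleRole", MaxSessionDuration=14400)  # 4 hours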

AWS
Support Engineer
answered a year ago
