Writing from Athena PySpark to S3 is much slower today than yesterday. How is that possible?


I have a 10 GB dataset loaded in a PySpark DataFrame.

# coalesce(1) funnels the whole dataset through a single task, so one writer produces the file
df.coalesce(1).write.mode('overwrite').parquet("s3://xxxxxxxxxx-eu-west-1-athena-results-bucket-h1snx89wnc/output-data-parquet2")

Yesterday, the Parquet file was created in 6 to 7 minutes. Today, it won't even finish; I get disconnected from the AWS console before it completes (so it takes at least 45 minutes).

Is that possible, or did I do something wrong? (The source file hasn't changed.)

lalvaro
Asked 1 year ago · Viewed 220 times
1 Answer

Hello @lalvaro,

It looks like you are referring to an issue you faced during a specific Athena Spark session. It is difficult to make a recommendation without additional information; we would need the Calculation ID to investigate the Parquet write issue further. If your account has a Premium Support subscription, please submit a support case with the Session and Calculation IDs, and an engineer will work with you on this issue.

On the other hand, if the calculation has already been submitted, it will keep running even after your AWS console session disconnects, and you will be able to find the result in the calculation history.
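
If you still have the Session ID, you can also poll the calculation history programmatically instead of relying on the console. Below is a minimal sketch using boto3's Athena client; the region and session ID are placeholders you would substitute with your own values.

import boto3

# Region and session ID are placeholders -- substitute your own values.
athena = boto3.client('athena', region_name='eu-west-1')

# List the calculations that ran in the given session.
resp = athena.list_calculation_executions(SessionId='your-session-id')
for calc in resp['Calculations']:
    calc_id = calc['CalculationExecutionId']
    state = calc['Status']['State']
    print(calc_id, state)

    # For a finished calculation, fetch where its results were written.
    if state == 'COMPLETED':
        detail = athena.get_calculation_execution(CalculationExecutionId=calc_id)
        print(detail['Result'].get('ResultS3Uri'))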

As for the console/role timeout, please work with your AWS account administrator to have your IAM role's maximum session duration extended.
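
For reference, an administrator could raise the role's maximum session duration along these lines; the role name below is a placeholder, the call requires iam:UpdateRole permission, and 12 hours is the ceiling IAM allows.

import boto3

iam = boto3.client('iam')

# 'YourConsoleRole' is a placeholder role name; 43200 seconds = 12 hours,
# the maximum session duration IAM permits for a role.
iam.update_role(RoleName='YourConsoleRole', MaxSessionDuration=43200)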

AWS
Support Engineer
Answered 1 year ago
