Writing from Athena PySpark to S3 is a lot slower today than yesterday. How is that possible?


I have a 10GB dataset loaded in a PySpark DataFrame.

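# Note: coalesce(1) collapses the output to a single partition, so the whole
# 10 GB is written to one Parquet file by a single task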
df.coalesce(1).write.mode('overwrite').parquet("s3://xxxxxxxxxx-eu-west-1-athena-results-bucket-h1snx89wnc/output-data-parquet2") 

Yesterday, the Parquet file was created in 6 to 7 minutes. Today, it won't even finish: I am disconnected from the AWS console before it completes (so it takes at least 45+ minutes).

Is this possible, or did I do something wrong? (The source file hasn't changed.)

lalvaro
Asked 1 year ago · 219 views
1 Answer

Hello @lalvaro,

It looks like you are referring to an issue you faced during a specific Athena Spark session. It is tricky to provide a recommendation without additional information; we would need the calculation ID to investigate the Parquet write issue further. If your account has a Premium Support subscription, please submit a support case with the session and calculation IDs, and an engineer will work with you on this issue.

On the other hand, if the calculation has already been submitted, it will keep running even after your AWS console session disconnects; you will be able to find the result of the calculation in the calculation history.
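As a minimal sketch (assuming boto3 credentials and the eu-west-1 region; the session ID below is a placeholder you would replace with your own), you can also check the state of past calculations through the Athena API:

import boto3

athena = boto3.client("athena", region_name="eu-west-1")

# List the calculations submitted in your Spark session
# (the session ID is shown in the Athena notebook editor)
session_id = "your-session-id"  # placeholder: replace with your actual session ID

for calc in athena.list_calculation_executions(SessionId=session_id)["Calculations"]:
    calc_id = calc["CalculationExecutionId"]
    # Fetch the full status, including state and any failure reason
    status = athena.get_calculation_execution(CalculationExecutionId=calc_id)["Status"]
    print(calc_id, status["State"], status.get("StateChangeReason", ""))

A calculation that is still RUNNING after your console disconnected confirms the write continued in the background; once it reaches COMPLETED, the Parquet output should be in your S3 location.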

As for the console/role timeout issue, please work with your AWS account administrator to have your IAM role's maximum session duration extended.
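As a sketch of what that looks like (the role name below is a placeholder; your administrator would run this with IAM permissions), the maximum session duration can be raised via the IAM API:

import boto3

iam = boto3.client("iam")

# Raise the maximum console/assume-role session duration to 4 hours
# "AthenaConsoleRole" is a placeholder for the role you sign in to the console with
iam.update_role(
    RoleName="AthenaConsoleRole",
    MaxSessionDuration=14400,  # seconds; IAM allows 3600 to 43200
)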

AWS
Support Engineer
Answered 1 year ago
