Writing from Athena PySpark to S3 is a lot slower today than yesterday. How is that possible?


I have a 10GB dataset loaded in a PySpark dataframe.

df.coalesce(1).write.mode('overwrite').parquet("s3://xxxxxxxxxx-eu-west-1-athena-results-bucket-h1snx89wnc/output-data-parquet2") 

Yesterday, the Parquet file was created in 6 to 7 minutes. Today, it won't even finish; I get disconnected from the AWS console before it completes (so at least 45 minutes).

Is this possible, or did I do something wrong? (The source file hasn't changed.)

lalvaro
asked a year ago · 219 views
1 Answer

Hello @lalvaro,

It looks like you are referring to an issue you faced during a specific Athena Spark session. It is tricky to provide a recommendation without additional information; we would need the Calculation ID to investigate the Parquet write issue further. If your account has an AWS Premium Support subscription, please submit a support case with the Session and Calculation IDs, and an engineer will work with you on this issue.

On the other hand, if the calculation has already been submitted, it will keep running even after your AWS console session disconnects, and you will be able to find its result in the calculation history.
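If you still have the Session ID, you can also pull the calculation history programmatically with boto3. A minimal sketch follows; the session ID is a placeholder, and the Result field is only populated once a calculation has produced output:

import boto3

# Minimal sketch: list the calculation history of an Athena Spark session
# and print where each calculation wrote its results. The session ID is a
# placeholder -- substitute your own. Region matches the bucket in the post.
athena = boto3.client("athena", region_name="eu-west-1")

history = athena.list_calculation_executions(SessionId="<your-session-id>")
for calc in history["Calculations"]:
    calc_id = calc["CalculationExecutionId"]
    print(calc_id, calc["Status"]["State"])

    # Details include S3 URIs for stdout and results, if the calculation
    # has finished and produced output.
    detail = athena.get_calculation_execution(CalculationExecutionId=calc_id)
    result = detail.get("Result", {})
    print("  result:", result.get("ResultS3Uri"), "stdout:", result.get("StdOutS3Uri"))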

As for the console/role timeout issue, please work with your AWS account administrator to have your IAM role's maximum session duration extended.
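For reference, if the administrator agrees, the role's maximum session duration can be raised programmatically. A minimal sketch, where the role name and duration are placeholders:

import boto3

# Minimal sketch: raise the maximum session duration of an IAM role.
# Role name and duration are placeholders; requires iam:UpdateRole permission.
iam = boto3.client("iam")
iam.update_role(
    RoleName="<your-console-role>",
    MaxSessionDuration=14400,  # 4 hours, in seconds (the maximum allowed is 43200)
)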

AWS
SUPPORT ENGINEER
answered a year ago
