How to optimize a batch of Spark Jobs on EMR to reduce overall processing time by 4-5x?


A customer is running a batch of 25 nightly Spark jobs split across 2 EMR clusters processing in parallel. There are no dependencies between these jobs; they can all run in parallel. Across all jobs they fetch a total of 250GB of data from the source tables. Completion time varies from 20 minutes to 4 hours per job, and the overall batch completion time is 12-14 hours. They need to cut this down to 2-3 hours.

What are the top 3-5 recommendations they can try in order to achieve this in 1-2 weeks?

The Spark code is straightforward: 1) run Spark SQL to read data over JDBC into DataFrames, 2) transform/join the DataFrames, 3) write the DataFrames to S3 partitions.
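For reference, a minimal sketch of that pipeline (the JDBC endpoint, table names, and S3 bucket below are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("nightly-batch-job").getOrCreate()

jdbc_url = "jdbc:postgresql://db-host:5432/prod"  # hypothetical endpoint

def read_table(table):
    # 1) Read a source table over JDBC into a DataFrame
    return (spark.read.format("jdbc")
            .option("url", jdbc_url)
            .option("dbtable", table)     # hypothetical table names below
            .option("user", "etl_user")
            .option("password", "...")
            .load())

orders = read_table("public.orders")
customers = read_table("public.customers")

# 2) Transform/join the DataFrames
enriched = orders.join(customers, "customer_id")

# 3) Write to S3, partitioned (bucket/path is hypothetical)
(enriched.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://my-bucket/curated/orders/"))
```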

AWS
Asked 3 years ago · Viewed 419 times
1 Answer
Accepted Answer

Hello. There are many factors to this. I am listing some below:

a) What is the instance configuration? Is it sufficient? Do you want to reconsider it?

b) Is auto scaling turned on?

c) What does the Spark UI say? Which stages take the most time? Is the time spent in the tasks themselves, or waiting for resources?

d) For the JDBC read, how many parallel connections are being used? (See the sketch after this list.)

e) Are you using dynamic partitioning when writing the output?
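On point d), Spark's JDBC source reads through a single connection unless you supply the partitioning options, which serializes the whole fetch into one task. A minimal sketch of a parallel read, assuming a roughly evenly distributed numeric key column (the endpoint, table, column, and bounds are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/prod")  # hypothetical
      .option("dbtable", "public.orders")                    # hypothetical
      .option("user", "etl_user")
      .option("password", "...")
      # Split the read into 32 range-partitioned queries, i.e. 32
      # concurrent connections and 32 Spark tasks instead of 1.
      .option("partitionColumn", "order_id")
      .option("lowerBound", "1")
      .option("upperBound", "100000000")
      .option("numPartitions", "32")
      # Rows fetched per round trip; tune for the source database.
      .option("fetchsize", "10000")
      .load())
```

Keep in mind that numPartitions multiplied across all 25 jobs must stay within what the source database can serve concurrently.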

This is a high-level checklist that needs to be worked through.

Most important is the code: are you using repartition/coalesce? Are you using collect() anywhere? The code is usually the main factor behind performance issues (see the sketch below). Please feel free to reach out to me if you need any additional information.
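On the repartition/coalesce and collect() points, a minimal sketch (the input path, column names, and partition count are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
enriched = spark.read.parquet("s3://my-bucket/staging/enriched/")  # hypothetical

# Avoid collect(): it pulls the entire DataFrame onto the driver
# and often OOMs or stalls the job.
# rows = enriched.collect()   # anti-pattern

# Keep transformations distributed instead.
result = enriched.filter("amount > 0")

# coalesce() shrinks the partition count without a full shuffle,
# which avoids writing thousands of tiny S3 files; repartition()
# does a full shuffle and can also fix skewed partitions.
(result.coalesce(64)
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://my-bucket/curated/orders/"))
```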

AWS
Sundeep
Answered 3 years ago
