1 Answer
You can use the Spark section of this EMR best practices guide. Feel free to share it here or create a specReq if the customer has any specific questions. Here are a few basic things to keep in mind; a minimal configuration sketch follows the list.
- Handle data skew
- Make sure no disk spill is happening
- Use an optimal partition size so that not too many tasks are created
- Use the right data format for source and target (preferably Parquet)
- Watch for excessive shuffle; this can be confirmed from the Spark UI
- Tune driver/executor size (memory, cores) based on the workload
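As a rough illustration, here is a minimal PySpark sketch showing where several of these knobs live. The S3 paths, column name, and sizing values are placeholder assumptions, not recommendations for any specific workload; confirm actual numbers against the Spark UI and the EMR best practices guide.

```python
from pyspark.sql import SparkSession

# Hypothetical sizing values and paths -- adjust for your own workload.
spark = (
    SparkSession.builder
    .appName("emr-spark-tuning-sketch")
    # Tune driver/executor size (memory, cores) based on the workload.
    .config("spark.driver.memory", "4g")
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    # Keep partition sizes reasonable so not too many (or too few) tasks are created.
    .config("spark.sql.files.maxPartitionBytes", "134217728")  # ~128 MB splits on read
    .config("spark.sql.shuffle.partitions", "400")             # match to data volume
    # Adaptive Query Execution helps with data skew and excessive shuffle.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    .getOrCreate()
)

# Prefer a columnar format such as Parquet for both source and target.
df = spark.read.parquet("s3://my-source-bucket/input/")  # hypothetical path

# Example transformation; watch the Spark UI for disk spill and heavy shuffle stages.
result = df.groupBy("some_key").count()  # hypothetical column name

result.write.mode("overwrite").parquet("s3://my-target-bucket/output/")  # hypothetical path
```

The spark-submit or EMR step configuration can set the same properties; the point is simply that most of the items above map to a handful of Spark settings plus checks in the Spark UI.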
answered 2 years ago