1 Answer
Yes, the way to do this is with a pushdown predicate. When reading a dynamic frame, use the push_down_predicate field.
https://aws.amazon.com/premiumsupport/knowledge-center/glue-job-specific-s3-partition/
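A minimal sketch of what this looks like in a Glue job. The database and table names are hypothetical, and the Glue call itself requires the awsglue runtime, so it is shown as a commented-out reference; the helper that builds the predicate string is plain Python:

```python
# Sketch: restrict a Glue dynamic frame to specific S3 partitions
# via push_down_predicate. Names like "my_database"/"my_table" and
# the partition column "group" are hypothetical placeholders.

def partition_predicate(column, values):
    """Build a push_down_predicate string such as "group in ('a', 'b')"."""
    quoted = ", ".join("'{}'".format(v) for v in values)
    return "{} in ({})".format(column, quoted)

# Inside the Glue job (needs the awsglue runtime, shown for context):
#
# from awsglue.context import GlueContext
# from pyspark.context import SparkContext
#
# glue_context = GlueContext(SparkContext.getOrCreate())
# frame = glue_context.create_dynamic_frame.from_catalog(
#     database="my_database",
#     table_name="my_table",
#     push_down_predicate=partition_predicate("group", ["a", "b"]),
# )

print(partition_predicate("group", ["a", "b"]))
```

With a predicate like this, Glue prunes the S3 partitions before loading, rather than reading the whole table and filtering afterwards.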
answered 2 years ago
In my case, the table has on the order of 1e9 rows, which can be grouped into around 1e6 groups. Do I understand correctly that I should then start a Glue job for each of the 1e6 partitions/groups in parallel, each performing its selection via push_down_predicate? This does not sound practical to me; I assume it would be better to efficiently use Glue's internal parallelisation.

Yeah, sorry, I haven't run tests on scaling the number of partitions that high with so little data per partition, but my assumption is that it scales, and that using partitions is better, since the query made against S3 uses Presto to optimize which data is grabbed and how, based on the partition organization.
This might help: https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-presto-s3select.html