I have a large dataset (table) with >1e9 records (rows) in Glue. The table is partitioned by column A, which is an n-letter substring (prefix) of column B. For example:
| A (partition key) | B | ... |
|---|---|---|
| abc | abc123... | ... |
| abc | abc123... | ... |
| abc | abc456... | ... |
| abc | abc456... | ... |
| abc | abc456... | ... |
| abc | abc789... | ... |
| abc | abc789... | ... |
| ... | ... | ... |
| xyz | xyz123... | ... |
| xyz | xyz123... | ... |
| xyz | xyz123... | ... |
| xyz | xyz456... | ... |
| xyz | xyz456... | ... |
| xyz | xyz456... | ... |
| xyz | xyz789... | ... |
| xyz | xyz789... | ... |
There are >1e6 possible distinct values of column B and significantly fewer for column A (maybe 1e3). Now I need to group records/rows by column B. My assumption is that partitioning by column A should help here: since A is a prefix of B, all rows with the same B value live in the same A-partition, so it would be sufficient to load dataframes from single partitions for grouping instead of running the operation on the entire table. (Partitioning by column B directly would lead to an unreasonably large number of partitions.) Is my assumption right? And how would I tell my Glue job about the link between column A and B so it can profit from the partitioning?
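To illustrate what I have in mind: because A is just the first n characters of B, the partition holding all rows for a given B value can be derived from B itself. A minimal sketch (the `partition_predicate` helper is my own invention; the commented Glue call uses the real `push_down_predicate` parameter of `create_dynamic_frame.from_catalog`, with database/table names assumed):

```python
def partition_predicate(b_value: str, n: int = 3) -> str:
    """Derive the partition filter for a given B value.

    Since A is the first n characters of B, the single partition
    that contains all rows for this B value is A == B[:n].
    """
    a = b_value[:n]
    return f"A = '{a}'"

# Hypothetical usage inside a Glue job (database/table names assumed):
# dyf = glueContext.create_dynamic_frame.from_catalog(
#     database="mydb",
#     table_name="mytable",
#     push_down_predicate=partition_predicate("abc123"),
# )
```

With a predicate like this, Glue should only read the matching partition from S3 rather than scanning the whole table.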
Alternatively, I could handle the 1e3 dataframes (one per partition) separately in my Glue job and merge the results later on. But this looks a bit complicated to me.
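The reason I think the per-partition approach would even be correct: every B value occurs in exactly one A-partition, so grouping within each partition and then taking a plain union gives the same result as a global group-by, with no cross-partition re-aggregation. A toy pure-Python check of that reasoning (data made up, sum used as a stand-in aggregation):

```python
from collections import defaultdict

# Toy rows: (A, B, value); A is the 3-letter prefix of B.
rows = [("abc", "abc123", 1), ("abc", "abc123", 2),
        ("abc", "abc456", 3), ("xyz", "xyz789", 4)]

def group_by_b(records):
    """Group records by column B, summing the values."""
    out = defaultdict(int)
    for _, b, v in records:
        out[b] += v
    return dict(out)

# Global grouping in one pass over everything.
global_result = group_by_b(rows)

# Per-partition grouping, then a plain union. No re-aggregation is
# needed, because every B value lives in exactly one A-partition.
partitions = defaultdict(list)
for r in rows:
    partitions[r[0]].append(r)

merged = {}
for part_rows in partitions.values():
    merged.update(group_by_b(part_rows))

assert merged == global_result
```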
This question is a follow-up question to https://repost.aws/questions/QUwxdl4EwTQcKBuL8MKCU0EQ/are-partitions-advantageous-for-groupby-operations-in-glue-jobs.