How to Efficiently Perform a Count Query in AWS Glue Job Before Data Processing

I'm using an AWS Glue job for my data processing tasks, and my source system provides monthly snapshots of data. Before calling the create_dynamicframe function, I want to run a select count(*) query against the source to get an idea of the data volume.

Here's my current function:


def create_dynamicframe(database, table, push_down_predicate=None, filter_function=None, primary_keys=None):
    # Read the table from the Glue Data Catalog; the pushdown predicate (if any)
    # prunes partitions at read time instead of after loading everything.
    outputsource = glueContext.create_dynamic_frame.from_catalog(database=database, table_name=table,
                                                                 transformation_ctx="outputsource",
                                                                 push_down_predicate=push_down_predicate)
    # Optionally filter rows, then keep only the primary-key columns.
    if filter_function is not None:
        outputsource = Filter.apply(frame=outputsource, f=filter_function)
        if primary_keys is not None:
            outputsource = outputsource.select_fields(primary_keys)
    return outputsource

However, when dealing with monthly snapshots, the job times out due to the large volume of data in the source system.

I'm looking for suggestions on how to modify this function, or for an alternative approach that efficiently runs a count query in Athena before data processing. Specifically, I want to execute a query similar to:


select count(*) from [database].[table]
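
For reference, here is a minimal sketch of how I imagine driving that query from boto3 (the bucket name and result prefix are placeholders, and I'm assuming the job role has the necessary Athena and S3 permissions):

import time
import boto3

athena = boto3.client("athena")

def athena_count(database, table, output_location):
    """Run SELECT COUNT(*) in Athena and return the row count as an int."""
    query_id = athena.start_query_execution(
        QueryString=f"SELECT COUNT(*) FROM {database}.{table}",
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )["QueryExecutionId"]

    # Athena queries run asynchronously, so poll until this one finishes.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    # The first result row is the header; the count is in the second row.
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    return int(rows[1]["Data"][0]["VarCharValue"])

# Placeholder values, not my real names:
# count = athena_count("my_database", "my_table", "s3://my-bucket/athena-results/")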

Any advice or best practices to optimize this process and prevent timeouts would be greatly appreciated. Thank you!

Vinod
Asked 6 months ago · 371 views
1 Answer

If the files in the table are Parquet, running that select count on the SparkSession should use the Parquet file statistics (the row counts stored in each file's footer), so no column data is scanned, but it still needs to open every file.
If you have a single system updating the table, you could have that system update the count every time data is added (or removed). I don't see other ways.
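
For example, a minimal sketch of that count from inside the Glue job, assuming the job is configured to use the Glue Data Catalog as its Hive metastore, and where database and table are the same values passed to your function:

# Glue exposes the underlying SparkSession on the GlueContext.
spark = glueContext.spark_session

# For Parquet tables Spark can answer COUNT(*) from the row counts in each
# file's footer, so only footers are read, not the column data itself.
row_count = spark.sql(f"SELECT COUNT(*) FROM {database}.{table}").collect()[0][0]
print(f"{database}.{table} has {row_count} rows")

If each monthly snapshot lives in its own partition, adding a WHERE clause on the partition column keeps the count to just that snapshot and avoids touching older files.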

AWS
EXPERT
Answered 6 months ago
