AWS Glue: reading a Glue Catalog table vs. reading files from S3


I am writing an AWS Glue ETL job and I have two options for constructing the Spark DataFrame:

Use the AWS Glue Data Catalog as the metastore for Spark SQL

df = spark.sql("select name from bronze_db.table_tbl")
df.write.save("s3://silver/...")

The other option is to read directly from the S3 location, like this:

df = spark.read.format("parquet").load(["s3://bronze/table_tbl/1.parquet", "s3://bronze/table_tbl/2.parquet"])
df.write.save("s3://silver/...")
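
(A variant of this second option, assuming all of table_tbl's Parquet files live under the s3://bronze/table_tbl/ prefix, would be to read the whole prefix and let Spark list the objects itself instead of hard-coding individual keys:

# Sketch: read every Parquet file under the table's prefix.
df = spark.read.parquet("s3://bronze/table_tbl/")
df.write.save("s3://silver/...")

)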

Should I consider reading the files directly to save cost, to avoid any limit on the number of queries (select name from bronze_db.table_tbl), or to get better read performance?

I am not sure whether this query will be run on Athena to return the results.

1 Answer

Hi,

The query will not be run by Athena, and there will be no additional cost. When you use the AWS Glue Data Catalog with Spark, the catalog takes the place of the Hive Metastore: it only tells Spark SQL where the table's data lives in S3 and how to interpret it, and Spark itself reads the files.
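
As a minimal sketch of what that looks like inside a Glue PySpark script (assuming the job has the "Use Glue Data Catalog as the Hive metastore" option, i.e. the --enable-glue-datacatalog job parameter, turned on):

from pyspark.context import SparkContext
from awsglue.context import GlueContext

# GlueContext builds a SparkSession that, with --enable-glue-datacatalog set,
# resolves database/table names through the Glue Data Catalog instead of a
# separate Hive Metastore. The query below never goes through Athena;
# Spark reads the underlying S3 objects directly.
sc = SparkContext.getOrCreate()
glue_context = GlueContext(sc)
spark = glue_context.spark_session

df = spark.sql("select name from bronze_db.table_tbl")
df.write.save("s3://silver/...")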

The two methods are equivalent; the first is just a bit more concise and user-friendly, because you do not have to remember or know where the table's files are stored.
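
If you want to convince yourself of that in your own job, one quick check (a sketch, assuming both reads run in the same script and that the table's data lives under s3://bronze/table_tbl/) is to compare the S3 objects each DataFrame actually scans:

# Both lists should point at the same Parquet objects under the table's location.
catalog_df = spark.sql("select name from bronze_db.table_tbl")
direct_df = spark.read.parquet("s3://bronze/table_tbl/")

print(sorted(catalog_df.inputFiles()))
print(sorted(direct_df.inputFiles()))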

Hope this helps.

AWS
EXPERT
answered 2 years ago
