AWS Glue: reading a Glue Catalog table vs. reading files from S3


I am writing an AWS Glue ETL job and I have two options for constructing the Spark DataFrame:

Use the AWS Glue Data Catalog as the metastore for Spark SQL

df = spark.sql("select name from bronze_db.table_tbl")
df.write.save("s3://silver/...")
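For context, this option requires the Glue job to have the Data Catalog enabled as the Hive metastore (the --enable-glue-datacatalog job parameter, or the equivalent checkbox in the job settings). Below is a minimal sketch of such a job using the database and table names from the question; the output path and write mode are assumptions, not taken from the question:

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions

# Standard Glue job boilerplate.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session

# Spark SQL resolves bronze_db.table_tbl through the Glue Data Catalog,
# which supplies the table's S3 location, file format, and schema.
df = spark.sql("select name from bronze_db.table_tbl")

# Placeholder output path and mode.
df.write.mode("overwrite").parquet("s3://silver/table_tbl/")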

The other option is to read directly from the S3 location, like this:

df = spark.read.format("parquet").load(["s3://bronze/table_tbl/1.parquet", "s3://bronze/table_tbl/2.parquet"])
df.write.save("s3://silver/...")
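As a side note, if the intention is to read every file belonging to the table, it is usually simpler to point the reader at the table's S3 prefix instead of listing individual files (a sketch, assuming the prefix from the question contains only that table's Parquet files):

# Reads all Parquet files under the prefix; new files are picked up automatically.
df = spark.read.parquet("s3://bronze/table_tbl/")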

Should I consider reading the files directly to save cost, to avoid any limit on the number of queries (select name from bronze_db.table_tbl), or to get better read performance?

I am not sure whether this query will be run on Athena to return the results.

1 answer

Hi,

The query will not be run by Athena, and there will not be any additional cost. When you use the AWS Glue Data Catalog to power Spark, the catalog replaces the Hive Metastore: it tells Spark SQL where the table's files live in S3 and how to read them.

The two methods are equivalent; the first is just a bit more concise and user friendly, since you do not have to remember or look up where the table's files are stored.
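If you want to see exactly what the catalog supplies to Spark, you can inspect the table's metadata from within the job; a quick sketch using the table name from the question:

# Shows the S3 location, input/output format, and schema that Spark SQL
# pulls from the Glue Data Catalog for this table.
spark.sql("describe formatted bronze_db.table_tbl").show(truncate=False)

# Reading through the catalog is equivalent to reading that location directly.
df = spark.table("bronze_db.table_tbl").select("name")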

Hope this helps.

AWS
Expert
Answered 2 years ago
