AWS Glue: reading a Glue Catalog table vs. reading files from S3


I am writing an AWS Glue ETL job and I have two options to construct the Spark DataFrame:

Option 1 is to use the AWS Glue Data Catalog as the metastore for Spark SQL:

df = spark.sql("select name from bronze_db.table_tbl")
df.write.save("s3://silver/...")

The other option is to read the files directly from the S3 location, like this:

df = spark.read.format("parquet").load(["s3://bronze/table_tbl/1.parquet", "s3://bronze/table_tbl/2.parquet"])
df.write.save("s3://silver/...")

Should I consider reading the files directly to save cost, to avoid any limit on the number of queries (select name from bronze_db.table_tbl), or to get better read performance?

I am not sure whether this query will be run on Athena to return the results.

1 Answer

Hi,

The query will not be run by Athena, and there is no additional cost. When you use the AWS Glue Data Catalog with Spark, the catalog takes the place of the Hive metastore: it tells Spark SQL where the table's data lives in S3 and how to read it, and Spark then reads the files itself.
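To illustrate, here is a minimal sketch of what the catalog-backed read looks like inside a Glue ETL script, assuming the job has the Data Catalog enabled as its Hive metastore (the --enable-glue-datacatalog job parameter); the database and table come from your question and the output path is a placeholder:

import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session  # Spark SQL backed by the Glue Data Catalog
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Spark asks the catalog for the table's location, schema and format,
# then reads the Parquet files from S3 directly -- Athena is not involved.
df = spark.sql("select name from bronze_db.table_tbl")
df.write.mode("overwrite").parquet("s3://silver/table_tbl/")  # placeholder output path

job.commit()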

The two methods are equivalent; the first is just a bit more concise and user friendly, because you don't have to remember or know where the table's files are stored.
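For comparison, the direct read would usually point Spark at the table's S3 prefix rather than at individual files; again a sketch with placeholder paths:

# Equivalent direct read: let Spark list everything under the table's prefix
# instead of enumerating individual Parquet files (paths are placeholders).
df = spark.read.parquet("s3://bronze/table_tbl/")
df.write.mode("overwrite").parquet("s3://silver/table_tbl/")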

Hope this helps.

AWS
EXPERT
answered 2 years ago
