AWS Glue writing to S3 but not creating table


Hey! I currently have a setup with a crawler that connects to a PostgreSQL database over JDBC; this works, and the crawler generates around 20 tables for this database. I now want to create an ETL job that extracts the data from one table into an S3 bucket and makes the S3 data queryable in Athena. I have a Glue ETL flow that looks like this:

[screenshot of the Glue visual ETL flow]

This seems to work, except that no table is being created in the database. The S3 target location contains the Parquet files and the job succeeds, but no table appears.

The auto-generated Spark code looks like this:

S3bucket_node3 = glueContext.getSink(
    path="s3://data-lake",
    connection_type="s3",
    updateBehavior="UPDATE_IN_DATABASE",
    partitionKeys=[],
    enableUpdateCatalog=True,
    transformation_ctx="S3bucket_node3",
)
S3bucket_node3.setCatalogInfo(
    catalogDatabase="postgres_glue_database", catalogTableName="tableName"
)
S3bucket_node3.setFormat("glueparquet")
S3bucket_node3.writeFrame(ApplyMapping_node2)
job.commit()
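
A minimal boto3 sketch (assuming default credentials/region, and using the database/table names from the script above) for checking whether the table exists in the Data Catalog:

import boto3
from botocore.exceptions import ClientError

glue = boto3.client("glue")

try:
    # Look up the table the sink was supposed to create or update
    response = glue.get_table(DatabaseName="postgres_glue_database", Name="tableName")
    print("Table exists:", response["Table"]["Name"])
except ClientError as e:
    if e.response["Error"]["Code"] == "EntityNotFoundException":
        print("Table was not created in the Data Catalog")
    else:
        raise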

Does anybody have any idea where to look? There seems to be nothing wrong with the connection/crawler/bucket permissions. It's just not creating a table for the data it has written to the bucket. I tried:

  • recreating the bucket / roles
  • giving the table other names
  • adding and removing additional Input arguments

Thanks in advance!

D Joe
Asked 1 year ago · 930 views
1 Answer

So I managed to find the solution:

AWS Glue provides you with multiple log streams, and apparently I had missed one. There I found this:

com.amazonaws.services.glue.model.AccessDeniedException: Insufficient Lake Formation permission(s)

So I added the AWSLakeFormationDataAdmin managed policy to the role, and for now it seems to work.
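
For completeness, attaching that managed policy to the job role can be done with a one-off boto3 call; a minimal sketch, where the role name is a placeholder for whatever role the ETL job actually uses:

import boto3

iam = boto3.client("iam")

# Attach the AWS-managed Lake Formation data admin policy to the Glue job role.
# "MyGlueJobRole" is a placeholder for the role configured on the ETL job.
iam.attach_role_policy(
    RoleName="MyGlueJobRole",
    PolicyArn="arn:aws:iam::aws:policy/AWSLakeFormationDataAdmin",
)

A narrower alternative would be to grant the role CREATE_TABLE on the catalog database through Lake Formation instead of attaching the broad data-admin policy.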

D Joe
Answered 1 year ago
