Cannot drop Iceberg table with Spark SQL and AWS Glue Catalog

  • EMR Version: 6.15.0
  • Spark Conf
    • "spark.sql.catalog.spark_catalog": "org.apache.iceberg.spark.SparkSessionCatalog"
    • "spark.sql.catalog.spark_catalog.catalog-impl": "org.apache.iceberg.aws.glue.GlueCatalog"
    • "spark.sql.catalog.spark_catalog.io-impl": "org.apache.iceberg.aws.s3.S3FileIO"
    • "spark.sql.extensions": "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions"

Creating the table, writing, and reading all work fine, but I get the following error when I drop the table:

spark.sql("drop table spark_catalog.$db.test_iceberg_0 purge")

pyspark.errors.exceptions.captured.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table test_iceberg_0. StorageDescriptor#InputFormat cannot be null for table: test_iceberg_0 (Service: null; Status Code: 0; Error Code: null; Request ID: null; Proxy: null)

duwan
asked 3 months ago · 256 views
1 Answer

That error seems to indicate it is not using the Iceberg catalog extension. Does it work if you SELECT from the same table? Also, do you really need PURGE? Without it, the DROP shouldn't error (PURGE may not be supported on Iceberg tables).
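As a quick check (a sketch, assuming the same pyspark session and a placeholder database name), you could confirm the extension is loaded and that Spark sees the table as an Iceberg table:

# Sketch: verify the Iceberg extension and catalog settings are active in this session
print(spark.conf.get("spark.sql.extensions"))             # should list IcebergSparkSessionExtensions
print(spark.conf.get("spark.sql.catalog.spark_catalog"))  # should be SparkSessionCatalog

db = "my_db"  # placeholder database name

# For an Iceberg table, DESCRIBE TABLE EXTENDED should report Provider = iceberg
spark.sql(f"DESCRIBE TABLE EXTENDED spark_catalog.{db}.test_iceberg_0").show(truncate=False)

# Then try the drop without PURGE for comparison
spark.sql(f"DROP TABLE spark_catalog.{db}.test_iceberg_0")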

AWS
Expert
answered 3 months ago
  • Thank you for your help.

    1. SELECT on the same table works fine.
    2. I tried without PURGE and got the same error.
