Cannot drop Iceberg table via Spark SQL with the AWS Glue Catalog

  • EMR Version: 6.15.0
  • Spark Conf
    • "spark.sql.catalog.spark_catalog": "org.apache.iceberg.spark.SparkSessionCatalog"
    • "spark.sql.catalog.spark_catalog.catalog-impl": "org.apache.iceberg.aws.glue.GlueCatalog"
    • "spark.sql.catalog.spark_catalog.io-impl": "org.apache.iceberg.aws.s3.S3FileIO"
    • "spark.sql.extensions": "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions"

Creating the table, writing, and reading all work, but I get the following error when I drop the table:

spark.sql("drop table spark_catalog.$db.test_iceberg_0 purge")

pyspark.errors.exceptions.captured.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table test_iceberg_0. StorageDescriptor#InputFormat cannot be null for table: test_iceberg_0 (Service: null; Status Code: 0; Error Code: null; Request ID: null; Proxy: null)
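
For context, a minimal reproduction sketch of what the question describes; the database name, table schema, and sample values are assumptions, and spark is the session configured as above.

db = "my_db"  # placeholder for the actual Glue database name

# Create, write, and read work as reported.
spark.sql(f"CREATE TABLE spark_catalog.{db}.test_iceberg_0 (id INT, name STRING) USING iceberg")
spark.sql(f"INSERT INTO spark_catalog.{db}.test_iceberg_0 VALUES (1, 'a')")
spark.sql(f"SELECT * FROM spark_catalog.{db}.test_iceberg_0").show()

# This is the statement that raises the AnalysisException above.
spark.sql(f"DROP TABLE spark_catalog.{db}.test_iceberg_0 PURGE")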

duwan
asked 3 months ago · 256 views
1 Answer

That error suggests the DROP is not going through the Iceberg catalog extension. Does it work if you SELECT the same table? And do you really need "purge"? Without it the statement shouldn't error (PURGE may not be supported on Iceberg tables).

AWS
EXPERT
answered 3 months ago
  • Thank you for your help.

    1. SELECT on the same table works fine.
    2. I tried without purge and got the same error.
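
As a hedged sketch only (not something confirmed in this thread): a workaround that is sometimes tried is to register a dedicated Iceberg SparkCatalog backed by Glue and issue the DROP through it, so the statement is handled by Iceberg rather than by the Hive session-catalog path that produces the StorageDescriptor error. The catalog name glue_catalog, the warehouse location, and the database name below are assumptions.

from pyspark.sql import SparkSession

# Fresh session with a dedicated Iceberg catalog entry named "glue_catalog".
spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue_catalog",
            "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.io-impl",
            "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.glue_catalog.warehouse",
            "s3://my-bucket/warehouse/")  # assumed warehouse path
    .getOrCreate()
)

db = "my_db"  # placeholder for the actual Glue database name
spark.sql(f"DROP TABLE glue_catalog.{db}.test_iceberg_0")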
