AWS Athena notebook calculation result error: 'Error while getting calculation results', NoSuchKey

0

I have a problem with Amazon Athena Spark notebooks. Every time I try to run a Spark command, even print(spark), it looks like it is running, but it throws an exception at the end. For a command like print(spark) it sometimes shows that the calculation completed, but mostly the error fails the calculation. Here is the output for print(spark):

Calculation started (calculation_id=f6c4ea70-4ae5-17ad-6feb-22b3dbef54e0) in (session=10c4ea6e-3099-d308-0daf-c31af358397d). Checking calculation status...
Progress:   0%|          |elapsed time = 00:00s
Calculation completed.
Exception encountered while running calculation: ('Error while getting calculation results', NoSuchKey('An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.'))

I have tried this with loading data and with simple aggregations, and I always get this error. When I inspect the calculation later in the session details, it shows this:

[Screenshot: session detail calculation error]

The workgroup for the PySpark engine was created with default parameters and should have access to the output S3 bucket. Any help on how to resolve this, so I can really get started with Athena notebooks, would be appreciated.

vinc
asked a year ago · 579 views
2 Answers
0

This error occurs when the role is not able to fetch the results from S3, most likely because the role does not have the proper S3 permissions. Can you confirm that the role you specified in the workgroup has the following permissions, and check whether there is any explicit "Deny" on the S3 output location? (Please replace the S3 bucket with yours.) You can read more about the role permissions in this documentation.

        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
                "arn:aws:s3:::DOC-EXAMPLE-BUCKET"
            ]
        }
AWS
answered a year ago
  • The Spark workgroup IAM role has AWSAthenaSparkRolePolicy. It does not have any "Deny", and its first statement grants exactly the permissions you wrote, scoped to my specified output bucket. I also added my other bucket with the data, but this did not solve the issue.

0

I understand your issue still persists. Answering your question requires details that are non-public information, so please open a support case with AWS using the following link.

AWS
answered a year ago