Data not being uploaded to S3 from Zeppelin notebook


I have a Zeppelin notebook for analysis, and I want to insert the analyzed data into an S3 bucket. Previously I was able to insert data and view all of it, but now I am not. These are the connector options for the table:

WITH (
   'connector' = 'filesystem',
   'path' = 's3://firehose-test-kin/data/',
   'format' = 'csv',
   'sink.partition-commit.policy.kind' = 'success-file',
   'sink.partition-commit.delay' = '1 min'
);

This is the insert command:

%flink.ssql(type=update)

INSERT INTO s3_join
SELECT *
FROM test
JOIN test1
  ON test1.seq_num = test.seq_num;

Public access is disabled on the S3 bucket, but turning it on made no difference.

  • Have you verified the S3 bucket policies? Do you use AWS access keys for Zeppelin to access the S3 bucket?

1 Answer

IAM Policy: Please ensure that the IAM role or user associated with Zeppelin has the necessary S3 permissions. This includes the s3:PutObject, s3:PutObjectAcl, and s3:PutObjectVersionAcl permissions for the S3 bucket. If you are using AWS access keys, ensure they are correctly configured in Zeppelin.
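As a sketch, an identity policy attached to the Zeppelin role could look like the following. The bucket name is taken from the question; the statement ID is a placeholder, and the exact set of actions your sink needs may vary (the filesystem sink's success-file commit, for example, may also need read/list access while finalizing files):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowZeppelinWrites",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::firehose-test-kin",
        "arn:aws:s3:::firehose-test-kin/*"
      ]
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while the object-level actions apply to the `/*` resource.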

Bucket Policy: The bucket policy should allow the IAM role or user to perform s3:PutObject actions. If the bucket policy denies this action, the IAM policy won't be able to override this.
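For illustration, a bucket policy granting write access to a specific role might look like this. The account ID and role name are placeholders; substitute the role or user that Zeppelin actually runs as:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowZeppelinRolePut",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/ZeppelinS3Role"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::firehose-test-kin/*"
    }
  ]
}
```

Remember that an explicit `"Effect": "Deny"` statement anywhere in the bucket policy will override any allow, whether it comes from the bucket policy or the IAM policy.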

S3 Block Public Access settings: If all public access is blocked, including public access granted by ACLs, you might need to add specific IAM permissions for the entities that need to access the bucket.

Network Connectivity: Please ensure that the Zeppelin server has network access to S3. If the server is in a VPC, ensure that the necessary VPC endpoints are created.

S3 Path: Check if the S3 path mentioned in the 'path' attribute is correct. Please ensure that the path includes the correct bucket name and the key prefix (if any).
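Putting the pieces together, a complete sink table definition might look like the sketch below. The column names and types are assumptions, since the original DDL was not shown; they must match the output of the `SELECT` that feeds the sink:

```sql
-- Hypothetical DDL for the s3_join sink table; adjust columns to match
-- the schema produced by joining test and test1.
CREATE TABLE s3_join (
  seq_num BIGINT,
  payload STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 's3://firehose-test-kin/data/',
  'format' = 'csv',
  'sink.partition-commit.policy.kind' = 'success-file',
  'sink.partition-commit.delay' = '1 min'
);
```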

EXPERT
answered 1 year ago
