Setting ACL in S3 objects written by an AWS Glue Job


I came across the following issue:

I run a Glue job from account A that writes into an S3 bucket in account B. This means the owner of the objects is account A, and I can't do anything with those objects from account B.

Is there a way to tell the Glue job to apply an ACL with full control for the bucket owner?

Tasio (AWS Expert)
Asked 5 years ago · 922 views
1 Answer
Accepted Answer

You can use Hadoop configuration settings in a Spark job to set the S3 canned ACL. I haven't tested this with Glue DynamicFrames, but it works for native Spark DataFrames.

from pyspark.context import SparkContext

# Create the Spark context and set the canned ACL on the Hadoop configuration.
# Any object the job subsequently writes to S3 will grant the bucket owner
# full control.
sc = SparkContext()
sc._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
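
For completeness, here is a minimal sketch of how this could look end to end with native Spark DataFrames; the bucket names and input format are hypothetical placeholders, and the key point is that the ACL setting must be applied before any data is written to S3:

from pyspark.context import SparkContext
from pyspark.sql import SparkSession

# Set the canned ACL before anything is written to S3, so that objects
# created in account B's bucket grant the bucket owner full control.
sc = SparkContext()
sc._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")

# Reuse the existing context for the SparkSession.
spark = SparkSession.builder.getOrCreate()

# Hypothetical cross-account write: account A's job writes into a bucket
# owned by account B; the objects should now be manageable by account B.
df = spark.read.json("s3://account-a-source-bucket/input/")
df.write.mode("overwrite").parquet("s3://account-b-destination-bucket/output/")

With Glue DynamicFrames, the same hadoopConfiguration call on the underlying SparkContext is the relevant piece, but as noted above that path hasn't been verified.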
AWS · Answered 5 years ago
