Setting an ACL on S3 objects written by an AWS Glue job


I came across the following issue:

I run a Glue job from account A that writes into an S3 bucket owned by account B. Because the objects end up owned by account A, I can't do anything with them from account B.

Is there a way to tell the Glue job to apply an ACL with full control for the bucket owner?

Tasio (AWS Expert)
Asked 5 years ago · 905 views
1 answer

Accepted Answer

You can set the S3 canned ACL through the Hadoop configuration of a Spark job. I haven't tested this with Glue DynamicFrames, but it works for native Spark DataFrames.

from pyspark.context import SparkContext

sc = SparkContext()
# Tell the S3 filesystem used by Glue/EMR to apply the
# bucket-owner-full-control canned ACL to every object it writes.
sc._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
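
In a Glue job you can apply the same setting through the SparkContext underlying the GlueContext, before any write happens. A minimal sketch, assuming a hypothetical bucket name account-b-bucket and output path (and, per the note above, untested with DynamicFrames):

from pyspark.context import SparkContext
from awsglue.context import GlueContext

sc = SparkContext()
# Must be set before the first write to S3.
sc._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")

glueContext = GlueContext(sc)
spark = glueContext.spark_session

# Hypothetical example: write a DataFrame into the bucket owned by account B.
# Objects written now carry an ACL granting the bucket owner full control.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.mode("overwrite").parquet("s3://account-b-bucket/output/")

If you write through the s3a:// connector rather than Glue's s3:// filesystem, the corresponding Hadoop property is fs.s3a.acl.default with the same BucketOwnerFullControl value.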
AWS
Answered 5 years ago
