Setting the ACL on S3 objects written by an AWS Glue job


I came across the following issue:

I run a Glue job in account A that writes into an S3 bucket in account B. As a result, the objects are owned by account A, and I can't do anything with them from account B.

Is there a way to tell the Glue job to apply an ACL granting the bucket owner full control?

Tasio (AWS Expert)
asked 5 years ago · 912 views
1 answer
Accepted answer

You can use Hadoop configuration settings in a Spark job to set the S3 canned ACL. I haven't tested this with Glue DynamicFrames, but it works for native Spark DataFrames.

from pyspark.context import SparkContext

sc = SparkContext()

# Apply the bucket-owner-full-control canned ACL to every object the job
# writes through the s3:// filesystem, so the bucket owner (account B)
# gets full control. Set this before any data is written.
sc._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
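
For context, here is a minimal sketch of how this could look in a complete Glue PySpark script. The bucket path is a placeholder, and the example DataFrame stands in for whatever data your job actually produces:

from pyspark.context import SparkContext
from awsglue.context import GlueContext

sc = SparkContext()

# Must be set before the first write to S3; Hadoop caches FileSystem
# instances, so setting it later may have no effect.
sc._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")

glueContext = GlueContext(sc)
spark = glueContext.spark_session

# Placeholder data; replace with your real source (e.g. a DynamicFrame
# converted via .toDF()).
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# Objects written here should now carry the bucket-owner-full-control
# ACL, giving account B full control. The bucket name is hypothetical.
df.write.mode("overwrite").parquet("s3://account-b-bucket/output/")

If your job writes through the s3a:// connector instead, the equivalent Hadoop property should be fs.s3a.acl.default with the same value.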
AWS
answered 5 years ago
