Setting ACLs on S3 objects written by an AWS Glue job


I came across the following issue:

I run a Glue job in account A that writes into an S3 bucket in account B. Because the objects are owned by account A, I can't do anything with them from account B.

Is there a way to tell the Glue job to apply an ACL with full control for the bucket owner?

Asked 5 years ago by Tasio (AWS Expert), 922 views
1 Answer

Accepted Answer

You can use the underlying Hadoop (Java) configuration in a Spark job to set the S3 canned ACL. I haven't tested this with Glue DynamicFrames, but it works for native Spark DataFrames.

from pyspark.context import SparkContext

sc = SparkContext()

# Apply the BucketOwnerFullControl canned ACL to every object written
# through the s3:// filesystem, so the bucket owner in account B gets
# full control over the new objects.
sc._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
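
For context, here is a minimal end-to-end sketch of how the setting plays out on a write. The bucket name, prefix, and sample data below are placeholders, not from the original post; set the property before any write so that every object Spark subsequently writes via s3:// carries the canned ACL.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Must be set before writing; applies to all objects written via s3://.
spark.sparkContext._jsc.hadoopConfiguration().set(
    "fs.s3.canned.acl", "BucketOwnerFullControl"
)

# Placeholder data and destination: a bucket owned by account B.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.mode("overwrite").parquet("s3://bucket-owned-by-account-b/output/")

If you want to confirm the grant, running aws s3api get-object-acl against one of the written objects should list a FULL_CONTROL grant for account B's canonical user.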
Answered 5 years ago by AWS
