1 Answer
You can set the S3 canned ACL through the Hadoop configuration in a Spark job. I haven't tested this with Glue DynamicFrames, but it works for native Spark DataFrames.
from pyspark.context import SparkContext

sc = SparkContext()
# Apply the canned ACL to objects this job writes to S3 so the bucket owner gets full control
sc._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
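For reference, here is a minimal sketch of writing a DataFrame once that configuration is set; the example data and bucket path are placeholders, so substitute your own source and cross-account bucket:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical example data; replace with your actual source
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# Objects written here should be created with the BucketOwnerFullControl canned ACL
df.write.mode("overwrite").parquet("s3://example-bucket-owned-by-another-account/output/")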
answered 5 years ago