1 Answer
You can set the S3 canned ACL through the Hadoop configuration in a Spark job. I haven't tested this with Glue DynamicFrames, but it works for native Spark DataFrames.
from pyspark.context import SparkContext

# Configure the S3 canned ACL so that objects written to S3
# grant full control to the destination bucket's owner.
sc = SparkContext()
sc._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
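Once the ACL is configured, any subsequent write to S3 from that Spark context picks it up. Below is a minimal sketch of a cross-account write; the bucket name and path are placeholders, not from the original answer. Note that this property applies to the EMRFS `s3://` connector used on Glue and EMR; on open-source Spark with the `s3a://` connector, the equivalent property is `fs.s3a.acl.default`.

from pyspark.sql import SparkSession

# Build a session and apply the same canned-ACL setting.
spark = SparkSession.builder.getOrCreate()
spark.sparkContext._jsc.hadoopConfiguration().set(
    "fs.s3.canned.acl", "BucketOwnerFullControl"
)

# Hypothetical output path -- replace with your cross-account bucket.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.mode("overwrite").parquet("s3://example-destination-bucket/output/")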
Answered 5 years ago