How to redirect the entire output of spark-submit to S3


When using the spark-submit command, how can we send the complete application logs that we see in the console after submitting the job?

anudeep
Asked 1 year ago · 368 views
1 Answer

First, make sure the worker running the spark-submit job has the proper AWS credentials. How you provide them depends on what runs the task (for example, a Glue job execution role, a Fargate task execution role, an EC2 instance profile, etc.). Once that is in place, set the Amazon S3 bucket you want to save to as the output path and use Spark's DataFrameWriter to write the results there. For example:

import org.apache.spark.sql.DataFrame

val outputDataFrame: DataFrame = ??? // your data, e.g. the result of your transformations
outputDataFrame.write.parquet("s3://yourbucket/output")
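If your cluster does not pick up credentials automatically from a role or instance profile (for example, a self-managed Spark cluster using the s3a:// connector), you can set the S3A credentials explicitly on the Hadoop configuration. This is a minimal sketch, assuming a SparkSession named spark and credentials exported in the standard AWS environment variables; prefer roles over static keys in production:

// Minimal sketch: explicit S3A credentials on the Hadoop configuration.
// Assumes the standard AWS environment variables are set; values are placeholders.
spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))
outputDataFrame.write.parquet("s3a://yourbucket/output")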

Depending on where you run your job, you can gather your application logs in CloudWatch. EC2, Glue, Fargate, EKS, and ECS all integrate with Amazon CloudWatch, so you can grant the execution role permission to write logs there. You can find your application logs in CloudWatch, and from there it's up to you whether to forward them to other destinations such as S3, Splunk, Datadog, etc.
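If you instead want to push a captured spark-submit console log to S3 yourself, here is a minimal sketch using the AWS SDK for Java v2 from Scala. The bucket name, object key, and local log path are placeholders for illustration, and it assumes you first captured the driver output to a local file (e.g. spark-submit ... > /tmp/spark-app.log 2>&1):

import java.nio.file.Paths
import software.amazon.awssdk.core.sync.RequestBody
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.PutObjectRequest

object UploadDriverLog {
  def main(args: Array[String]): Unit = {
    // Uses the worker's role/instance-profile credentials via the default provider chain.
    val s3 = S3Client.create()
    val request = PutObjectRequest.builder()
      .bucket("yourbucket")        // placeholder bucket
      .key("logs/spark-app.log")   // placeholder key
      .build()
    // Upload the previously captured spark-submit output (placeholder path).
    s3.putObject(request, RequestBody.fromFile(Paths.get("/tmp/spark-app.log")))
    s3.close()
  }
}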

pechung, AWS Expert
Answered 1 year ago
Reviewed by an AWS Support Engineer 1 month ago
