How to redirect entire output of spark-submit to S3

When using the spark-submit command, how can we capture the complete application logs that appear in the console after the command is submitted?

anudeep
asked a year ago · 353 views
1 Answer

First, make sure the worker that runs the spark-submit job has the proper AWS credentials. How you provide them depends on what you're using to run the task (for example, a Glue job execution role, a Fargate task execution role, or an EC2 instance profile). Once that is in place, you can set the Amazon S3 location you want to save to as the output path, and use Spark's write API to send the results there. For example:

import org.apache.spark.sql.DataFrame

val outputDataFrame: DataFrame = ??? // your data
outputDataFrame.write.parquet("s3://yourbucket/output")
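Note that the s3:// scheme above assumes EMRFS on Amazon EMR; on open-source Spark distributions you would typically use the s3a:// scheme with the hadoop-aws library on the classpath.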

Depending on where you run your job, you can gather your application logs in CloudWatch. EC2, Glue, Fargate, EKS, and ECS all integrate with Amazon CloudWatch, so you can grant the execution role permission to write job logs to CloudWatch and find your application logs there. It's then up to you whether you forward those logs to other storage destinations such as S3, Splunk, or Datadog.
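If you want something written to S3 by Spark itself rather than forwarded from CloudWatch, one option is Spark's event log, which the Spark History Server reads. This captures Spark events (job, stage, and task activity), not the driver's console output; to capture the console output itself, you can redirect spark-submit's stdout/stderr to a file and copy that file to S3. A minimal sketch, assuming a placeholder bucket name:

import org.apache.spark.sql.SparkSession

// Sketch: enable Spark's event log and point it at an S3 path.
// "yourbucket" is a placeholder; the event log directory should
// exist before the job starts.
val spark = SparkSession.builder()
  .appName("EventLogToS3")
  .config("spark.eventLog.enabled", "true")
  .config("spark.eventLog.dir", "s3://yourbucket/spark-event-logs")
  .getOrCreate()

The same settings can also be passed on the command line with --conf spark.eventLog.enabled=true and --conf spark.eventLog.dir=s3://yourbucket/spark-event-logs.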

AWS
EXPERT
pechung
answered a year ago
AWS
SUPPORT ENGINEER
reviewed 15 days ago
