Adding to Gonzalo's comment: you need more information from the logs to identify the exact issue. This error occurs at write time; your script most likely calls `glueContext.write_dynamic_frame.from_options`, as in the sketch below.
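For context, a minimal, hypothetical version of such a write (the bucket path, format, and toy data are placeholders, not values from your job) looks roughly like this:

```python
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
spark = glueContext.spark_session

# Toy data standing in for whatever the job actually reads.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
dyf = DynamicFrame.fromDF(df, glueContext, "dyf")

# The write the error most likely comes from: if the bucket does not
# exist or the job role cannot write to it, this call fails.
glueContext.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://example-output-bucket/prefix/"},
    format="json",
)
```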
How can I find the exact error message?
- Navigate to the AWS Glue console and select the job that failed.
- Click on the "Runs" tab, then click on "Error logs". This takes you to the `/aws-glue/jobs/error` log group in CloudWatch, filtered on the job run ID.
- If you see multiple log streams, open the shortest one: that is the driver log, while the others come from the individual executors (one per executor).
- Toward the bottom of this driver log you will find the descriptive error, along with line numbers etc. I typically find it by searching for `glueexceptionanalysis.GlueExceptionAnalysisListener` or `glue.ProcessLauncher`. A scripted version of this lookup is sketched after the list.
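If you would rather pull the error from a script, here is a rough boto3 sketch. It assumes the usual stream-naming convention (driver stream named after the job run ID); the job run ID and region are placeholders:

```python
import boto3

JOB_RUN_ID = "jr_0123456789abcdef"  # placeholder: your failed run's ID
logs = boto3.client("logs", region_name="us-east-1")

# Glue writes error output to this log group; the driver's stream is
# typically named after the job run ID, executor streams get suffixes.
resp = logs.filter_log_events(
    logGroupName="/aws-glue/jobs/error",
    logStreamNamePrefix=JOB_RUN_ID,
    filterPattern="GlueExceptionAnalysisListener",
)
for event in resp["events"]:
    print(event["message"])
```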
Some possible reasons for this error include:
- Invalid output configuration: the output configuration passed to `glueContext.write_dynamic_frame.from_options` might be invalid.
  - For example, the destination S3 bucket might not exist, or the IAM role used by the job might lack permission to write to it (a preflight check for this case is sketched after the list).
- Malformed data: the data being written might be malformed or incompatible with the expected format.
  - For example, the table is defined as JSON with 10 fields, but your field names don't match.
- Network issues:
  - For example, if the Glue job runs in a VPC, the VPC configuration or routing might prevent the data from being written.
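To quickly rule out the first cause, a small preflight check can be run under the same IAM role the job uses. This is only a sketch; the bucket name is a placeholder:

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "example-output-bucket"  # placeholder, not from the question
s3 = boto3.client("s3")

try:
    # 404 here means the bucket does not exist; 403 means the role
    # cannot access it.
    s3.head_bucket(Bucket=BUCKET)
    # A throwaway object confirms s3:PutObject is allowed.
    s3.put_object(Bucket=BUCKET, Key="glue-preflight-test", Body=b"ok")
    s3.delete_object(Bucket=BUCKET, Key="glue-preflight-test")
    print("Bucket exists and is writable.")
except ClientError as err:
    print(f"Preflight failed: {err}")
```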
answered 2 years ago
It sounds like the job gives up waiting for Redshift to load the temporary CSV files from S3. Please check the full stack trace, and check in Redshift what happened with the COPY command: did it finish or error, and how long did it take? A query sketch for this follows.
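One way to inspect recent COPY commands is through the Redshift Data API; the cluster identifier, database, and DB user below are placeholders, and `stl_load_errors` can likewise be queried for row-level load failures:

```python
import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

# Placeholders: substitute your cluster, database, and DB user.
resp = rsd.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql="""
        SELECT query, starttime, endtime, aborted
        FROM stl_query
        WHERE querytxt ILIKE 'copy%'
        ORDER BY starttime DESC
        LIMIT 10;
    """,
)
# Poll describe_statement(Id=resp["Id"]) until it reports FINISHED,
# then fetch rows with get_statement_result(Id=resp["Id"]).
```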