Hi,
the error seems to indicate that the step that initiated the write to the target received an empty DynamicFrame as input. Are you sure the table you are reading from is populated?
You should try to look at the logs of the job.
In Glue Studio, open the job and select the Runs tab, or in the left pane select Monitoring, then follow the instructions in this documentation page. Look for other errors that may have happened earlier, such as a connection timeout or a privilege issue, that did not interrupt the job but could have led to an empty input.
After you read the data, do you do any processing?
If there are no errors and you are comfortable working with the code in PySpark, clone the job, open it in manual edit mode, and then try to print the schema and show the content.
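As a rough sketch of that debugging step (assuming a standard Glue PySpark job; the database and table names below are placeholders, not from the original question), inspecting the DynamicFrame between the read and the write could look like:

```python
# Hypothetical debugging snippet for an AWS Glue PySpark job.
# Database/table names are placeholders -- replace with your own.
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Read the source table from the Data Catalog.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_database",  # placeholder
    table_name="my_table",   # placeholder
)

# Inspect what actually came in before the write step runs.
dyf.printSchema()                      # an empty schema suggests it was never defined
print("record count:", dyf.count())    # 0 means the write step got an empty frame
dyf.toDF().show(10)                    # peek at the first few rows
```

If the schema prints empty or the count is 0, the problem is upstream of the write, which matches the symptom in the error message.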
Hope this helps,
Yes, it is populated. I found the problem: I hadn't defined a schema for the input and output data sources. I set it up and the job succeeded.
Thanks for your help!