2 Answers
I found the failure: the "Resource" must end in .sync to replicate all the parameters. With that, it works.
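For illustration, a minimal sketch of a Task state using that suffix (the state name, job name, and next state are made up, not from this thread):

```json
"StartProfileJob": {
  "Type": "Task",
  "Resource": "arn:aws:states:::databrew:startJobRun.sync",
  "Parameters": {
    "Name": "my-profile-job"
  },
  "ResultPath": "$.JobRun",
  "Next": "CheckRulesLambda"
}
```

With the .sync suffix, the Task waits for the DataBrew job run to finish instead of returning immediately after StartJobRun.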
answered a year ago
The Glue DataBrew StartJobRun API action returns a RunId. You can add this to the workflow state using ResultPath in the Task state, like so: "ResultPath": "$.JobRun". See here for more information about using ResultPath.

You can then access that value in the event passed to your Lambda function and use it, for example, to call DescribeJobRun. This page describes how to manipulate workflow and task state. The Data Flow Simulator (linked from this page) is also very useful in visualizing the data flow of your state machine.
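As a sketch of how that looks from the Lambda side (assuming ResultPath is set to "$.JobRun" as above; the "jobname" field is an assumption about the workflow input):

```python
import boto3

databrew = boto3.client("databrew")

def lambda_handler(event, context):
    # ResultPath placed the StartJobRun result under $.JobRun,
    # so the run id is available directly in the event.
    run_id = event["JobRun"]["RunId"]

    # DescribeJobRun also needs the job name; here it is assumed
    # to be carried through the workflow input as "jobname".
    run = databrew.describe_job_run(Name=event["jobname"], RunId=run_id)
    return run["State"]  # e.g. RUNNING, SUCCEEDED, FAILED
```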
answered a year ago
Hi MattK, I added "ResultPath": "$.JobRun" in the state (DataBrew job), but its output is:

```json
{
  "profilejobname": "dataqualitytest2job",
  "StatePayload": "Starts the check of Data Quality",
  "AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID": "arn:aws:states:eu-west-1:XXX:execution:dev-DQingestion_CheckRules:4de06712-b9c0-XXX-XXX-XXX",
  "JobRun": {
    "RunId": "db_c1f5f78adc0c02edd381b1370dadf54fb49154eaf5bbca9f430861961b59f729",
    "SdkHttpMetadata": {
      "AllHttpHeaders": {
        "X-Cache": [
          "Miss from cloudfront"
        ],
        "x-amz-apigw-id": [
          "BeuHXFZWDoEFVQg="
        ],
        "Access-Control-Allow-Origin": [
          "*"
        ],
        ...
```
The DataBrew job writes its result to an S3 bucket, so in the Lambda state I need to read the bucket and the generated filename, get the content of that file, and then read the profile JSON to know the status of the ruleset. Here is a piece of the Lambda function code:

```python
def lambda_handler(event, context):
    # TODO implement
    ...
    jobname = event["jobname"]
    for o in event["Outputs"]:
        bucketname = o["Location"]["Bucket"]
        if "dq-validation" in o["Location"]["Key"]:
            filename = o["Location"]["Key"]
    ...
```

The event doesn't have the Outputs, so I can't tell which file was generated and in which bucket.
The question is: in the Lambda state, how can I know which bucket and which file were generated by the previous state (the DataBrew job)?

Thanks for your help.
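Following the DescribeJobRun suggestion above, one way to get the bucket and file is to call it from the Lambda with the RunId that ResultPath put into the event: its response includes an Outputs list with Location.Bucket and Location.Key. A minimal sketch (the "dq-validation" filter is taken from the code above; the other field names are assumptions about the workflow input):

```python
import json
import boto3

databrew = boto3.client("databrew")
s3 = boto3.client("s3")

def lambda_handler(event, context):
    # The RunId comes from the previous state's ResultPath ("$.JobRun").
    run = databrew.describe_job_run(
        Name=event["profilejobname"],
        RunId=event["JobRun"]["RunId"],
    )

    # DescribeJobRun returns the Outputs list that the Step Functions
    # event itself was missing here.
    for output in run.get("Outputs", []):
        location = output["Location"]
        if "dq-validation" in location["Key"]:
            # Read the generated profile JSON to check the ruleset status.
            obj = s3.get_object(Bucket=location["Bucket"], Key=location["Key"])
            return json.loads(obj["Body"].read())
```

Note that, depending on how the job names its output files, Location.Key may be a prefix rather than the exact object key, in which case you would first list the objects under that prefix.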