Output of a DataBrew profile job in Step Functions


I have a state that runs a DataBrew job, and the next state is a Lambda function. How do I pass the results of the DataBrew job state to the Lambda state so I can get the JobName, etc.? Thanks.

Willi5
Asked 1 year ago · 261 views
2 answers
Accepted Answer

I found the problem: the "Resource" must end with .sync so that all the run parameters are replicated into the state output. With that, it works.
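For reference, a minimal sketch of what the DataBrew Task state could look like with the .sync integration pattern, combined with the ResultPath suggested in the answer below (the job name is taken from this thread; the state names are placeholders):

    "StartProfileJob": {
      "Type": "Task",
      "Resource": "arn:aws:states:::databrew:startJobRun.sync",
      "Parameters": {
        "Name": "dataqualitytest2job"
      },
      "ResultPath": "$.JobRun",
      "Next": "CheckProfileResults"
    }

With .sync, Step Functions waits for the job run to finish before moving on, so the state output should contain the full run description rather than only the RunId returned by the default request-response pattern.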

Willi5
Answered 1 year ago

The Glue DataBrew StartJobRun API action returns a RunId.

You can add this to the workflow state using ResultPath in the Task state, like so: "ResultPath": "$.JobRun". See the ResultPath documentation for more information.

You can then access that value in the event passed to your Lambda function and use it, for example, to call DescribeJobRun. The Step Functions documentation on input and output processing describes how to manipulate workflow and task state. The Data Flow Simulator (linked from that page) is also very useful for visualizing the data flow of your state machine.
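As a sketch of that suggestion: assuming the previous state used "ResultPath": "$.JobRun" so the RunId lands in the event, and assuming the job name is available in the workflow input as profilejobname (as in the execution output quoted in the comment below), the Lambda function could call DescribeJobRun roughly like this:

    import boto3

    databrew = boto3.client("databrew")

    def lambda_handler(event, context):
        # The previous Task state wrote the StartJobRun result to $.JobRun
        run_id = event["JobRun"]["RunId"]
        # Assumed to be present in the workflow input
        job_name = event["profilejobname"]
        # Fetch the full run details (state, outputs, ...) for this run
        run = databrew.describe_job_run(Name=job_name, RunId=run_id)
        return run["State"]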

AWS
MattK
Answered 1 year ago
  • Hi MattK, I added "ResultPath": "$.JobRun" to the DataBrew job state, but its output is:

        {
          "profilejobname": "dataqualitytest2job",
          "StatePayload": "Starts the check of Data Quality",
          "AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID": "arn:aws:states:eu-west-1:XXX:execution:dev-DQingestion_CheckRules:4de06712-b9c0-XXX-XXX-XXX",
          "JobRun": {
            "RunId": "db_c1f5f78adc0c02edd381b1370dadf54fb49154eaf5bbca9f430861961b59f729",
            "SdkHttpMetadata": {
              "AllHttpHeaders": {
                "X-Cache": ["Miss from cloudfront"],
                "x-amz-apigw-id": ["BeuHXFZWDoEFVQg="],
                "Access-Control-Allow-Origin": ["*"], ...

    The DataBrew job writes its result to an S3 bucket, so in the Lambda state I need to read the bucket and the generated file name, fetch the content of the file, and then read the profile JSON to know the status of the ruleset. Here is a piece of code from the Lambda function:

    def lambda_handler(event, context):
        # TODO implement
        ...
        jobname = event["jobname"]
        for o in event["Outputs"]:
            bucketname = o["Location"]["Bucket"]
            if "dq-validation" in o["Location"]["Key"]:
                filename = o["Location"]["Key"]
        ...

    The event doesn't contain Outputs, so there is no way to know which file was generated and in which bucket.

    The question is: in the Lambda state, how do I know which bucket and which file were generated in the previous DataBrew job state? (See the sketch after this comment.)

    Thanks for your help.
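Putting the two answers together: with the .sync integration, the run result at $.JobRun should include the Outputs list, so the Lambda from the comment above can locate the generated file. A minimal sketch under that assumption (the dq-validation key filter and the $.JobRun path are taken from this thread; error handling is omitted):

    import json
    import boto3

    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        # With .sync, $.JobRun holds the full run description, including Outputs
        job_run = event["JobRun"]
        bucketname = filename = None
        for o in job_run["Outputs"]:
            bucketname = o["Location"]["Bucket"]
            if "dq-validation" in o["Location"]["Key"]:
                filename = o["Location"]["Key"]
        # Fetch the validation report the profile job wrote to S3
        obj = s3.get_object(Bucket=bucketname, Key=filename)
        report = json.loads(obj["Body"].read())
        return report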
