1 Answer
That's a fair point about clarifying the documentation. The history is based on the historical metric: if the rule wasn't applied, the metric wasn't created, and the rule cannot backprocess the data to calculate the previous history for runs where it wasn't applied.
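For context, the kind of dynamic rule being discussed compares the current run's metric against an aggregate of previously recorded values. A minimal DQDL sketch (the rule expression follows the documented `RowCount > avg(last(3))` pattern; the ruleset name is hypothetical):

```
Rules = [
    RowCount > avg(last(3))
]
```

Since `avg(last(3))` is computed from metrics stored on earlier evaluations of this same rule, runs that happened before the rule existed contribute nothing to the history.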
Hi, could you please share any thoughts you have on this question? https://repost.aws/questions/QU88DiiUZ1STmIjySismV2Ow/how-does-aws-glue-data-quality-custom-sql-work-with-no-unique-column
Yep, it sounds like it thinks the average is 100; not sure why. What did it say on the previous evaluations?
How did you make it process different files on different runs? Do you have any count showing the job actually read them all? (Maybe it threw away invalid rows.)
This worked on the 5th run. I think the issue is that in my first 3 runs this rule wasn't there; I added it only on the 4th run, so it didn't take the earlier runs into consideration. But this is not mentioned anywhere in the documentation. There is no mention of how Glue stores the 'state' of runs.
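The behavior described above can be sketched as follows. This is an illustrative simulation of the semantics, not Glue's actual implementation: a dynamic rule only sees metric values recorded on runs where the rule itself was evaluated, so runs before the rule was added leave no history. The row counts and the run on which the rule was added are hypothetical.

```python
# Hypothetical row counts observed on each job run (runs 1-5).
run_row_counts = [100, 100, 100, 500, 520]

history = []   # metric history: appended only when the rule is evaluated
results = {}   # run number -> outcome of "RowCount > avg(last(3))"
for run, row_count in enumerate(run_row_counts, start=1):
    rule_active = run >= 4  # the rule was added on the 4th run
    if not rule_active:
        continue  # rule not applied -> no metric recorded, no history
    last3 = history[-3:]
    if last3:  # the first evaluation has no history to compare against
        results[run] = row_count > sum(last3) / len(last3)
    history.append(row_count)

print(results)  # only run 5 produces a comparison, against run 4's value
```

Runs 1 through 3 never record a metric, run 4 records the first data point, and only run 5 can actually evaluate the comparison, which matches the "worked on the 5th run" observation.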