1 Answer
We have a number of examples:
- How to Architect Data Quality on the AWS Cloud
- Building a serverless data quality and analysis framework with Deequ and AWS Glue
- Build event-driven data quality pipelines with AWS Glue DataBrew
- Test data quality at scale with Deequ
- Monitor data quality in your data lake using PyDeequ and AWS Glue
There are likely more. Hope these help.
answered 2 years ago
Thank you! I have checked some of these links and they are certainly helpful for designing what I need. Do you have a recommendation for this scenario: any new system we onboard into the framework should be able to include data quality checks with minimal coding. Some sort of metadata-driven framework might help, for example storing the business rules in DynamoDB so that checks run automatically against different feeders or newly created data pipelines.
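A minimal sketch of that idea using PyDeequ (as covered in the links above), assuming a hypothetical DynamoDB table named `dq_rules` keyed by dataset, with items like `{"dataset": "orders", "rule": "isComplete", "column": "order_id"}`. The table name, item schema, and helper functions are illustrative assumptions, not a published AWS pattern:

```python
import boto3
from boto3.dynamodb.conditions import Key
import pydeequ
from pyspark.sql import SparkSession
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationSuite, VerificationResult

# Spark session wired up with the Deequ jar, as in the PyDeequ README
spark = (SparkSession.builder
         .config("spark.jars.packages", pydeequ.deequ_maven_coord)
         .config("spark.jars.excludes", pydeequ.f2j_maven_coord)
         .getOrCreate())

def load_rules(dataset_name):
    """Fetch the business rules registered for one feeder/dataset."""
    table = boto3.resource("dynamodb").Table("dq_rules")  # hypothetical table
    resp = table.query(KeyConditionExpression=Key("dataset").eq(dataset_name))
    return resp["Items"]

def run_checks(df, dataset_name):
    """Translate stored rules into PyDeequ constraints and verify the frame."""
    check = Check(spark, CheckLevel.Error, f"Rules for {dataset_name}")
    for rule in load_rules(dataset_name):
        # Dispatch on the stored rule name, e.g. "isComplete" ->
        # check.isComplete("order_id"); adding a rule is a DynamoDB write,
        # not a code change.
        check = getattr(check, rule["rule"])(rule["column"])
    result = (VerificationSuite(spark)
              .onData(df)
              .addCheck(check)
              .run())
    return VerificationResult.checkResultsAsDataFrame(spark, result)
```

With something like this, onboarding a new feeder means registering its rules as items in the table and calling `run_checks(df, dataset_name)` from the shared pipeline, so no per-pipeline check code is needed.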