My understanding is that DataBrew is meant to be a no-code alternative to writing your own Spark code. As best I can tell, you can only export the declarative YAML/JSON definition of a DataBrew recipe. I created a recipe and couldn't find any way to export it as a "regular" Glue job definition, and the recipe does not appear in the standard Glue jobs window. I don't think there's any way to see the underlying Spark code.
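One quick way to confirm this yourself is to pull a recipe back with boto3 and inspect what you actually get. A minimal sketch (the recipe name `my-recipe` is hypothetical):

```python
# Fetch a DataBrew recipe and print its definition: what comes back is a
# declarative list of transform steps, not generated Spark code.
import json
import boto3

databrew = boto3.client("databrew")
recipe = databrew.describe_recipe(Name="my-recipe")

# "Steps" is a list of {"Action": {"Operation": ..., "Parameters": {...}}}
# entries -- the same YAML/JSON structure you see when you export a recipe.
print(json.dumps(recipe["Steps"], indent=2, default=str))
```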
If your customer wants visual design plus the ability to view/edit code, I think they'll need to look at AWS Glue Studio instead: https://docs.aws.amazon.com/glue/latest/ug/what-is-glue-studio.html
DataBrew is not a code generator. You can export the recipe generated by a DataBrew project for reuse in another DataBrew project or in other DataBrew jobs, but you cannot mix Glue Studio jobs and DataBrew jobs. You can, however, build a data pipeline that runs DataBrew jobs and Glue jobs independently, stitching them together with Step Functions or another orchestration tool, as in the sketch below.
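As an illustration (not anyone's exact setup), here is a minimal Step Functions state machine that runs a DataBrew job and then a Glue job as one pipeline. The job names, state machine name, and role ARN are hypothetical; the `databrew:startJobRun.sync` and `glue:startJobRun.sync` service integrations are real Step Functions integrations that wait for each job to finish:

```python
# Sketch: orchestrate a DataBrew job followed by a Glue job with Step Functions.
import json
import boto3

definition = {
    "Comment": "Run a DataBrew recipe job, then a Glue job, in sequence",
    "StartAt": "RunDataBrewJob",
    "States": {
        "RunDataBrewJob": {
            "Type": "Task",
            # Native service integration; .sync blocks until the job completes
            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
            "Parameters": {"Name": "my-databrew-recipe-job"},
            "Next": "RunGlueJob",
        },
        "RunGlueJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "my-glue-etl-job"},
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="databrew-then-glue-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsPipelineRole",
)
```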
In addition to what AWS-User-9164621 and Deen_P have aptly shared, in my experience it is worthwhile to become acquainted with implementing recipes in an AWS CloudFormation template, together with the various Glue DataBrew resource types (projects, datasets, jobs, and/or schedules) you may need for their composition and orchestration. Also see the available "conditionals" in the AWS Glue DataBrew recipe structure.
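To make that concrete, here is a minimal sketch of deploying a dataset, a recipe, and a recipe job from one CloudFormation template via boto3. All bucket, role, and resource names are hypothetical; the `AWS::DataBrew::*` resource types are the real CloudFormation types:

```python
# Sketch: one CloudFormation stack holding a DataBrew dataset, recipe, and job.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyDataset:
    Type: AWS::DataBrew::Dataset
    Properties:
      Name: my-dataset
      Input:
        S3InputDefinition:
          Bucket: my-input-bucket
          Key: raw/orders.csv
  MyRecipe:
    Type: AWS::DataBrew::Recipe
    Properties:
      Name: my-recipe
      Steps:
        - Action:
            Operation: REMOVE_VALUES
            Parameters:
              sourceColumn: status
  MyJob:
    Type: AWS::DataBrew::Job
    Properties:
      Name: my-recipe-job
      Type: RECIPE
      DatasetName: !Ref MyDataset
      Recipe:
        Name: !Ref MyRecipe
      RoleArn: arn:aws:iam::123456789012:role/DataBrewJobRole
      Outputs:
        - Location:
            Bucket: my-output-bucket
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="databrew-pipeline", TemplateBody=TEMPLATE)
```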
In addition to my answer below, there are repeatable mechanisms within Amazon Athena that speak more directly to your conversion and compression requests. That may warrant a separate question and re:Post entry, so feel free to let me know if needed/interested. AWS-User-7347546