The main difference is the use case: whether you're in an experimental phase, building up a recipe and trying it out (via a project job), or operationalizing it (via a standalone recipe job).
A standalone recipe job (Recipe_A + Dataset_A) also requires that your recipe is published (version 1.0, 2.0, etc.). With a project job, DataBrew implicitly creates a snapshot of the Working version of the recipe as a new minor version (0.2, 0.3, etc.).
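To make the distinction concrete, here is a minimal sketch of the two request shapes for the DataBrew `CreateRecipeJob` API, which accepts either a dataset plus a published recipe reference, or a project name. All job, role, and bucket names are hypothetical placeholders; the actual `boto3` calls are shown commented out.

```python
# Standalone recipe job: you name the dataset and a *published* recipe
# version explicitly (a Working version is not accepted here).
standalone_job_params = {
    "Name": "recipe-a-job",  # hypothetical job name
    "DatasetName": "Dataset_A",
    "RecipeReference": {
        "Name": "Recipe_A",
        "RecipeVersion": "1.0",  # must be a published version (1.0, 2.0, ...)
    },
    "RoleArn": "arn:aws:iam::123456789012:role/DataBrewRole",  # placeholder
    "Outputs": [{"Location": {"Bucket": "my-output-bucket"}}],  # placeholder
}

# Project job: you reference only the project; DataBrew snapshots the
# Working version of the project's recipe as a new minor version
# (0.2, 0.3, ...) each time the job runs.
project_job_params = {
    "Name": "project-a-job",  # hypothetical job name
    "ProjectName": "Project_A",
    "RoleArn": "arn:aws:iam::123456789012:role/DataBrewRole",  # placeholder
    "Outputs": [{"Location": {"Bucket": "my-output-bucket"}}],  # placeholder
}

# With boto3 these would be submitted as, e.g.:
#   databrew = boto3.client("databrew")
#   databrew.publish_recipe(Name="Recipe_A")          # required before the standalone job
#   databrew.create_recipe_job(**standalone_job_params)
#   databrew.create_recipe_job(**project_job_params)
```

Note that the two parameter sets are mutually exclusive: a job takes either `DatasetName` + `RecipeReference` or `ProjectName`, not both.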
Hope this helps clarify!
This makes sense -- thanks for the reply!
Difference between a Job that is tied to a Project vs Recipe + Dataset