Hi,
Regarding environment capacity, the estimates shown in the documentation are guidelines on expected capacity when deploying DAGs.
These estimations are based on lightweight tasks and should be treated as reference points, not absolute values.
Airflow tasks running on MWAA are executed within containers that run Python code, and the performance of tasks on the environment depends primarily on the compute and memory available to the workers and scheduler.
This information is also outlined in the Airflow Best Practices.
A smaller environment has workers with less memory and processing power, so it will not be able to run as many DAGs (or tasks) as a larger environment.
Treat the guideline as a rule of thumb: not all tasks require the same amount of memory and processing, and some DAGs (and by extension their tasks) will consume more resources than others.
It's therefore essential to consider the complexity of your particular tasks when estimating how many tasks your environment can handle.
Because the number of tasks per DAG depends on your use case, a benchmark test is the most accurate way to determine how many tasks per DAG can run for your particular workload.
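To see why sheer task count can cause queueing even when each task is lightweight, here is a minimal back-of-the-envelope sketch. The function name and all numbers (slot count, task duration) are illustrative assumptions, not MWAA-specific values:

```python
import math

def queue_waves(num_tasks: int, worker_slots: int, task_minutes: float):
    """Estimate (execution waves, worst-case queue wait in minutes),
    assuming all tasks start at once and run for roughly equal durations."""
    waves = math.ceil(num_tasks / worker_slots)   # rounds of execution needed
    worst_wait = (waves - 1) * task_minutes       # delay seen by the last wave
    return waves, worst_wait

# e.g. 25 tasks starting together, 10 concurrent worker slots, ~5-minute tasks:
waves, wait = queue_waves(25, 10, 5)
print(waves, wait)  # 3 waves; the last tasks wait about 10 minutes
```

In practice a real benchmark on your own DAGs is still needed, since task durations vary and the scheduler adds its own overhead, but this kind of estimate shows how quickly queue delay grows once simultaneous tasks exceed available worker slots.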
Our Airflow DAGs are used for scheduling purposes only and are mostly Glue job triggers (waiting for completion) and file watchers, but we may have only a few running, or 20 or more starting at the same time. We're seeing queueing of 10-15 minutes at our current environment size and are exploring the larger size.
Thanks for your response. Even though our tasks are low complexity, it may be the quantity that is causing the queueing.