Airflow Web Server crashes when deploying


We are using Amazon Managed Workflows for Apache Airflow (MWAA). Suddenly, after creating a Variable and making a minor DAG update, we can no longer access the Apache Airflow UI: the web page returns "ERR_EMPTY_RESPONSE", and in the webserver console log we see the following error:

Log stream name: webserver_console_ip-xx-x-xx-xxx.ec2.internal_xxxxxxxxxx.xxxxxxx.log

In script: sqlalchemy.py

In function: process_result_value

At line 176: "return type_map[data['type']](**data['attrs'])"

Error message:

KeyError: 'type'

The traceback starts from: File "/usr/local/lib64/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise_

Looking at the logs, the webserver tries to boot up and crashes repeatedly at the same line of code.

It is important to note that the environment itself still appears to be working: some DAGs (but not all) are still completing their tasks, as we are receiving their confirmation emails.

Before the error occurred, we created a new Variable and used it in a DAG. I don't believe this should affect the web server; the UI should start normally even when a DAG is broken. We have not made any changes to the environment other than that.
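For context, the Variable is consumed roughly like the sketch below; the names my_config_var and example_variable_dag are placeholders, not our real identifiers. Reading the Variable inside the task callable (rather than at the top level of the DAG file) keeps the lookup out of DAG parsing:

    from datetime import datetime

    from airflow import DAG
    from airflow.models import Variable
    from airflow.operators.python import PythonOperator

    def print_variable():
        # Reading the Variable inside the task (not at module level) keeps the
        # lookup out of DAG parsing/serialization.
        value = Variable.get("my_config_var", default_var=None)
        print(f"my_config_var = {value}")

    with DAG(
        dag_id="example_variable_dag",
        start_date=datetime(2022, 1, 1),
        schedule_interval=None,
        catchup=False,
    ) as dag:
        PythonOperator(task_id="print_variable", python_callable=print_variable)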

asked 2 years ago
1 Answer

If you are using a requirements.txt, it may be that one or more of the packages in it are causing conflicts, as MWAA does not currently install requirements on the web server.

It may also be that newer Airflow packages are trying to upgrade the scheduler, which can be prevented by using --constraint in your requirements.txt, as in the sketch below.
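For example, a constrained requirements.txt might look like the following; the Airflow version (2.2.2) and Python version (3.7) in the constraints URL are assumptions and must be replaced with the versions your MWAA environment actually runs:

    # requirements.txt -- versions below are placeholders; match your MWAA environment
    --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.2/constraints-3.7.txt"
    apache-airflow-providers-amazon
    boto3

The constraints file pins transitive dependencies to versions tested with that Airflow release, so installing or upgrading a provider package cannot pull in an incompatible core Airflow version.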

AWS
John_J
answered 2 years ago
