status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | apache/airflow | https://github.com/apache/airflow | 29,112 | ["airflow/utils/log/file_task_handler.py"] | "Operation not permitted" error when chmod on log folder | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.5.1
### Kubernetes Version
1.24.6
### Helm Chart configuration
executor: "KubernetesExecutor" # however same issue happens with LocalExecutor
logs:
persistence:
enabled: true
size: 50Gi
storageClassName: azurefile-csi
### Docker Image customizations
Using airflow-2.5.1-python3.10 as the base image.
Custom shared libraries are copied into a folder under /opt/airflow/company.
DAGs are copied to /opt/airflow/dags.
### What happened
```console
After migrating from Airflow 2.4.3 to 2.5.1 we started getting the error below. No other changes were made to the custom image. No task can run because of this error:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/cli/commands/task_command.py", line 384, in task_run
ti.init_run_context(raw=args.raw)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2414, in init_run_context
self._set_context(self)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/logging_mixin.py", line 77, in _set_context
set_context(self.log, context)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/logging_mixin.py", line 213, in set_context
flag = cast(FileTaskHandler, handler).set_context(value)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/file_task_handler.py", line 71, in set_context
local_loc = self._init_file(ti)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/file_task_handler.py", line 382, in _init_file
self._prepare_log_folder(Path(full_path).parent)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/file_task_handler.py", line 358, in _prepare_log_folder
directory.chmod(mode)
File "/usr/local/lib/python3.10/pathlib.py", line 1191, in chmod
self._accessor.chmod(self, mode, follow_symlinks=follow_symlinks)
PermissionError: [Errno 1] Operation not permitted: '/opt/airflow/logs/dag_id=***/run_id=manual__2023-01-22T02:59:43.752407+00:00/task_id=***'
```
### What you think should happen instead
It seems like Airflow attempts to change the log folder permissions and is not permitted to do so.
I get the same error when executing the command manually (the folder path is confirmed to exist): chmod 511 '/opt/airflow/logs/dag_id=***/run_id=manual__2023-01-22T02:59:43.752407+00:00/task_id=***'
chmod: changing permissions of '/opt/airflow/logs/dag_id=***/run_id=scheduled__2023-01-23T15:30:00+00:00/task_id=***': Operation not permitted
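A minimal sketch of the failing call outside Airflow (the path is illustrative, and the mode mirrors the manual `chmod 511` test above rather than whatever mode Airflow computes):
```python
from pathlib import Path

# Illustrative directory on the azurefile-csi log volume.
log_dir = Path("/opt/airflow/logs/dag_id=example/run_id=manual__2023-01-22/task_id=example")

# This is effectively what FileTaskHandler._prepare_log_folder attempts and what fails here:
log_dir.chmod(0o511)  # PermissionError: [Errno 1] Operation not permitted
```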
### How to reproduce
My understanding is that this error happens before any custom code is executed.
### Anything else
The error happens every time; no DAG can be started while using Airflow 2.5.1. Exactly the same configuration works with 2.5.0 and 2.4.3.
The same image and configuration also work fine when running locally using docker-compose.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29112 | https://github.com/apache/airflow/pull/30123 | f5ed6ae67d0788ea2a737d781b27fbcae1e8e8af | b87cbc388bae281e553da699212ebfc6bb723eea | "2023-01-23T17:44:10Z" | python | "2023-03-15T20:44:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,105 | ["airflow/www/static/js/graph.js"] | graph disappears during run time when using branch_task and a dynamic classic operator | ### Apache Airflow version
2.5.1
### What happened
When using a dynamically mapped task that gets its expand data from XCom after a branch_task, the graph doesn't render.
It reappears once the DAG run is finished.
Tried with a BashOperator and a KubernetesPodOperator.
The developer console in the browser shows the error:
`Uncaught TypeError: Cannot read properties of undefined (reading 'length')
at z (graph.1c0596dfced26c638bfe.js:2:17499)
at graph.1c0596dfced26c638bfe.js:2:17654
at Array.map (<anonymous>)
at z (graph.1c0596dfced26c638bfe.js:2:17646)
at graph.1c0596dfced26c638bfe.js:2:26602
at graph.1c0596dfced26c638bfe.js:2:26655
at graph.1c0596dfced26c638bfe.js:2:26661
at graph.1c0596dfced26c638bfe.js:2:222
at graph.1c0596dfced26c638bfe.js:2:227
z @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
z @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
`
grid view renders fine.
### What you think should happen instead
graph should be rendered.
### How to reproduce
```python
@dag('branch_dynamic', schedule_interval=None, default_args=default_args, catchup=False)
def branch_dynamic_flow():
    @branch_task
    def choose_path():
        return 'b'

    @task
    def a():
        print('a')

    @task
    def get_args():
        return ['echo 1', 'echo 2']

    b = BashOperator.partial(task_id="b").expand(bash_command=get_args())

    path = choose_path()
    path >> a()
    path >> b
```
### Operating System
red hat
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes | 5.1.1 | Kubernetes
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29105 | https://github.com/apache/airflow/pull/29042 | b2825e11852890cf0b0f4d0bcaae592311781cdf | 33ba242d7eb8661bf936a9b99a8cad4a74b29827 | "2023-01-23T14:55:28Z" | python | "2023-01-24T15:27:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,100 | ["airflow/www/static/js/dag/details/Dag.tsx", "airflow/www/static/js/dag/details/dagRun/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/LogBlock.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx", "airflow/www/static/js/dag/grid/index.tsx", "airflow/www/static/js/utils/useOffsetHeight.tsx"] | Unnecessary scrollbars in grid view | ### Apache Airflow version
2.5.0
### What happened
Compare the same DAG grid view in 2.4.3: (everything is scrolled using the "main" scrollbar of the window)
![image](https://user-images.githubusercontent.com/3342974/213983669-c5a701f1-a4d8-4d02-b29b-caf5f9c9a2db.png)
and in 2.5.0 (and 2.5.1) (left and right side of the grid have their own scrollbars):
![image](https://user-images.githubusercontent.com/3342974/213983866-b9b60533-87b4-4f1e-b68b-e5062b7f86c2.png)
It was much more ergonomic previously when only the main scrollbar was used.
I think the relevant change was in #27560, where `maxHeight={offsetHeight}` was added to some places.
Is this the intended way the grid view should look, or did it happen by accident?
I tried looking around in the developer tools, and it seems like removing the `max-height` from this element restores the old look: `div#react-container div div.c-1rr4qq7 div.c-k008qs div.c-19srwsc div.c-scptso div.c-l7cpmp`. Well, it does for the left side of the grid view; a similar change has to be made for some other divs as well.
![image](https://user-images.githubusercontent.com/3342974/213984637-106cf7ed-b776-48ec-90e8-991d8ad1b315.png)
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29100 | https://github.com/apache/airflow/pull/29367 | 1b18a501fe818079e535838fa4f232b03365fc75 | 643d736ebb32c488005b3832c2c3f226a77900b2 | "2023-01-23T07:19:18Z" | python | "2023-02-05T23:15:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,070 | ["airflow/providers/ftp/operators/ftp.py", "airflow/providers/sftp/operators/sftp.py", "tests/providers/ftp/operators/test_ftp.py"] | FTP operator has logic in __init__ | ### Body
Similarly to SFTP (fixed in https://github.com/apache/airflow/pull/29068), the logic in the FTP operator's __init__ should be moved to execute.
PR #29068 provides a blueprint for that.
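A hedged sketch of the pattern being asked for (an illustrative operator, not the actual FTP operator code): keep `__init__` to plain attribute assignment and defer hook creation and I/O to `execute`.
```python
from airflow.models import BaseOperator


class ExampleFTPOperator(BaseOperator):
    """Illustrative only: shows the __init__/execute split, not the real operator."""

    def __init__(self, *, remote_path: str, ftp_conn_id: str = "ftp_default", **kwargs):
        super().__init__(**kwargs)
        # Only store parameters here -- no hook creation, no connection lookups,
        # so DAG parsing stays cheap and side-effect free.
        self.ftp_conn_id = ftp_conn_id
        self.remote_path = remote_path

    def execute(self, context):
        from airflow.providers.ftp.hooks.ftp import FTPHook

        # The hook (and the FTP connection) is only built at task run time.
        hook = FTPHook(ftp_conn_id=self.ftp_conn_id)
        return hook.list_directory(self.remote_path)
```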
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/29070 | https://github.com/apache/airflow/pull/29073 | 8eb348911f2603feba98787d79b88bbd84bd17be | 2b7071c60022b3c483406839d3c0ef734db5daad | "2023-01-20T19:31:08Z" | python | "2023-01-21T00:29:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,049 | ["airflow/models/taskinstance.py", "tests/models/test_cleartasks.py"] | Recursively cleared external task sensors using reschedule mode instantly time out if previous run is older than sensor timeout | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using Airflow 2.3.3: when recursively clearing downstream tasks, any cleared external task sensors in other DAGs that use reschedule mode will instantly fail with an `AirflowSensorTimeout` exception if the previous run is older than the sensor's timeout.
### What you think should happen instead
The recursively cleared external task sensors should run normally, waiting for the cleared upstream task to complete, retrying up to the configured number of times and within the configured sensor timeout counting from the point in time when the sensor was cleared.
### How to reproduce
1. Load the following DAGs:
```python
from datetime import datetime, timedelta, timezone
from time import sleep

from airflow.decorators import task
from airflow.models import DAG
from airflow.sensors.external_task import ExternalTaskMarker, ExternalTaskSensor
from airflow.utils import timezone

default_args = {
    'start_date': datetime.now(timezone.utc).replace(second=0, microsecond=0),
    'retries': 2,
    'retry_delay': timedelta(seconds=10),
}

with DAG('parent_dag', schedule_interval='* * * * *', catchup=False, default_args=default_args) as parent_dag:
    @task(task_id='parent_task')
    def parent_sleep():
        sleep(10)

    parent_task = parent_sleep()

    child_dag__wait_for_parent_task = ExternalTaskMarker(
        task_id='child_dag__wait_for_parent_task',
        external_dag_id='child_dag',
        external_task_id='wait_for_parent_task',
    )

    parent_task >> child_dag__wait_for_parent_task

with DAG('child_dag', schedule_interval='* * * * *', catchup=False, default_args=default_args) as child_dag:
    wait_for_parent_task = ExternalTaskSensor(
        task_id='wait_for_parent_task',
        external_dag_id='parent_dag',
        external_task_id='parent_task',
        mode='reschedule',
        poke_interval=15,
        timeout=60,
    )

    @task(task_id='child_task')
    def child_sleep():
        sleep(10)

    child_task = child_sleep()

    wait_for_parent_task >> child_task
```
2. Enable the `parent_dag` and `child_dag` DAGs and wait for them to automatically run (they're scheduled to run every minute).
3. Wait for at least one additional minute (because the sensor timeout is configured to be one minute).
4. Clear the earliest `parent_dag.parent_task` task instance with the "Downstream" and "Recursive" options enabled.
5. When the cleared `child_dag.wait_for_parent_task` task tries to run it will immediately fail with an `AirflowSensorTimeout` exception.
### Operating System
Debian 10.13
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
This appears to be due to a bug in `airflow.models.taskinstance.clear_task_instances()` where [it only increments the task instance's `max_tries` property if the task is found in the DAG passed in](https://github.com/apache/airflow/blob/2.3.3/airflow/models/taskinstance.py#L219-L223), but when recursively clearing tasks that won't work properly for tasks in downstream DAGs, because all task instances to be recursively cleared are passed to `clear_task_instances()` with [the DAG of the initial task being cleared](https://github.com/apache/airflow/blob/2.3.3/airflow/models/dag.py#L1905).
When a cleared task instance for a sensor using reschedule mode doesn't have its `max_tries` property incremented that causes the [logic in `BaseSensorOperator.execute()`](https://github.com/apache/airflow/blob/2.3.3/airflow/sensors/base.py#L247-L264) to incorrectly choose an older `first_try_number` value, calculate the sensor run duration as the total time passed since that previous run, and fail with an `AirflowSensorTimeout` exception if that inflated run duration exceeds the sensor timeout.
While I tested this in Airflow 2.3.3 because that's what my company is running, I also looked at the current `main` branch code and this appears to still be a problem in the latest version.
IMO the best solution would be to change `airflow.models.taskinstance.clear_task_instances()` to make an effort to get the associated DAGs for all the task instances being cleared so their associated tasks can be read and their `max_tries` property can be incremented correctly.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29049 | https://github.com/apache/airflow/pull/29065 | 7074167d71c93b69361d24c1121adc7419367f2a | 0d2e6dce709acebdb46288faef17d322196f29a2 | "2023-01-19T21:46:25Z" | python | "2023-04-14T17:17:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,036 | ["airflow/providers/amazon/aws/transfers/sql_to_s3.py"] | Top level code imports in AWS transfer | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.3
### Operating System
MacOs/Linux
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
The sql_to_s3.py transfer has top-level Python code imports that are considered a bad practice:
https://github.com/apache/airflow/blob/be31214dcf14db39b7a5f422ca272cdc13e08268/airflow/providers/amazon/aws/transfers/sql_to_s3.py#L26
According to the [official docs](https://airflow.apache.org/docs/apache-airflow/2.3.3/best-practices.html#top-level-python-code):
```python
import numpy as np # <-- THIS IS A VERY BAD IDEA! DON'T DO THAT!
```
All imports that are not related to DAG structure and creation should be moved to callable functions, such as the `execute` method.
This causes timeout errors while filling the `DagBag`:
```
File "/opt/airflow/dags/mydag.py", line 6, in <module>
from airflow.providers.amazon.aws.transfers.sql_to_s3 import SqlToS3Operator
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/transfers/sql_to_s3.py", line 25, in <module>
import pandas as pd
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/__init__.py", line 50, in <module>
from pandas.core.api import (
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/api.py", line 48, in <module>
from pandas.core.groupby import (
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/groupby/__init__.py", line 1, in <module>
from pandas.core.groupby.generic import (
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/groupby/generic.py", line 73, in <module>
from pandas.core.frame import DataFrame
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/frame.py", line 193, in <module>
from pandas.core.series import Series
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/series.py", line 141, in <module>
import pandas.plotting
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 724, in exec_module
File "<frozen importlib._bootstrap_external>", line 859, in get_code
File "<frozen importlib._bootstrap_external>", line 917, in get_data
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/timeout.py", line 68, in handle_timeout
raise AirflowTaskTimeout(self.error_message)
airflow.exceptions.AirflowTaskTimeout: DagBag import timeout for /opt/airflow/dags/mydag.py after 30.0s.
Please take a look at these docs to improve your DAG import time:
* https://airflow.apache.org/docs/apache-airflow/2.3.3/best-practices.html#top-level-python-code
* https://airflow.apache.org/docs/apache-airflow/2.3.3/best-practices.html#reducing-dag-complexity, PID: 7
```
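A minimal sketch of the kind of fix the best-practices page suggests (the operator name is illustrative, not the actual provider code): move the heavy `pandas` import out of module level and into `execute`, so importing the module during DAG parsing stays fast.
```python
from airflow.models import BaseOperator


class ExampleSqlToS3Operator(BaseOperator):
    def execute(self, context):
        # pandas is only imported when the task actually runs, not at parse time.
        import pandas as pd

        df = pd.DataFrame({"a": [1, 2, 3]})
        self.log.info("Built a frame with %d rows", len(df))
```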
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29036 | https://github.com/apache/airflow/pull/29045 | af0bbe62a5fc26bac189acd9039f5bbc83c2d429 | 62825678b3100b0e0ea3b4e14419d259a36ba074 | "2023-01-19T11:51:48Z" | python | "2023-01-30T23:37:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,013 | ["airflow/jobs/scheduler_job.py"] | Metrics dagrun.duration.failed.<dag_id> not updated when the dag run failed due to timeout | ### Apache Airflow version
2.5.0
### What happened
When a DAG is configured with the `dagrun_timeout` [parameter](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/dag/index.html#airflow.models.dag.DAG) and the DAG run fails because it times out, the metric `dagrun.duration.failed.<dag_id>` is not emitted.
### What you think should happen instead
According to the [doc](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/metrics.html#timers), the metric `dagrun.duration.failed.<dag_id>` should capture `Milliseconds taken for a DagRun to reach failed state`. It should therefore capture all kinds of DAG run failures, including failures caused by a DAG-level timeout.
### How to reproduce
Set the `dagrun_timeout` [parameter](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/dag/index.html#airflow.models.dag.DAG) (e.g. `dagrun_timeout=timedelta(seconds=5)`), then set up a BashOperator task that runs longer than the timeout (e.g. `bash_command='sleep 120'`).
Then check the metrics: `dagrun.duration.failed.<dag_id>` does not capture this DAG run that failed because of the timeout.
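A hedged sketch of that reproduction (DAG and task ids are illustrative):
```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dagrun_timeout_metrics_repro",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    dagrun_timeout=timedelta(seconds=5),  # the scheduler fails the run once this elapses
) as dag:
    # Sleeps far longer than dagrun_timeout, so the DAG run fails via the timeout path.
    BashOperator(task_id="sleep_longer_than_timeout", bash_command="sleep 120")
```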
### Operating System
Ubuntu 22.04.1 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.1.0
apache-airflow-providers-common-sql==1.3.3
apache-airflow-providers-ftp==3.3.0
apache-airflow-providers-http==4.1.1
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sqlite==3.3.1
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
According to the [doc](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/metrics.html#timers), the metric `dagrun.duration.failed.<dag_id>` should capture `Milliseconds taken for a DagRun to reach failed state`. However, if the DAG run fails because of a DAG-run-level timeout, the metric does not capture the failed run.
I dove into the Airflow code and figured out the reason.
The timer `dagrun.duration.failed.{self.dag_id}` is emitted in the method _emit_duration_stats_for_finished_state. [code](https://github.com/apache/airflow/blob/2.5.0/airflow/models/dagrun.py#L880-L894)
```
def _emit_duration_stats_for_finished_state(self):
    if self.state == State.RUNNING:
        return
    if self.start_date is None:
        self.log.warning("Failed to record duration of %s: start_date is not set.", self)
        return
    if self.end_date is None:
        self.log.warning("Failed to record duration of %s: end_date is not set.", self)
        return

    duration = self.end_date - self.start_date
    if self.state == State.SUCCESS:
        Stats.timing(f"dagrun.duration.success.{self.dag_id}", duration)
    elif self.state == State.FAILED:
        Stats.timing(f"dagrun.duration.failed.{self.dag_id}", duration)
```
The function `_emit_duration_stats_for_finished_state` is only called from the update_state() method of the DagRun class. [code](https://github.com/apache/airflow/blob/2.5.0/airflow/models/dagrun.py#L650-L677) If update_state() is not called, `_emit_duration_stats_for_finished_state` is never used.
```
if self._state == DagRunState.FAILED or self._state == DagRunState.SUCCESS:
    msg = (
        "DagRun Finished: dag_id=%s, execution_date=%s, run_id=%s, "
        "run_start_date=%s, run_end_date=%s, run_duration=%s, "
        "state=%s, external_trigger=%s, run_type=%s, "
        "data_interval_start=%s, data_interval_end=%s, dag_hash=%s"
    )
    self.log.info(
        msg,
        self.dag_id,
        self.execution_date,
        self.run_id,
        self.start_date,
        self.end_date,
        (self.end_date - self.start_date).total_seconds()
        if self.start_date and self.end_date
        else None,
        self._state,
        self.external_trigger,
        self.run_type,
        self.data_interval_start,
        self.data_interval_end,
        self.dag_hash,
    )

session.flush()

self._emit_true_scheduling_delay_stats_for_finished_state(finished_tis)
self._emit_duration_stats_for_finished_state()
```
When a DAG run times out, the scheduler job only calls set_state(). [code](https://github.com/apache/airflow/blob/2.5.0/airflow/jobs/scheduler_job.py#L1280-L1312)
```
if (
    dag_run.start_date
    and dag.dagrun_timeout
    and dag_run.start_date < timezone.utcnow() - dag.dagrun_timeout
):
    dag_run.set_state(DagRunState.FAILED)
    unfinished_task_instances = (
        session.query(TI)
        .filter(TI.dag_id == dag_run.dag_id)
        .filter(TI.run_id == dag_run.run_id)
        .filter(TI.state.in_(State.unfinished))
    )
    for task_instance in unfinished_task_instances:
        task_instance.state = TaskInstanceState.SKIPPED
        session.merge(task_instance)
    session.flush()

    self.log.info("Run %s of %s has timed-out", dag_run.run_id, dag_run.dag_id)

    active_runs = dag.get_num_active_runs(only_running=False, session=session)
    # Work out if we should allow creating a new DagRun now?
    if self._should_update_dag_next_dagruns(dag, dag_model, active_runs):
        dag_model.calculate_dagrun_date_fields(dag, dag.get_run_data_interval(dag_run))

    callback_to_execute = DagCallbackRequest(
        full_filepath=dag.fileloc,
        dag_id=dag.dag_id,
        run_id=dag_run.run_id,
        is_failure_callback=True,
        processor_subdir=dag_model.processor_subdir,
        msg="timed_out",
    )

    dag_run.notify_dagrun_state_changed()
    return callback_to_execute
```
From the above code, we can see that when the DAG run times out, only the set_state() method is called. update_state() is never called, which is why the metric dagrun.duration.failed.{self.dag_id} is not set up accordingly.
Please fix this bug so that the timer `dagrun.duration.failed.<dag_id>` can capture DAG runs that fail because of a DAG-level timeout.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29013 | https://github.com/apache/airflow/pull/29076 | 9dedf81fa18e57755aa7d317f08f0ea8b6c7b287 | ca9a59b3e8c08286c8efd5ca23a509f9178a3cc9 | "2023-01-18T12:25:00Z" | python | "2023-01-21T03:31:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,973 | ["airflow/models/xcom_arg.py", "tests/models/test_taskinstance.py"] | Dynamic Task Mapping skips tasks before upstream has started | ### Apache Airflow version
2.5.0
### What happened
In some cases we are seeing dynamically mapped tasks being skipped before upstream tasks have started and before the dynamic count for the task can be calculated. We see this both locally with the `LocalExecutor` and on our cluster with the `KubernetesExecutor`.
To trigger the issue we need multiple dynamic tasks merging into a shared downstream task; see the images below for an example. If there is no merging, the tasks run as expected. The tasks also need to not know the number of dynamic tasks that will be created at DAG start, for example by chaining in another dynamic task's output.
![screenshot_2023-01-16_at_14-57-23_test_skip_-_graph_-_airflow](https://user-images.githubusercontent.com/1442084/212699549-8bfc80c6-02c7-4187-8dad-91020c94616f.png)
![screenshot_2023-01-16_at_14-56-44_test_skip_-_graph_-_airflow](https://user-images.githubusercontent.com/1442084/212699551-428c7efd-d044-472c-8fc3-92c9b146a6da.png)
If the DAG, task, or upstream tasks are cleared the skipped task runs as expected.
The issue exists both on airflow 2.4.x & 2.5.0.
Happy to help debug this further & answer any questions!
### What you think should happen instead
The tasks should run after upstream tasks are done.
### How to reproduce
The following code is able to reproduce the issue on our side:
```python
from datetime import datetime

from airflow import DAG
from airflow.decorators import task
from airflow.utils.task_group import TaskGroup
from airflow.operators.empty import EmptyOperator

# Only one chained tasks results in only 1 of the `skipped_tasks` skipping.
# Add in extra tasks results in both `skipped_tasks` skipping, but
# no earlier tasks are ever skipped.
CHAIN_TASKS = 1


@task()
def add(x, y):
    return x, y


with DAG(
    dag_id="test_skip",
    schedule=None,
    start_date=datetime(2023, 1, 13),
) as dag:
    init = EmptyOperator(task_id="init_task")
    final = EmptyOperator(task_id="final")

    for i in range(2):
        with TaskGroup(f"task_group_{i}") as tg:
            chain_task = [i]
            for j in range(CHAIN_TASKS):
                chain_task = add.partial(x=j).expand(y=chain_task)

            skipped_task = (
                add.override(task_id="skipped").partial(x=i).expand(y=chain_task)
            )

        # Task isn't skipped if final (merging task) is removed.
        init >> tg >> final
```
### Operating System
MacOS
### Versions of Apache Airflow Providers
This can be reproduced without any extra providers installed.
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28973 | https://github.com/apache/airflow/pull/30641 | 8cfc0f6332c45ca750bc2317ea1e283aaf2ac5bd | 5f2628d36cb8481ee21bd79ac184fd8fdce3e47d | "2023-01-16T14:18:41Z" | python | "2023-04-22T19:00:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,951 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/decorators/test_docker.py", "tests/providers/docker/operators/test_docker.py"] | Add a way to skip Docker Operator task | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow 2.3.3
Raising `AirflowSkipException` in code run via the `DockerOperator` is supposed to mark the task as skipped, according to the [docs](https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html#special-exceptions). However, the task is instead marked as failed, with the logs showing `ERROR - Task failed with exception`.
### What you think should happen instead
Tasks should be marked as skipped, not failed.
### How to reproduce
Raise `AirflowSkipException` in the Python code executed by the `DockerOperator`.
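A hedged reproduction sketch (the image and inline command are illustrative; it assumes an image in which Airflow is importable so the exception class exists inside the container):
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.docker.operators.docker import DockerOperator

with DAG(dag_id="docker_skip_repro", start_date=datetime(2023, 1, 1), schedule_interval=None) as dag:
    # Observed behaviour: the task ends up "failed" instead of "skipped",
    # because the container exits non-zero when the exception is raised.
    DockerOperator(
        task_id="should_be_skipped",
        image="apache/airflow:2.3.3",  # any image where airflow is importable
        command=[
            "python",
            "-c",
            "from airflow.exceptions import AirflowSkipException; raise AirflowSkipException('skip me')",
        ],
    )
```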
### Operating System
Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-125-generic x86_64)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28951 | https://github.com/apache/airflow/pull/28996 | bc5cecc0db27cb8684c238b36ad12c7217d0c3ca | 3a7bfce6017207218889b66976dbee1ed84292dc | "2023-01-15T11:36:04Z" | python | "2023-01-18T21:04:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,933 | ["airflow/providers/cncf/kubernetes/decorators/kubernetes.py", "airflow/providers/cncf/kubernetes/python_kubernetes_script.jinja2", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py"] | @task.kubernetes TaskFlow decorator fails with IndexError and is unable to receive input | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes 5.0.0
### Apache Airflow version
2.5.0
### Operating System
Debian 11
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When passing arguments (either args or kwargs) to a @task.kubernetes decorated function, the following exception occurs:
Task Logs:
```
[2023-01-13, 22:05:40 UTC] {kubernetes_pod.py:621} INFO - Building pod k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55 with labels: {'dag_id': 'test_k8s_input_1673647477', 'task_id': 'k8s_with_input', 'run_id': 'backfill__2023-01-01T0000000000-c16e0472d', 'kubernetes_pod_operator': 'True', 'try_number': '1'}
[2023-01-13, 22:05:40 UTC] {kubernetes_pod.py:404} INFO - Found matching pod k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55 with labels {'airflow_kpo_in_cluster': 'True', 'airflow_version': '2.5.0', 'dag_id': 'test_k8s_input_1673647477', 'kubernetes_pod_operator': 'True', 'run_id': 'backfill__2023-01-01T0000000000-c16e0472d', 'task_id': 'k8s_with_input', 'try_number': '1'}
[2023-01-13, 22:05:40 UTC] {kubernetes_pod.py:405} INFO - `try_number` of task_instance: 1
[2023-01-13, 22:05:40 UTC] {kubernetes_pod.py:406} INFO - `try_number` of pod: 1
[2023-01-13, 22:05:40 UTC] {pod_manager.py:189} WARNING - Pod not yet started: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:41 UTC] {pod_manager.py:189} WARNING - Pod not yet started: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:42 UTC] {pod_manager.py:189} WARNING - Pod not yet started: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:43 UTC] {pod_manager.py:189} WARNING - Pod not yet started: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - + python -c 'import base64, os;x = os.environ["__PYTHON_SCRIPT"];f = open("/tmp/script.py", "w"); f.write(x); f.close()'
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - + python /tmp/script.py
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - Traceback (most recent call last):
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - File "/tmp/script.py", line 14, in <module>
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - with open(sys.argv[1], "rb") as file:
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - IndexError: list index out of range
[2023-01-13, 22:05:44 UTC] {kubernetes_pod.py:499} INFO - Deleting pod: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:44 UTC] {taskinstance.py:1772} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/decorators/kubernetes.py", line 104, in execute
return super().execute(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/decorators/base.py", line 217, in execute
return_value = super().execute(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 465, in execute
self.cleanup(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 489, in cleanup
raise AirflowException(
airflow.exceptions.AirflowException: Pod k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55 returned a failure:
```
### What you think should happen instead
The Kubernetes decorator should properly receive input. The [python command invoked here](https://github.com/apache/airflow/blob/2.5.0/airflow/providers/cncf/kubernetes/decorators/kubernetes.py#L75) does not pass the input along. Contrast this with the [docker version of the decorator](https://github.com/apache/airflow/blob/2.5.0/airflow/providers/docker/decorators/docker.py#L105), which does properly pass the pickled input.
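A sketch of what the generated `/tmp/script.py` appears to expect, inferred from the traceback above (the exact template contents and the shape of the pickled payload are assumptions): the wrapped callable's arguments arrive as a pickled file whose path must be passed as `argv[1]`, which the kubernetes decorator's invocation never provides.
```python
import pickle
import sys

# Per the traceback, line 14 of the generated script opens sys.argv[1];
# with no argument passed, this is where the IndexError comes from.
with open(sys.argv[1], "rb") as file:
    arg_dict = pickle.load(file)

# Assumed payload shape: positional and keyword arguments for the callable.
print(arg_dict["args"], arg_dict["kwargs"])
```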
### How to reproduce
Create a dag:
```py
import os

from airflow import DAG
from airflow.decorators import task

DEFAULT_TASK_ARGS = {
    "owner": "gcp-data-platform",
    "start_date": "2022-12-16",
    "retries": 0,
}


@task.kubernetes(
    image="python:3.8-slim-buster",
    namespace=os.getenv("AIRFLOW__KUBERNETES_EXECUTOR__NAMESPACE"),
    in_cluster=False,
)
def k8s_with_input(val: str) -> str:
    import datetime

    print(f"Got val: {val}")
    return val


with DAG(
    schedule_interval="@daily",
    max_active_runs=1,
    max_active_tasks=5,
    catchup=False,
    dag_id="test_oom_dag",
    default_args=DEFAULT_TASK_ARGS,
) as dag:
    output = k8s_with_input.override(task_id="k8s_with_input")("a")
```
Run and observe failure:
<img width="907" alt="image" src="https://user-images.githubusercontent.com/9200263/212427952-15466317-4e61-4b71-9971-2cdedba4f7ba.png">
Task logs above.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28933 | https://github.com/apache/airflow/pull/28942 | 73c8e7df0be8b254e3727890b51ca0f76308e6b5 | 9a5c3e0ac0b682d7f2c51727a56e06d68bc9f6be | "2023-01-13T22:08:52Z" | python | "2023-02-18T17:42:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,919 | ["airflow/api/auth/backend/kerberos_auth.py", "docs/apache-airflow/administration-and-deployment/security/api.rst"] | Airflow API kerberos authentication error | ### Apache Airflow version
2.5.0
### What happened
I configured AUTH_DB authentication for the web server and Kerberos authentication for the API. The web server works well.
Calling any API endpoint returns a 500 error. I can see the Kerberos authentication step succeeds, but the authorization step fails:
the 'user' object (which at this point is just a string) doesn't have such an attribute.
Request error
```
янв 13 13:54:14 nginx-test airflow[238738]: [2023-01-13 13:54:14,923] {app.py:1741} ERROR - Exception on /api/v1/dags [GET]
янв 13 13:54:14 nginx-test airflow[238738]: Traceback (most recent call last):
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2525, in wsgi_app
янв 13 13:54:14 nginx-test airflow[238738]: response = self.full_dispatch_request()
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1822, in full_dispatch_request
янв 13 13:54:14 nginx-test airflow[238738]: rv = self.handle_user_exception(e)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1820, in full_dispatch_request
янв 13 13:54:14 nginx-test airflow[238738]: rv = self.dispatch_request()
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1796, in dispatch_request
янв 13 13:54:14 nginx-test airflow[238738]: return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/decorator.py", line 68, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: response = function(request)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/uri_parsing.py", line 149, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: response = function(request)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/validation.py", line 399, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: return function(request)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/response.py", line 112, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: response = function(request)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/parameter.py", line 120, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: return function(**kwargs)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/airflow/api_connexion/security.py", line 50, in decorated
янв 13 13:54:14 nginx-test airflow[238738]: if appbuilder.sm.check_authorization(permissions, kwargs.get("dag_id")):
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/airflow/www/security.py", line 715, in check_authorization
янв 13 13:54:14 nginx-test airflow[238738]: can_access_all_dags = self.has_access(*perm)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/airflow/www/security.py", line 419, in has_access
янв 13 13:54:14 nginx-test airflow[238738]: if (action_name, resource_name) in user.perms:
янв 13 13:54:14 nginx-test airflow[238738]: AttributeError: 'str' object has no attribute 'perms'
янв 13 13:54:14 nginx-test airflow[238738]: 127.0.0.1 - - [13/Jan/2023:13:54:14 +0300] "GET /api/v1/dags HTTP/1.1" 500 1561 "-" "curl/7.68.0"
```
Starting airflow-webserver log (no errors)
```
янв 13 13:38:51 nginx-test airflow[238502]: ____________ _____________
янв 13 13:38:51 nginx-test airflow[238502]: ____ |__( )_________ __/__ /________ __
янв 13 13:38:51 nginx-test airflow[238502]: ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
янв 13 13:38:51 nginx-test airflow[238502]: ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
янв 13 13:38:51 nginx-test airflow[238502]: _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
янв 13 13:38:51 nginx-test airflow[238502]: Running the Gunicorn Server with:
янв 13 13:38:51 nginx-test airflow[238502]: Workers: 4 sync
янв 13 13:38:51 nginx-test airflow[238502]: Host: 0.0.0.0:10000
янв 13 13:38:51 nginx-test airflow[238502]: Timeout: 120
янв 13 13:38:51 nginx-test airflow[238502]: Logfiles: - -
янв 13 13:38:51 nginx-test airflow[238502]: Access Logformat:
янв 13 13:38:51 nginx-test airflow[238502]: =================================================================
янв 13 13:38:51 nginx-test airflow[238502]: [2023-01-13 13:38:51,209] {webserver_command.py:431} INFO - Received signal: 15. Closing gunicorn.
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [WARNING] Worker with pid 238525 was terminated due to signal 15
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [WARNING] Worker with pid 238523 was terminated due to signal 15
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [WARNING] Worker with pid 238526 was terminated due to signal 15
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [WARNING] Worker with pid 238524 was terminated due to signal 15
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [INFO] Shutting down: Master
янв 13 13:38:52 nginx-test systemd[1]: airflow-webserver.service: Succeeded.
янв 13 13:38:52 nginx-test systemd[1]: Stopped Airflow webserver daemon.
янв 13 13:38:52 nginx-test systemd[1]: Started Airflow webserver daemon.
янв 13 13:38:54 nginx-test airflow[238732]: /usr/local/lib/python3.8/dist-packages/airflow/api/auth/backend/kerberos_auth.py:50 DeprecationWarning: '_request_ctx_stack' is dep>
янв 13 13:38:54 nginx-test airflow[238732]: [2023-01-13 13:38:54,393] {kerberos_auth.py:78} INFO - Kerberos: hostname nginx-test.mycompany
янв 13 13:38:54 nginx-test airflow[238732]: [2023-01-13 13:38:54,393] {kerberos_auth.py:88} INFO - Kerberos init: airflow nginx-test.mycompany
янв 13 13:38:54 nginx-test airflow[238732]: [2023-01-13 13:38:54,394] {kerberos_auth.py:93} INFO - Kerberos API: server is airflow/nginx-test.mycompany@MYCOMPANY>
янв 13 13:38:56 nginx-test airflow[238732]: [2023-01-13 13:38:56 +0300] [238732] [INFO] Starting gunicorn 20.1.0
янв 13 13:38:56 nginx-test airflow[238732]: [2023-01-13 13:38:56 +0300] [238732] [INFO] Listening at: http://0.0.0.0:10000 (238732)
янв 13 13:38:56 nginx-test airflow[238732]: [2023-01-13 13:38:56 +0300] [238732] [INFO] Using worker: sync
янв 13 13:38:56 nginx-test airflow[238735]: [2023-01-13 13:38:56 +0300] [238735] [INFO] Booting worker with pid: 238735
янв 13 13:38:57 nginx-test airflow[238736]: [2023-01-13 13:38:57 +0300] [238736] [INFO] Booting worker with pid: 238736
янв 13 13:38:57 nginx-test airflow[238737]: [2023-01-13 13:38:57 +0300] [238737] [INFO] Booting worker with pid: 238737
янв 13 13:38:57 nginx-test airflow[238738]: [2023-01-13 13:38:57 +0300] [238738] [INFO] Booting worker with pid: 238738
```
I tried to skip the rights check by commenting out the problem lines and returning True from the has_access function (and, if I remember correctly, from one more function in security.py), and got it working. But that was just a hack to confirm where the problem is.
### What you think should happen instead
It should return the correct JSON response with code 200.
### How to reproduce
1. webserver_config.py: default
2. airflow.cfg changed lines:
```
[core]
security = kerberos
[api]
auth_backends = airflow.api.auth.backend.kerberos_auth,airflow.api.auth.backend.session
[kerberos]
ccache = /tmp/airflow_krb5_ccache
principal = airflow/nginx-test.mycompany
reinit_frequency = 3600
kinit_path = kinit
keytab = /root/airflow/airflow2.keytab
forwardable = True
include_ip = True
[webserver]
base_url = http://localhost:10000
web_server_port = 10000
```
3. Create a keytab file with the airflow principal.
4. Log in as a domain user and make a request, for example:
curl --verbose --negotiate -u : http://nginx-test.mycompany:10000/api/v1/dags
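The same request can be made from Python, assuming the `requests-kerberos` package and a valid ticket from `kinit` (a sketch mirroring the curl call above):
```python
import requests
from requests_kerberos import HTTPKerberosAuth

resp = requests.get(
    "http://nginx-test.mycompany:10000/api/v1/dags",
    auth=HTTPKerberosAuth(),
)
# Observed: 500, because authorization fails after Kerberos authentication succeeds.
print(resp.status_code)
```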
### Operating System
Ubuntu. VERSION="20.04.5 LTS (Focal Fossa)"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28919 | https://github.com/apache/airflow/pull/29054 | 80dbfbc7ad8f63db8565baefa282bc01146803fe | 135aef30be3f9b8b36556f3ff5e0d184b0f74f22 | "2023-01-13T11:27:58Z" | python | "2023-01-20T16:05:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,912 | ["docs/apache-airflow/start.rst"] | quick start fails: DagRun for example_bash_operator with run_id or execution_date of '2015-01-01' not found | ### Apache Airflow version
2.5.0
### What happened
I followed the [quick start guide](https://airflow.apache.org/docs/apache-airflow/stable/start.html).
When I execute `airflow tasks run example_bash_operator runme_0 2015-01-01` I get the following error:
```
[2023-01-13 15:50:42,493] {dagbag.py:538} INFO - Filling up the DagBag from /root/airflow/dags
[2023-01-13 15:50:42,761] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): prepare_email>, send_email already registered for DAG: example_dag_decorator
[2023-01-13 15:50:42,761] {taskmixin.py:205} WARNING - Dependency <Task(EmailOperator): send_email>, prepare_email already registered for DAG: example_dag_decorator
[2023-01-13 15:50:42,830] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_entry_group>, delete_entry_group already registered for DAG: example_complex
[2023-01-13 15:50:42,830] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_entry_group>, create_entry_group already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_entry_gcs>, delete_entry already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_entry>, create_entry_gcs already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_tag>, delete_tag already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_tag>, create_tag already registered for DAG: example_complex
[2023-01-13 15:50:42,852] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,852] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,853] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,853] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,854] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,854] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,855] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,855] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,855] {example_python_operator.py:90} WARNING - The virtalenv_python example task requires virtualenv, please install it.
[2023-01-13 15:50:43,608] {tutorial_taskflow_api_virtualenv.py:29} WARNING - The tutorial_taskflow_api_virtualenv example DAG requires virtualenv, please install it.
/root/miniconda3/lib/python3.7/site-packages/airflow/models/dag.py:3524 RemovedInAirflow3Warning: Param `schedule_interval` is deprecated and will be removed in a future release. Please use `schedule` instead.
Traceback (most recent call last):
File "/root/miniconda3/bin/airflow", line 8, in <module>
sys.exit(main())
File "/root/miniconda3/lib/python3.7/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 384, in task_run
ti, _ = _get_ti(task, args.map_index, exec_date_or_run_id=args.execution_date_or_run_id, pool=args.pool)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 163, in _get_ti
session=session,
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 118, in _get_dag_run
) from None
airflow.exceptions.DagRunNotFound: DagRun for example_bash_operator with run_id or execution_date of '2023-11-01' not found
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 18.04.3 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28912 | https://github.com/apache/airflow/pull/28949 | c57c23dce39992eafcf86dc08a1938d7d407803f | a4f6f3d6fe614457ff95ac803fd15e9f0bd38d27 | "2023-01-13T07:55:02Z" | python | "2023-01-15T21:01:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,910 | ["airflow/providers/amazon/aws/operators/ecs.py"] | Misnamed param in EcsRunTaskOperator | ### What do you see as an issue?
In the `EcsRunTaskOperator`, one of the params in the docstring is `region_name`, but it should be `region`:
https://github.com/apache/airflow/blob/2.5.0/airflow/providers/amazon/aws/operators/ecs.py#L281
### Solving the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28910 | https://github.com/apache/airflow/pull/29562 | eb46eeb33d58436aa5860f2f0031fad3dea3ce3b | cadab59e8df90588b07cf8d9ee3ce13f9a79f656 | "2023-01-13T01:21:52Z" | python | "2023-02-16T03:13:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,891 | ["chart/templates/pgbouncer/pgbouncer-deployment.yaml", "chart/values.schema.json", "chart/values.yaml"] | Pgbouncer metrics exporter restarts | ### Official Helm Chart version
1.6.0
### Apache Airflow version
2.4.2
### Kubernetes Version
1.21
### Helm Chart configuration
Nothing really specific
### Docker Image customizations
_No response_
### What happened
From time to time the pgbouncer metrics exporter fails its healthcheck.
When it fails its healthchecks three times in a row, pgbouncer stops being reachable and drops all ongoing connections.
Is it possible to make the pgbouncer healthcheck configurable, at least the timeout parameter of one second, which seems really short?
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28891 | https://github.com/apache/airflow/pull/29752 | d0fba865aed1fc21d82f0a61cddb1fa0bd4b7d0a | 44f89c6db115d91aba91955fde42475d1a276628 | "2023-01-12T15:18:28Z" | python | "2023-02-27T18:20:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,888 | ["airflow/www/app.py", "tests/www/views/test_views_base.py"] | `webserver.instance_name` shows markup text in `<title>` tag | ### Apache Airflow version
2.5.0
### What happened
https://github.com/apache/airflow/pull/20888 enables the use of markup to style the `webserver.instance_name`.
However, if the instance name has HTML code, this will also be reflected in the `<title>` tag, as shown in the screenshot below.
![image](https://user-images.githubusercontent.com/562969/212091882-d33bb0f7-75c2-4c92-bd4f-4bc7ba6be8db.png)
This is not a pretty behaviour.
### What you think should happen instead
Ideally, if `webserver.instance_name_has_markup = True`, then the text inside the `<title>` should be stripped of HTML code.
For example:
- Set `webserver.instance_name` to some text with markup, like `<b style="color: red">title</b>`
- Set `webserver.instance_name_has_markup` to `true`
This is how the `<title>` tag should look like:
```html
<title>DAGs - title</title>
```
Instead of:
```
<title>DAGs - <b style="color: red">title</b></title>
```
### How to reproduce
- Airflow version 2.3+, which is [when this change has been introduced](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#instance-name-has-markup)
- Set `webserver.instance_name` to some text with markup, like `<b style="color: red">title</b>`
- Set `webserver.instance_name_has_markup` to `true`
### Operating System
Doesn't matter
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28888 | https://github.com/apache/airflow/pull/28894 | 696b91fafe4a557f179098e0609eb9d9dcb73f72 | 971e3226dc3ca43900f0b79c42afffb14c59d691 | "2023-01-12T14:32:55Z" | python | "2023-03-16T11:34:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,884 | ["airflow/providers/microsoft/azure/hooks/wasb.py", "tests/providers/microsoft/azure/hooks/test_wasb.py"] | Azure Blob storage exposes crendentials in UI | ### Apache Airflow version
Other Airflow 2 version (please specify below)
2.3.3
### What happened
Azure Blob Storage exposes credentials in the UI
<img width="1249" alt="Screenshot 2023-01-12 at 14 00 05" src="https://user-images.githubusercontent.com/35199552/212072943-adca75c4-2226-4251-9446-e8f18fb22081.png">
### What you think should happen instead
_No response_
### How to reproduce
Create an Azure Blob Storage connection, then click the edit button on the connection.
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28884 | https://github.com/apache/airflow/pull/28914 | 6f4544cfbdfa3cabb3faaeea60a651206cd84e67 | 3decb189f786781bb0dfb3420a508a4a2a22bd8b | "2023-01-12T13:01:24Z" | python | "2023-01-13T15:02:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,847 | ["airflow/www/static/js/callModal.js", "airflow/www/templates/airflow/dag.html", "airflow/www/views.py"] | Graph UI: Add Filter Downstream & Filter DownStream & Upstream | ### Description
Currently Airflow has a `Filter Upstream` View/option inside the graph view. (As documented [here](https://docs.astronomer.io/learn/airflow-ui#graph-view) under `Filter Upstream`)
<img width="682" alt="image" src="https://user-images.githubusercontent.com/9246654/211711759-670a1180-7f90-4ecd-84b0-2f3b290ff477.png">
It would be great if there were also the options
1. `Filter Downstream` &
2. `Filter Downstream & Upstream`
### Use case/motivation
Sometimes it is useful to view downstream tasks, and both downstream & upstream tasks, when reviewing DAGs. This feature would make it as easy to view those as it is to view upstream tasks today.
### Related issues
I found nothing with a quick search
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28847 | https://github.com/apache/airflow/pull/29226 | 624520db47f736af820b4bc834a5080111adfc96 | a8b2de9205dd805ee42cf6b0e15e7e2805752abb | "2023-01-11T03:35:33Z" | python | "2023-02-03T15:04:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,830 | ["airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py", "airflow/providers/amazon/aws/waiters/README.md", "airflow/providers/amazon/aws/waiters/dynamodb.json", "docs/apache-airflow-providers-amazon/transfer/dynamodb_to_s3.rst", "tests/providers/amazon/aws/transfers/test_dynamodb_to_s3.py", "tests/providers/amazon/aws/waiters/test_custom_waiters.py", "tests/system/providers/amazon/aws/example_dynamodb_to_s3.py"] | Export DynamoDB table to S3 with PITR | ### Description
Airflow provides the Amazon DynamoDB to Amazon S3 below.
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/transfer/dynamodb_to_s3.html
Most data engineers build their "export DDB data to S3" pipelines using exports "within the point in time recovery window".
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Client.export_table_to_point_in_time
I appreciate if airflow has this function as a native function.
### Use case/motivation
My daily batch job exports its data with pitr option. All of tasks is written by apache-airflow-providers-amazon except "export_table_to_point_in_time" task.
"export_table_to_point_in_time" task only used the python operator. I expect I can unify the task as apache-airflow-providers-amazon library.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28830 | https://github.com/apache/airflow/pull/31142 | 71c26276bcd3ddd5377d620e6b8baef30b72eaa0 | cd3fa33e82922e01888d609ed9c24b9c2dadfa27 | "2023-01-10T13:44:29Z" | python | "2023-05-09T23:56:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,825 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Bad request when triggering dag run with `note` in payload | ### Apache Airflow version
2.5.0
### What happened
Specifying a `note` in the payload (as mentioned [in the doc](https://airflow.apache.org/docs/apache-airflow/2.5.0/stable-rest-api-ref.html#operation/post_dag_run)) when triggering a new dag run yields a 400 Bad Request.
(Git Version: .release:2.5.0+fa2bec042995004f45b914dd1d66b466ccced410)
### What you think should happen instead
As far as I understand the documentation, I should be able to set a note for this dag run, but that is not the case.
### How to reproduce
This is a local airflow, using default credentials and default setup when following [this guide](https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#)
DAG:
<details>
```
import airflow
from airflow import DAG
import logging
from airflow.operators.python import PythonOperator
from airflow.operators.dummy import DummyOperator
from datetime import timedelta
logger = logging.getLogger("airflow.task")
default_args = {
"owner": "airflow",
"depends_on_past": False,
"retries": 0,
"retry_delay": timedelta(minutes=5),
}
def log_body(**context):
logger.info(f"Body: {context['dag_run'].conf}")
with DAG(
"my-validator",
default_args=default_args,
schedule_interval=None,
start_date=airflow.utils.dates.days_ago(0),
catchup=False
) as dag:
(
PythonOperator(
task_id="abcde",
python_callable=log_body,
provide_context=True
)
>> DummyOperator(
task_id="todo"
)
)
```
</details>
Request:
<details>
```
curl --location --request POST '0.0.0.0:8080/api/v1/dags/my-validator/dagRuns' \
--header 'Authorization: Basic YWlyZmxvdzphaXJmbG93' \
--header 'Content-Type: application/json' \
--data-raw '{
"conf": {
"key":"value"
},
"note": "test"
}'
```
</details>
Response:
<details>
```
{
"detail": "{'note': ['Unknown field.']}",
"status": 400,
"title": "Bad Request",
"type": "https://airflow.apache.org/docs/apache-airflow/2.5.0/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
</details>
Removing the `note` key returns 200... with a null `note`!
<details>
```
{
"conf": {
"key": "value"
},
"dag_id": "my-validator",
"dag_run_id": "manual__2023-01-10T10:45:26.102802+00:00",
"data_interval_end": "2023-01-10T10:45:26.102802+00:00",
"data_interval_start": "2023-01-10T10:45:26.102802+00:00",
"end_date": null,
"execution_date": "2023-01-10T10:45:26.102802+00:00",
"external_trigger": true,
"last_scheduling_decision": null,
"logical_date": "2023-01-10T10:45:26.102802+00:00",
"note": null,
"run_type": "manual",
"start_date": null,
"state": "queued"
}
```
</details>
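As a possible workaround, the 2.5 API reference also documents a separate `setNote` endpoint for existing dag runs, so (if I am reading it correctly) the note could be set right after triggering:
```
curl --location --request PATCH '0.0.0.0:8080/api/v1/dags/my-validator/dagRuns/<dag_run_id>/setNote' \
--header 'Authorization: Basic YWlyZmxvdzphaXJmbG93' \
--header 'Content-Type: application/json' \
--data-raw '{"note": "test"}'
```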
### Operating System
Ubuntu 20.04.5 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
Every time.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28825 | https://github.com/apache/airflow/pull/29228 | e626131563efb536f325a35c78585b74d4482ea3 | b94f36bf563f5c8372086cec63b74eadef638ef8 | "2023-01-10T10:53:02Z" | python | "2023-02-01T19:37:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,812 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | DatabricksSubmitRunOperator Get failed for Multi Task Databricks Job Run | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
We are running DatabricksSubmitRunOperator to run a multi-task Databricks job, and the issue reproduces across essentially every provider version I have tried. When the Databricks job fails, DatabricksSubmitRunOperator raises the error below. This happens because the operator calls the get-output API with the job run id instead of the individual task run ids.
Error
```console
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 355, in _do_api_call
for attempt in self._get_retry_object():
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/tenacity/__init__.py", line 382, in __iter__
do = self.iter(retry_state=retry_state)
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/tenacity/__init__.py", line 349, in iter
return fut.result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 437, in result
return self.__get_result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 365, in _do_api_call
response.raise_for_status()
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/operators/databricks.py", line 375, in execute
_handle_databricks_operator_execution(self, hook, self.log, context)
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/operators/databricks.py", line 90, in _handle_databricks_operator_execution
run_output = hook.get_run_output(operator.run_id)
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/hooks/databricks.py", line 280, in get_run_output
run_output = self._do_api_call(OUTPUT_RUNS_JOB_ENDPOINT, json)
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 371, in _do_api_call
raise AirflowException(
airflow.exceptions.AirflowException: Response: b'{"error_code":"INVALID_PARAMETER_VALUE","message":"Retrieving the output of runs with multiple tasks is not supported. Please retrieve the output of each individual task run instead."}', Status Code: 400
[2023-01-10, 05:15:12 IST] {taskinstance.py} INFO - Marking task as FAILED. dag_id=experiment_metrics_store_experiment_4, task_id=, execution_date=20230109T180804, start_date=20230109T180810, end_date=20230109T181512
[2023-01-10, 05:15:13 IST] {warnings.py} WARNING - /home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/utils/email.py:119: PendingDeprecationWarning: Fetching SMTP credentials from configuration variables will be deprecated in a future release. Please set credentials using a connection instead.
send_mime_email(e_from=mail_from, e_to=recipients, mime_msg=msg, conn_id=conn_id, dryrun=dryrun)
```
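To illustrate the behaviour I would expect instead, here is a rough sketch against the Databricks Jobs 2.1 REST API (host, token and run id are placeholders; this is not the hook's API):
```python
import requests

host = "https://<workspace>.cloud.databricks.com"   # placeholder
headers = {"Authorization": "Bearer <token>"}       # placeholder

# Fetch the parent (multi-task) run, then request the output of each task run individually
run = requests.get(f"{host}/api/2.1/jobs/runs/get", headers=headers, params={"run_id": 12345}).json()
for task in run.get("tasks", []):
    output = requests.get(
        f"{host}/api/2.1/jobs/runs/get-output", headers=headers, params={"run_id": task["run_id"]}
    ).json()
    print(task.get("task_key"), output.get("error"))
```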
### Apache Airflow version
2.3.2
### Operating System
macOS
### Deployment
Other
### Deployment details
_No response_
### What happened
_No response_
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28812 | https://github.com/apache/airflow/pull/25427 | 87a0bd969b5bdb06c6e93236432eff6d28747e59 | 679a85325a73fac814c805c8c34d752ae7a94312 | "2023-01-09T19:20:39Z" | python | "2022-08-03T10:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,806 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"] | BaseSQLToGCSOperator no longer returns at least one file even if empty | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.6.0
### Apache Airflow version
2.5.0
### Operating System
Debian GNU/Linux
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
PR `Expose SQL to GCS Metadata (https://github.com/apache/airflow/pull/24382)` made a breaking change [here](https://github.com/apache/airflow/blob/3eee33ac8cb74cfbb08bce9090e9c601cf98da44/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L286) that results in no files being returned when there are no data rows (empty table) rather than a single empty file as in the past.
### What you think should happen instead
I would like to preserve the original behavior of having at least one file returned even if it is empty, or to make that behavior optional via a new parameter.
The original behavior can be implemented with the following code change:
FROM:
```
if file_to_upload["file_row_count"] > 0:
yield file_to_upload
```
TO:
```
if file_no == 0 or file_to_upload["file_row_count"] > 0:
yield file_to_upload
```
### How to reproduce
Create a DAG that uses BaseSQLToGCSOperator with a SQL command that references an empty SQL table or returns no rows. The `execute` method will not write any files.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28806 | https://github.com/apache/airflow/pull/28959 | 7f2b065ccd01071cff8f298b944d81f3ff3384b5 | 5350be2194250366536db7f78b88dc8e49c9620e | "2023-01-09T16:56:12Z" | python | "2023-01-19T17:10:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,803 | ["airflow/datasets/manager.py", "airflow/jobs/scheduler_job.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | statsd metric for dataset count | ### Description
A count of datasets that are currently registered/declared in an Airflow deployment.
### Use case/motivation
Would be nice to see how deployments are adopting datasets.
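A rough sketch of what the emission could look like (the metric name and placement are only illustrative, not an agreed proposal):
```python
from airflow.models.dataset import DatasetModel
from airflow.stats import Stats
from airflow.utils.session import create_session

# e.g. emitted periodically by the scheduler alongside the existing pool/dag metrics
with create_session() as session:
    Stats.gauge("dataset.count", session.query(DatasetModel).count())
```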
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28803 | https://github.com/apache/airflow/pull/28907 | 5d84b59554c93fd22e92b46a1061b40b899a8dec | 7689592c244111b24bc52e7428c5a3bb80a4c2d6 | "2023-01-09T14:51:24Z" | python | "2023-01-18T09:35:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,789 | ["airflow/cli/cli_parser.py", "setup.cfg"] | Add colors in help outputs of Airfow CLI commands | ### Body
Folowing up after https://github.com/apache/airflow/pull/22613#issuecomment-1374530689 - seems that there is a new [rich-argparse](https://github.com/hamdanal/rich-argparse) project that might give us the option without rewriting Airflow's argument parsing to click (click has a number of possible performance issues that might impact airlfow's speed of CLI command parsing)
Seems this might be rather easy thing to do (just adding the formatter class for argparse).
Would be nice if someone implements it and tests (also for performance of CLI).
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28789 | https://github.com/apache/airflow/pull/29116 | c310fb9255ba458b2842315f65f59758b76df9d5 | fdac67b3a5350ab4af79fd98612592511ca5f3fc | "2023-01-07T23:05:56Z" | python | "2023-02-08T11:04:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,785 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/dag_processing/manager.py"] | AIP-44 Migrate DagFileProcessorManager.clear_nonexistent_import_errors to Internal API | https://github.com/apache/airflow/blob/main/airflow/dag_processing/manager.py#L773 | https://github.com/apache/airflow/issues/28785 | https://github.com/apache/airflow/pull/28976 | ca9a59b3e8c08286c8efd5ca23a509f9178a3cc9 | 09b3a29972430e5749d772359692fe4a9d528e48 | "2023-01-07T20:06:27Z" | python | "2023-01-21T03:33:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,772 | ["airflow/utils/json.py", "airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_utils.py"] | DAG Run List UI Breaks when a non-JSON serializable value is added to dag_run.conf | ### Apache Airflow version
2.5.0
### What happened
When accessing `dag_run.conf` via a task's context, I was able to add a value that is non-JSON serializable. When I tried to access the Dag Run List UI (`/dagrun/list/`) or the Dag's Grid View, I was met with these error messages respectively:
**Dag Run List UI**
```
Ooops!
Something bad has happened.
Airflow is used by many users, and it is very likely that others had similar problems and you can easily find
a solution to your problem.
Consider following these steps:
* gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment)
* find similar issues using:
* [GitHub Discussions](https://github.com/apache/airflow/discussions)
* [GitHub Issues](https://github.com/apache/airflow/issues)
* [Stack Overflow](https://stackoverflow.com/questions/tagged/airflow)
* the usual search engine you use on a daily basis
* if you run Airflow on a Managed Service, consider opening an issue using the service support channels
* if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a [bug report](https://github.com/apache/airflow/issues/new/choose).
Make sure however, to include all relevant details and results of your investigation so far.
```
**Grid View**
```
Auto-refresh Error
<!DOCTYPE html> <html lang="en"> <head> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css"> </head> <body> <div class="container"> <h1> Ooops! </h1> <div> <pre> Something bad has happened. Airflow is used by many users, and it is very likely that others had similar problems and you can easily find a solution to your problem. Consider following these steps: * gather the relevant information (detailed logs
```
I was able to push the same value to XCom with `AIRFLOW__CORE__ENABLE_XCOM_PICKLING=True`, and the XCom List UI (`/xcom/list/`) did **not** throw an error.
In the postgres instance I am using for the Airflow DB, both `dag_run.conf` & `xcom.value` have `BYTEA` types.
### What you think should happen instead
Since we are able to add (and commit) a non-JSON serializable value into a Dag Run's conf, the UI should not break when trying to load this value. We could also ensure that one DAG Run's conf does not break the List UI for all Dag Runs (across all DAGs), and the DAG's Grid View.
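One way the list view could guard against this (just a sketch of the idea, not the actual Airflow rendering code):
```python
import json


def render_conf(conf):
    """Render a dag_run.conf value defensively instead of failing the whole list view."""
    try:
        return json.dumps(conf)
    except TypeError:
        # e.g. bytes values such as b"1234" are not JSON serializable
        return repr(conf)
```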
### How to reproduce
- Set `AIRFLOW__CORE__ENABLE_XCOM_PICKLING=True`
- Trigger this DAG:
```
import datetime
from airflow.decorators import dag, task
from airflow.models.xcom import XCom
@dag(
schedule_interval=None,
start_date=datetime.datetime(2023, 1, 1),
)
def ui_issue():
@task()
def update_conf(**kwargs):
dag_conf = kwargs["dag_run"].conf
dag_conf["non_json_serializable_value"] = b"1234"
print(dag_conf)
@task()
def push_to_xcom(**kwargs):
dag_conf = kwargs["dag_run"].conf
print(dag_conf)
XCom.set(key="dag_conf", value=dag_conf, dag_id=kwargs["ti"].dag_id, task_id=kwargs["ti"].task_id, run_id=kwargs["ti"].run_id)
return update_conf() >> push_to_xcom()
dag = ui_issue()
```
- View both the Dag Runs and XCom lists in the UI.
- The DAG Run List UI should break, and the XCom List UI should show a value of `{'non_json_serializable_value': b'1234'}` for `ui_issue.push_to_xcom`.
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
The XCom List UI was able to render this value. We could extend this capability to the DAG Run List UI.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28772 | https://github.com/apache/airflow/pull/28777 | 82c5a5f343d2310822f7bb0d316efa0abe9d4a21 | 8069b500e8487675df0472b4a5df9081dcfa9d6c | "2023-01-06T19:10:49Z" | python | "2023-04-03T08:46:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,766 | ["airflow/cli/commands/connection_command.py", "tests/cli/commands/test_connection_command.py"] | Cannot create connection without defining host using CLI | ### Apache Airflow version
2.5.0
### What happened
In order to send logs to s3 bucket after finishing the task, I added a connection to airflow using cli.
```airflow connections add connection_id_1 --conn-uri aws://s3/?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
Then I got a logging warning saying:
[2023-01-06T13:28:39.585+0000] {logging_mixin.py:137} WARNING - <string>:8 DeprecationWarning: Host s3 specified in the connection is not used. Please, set it on extra['endpoint_url'] instead
I then tried to remove the host from the `conn-uri` I provided, but every attempt to create a connection failed (list of my attempts below):
```airflow connections add connection_id_1 --conn-uri aws://?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
```airflow connections add connection_id_1 --conn-uri aws:///?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
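For completeness, a possible alternative worth noting: the same connection could be expressed without a host at all via `--conn-json` (assuming that flag behaves as documented in this version; I have not verified it avoids the warning):
```bash
airflow connections add connection_id_1 --conn-json '{
    "conn_type": "aws",
    "extra": {
        "region_name": "eu-west-1",
        "endpoint_url": "https://s3.eu-west-1.amazonaws.com"
    }
}'
```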
### What you think should happen instead
I believe there are 2 options:
1. Allow to create connection without defining host
or
2. Remove the warning log
### How to reproduce
Create an S3 connection using CLI:
```airflow connections add connection_id_1 --conn-uri aws://s3/?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
### Operating System
Linux - official airflow image from docker hub apache/airflow:slim-2.5.0
### Versions of Apache Airflow Providers
```
apache-airflow-providers-cncf-kubernetes | 5.0.0 | Kubernetes
apache-airflow-providers-common-sql | 1.3.1 | Common SQL Provider
apache-airflow-providers-databricks | 4.0.0 | Databricks
apache-airflow-providers-ftp | 3.2.0 | File Transfer Protocol (FTP)
apache-airflow-providers-hashicorp | 3.2.0 | Hashicorp including Hashicorp Vault
apache-airflow-providers-http | 4.1.0 | Hypertext Transfer Protocol (HTTP)
apache-airflow-providers-imap | 3.1.0 | Internet Message Access Protocol (IMAP)
apache-airflow-providers-postgres | 5.3.1 | PostgreSQL
apache-airflow-providers-sqlite | 3.3.1 | SQLite
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
This log message is printed every other minute, so it is pretty annoying.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28766 | https://github.com/apache/airflow/pull/28922 | c5ee4b8a3a2266ef98b379ee28ed68ff1b59ac5f | d8b84ce0e6d36850cd61b1ce37840c80aaec0116 | "2023-01-06T13:43:51Z" | python | "2023-01-13T21:41:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,756 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | All Airflow Configurations set via Environment Variable are masked when `expose_config` is set as `non-sensitive-only` | ### Apache Airflow version
2.5.0
### What happened
In [Airflow 2.4.0](https://github.com/apache/airflow/blob/main/RELEASE_NOTES.rst#airflow-240-2022-09-19), a new feature was added that added an option to mask sensitive data in UI configuration page ([PR](https://github.com/apache/airflow/pull/25346)). I have set `AIRFLOW__WEBSERVER__EXPOSE_CONFIG` as `NON-SENSITIVE-ONLY`.
The feature only works partially: the `airflow.cfg` file display correctly marks only the [sensitive configurations](https://github.com/apache/airflow/blob/2.5.0/airflow/configuration.py#L149-L160) as `< hidden >`. However, the `Running Configuration` table below the file display marks every configuration set via environment variables as `< hidden >`, which I believe is unintended.
I did not change `airflow.cfg` so the value here is displaying the default value of `False` as expected.
![Screen Shot 2023-01-05 at 1 39 11 PM](https://user-images.githubusercontent.com/5952735/210891805-1a5f6a6b-1afe-4d05-b03d-61ac583441fc.png)
The value for `expose_config` I expect to be shown as `NON-SENSITIVE-ONLY` but it shown as `< hidden >`.
![Screen Shot 2023-01-05 at 1 39 27 PM](https://user-images.githubusercontent.com/5952735/210891803-dba826d4-2d3c-4781-aeae-43c46e31fa89.png)
### What you think should happen instead
As mentioned previously, I expect the value for `expose_config` to be shown as `NON-SENSITIVE-ONLY`.
Only the [sensitive variables](https://github.com/apache/airflow/blob/2.5.0/airflow/configuration.py#L149-L160) should be set as `< hidden >`.
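As a sanity check outside the UI, the effective value is still readable via the CLI, which suggests only the web rendering is affected:
```bash
airflow config get-value webserver expose_config
# expected output: NON-SENSITIVE-ONLY
```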
### How to reproduce
Set an Airflow configuration through the environment variable and check on the Configuration page.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28756 | https://github.com/apache/airflow/pull/28802 | 9a7f07491e603123182adfd5706fbae524e33c0d | 0a8d0ab56689c341e65a36c0287c9d635bae1242 | "2023-01-05T22:46:30Z" | python | "2023-01-09T16:43:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,751 | ["airflow/providers/google/cloud/operators/cloud_base.py", "tests/providers/google/cloud/operators/test_cloud_base.py"] | KubernetesExecutor leaves failed pods due to deepcopy issue with Google providers | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
With Airflow 2.3 and 2.4 there appears to be a bug in the KubernetesExecutor when used in conjunction with the Google airflow providers. This bug does not affect Airflow 2.2 due to the pip version requirements.
The bug specifically presents itself when using nearly any Google provider operator. During the pod lifecycle, all is well until the executor in the pod starts to clean up following a successful run. Airflow itself still sees the task marked as a success, but in Kubernetes, while the task is finishing up after reporting status, it actually crashes and puts the pod into a Failed state silently:
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 103, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 382, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 189, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 247, in _run_task_by_local_task_job
run_job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 247, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 137, in _execute
self.handle_task_exit(return_code)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 168, in handle_task_exit
self._run_mini_scheduler_on_child_tasks()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 253, in _run_mini_scheduler_on_child_tasks
partial_dag = task.dag.partial_subset(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2188, in partial_subset
dag.task_dict = {
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2189, in <dictcomp>
t.task_id: _deepcopy_task(t)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2186, in _deepcopy_task
return copy.deepcopy(t, memo)
File "/usr/local/lib/python3.9/copy.py", line 153, in deepcopy
y = copier(memo)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1163, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo))
File "/usr/local/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/lib/python3.9/copy.py", line 264, in _reconstruct
y = func(*args)
File "/usr/local/lib/python3.9/enum.py", line 384, in __call__
return cls.__new__(cls, value)
File "/usr/local/lib/python3.9/enum.py", line 702, in __new__
raise ve_exc
ValueError: <object object at 0x7f570181a3c0> is not a valid _MethodDefault
```
Based on a quick look, it appears to be related to the default argument that Google is using in its operators, which happens to be an Enum member and fails during a deepcopy at the end of the task.
Example operator that is affected: https://github.com/apache/airflow/blob/403ed7163f3431deb7fc21108e1743385e139907/airflow/providers/google/cloud/hooks/dataproc.py#L753
Reference to the Google Python API core which has the Enum causing the problem: https://github.com/googleapis/python-api-core/blob/main/google/api_core/gapic_v1/method.py#L31
### What you think should happen instead
Kubernetes pods should succeed, be marked as `Completed`, and then be gracefully terminated.
### How to reproduce
Use any `apache-airflow-providers-google` >= 7.0.0 which includes `google-api-core` >= 2.2.2. Run a DAG with a task which uses any of the Google operators which have `_MethodDefault` as a default argument.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-apache-hive==5.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.3.1
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-presto==4.2.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.2.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28751 | https://github.com/apache/airflow/pull/29518 | ec31648be4c2fc4d4a7ef2bd23be342ca1150956 | 5a632f78eb6e3dcd9dc808e73b74581806653a89 | "2023-01-05T17:31:57Z" | python | "2023-03-04T22:44:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,746 | ["airflow/www/utils.py", "tests/test_utils/www.py", "tests/www/views/conftest.py", "tests/www/views/test_views_home.py"] | UIAlert returns AttributeError: 'NoneType' object has no attribute 'roles' when specifying AUTH_ROLE_PUBLIC | ### Apache Airflow version
2.5.0
### What happened
When adding a [role-based UIAlert following these docs](https://airflow.apache.org/docs/apache-airflow/stable/howto/customize-ui.html#add-custom-alert-messages-on-the-dashboard), I received the below stacktrace:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/auth.py", line 47, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/views.py", line 780, in index
dashboard_alerts = [
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/views.py", line 781, in <listcomp>
fm for fm in settings.DASHBOARD_UIALERTS if fm.should_show(get_airflow_app().appbuilder.sm)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/utils.py", line 820, in should_show
user_roles = {r.name for r in securitymanager.current_user.roles}
AttributeError: 'NoneType' object has no attribute 'roles'
```
On further inspection, I realized this is happening because my webserver_config.py has this specification:
```py
# Uncomment and set to desired role to enable access without authentication
AUTH_ROLE_PUBLIC = 'Viewer'
```
When we set AUTH_ROLE_PUBLIC to a role like Viewer, [this line](https://github.com/apache/airflow/blob/ad7f8e09f8e6e87df2665abdedb22b3e8a469b49/airflow/www/utils.py#L828) raises an exception because `securitymanager.current_user` is None.
Relevant code snippet:
```py
def should_show(self, securitymanager) -> bool:
    """Determine if the user should see the message based on their role membership"""
    if self.roles:
        user_roles = {r.name for r in securitymanager.current_user.roles}
        if not user_roles.intersection(set(self.roles)):
            return False
    return True
```
### What you think should happen instead
If we detect that the securitymanager.current_user is None, we should not attempt to get its `roles` attribute.
Instead, we can check to see if the AUTH_ROLE_PUBLIC is set in webserver_config.py which will tell us if a public role is being used. If it is, we can assume that because the current_user is None, the current_user's role is the public role.
In code, this might look like this:
```py
def should_show(self, securitymanager) -> bool:
"""Determine if the user should see the message based on their role membership"""
if self.roles:
user_roles = set()
if hasattr(securitymanager.current_user, "roles"):
user_roles = {r.name for r in securitymanager.current_user.roles}
elif "AUTH_ROLE_PUBLIC" in securitymanager.appbuilder.get_app.config:
# Give anonymous user public role
user_roles = set([securitymanager.appbuilder.get_app.config["AUTH_ROLE_PUBLIC"]])
if not user_roles.intersection(set(self.roles)):
return False
return True
```
Expected result on the webpage:
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/9200263/210823778-4c619b75-40a3-4caa-9a2c-073651da7f0d.png">
### How to reproduce
Start breeze:
```
breeze --python 3.7 --backend postgres start-airflow
```
After the webserver, triggerer, and scheduler are started, modify webserver_config.py to uncomment AUTH_ROLE_PUBLIC and add airflow_local_settings.py:
```bash
cd $AIRFLOW_HOME
# Uncomment AUTH_ROLE_PUBLIC
vi webserver_config.py
mkdir -p config
# Add sample airflow_local_settings.py below
vi config/airflow_local_settings.py
```
```py
from airflow.www.utils import UIAlert
DASHBOARD_UIALERTS = [
UIAlert("Role based alert", category="warning", roles=["Viewer"]),
]
```
Restart the webserver and navigate to airflow. You should see this page:
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/9200263/210820838-e74ffc23-7b6b-42dc-85f1-29ab8b0ee3d5.png">
### Operating System
Debian 11
### Versions of Apache Airflow Providers
2.5.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Locally
### Anything else
This problem only occurs if you add a role based UIAlert and are using AUTH_ROLE_PUBLIC
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28746 | https://github.com/apache/airflow/pull/28781 | 1e9c8e52fda95a0a30b3ae298d5d3adc1971ed45 | f17e2ba48b59525655a92e04684db664a672918f | "2023-01-05T15:55:51Z" | python | "2023-01-10T05:51:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,745 | ["chart/templates/logs-persistent-volume-claim.yaml", "chart/values.schema.json", "chart/values.yaml"] | annotations in logs pvc | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.5.0
### Kubernetes Version
v1.22.8+d48376b
### Helm Chart configuration
_No response_
### Docker Image customisations
_No response_
### What happened
When creating the dags PVC, it is possible to inject annotations into the object.
### What you think should happen instead
There should be the possibility to inject annotations into the logs PVC as well.
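A possible shape for the values, mirroring what the dags PVC already accepts (the annotation key/value below is just a placeholder for whatever the platform expects):
```yaml
logs:
  persistence:
    enabled: true
    size: 50Gi
    # proposed, analogous to dags.persistence.annotations
    annotations:
      example.com/backup: "disabled"
```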
### How to reproduce
_No response_
### Anything else
We are using annotations on PVCs to disable the creation of backup snapshots provided by our company platform (OpenShift).
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28745 | https://github.com/apache/airflow/pull/29270 | 6ef5ba9104f5a658b003f8ade274f19d7ec1b6a9 | 5835b08e8bc3e11f4f98745266d10bbae510b258 | "2023-01-05T13:22:16Z" | python | "2023-02-20T22:57:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,691 | ["airflow/providers/amazon/aws/utils/waiter.py", "tests/providers/amazon/aws/utils/test_waiter.py"] | Fix custom waiter function in AWS provider package | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.5.0
### Operating System
MacOS
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
Discussed in #28294
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28691 | https://github.com/apache/airflow/pull/28753 | 2b92c3c74d3259ebac714f157c525836f0af50f0 | ce188e509389737b3c0bdc282abea2425281c2b7 | "2023-01-03T14:34:10Z" | python | "2023-01-05T22:09:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,680 | ["airflow/providers/amazon/aws/operators/batch.py", "tests/providers/amazon/aws/operators/test_batch.py"] | Improve AWS Batch hook and operator | ### Description
AWS Batch hook and operator do not support the boto3 parameter shareIdentifier, which is required to submit jobs to specific types of queues.
### Use case/motivation
I wish that AWS Batch hook and operator support the submit of jobs to queues that require shareIdentifier parameter.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28680 | https://github.com/apache/airflow/pull/30829 | bd542fdf51ad9550e5c4348f11e70b5a6c9adb48 | 612676b975a2ff26541bb2581fbdf2befc6c3de9 | "2023-01-02T14:47:23Z" | python | "2023-04-28T22:04:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,670 | ["airflow/providers/telegram/CHANGELOG.rst", "airflow/providers/telegram/hooks/telegram.py", "airflow/providers/telegram/provider.yaml", "docs/spelling_wordlist.txt", "generated/provider_dependencies.json", "tests/providers/telegram/hooks/test_telegram.py"] | Support telegram-bot v20+ | ### Body
Currently our Telegram integration uses the v13 python-telegram-bot library. On 1 January 2023 a new, backwards-incompatible version of python-telegram-bot was released: https://pypi.org/project/python-telegram-bot/20.0/#history and, at least as reported by MyPy and our test suite failures, version 20 needs some changes to work.
This transition guide might be helpful: https://github.com/python-telegram-bot/python-telegram-bot/wiki/Transition-guide-to-Version-20.0
In the meantime we limit python-telegram-bot to < 20.0.0.
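For illustration, the main incompatibility is that the v20 API is fully async; a rough sketch of what a send now looks like (token and chat id are placeholders):
```python
import asyncio

from telegram import Bot


async def main():
    # In python-telegram-bot >= 20, Bot is used as an async context manager
    # and its methods are coroutines that must be awaited.
    async with Bot(token="<token>") as bot:
        await bot.send_message(chat_id=123456, text="hello")


asyncio.run(main())
```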
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28670 | https://github.com/apache/airflow/pull/28953 | 68412e166414cbf6228385e1e118ec0939857496 | 644cea14fff74d34f823b5c52c9dbf5bad33bd52 | "2023-01-02T06:58:45Z" | python | "2023-02-23T03:24:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,662 | ["airflow/providers/apache/beam/operators/beam.py"] | BeamRunGoPipelineOperator: temp dir with Go file from GCS is removed before starting the pipeline | ### Apache Airflow Provider(s)
apache-beam
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-beam==4.1.0
apache-airflow-providers-google==8.6.0
### Apache Airflow version
2.5.0
### Operating System
macOS 13.1
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When using the `BeamRunGoPipelineOperator` with a `go_file` on GCS, the object is downloaded to a temporary directory; however, the directory with the file has already been removed by the time it is needed, i.e. when executing `go mod init` and starting the pipeline.
### What you think should happen instead
The `BeamRunGoPipelineOperator.execute` method enters into a `tempfile.TemporaryDirectory` context manager using [with](https://github.com/apache/airflow/blob/2.5.0/airflow/providers/apache/beam/operators/beam.py#L588) when downloading the `go_file` from GCS to the local filesystem. On completion of the context, this temporary directory is removed. `BeamHook.start_go_pipeline`, which uses the file, is called outside of the context however, which means the file no longer exists when `go mod init` is called.
A suggested solution is to use the `enter_context` method of the existing `ExitStack` to also enter into the TemporaryDirectory context manager. This allows the go_file to still exist when it is time to initialize the go module and start the pipeline:
```python
with ExitStack() as exit_stack:
if self.go_file.lower().startswith("gs://"):
gcs_hook = GCSHook(self.gcp_conn_id, self.delegate_to)
tmp_dir = exit_stack.enter_context(tempfile.TemporaryDirectory(prefix="apache-beam-go"))
tmp_gcs_file = exit_stack.enter_context(
gcs_hook.provide_file(object_url=self.go_file, dir=tmp_dir)
)
self.go_file = tmp_gcs_file.name
self.should_init_go_module = True
```
### How to reproduce
The problem can be reproduced by creating a DAG which uses the `BeamRunGoPipelineOperator` and passing a `go_file` with a GS URI:
```python
import pendulum
from airflow import DAG
from airflow.providers.apache.beam.operators.beam import BeamRunGoPipelineOperator
with DAG(
dag_id="beam_go_dag",
start_date=pendulum.today("UTC"),
) as dag:
BeamRunGoPipelineOperator(
task_id="beam_go_pipeline",
go_file="gs://my-bucket/main.go"
)
```
### Anything else
Relevant logs:
```
[2023-01-01T12:41:06.155+0100] {taskinstance.py:1303} INFO - Executing <Task(BeamRunGoPipelineOperator): beam_go_pipeline> on 2023-01-01 00:00:00+00:00
[2023-01-01T12:41:06.411+0100] {taskinstance.py:1510} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=beam_go_dag
AIRFLOW_CTX_TASK_ID=beam_go_pipeline
AIRFLOW_CTX_EXECUTION_DATE=2023-01-01T00:00:00+00:00
AIRFLOW_CTX_TRY_NUMBER=1
AIRFLOW_CTX_DAG_RUN_ID=backfill__2023-01-01T00:00:00+00:00
[2023-01-01T12:41:06.430+0100] {base.py:73} INFO - Using connection ID 'google_cloud_default' for task execution.
[2023-01-01T12:41:06.441+0100] {credentials_provider.py:323} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook.
[2023-01-01T12:41:08.701+0100] {gcs.py:323} INFO - File downloaded to /var/folders/1_/7h5npt456j5f063tq7ngyxdw0000gn/T/apache-beam-gosmk3lv_4/tmp6j9g5090main.go
[2023-01-01T12:41:08.704+0100] {process_utils.py:179} INFO - Executing cmd: go mod init main
[2023-01-01T12:41:08.712+0100] {taskinstance.py:1782} ERROR - Task failed with exception
Traceback (most recent call last):
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/google/cloud/hooks/gcs.py", line 402, in provide_file
yield tmp_file
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/apache/beam/operators/beam.py", line 621, in execute
self.beam_hook.start_go_pipeline(
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/apache/beam/hooks/beam.py", line 339, in start_go_pipeline
init_module("main", working_directory)
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/google/go_module_utils.py", line 37, in init_module
execute_in_subprocess(go_mod_init_cmd, cwd=go_module_path)
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/utils/process_utils.py", line 168, in execute_in_subprocess
execute_in_subprocess_with_kwargs(cmd, cwd=cwd)
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/utils/process_utils.py", line 180, in execute_in_subprocess_with_kwargs
with subprocess.Popen(
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/subprocess.py", line 969, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/subprocess.py", line 1845, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/1_/7h5npt456j5f063tq7ngyxdw0000gn/T/apache-beam-gosmk3lv_4'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/apache/beam/operators/beam.py", line 584, in execute
with ExitStack() as exit_stack:
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/contextlib.py", line 576, in __exit__
raise exc_details[1]
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/contextlib.py", line 561, in __exit__
if cb(*exc_details):
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/google/cloud/hooks/gcs.py", line 399, in provide_file
with NamedTemporaryFile(suffix=file_name, dir=dir) as tmp_file:
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/tempfile.py", line 502, in __exit__
self.close()
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/tempfile.py", line 509, in close
self._closer.close()
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/tempfile.py", line 446, in close
unlink(self.name)
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/1_/7h5npt456j5f063tq7ngyxdw0000gn/T/apache-beam-gosmk3lv_4/tmp6j9g5090main.go'
[2023-01-01T12:41:08.829+0100] {taskinstance.py:1321} INFO - Marking task as FAILED. dag_id=beam_go_dag, task_id=beam_go_pipeline, execution_date=20230101T000000, start_date=20230101T114106, end_date=20230101T114108
[...]
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28662 | https://github.com/apache/airflow/pull/28664 | 675af73ceb5bc8b03d46a7cd903a73f9b8faba6f | 8da678ccd2e5a30f9c2d22c7526b7a238c185d2f | "2023-01-01T15:27:59Z" | python | "2023-01-03T09:08:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,658 | ["tests/jobs/test_local_task_job.py"] | Fix Quarantine tests | ### Body
We have several tests marked in the code base with `@pytest.mark.quarantined`
It means that the tests are flaky and if fail in CI it does not fail the build.
The goal is to fix the tests and make them stable.
This task is to gather all of them under the same issue instead of dedicated issue per test.
- [x] [TestImpersonation](https://github.com/apache/airflow/blob/bfcae349b88fd959e32bfacd027a5be976fe2132/tests/core/test_impersonation_tests.py#L117)
- [x] [TestImpersonationWithCustomPythonPath](https://github.com/apache/airflow/blob/bfcae349b88fd959e32bfacd027a5be976fe2132/tests/core/test_impersonation_tests.py#L181)
- [x] [test_exception_propagation](https://github.com/apache/airflow/blob/76f81cd4a7433b7eeddb863b2ae6ee59176cf816/tests/jobs/test_local_task_job.py#L772)
- [x] [test_localtaskjob_maintain_heart_rate](https://github.com/apache/airflow/blob/76f81cd4a7433b7eeddb863b2ae6ee59176cf816/tests/jobs/test_local_task_job.py#L402)
- [x] [test_exception_propagation](https://github.com/apache/airflow/blob/4d0fd8ef6adc35f683c7561f05688a65fd7451f4/tests/executors/test_celery_executor.py#L103)
- [x] [test_process_sigterm_works_with_retries](https://github.com/apache/airflow/blob/65010fda091242870a410c65478eae362899763b/tests/jobs/test_local_task_job.py#L770)
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28658 | https://github.com/apache/airflow/pull/29087 | 90ce88bf34b2337f89eed67e41092f53bf24e9c1 | a6e21bc6ce428eadf44f62b05aeea7bbd3447a7b | "2022-12-31T15:37:37Z" | python | "2023-01-25T22:49:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,637 | ["docs/helm-chart/index.rst"] | version 2.4.1 migration job "run-airflow-migrations" run once only when deploy via helm or flux/kustomization | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.4.1
### Kubernetes Version
v4.5.4
### Helm Chart configuration
_No response_
### Docker Image customisations
_No response_
### What happened
Manually copied from [the Q&A 27992 migration job](https://github.com/apache/airflow/discussions/27992) (the "create issue from discussion" button did not work).
I found my migration job would not run a second time (the first run was when the default Airflow was deployed onto Kubernetes, and it had no issues). I then started to apply changes to the values.yaml file, such as **making the database Azure PostgreSQL**, but the values did not take effect; see the screenshots below.
Of course my debugging skills on Kubernetes are not strong, so I would need extra help if more information is needed.
![image](https://user-images.githubusercontent.com/11322886/209687297-7d83e4aa-9096-467e-851a-2557928da2b6.png)
![image](https://user-images.githubusercontent.com/11322886/209687323-fc853fcc-438c-4bea-8182-793dac722cae.png)
![image](https://user-images.githubusercontent.com/11322886/209687349-5c043188-3393-49b2-a73f-a997e55d6c3c.png)
```
database:
sql_alchemy_conn_secret: airflow-postgres-redis
sql_alchemy_connect_args:
{
"keepalives": 1,
"keepalives_idle": 30,
"keepalives_interval": 5,
"keepalives_count": 5,
}
postgresql:
enabled: false
pgbouncer:
enabled: false
# Airflow database & redis config
data:
metadataSecretName: airflow-postgres-redis
```
Checking again, the pod is waiting for the migration:
![image](https://user-images.githubusercontent.com/11322886/209689640-fdeed08d-19b3-43d5-a736-466cf36237ba.png)
Below is the first success at the initial installation (which did not use an external DB):
```
kubectl describe job airflow-airflow-run-airflow-migrations
Name: airflow-airflow-run-airflow-migrations
Namespace: airflow
Selector: controller-uid=efdc3c7b-5172-4841-abcf-17e055fa6e2e
Labels: app.kubernetes.io/managed-by=Helm
chart=airflow-1.7.0
component=run-airflow-migrations
helm.toolkit.fluxcd.io/name=airflow
helm.toolkit.fluxcd.io/namespace=airflow
heritage=Helm
release=airflow-airflow
tier=airflow
Annotations: batch.kubernetes.io/job-tracking:
meta.helm.sh/release-name: airflow-airflow
meta.helm.sh/release-namespace: airflow
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Tue, 27 Dec 2022 14:21:50 +0100
Completed At: Tue, 27 Dec 2022 14:22:29 +0100
Duration: 39s
Pods Statuses: 0 Active (0 Ready) / 1 Succeeded / 0 Failed
Pod Template:
Labels: component=run-airflow-migrations
controller-uid=efdc3c7b-5172-4841-abcf-17e055fa6e2e
job-name=airflow-airflow-run-airflow-migrations
release=airflow-airflow
tier=airflow
Service Account: airflow-airflow-migrate-database-job
Containers:
run-airflow-migrations:
Image: apache/airflow:2.4.1
Port: <none>
Host Port: <none>
Args:
bash
-c
exec \
airflow db upgrade
Environment:
PYTHONUNBUFFERED: 1
AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-airflow-fernet-key'> Optional: false
AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-airflow-airflow-metadata'> Optional: false
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-airflow-airflow-metadata'> Optional: false
AIRFLOW_CONN_AIRFLOW_DB: <set to the key 'connection' in secret 'airflow-airflow-airflow-metadata'> Optional: false
AIRFLOW__WEBSERVER__SECRET_KEY: <set to the key 'webserver-secret-key' in secret 'airflow-airflow-webserver-secret-key'> Optional: false
AIRFLOW__CELERY__BROKER_URL: <set to the key 'connection' in secret 'airflow-airflow-broker-url'> Optional: false
Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-airflow-config
Optional: false
Events: <none>
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
My further experiments tell me the jobs are only run once.
More independent tests could be done with a bit of help, for example to determine what kind of changes will trigger the migration job to run.
See the Helm release history below: the first installation worked, but I could not make the third release succeed even though the values are 100% correct. So **the short description of the bug/issue is: HelmRelease in combination with `flux` has issues with the DB migration jobs (they only run once, even if that run is successful), which makes it a blocker for further upgrades.**
```
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Wed Dec 28 02:22:42 2022 superseded airflow-1.7.0 2.4.1 Install complete
2 Wed Dec 28 02:43:25 2022 deployed airflow-1.7.0 2.4.1 Upgrade complete
```
See the equivalent values below; even trying to disable the DB migration did not make Flux work with it.
```
createUserJob:
useHelmHooks: false
migrateDatabaseJob:
useHelmHooks: false
config:
webserver:
expose_config: 'non-sensitive-only'
postgresql:
enabled: false
pgbouncer:
enabled: true
# The maximum number of connections to PgBouncer
maxClientConn: 100
# The maximum number of server connections to the metadata database from PgBouncer
metadataPoolSize: 10
# The maximum number of server connections to the result backend database from PgBouncer
resultBackendPoolSize: 5
# Airflow database & redis config
data:
metadataSecretName: airflow-postgres-redis
# to generate strong secret: python3 -c 'import secrets; print(secrets.token_hex(16))'
webserverSecretKeySecretName: airflow-webserver-secret
```
And see the 2 jobs below:
```
$ kubectl describe job -n airflow
Name: airflow-airflow-create-user
Namespace: airflow
Selector: controller-uid=8b09e28b-ba3a-4cee-b20f-693a3aa15363
Labels: app.kubernetes.io/managed-by=Helm
chart=airflow-1.7.0
component=create-user-job
helm.toolkit.fluxcd.io/name=airflow
helm.toolkit.fluxcd.io/namespace=airflow
heritage=Helm
release=airflow-airflow
tier=airflow
Annotations: batch.kubernetes.io/job-tracking:
meta.helm.sh/release-name: airflow-airflow
meta.helm.sh/release-namespace: airflow
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Wed, 28 Dec 2022 03:22:46 +0100
Completed At: Wed, 28 Dec 2022 03:24:32 +0100
Duration: 106s
Pods Statuses: 0 Active (0 Ready) / 1 Succeeded / 0 Failed
Pod Template:
Labels: component=create-user-job
controller-uid=8b09e28b-ba3a-4cee-b20f-693a3aa15363
job-name=airflow-airflow-create-user
release=airflow-airflow
tier=airflow
Service Account: airflow-airflow-create-user-job
Containers:
create-user:
Image: apache/airflow:2.4.1
Port: <none>
Host Port: <none>
Args:
bash
-c
exec \
airflow users create "$@"
--
-r
Admin
-u
admin
-e
[email protected]
-f
admin
-l
user
-p
admin
Environment:
AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-airflow-fernet-key'> Optional: false
AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW_CONN_AIRFLOW_DB: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__WEBSERVER__SECRET_KEY: <set to the key 'webserver-secret-key' in secret 'airflow-airflow-webserver-secret-key'> Optional: false
AIRFLOW__CELERY__BROKER_URL: <set to the key 'connection' in secret 'airflow-airflow-broker-url'> Optional: false
Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-airflow-config
Optional: false
Events: <none>
Name: airflow-airflow-run-airflow-migrations
Namespace: airflow
Selector: controller-uid=5da8c81f-7920-4eaf-9d7a-58a48c740bdc
Labels: app.kubernetes.io/managed-by=Helm
chart=airflow-1.7.0
component=run-airflow-migrations
helm.toolkit.fluxcd.io/name=airflow
helm.toolkit.fluxcd.io/namespace=airflow
heritage=Helm
release=airflow-airflow
tier=airflow
Annotations: batch.kubernetes.io/job-tracking:
meta.helm.sh/release-name: airflow-airflow
meta.helm.sh/release-namespace: airflow
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Wed, 28 Dec 2022 03:22:46 +0100
Completed At: Wed, 28 Dec 2022 03:23:07 +0100
Duration: 21s
Pods Statuses: 0 Active (0 Ready) / 1 Succeeded / 0 Failed
Pod Template:
Labels: component=run-airflow-migrations
controller-uid=5da8c81f-7920-4eaf-9d7a-58a48c740bdc
job-name=airflow-airflow-run-airflow-migrations
release=airflow-airflow
tier=airflow
Service Account: airflow-airflow-migrate-database-job
Containers:
run-airflow-migrations:
Image: apache/airflow:2.4.1
Port: <none>
Host Port: <none>
Args:
bash
-c
exec \
airflow db upgrade
Environment:
PYTHONUNBUFFERED: 1
AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-airflow-fernet-key'> Optional: false
AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW_CONN_AIRFLOW_DB: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__WEBSERVER__SECRET_KEY: <set to the key 'webserver-secret-key' in secret 'airflow-airflow-webserver-secret-key'> Optional: false
AIRFLOW__CELERY__BROKER_URL: <set to the key 'connection' in secret 'airflow-airflow-broker-url'> Optional: false
Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-airflow-config
Optional: false
Events: <none>
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28637 | https://github.com/apache/airflow/pull/29078 | 30ad26e705f50442f05dd579990372196323fc86 | 6c479437b1aedf74d029463bda56b42950278287 | "2022-12-29T10:27:55Z" | python | "2023-01-27T20:58:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,615 | ["airflow/dag_processing/processor.py", "airflow/models/dagbag.py", "tests/models/test_dagbag.py"] | AIP-44 Migrate Dagbag.sync_to_db to internal API. | This method is used in DagFileProcessor.process_file - it may be easier to migrate all its internal calls instead of the whole method. | https://github.com/apache/airflow/issues/28615 | https://github.com/apache/airflow/pull/29188 | 05242e95bbfbaf153e4ae971fc0d0a5314d5bdb8 | 5c15b23023be59a87355c41ab23a46315cca21a5 | "2022-12-27T20:09:25Z" | python | "2023-03-12T10:02:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,614 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/api_internal/internal_api_call.py", "airflow/models/dag.py", "tests/api_internal/test_internal_api_call.py"] | AIP-44 Migrate DagModel.get_paused_dag_ids to Internal API | null | https://github.com/apache/airflow/issues/28614 | https://github.com/apache/airflow/pull/28693 | f114c67c03a9b4257cc98bb8a970c6aed8d0c673 | ad738198545431c1d10619f8e924d082bf6a3c75 | "2022-12-27T20:09:14Z" | python | "2023-01-20T19:08:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,613 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/trigger.py"] | AIP-44 Migrate Trigger class to Internal API | null | https://github.com/apache/airflow/issues/28613 | https://github.com/apache/airflow/pull/29099 | 69babdcf7449c95fea7fe3b9055c677b92a74298 | ee0a56a2caef0ccfb42406afe57b9d2169c13a01 | "2022-12-27T20:09:03Z" | python | "2023-02-20T21:26:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,612 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/xcom.py"] | AIP-44 Migrate XCom get*/clear* to Internal API | null | https://github.com/apache/airflow/issues/28612 | https://github.com/apache/airflow/pull/29083 | 9bc48747ddbd609c2bd3baa54a5d0472e9fdcbe4 | a1ffb26e5bcf4547e3b9e494cf7ccd24af30c2e6 | "2022-12-27T20:08:50Z" | python | "2023-01-22T19:19:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,510 | [".pre-commit-config.yaml", "STATIC_CODE_CHECKS.rst", "airflow/cli/commands/info_command.py", "scripts/ci/pre_commit/pre_commit_check_provider_yaml_files.py", "scripts/in_container/run_provider_yaml_files_check.py"] | Add pre-commit/test to verify extra links refer to existed classes | ### Body
We had an issue where the extra link class (`AIPlatformConsoleLink`) was removed in a [PR](https://github.com/apache/airflow/pull/26836) without removing the class from the `provider.yaml` extra links. This resulted in a webserver exception, as shown in https://github.com/apache/airflow/pull/28449
**The Task:**
Add validation that classes of extra-links in provider.yaml are importable
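A minimal sketch of what such a check could look like, assuming the existing `airflow.utils.module_loading.import_string` helper and a `provider.yaml` that has already been parsed into a dict (the function name and the way the YAML is loaded are illustrative, not the actual implementation):
```python
# Hypothetical check: every class listed under `extra-links` in a parsed
# provider.yaml must be importable, otherwise it is reported as an error.
from airflow.utils.module_loading import import_string


def check_extra_links(provider_yaml: dict) -> list:
    errors = []
    for class_path in provider_yaml.get("extra-links", []):
        try:
            import_string(class_path)  # raises ImportError if the class is gone
        except ImportError as exc:
            errors.append(f"{class_path}: {exc}")
    return errors
```
Wiring something like this into the existing provider.yaml pre-commit/CI checks would have caught the removed `AIPlatformConsoleLink` before it reached the webserver.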
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28510 | https://github.com/apache/airflow/pull/28516 | 7ccbe4e7eaa529641052779a89e34d54c5a20f72 | e47c472e632effbfe3ddc784788a956c4ca44122 | "2022-12-20T22:35:11Z" | python | "2022-12-22T02:25:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,483 | ["airflow/www/static/css/main.css"] | Issues with Custom Menu Items on Smaller Windows | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We take advantage of the custom menu items that Flask AppBuilder offers to provide a variety of dropdown menus with custom DAG filters. We've noticed two things:
1. When you have too many dropdown menu items in a single category, several menu items are unreachable when using the Airflow UI on a small screen:
<img width="335" alt="Screenshot 2022-12-19 at 6 34 24 PM" src="https://user-images.githubusercontent.com/40223998/208548419-f9d1ff57-6cad-4a40-bc58-dbf20148a92a.png">
2. When you have too many menu categories, multiple rows of dropdown menus are displayed, but cover some other components.
<img width="1077" alt="Screenshot 2022-12-19 at 6 32 05 PM" src="https://user-images.githubusercontent.com/40223998/208548222-44e50717-9040-4899-be06-d503a8c0f69a.png">
### What you think should happen instead
1. When you have too many dropdown menu items in a single category, there should be a scrollbar.
2. When you have too many menu categories and multiple rows of dropdown menus are displayed, the menu shouldn't cover the DAG import errors or any other part of the UI
### How to reproduce
1. Add a bunch of menu items under the same category in a custom plugin and resize your window smaller
2. Add a large number of menu item categories in a custom plugin and resize your window smaller (a plugin sketch that does both is shown below).
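A hypothetical plugin that reproduces both cases might look like this (the item names and counts are made up; it simply registers many `appbuilder_menu_items`):
```python
# Hypothetical reproduction plugin: many items under one category so the
# dropdown overflows, plus many categories so the navbar wraps onto extra rows.
from airflow.plugins_manager import AirflowPlugin

many_items = [
    {"name": f"Team filter {i}", "href": f"/home?tags=team_{i}", "category": "DAG Filters"}
    for i in range(30)
]

many_categories = [
    {"name": "Filter", "href": "/home", "category": f"Category {i}"} for i in range(15)
]


class ManyMenuItemsPlugin(AirflowPlugin):
    name = "many_menu_items_plugin"
    appbuilder_menu_items = many_items + many_categories
```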
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
2.4.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
I'm happy to make a PR for this, I just don't have the frontend context. If someone can point me in the right direction, that'd be great.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28483 | https://github.com/apache/airflow/pull/28561 | ea3be1a602b3e109169c6e90e555a418e2649f9a | 2aa52f4ce78e1be7f34b0995d40be996b4826f26 | "2022-12-19T23:40:01Z" | python | "2022-12-30T01:50:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,465 | ["airflow/providers/jenkins/hooks/jenkins.py", "docs/apache-airflow-providers-jenkins/connections.rst", "tests/providers/jenkins/hooks/test_jenkins.py"] | Airflow 2.2.4 Jenkins Connection - unable to set as the hook expects to be | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hello team,
I am trying to use the `JenkinsJobTriggerOperator` version v3.1.0 on an Airflow instance version 2.2.4
Checking the documentation regarding how to set up the connection and the hook in order to use `https` instead of the default `http`, I see https://airflow.apache.org/docs/apache-airflow-providers-jenkins/3.1.0/connections.html
```
Extras (optional)
Specify whether you want to use http or https scheme by entering true to use https or false for http in extras. Default is http.
```
Unfortunately, when specifying the connection from the Airflow UI, the `Extra` field only accepts a JSON-like object, so anything other than a dictionary fails to update the extra options for that connection.
Checking in more detail what the [Jenkins hook](https://airflow.apache.org/docs/apache-airflow-providers-jenkins/3.1.0/_modules/airflow/providers/jenkins/hooks/jenkins.html#JenkinsHook.conn_name_attr) does:
```
self.connection = connection
connection_prefix = "http"
# connection.extra contains info about using https (true) or http (false)
if to_boolean(connection.extra):
connection_prefix = "https"
url = f"{connection_prefix}://{connection.host}:{connection.port}"
```
where the `connection.extra` cannot be a simple true/false string!
### What you think should happen instead
Either we should get the `http` or `https` from the `Schema`
Or we should update the [JenkinsHook](https://airflow.apache.org/docs/apache-airflow-providers-jenkins/stable/_modules/airflow/providers/jenkins/hooks/jenkins.html#JenkinsHook.default_conn_name) to read the https flag from the provided dictionary, e.g.:
`if to_boolean(connection.extra.https)`
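A hedged sketch of that second option, assuming the hook switched from the raw `extra` string to `connection.extra_dejson` (the Extra field parsed as a dict). This is illustrative only, not the actual hook code, and the helper name is made up:
```python
# Hypothetical helper the hook could use instead of to_boolean(connection.extra):
# read an "https" key from the parsed Extra JSON, e.g. {"https": true}.
def _jenkins_base_url(connection) -> str:
    extras = connection.extra_dejson  # dict parsed from the Extra field
    prefix = "https" if extras.get("https", False) else "http"
    return f"{prefix}://{connection.host}:{connection.port}"
```
Alternatively, deriving the scheme from the connection's `Schema`/URI field would avoid overloading the Extra field entirely.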
### How to reproduce
_No response_
### Operating System
macos Monterey 12.6.2
### Versions of Apache Airflow Providers
```
pip freeze | grep apache-airflow-providers
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-common-sql==1.3.1
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-jenkins==3.1.0
apache-airflow-providers-sqlite==2.1.0
```
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28465 | https://github.com/apache/airflow/pull/30301 | f7d5b165fcb8983bd82a852dcc5088b4b7d26a91 | 1f8bf783b89d440ecb3e6db536c63ff324d9fc62 | "2022-12-19T14:43:00Z" | python | "2023-03-25T19:37:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,452 | ["airflow/providers/docker/operators/docker_swarm.py", "tests/providers/docker/operators/test_docker_swarm.py"] | TaskInstances do not succeed when using enable_logging=True option in DockerSwarmOperator | ### Apache Airflow Provider(s)
docker
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-docker==3.3.0
### Apache Airflow version
2.5.0
### Operating System
centos 7
### Deployment
Other Docker-based deployment
### Deployment details
Running an a docker-swarm cluster deployed locally.
### What happened
Same issue as https://github.com/apache/airflow/issues/13675
With logging_enabled=True the DAG never completes and stays in running.
When using DockerSwarmOperator together with the default enable_logging=True option, tasks do not succeed and stay in state running. When checking the docker service logs I can clearly see that the container ran and ended successfully. Airflow however does not recognize that the container finished and keeps the tasks in state running.
### What you think should happen instead
DAG should complete.
### How to reproduce
Docker-compose deployment:
```console
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.5.0/docker-compose.yaml'
docker compose up airflow-init
docker compose up -d
```
DAG code:
```python
from airflow import DAG
from docker.types import Mount, SecretReference
from airflow.providers.docker.operators.docker_swarm import DockerSwarmOperator
from datetime import timedelta
from airflow.utils.dates import days_ago
from airflow.models import Variable
# Setup default args for the job
default_args = {
'owner': 'airflow',
'start_date': days_ago(2),
'retries': 0
}
# Create the DAG
dag = DAG(
'test_dag', # DAG ID
default_args=default_args,
schedule_interval='0 0 * * *',
catchup=False
)
# # Create the DAG object
with dag as dag:
docker_swarm_task = DockerSwarmOperator(
task_id="job_run",
image="<any image>",
execution_timeout=timedelta(minutes=5),
command="<specific code>",
api_version='auto',
tty=True,
enable_logging=True
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28452 | https://github.com/apache/airflow/pull/35677 | 3bb5978e63f3be21a5bb7ae89e7e3ce9d06a4ab8 | 882108862dcaf08e7f5da519b3d186048d4ec7f9 | "2022-12-19T03:51:53Z" | python | "2023-12-06T22:07:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,441 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator fails when schema_object is specified without schema_fields | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow 2.5.0
apache-airflow-providers-apache-beam 4.1.0
apache-airflow-providers-cncf-kubernetes 5.0.0
apache-airflow-providers-google 8.6.0
apache-airflow-providers-grpc 3.1.0
### Apache Airflow version
2.5.0
### Operating System
Debian 11
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
KubernetesExecutor
### What happened
GCSToBigQueryOperator allows multiple ways to specify the schema of the BigQuery table:
1. Setting autodetect == True
1. Setting schema_fields directly with autodetect == False
1. Setting a schema_object and optionally a schema_object_bucket with autodetect == False
This third method seems to be broken in the latest provider version (8.6.0) and will always result in this error:
```
[2022-12-16, 21:06:18 UTC] {taskinstance.py:1772} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 395, in execute
self.configuration = self._check_schema_fields(self.configuration)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 524, in _check_schema_fields
raise RuntimeError(
RuntimeError: Table schema was not found. Set autodetect=True to automatically set schema fields from source objects or pass schema_fields explicitly
```
The reason is that [this block](https://github.com/apache/airflow/blob/25bdbc8e6768712bad6043618242eec9c6632618/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L318-L320), guarded by `if self.schema_object and self.source_format != "DATASTORE_BACKUP":`, fails to set `self.schema_fields`; it only sets the local variable `schema_fields`. When `self._check_schema_fields` is subsequently called [here](https://github.com/apache/airflow/blob/25bdbc8e6768712bad6043618242eec9c6632618/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L395), we enter the [first block](https://github.com/apache/airflow/blob/25bdbc8e6768712bad6043618242eec9c6632618/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L523-L528) because `autodetect` is False and `self.schema_fields` is not set.
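A hedged sketch of what a fix might look like (not the actual provider code): assign the downloaded schema back to the instance attribute so the later `_check_schema_fields` call can see it, assuming `gcs_hook` and `json` are already available in `execute()`:
```python
# Hypothetical fix sketch inside GCSToBigQueryOperator.execute():
# keep the schema read from schema_object on the instance, not only in a local.
if self.schema_object and self.source_format != "DATASTORE_BACKUP":
    blob = gcs_hook.download(self.schema_object_bucket, self.schema_object)
    self.schema_fields = json.loads(blob)  # was: schema_fields = json.loads(...)
```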
### What you think should happen instead
No error should be raised if autodetect is set to False and a valid schema_object is provided
### How to reproduce
1. Create a simple BigQuery table with a single column col1:
```sql
CREATE TABLE `my-project.my_dataset.test_gcs_to_bigquery` (col1 INT);
```
2. Upload a json blob for this object to a bucket (e.g., data/schemas/table.json)
3. Upload a simple CSV for the source file to load to a bucket (e.g., data/source/file.csv)
4. Run the following command:
```py
gcs_to_biquery = GCSToBigQueryOperator(
task_id="gcs_to_bigquery",
destination_project_dataset_table="my-project.my_dataset.test_gcs_to_bigquery",
bucket="my_bucket_name",
create_disposition="CREATE_IF_NEEDED",
write_disposition="WRITE_TRUNCATE",
source_objects=["data/source/file.csv"],
source_format="CSV",
autodetect=False,
schema_object="data/schemas/table.json",
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28441 | https://github.com/apache/airflow/pull/28444 | 032a542feeb617d1f92580b97fa0ad3cdca09d63 | 9eacf607be109eb6ab80f7e27d234a17fb128ae0 | "2022-12-18T13:48:28Z" | python | "2022-12-20T06:14:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,393 | ["airflow/providers/google/provider.yaml"] | Webserver reports "ImportError: Module "airflow.providers.google.cloud.operators.mlengine" does not define a "AIPlatformConsoleLink" attribute/class" | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow 2.5.0
apache-airflow-providers-apache-beam 4.1.0
apache-airflow-providers-cncf-kubernetes 5.0.0
apache-airflow-providers-google 8.6.0
apache-airflow-providers-grpc 3.1.0
### Apache Airflow version
2.5.0
### Operating System
Debian 11
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
KubernetesExecutor
### What happened
We are seeing this stacktrace on our webserver when a task is clicked:
```
10.253.8.251 - - [15/Dec/2022:18:32:58 +0000] "GET /object/next_run_datasets/recs_ranking_purchase_ranker_dag HTTP/1.1" 200 2 "https://web.airflow.etsy-syseng-gke-prod.etsycloud.com/dags/recs_ranking_purchase_ranker_dag/code" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
raise ImportError(f'Module "{module_path}" does not define a "{class_name}" attribute/class')
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/module_loading.py", line 38, in import_string
imported_class = import_string(class_name)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers_manager.py", line 275, in _sanity_check
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
AttributeError: module 'airflow.providers.google.cloud.operators.mlengine' has no attribute 'AIPlatformConsoleLink'
return getattr(module, class_name)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/module_loading.py", line 36, in import_string
Traceback (most recent call last):
[2022-12-15 18:32:58,068] {providers_manager.py:243} WARNING - Exception when importing 'airflow.providers.google.cloud.operators.mlengine.AIPlatformConsoleLink' from 'apache-airflow-providers-google' package
ImportError: Module "airflow.providers.google.cloud.operators.mlengine" does not define a "AIPlatformConsoleLink" attribute/class
```
### What you think should happen instead
These errors should not appear.
### How to reproduce
Start webserver anew, navigate to a dag, click on a task, and tail webserver logs
### Anything else
[This YAML file](https://github.com/apache/airflow/blob/providers-google/8.6.0/airflow/providers/google/provider.yaml#L968) is being utilized as config which then results in the import error here: https://github.com/apache/airflow/blob/providers-google/8.6.0/airflow/providers_manager.py#L885-L891
```
extra-links:
- airflow.providers.google.cloud.operators.bigquery.BigQueryConsoleLink
- airflow.providers.google.cloud.operators.bigquery.BigQueryConsoleIndexableLink
- airflow.providers.google.cloud.operators.mlengine.AIPlatformConsoleLink
```
We should remove `AIPlatformConsoleLink` from `extra-links`, as the class was removed as of apache-airflow-providers-google 8.5.0.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28393 | https://github.com/apache/airflow/pull/28449 | b213f4fd2627bb2a2a4c96fe2845471db430aa5d | 7950fb9711384f8ac4609fc19f319edb17e296ef | "2022-12-15T22:04:26Z" | python | "2022-12-21T05:29:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,391 | ["airflow/cli/commands/task_command.py", "airflow/executors/kubernetes_executor.py", "airflow/www/views.py"] | Manual task trigger fails for kubernetes executor with psycopg2 InvalidTextRepresentation error | ### Apache Airflow version
main (development)
### What happened
Manual task trigger fails for the Kubernetes executor with the following error. Manual trigger of the DAG works without any issue.
```
[2022-12-15 20:05:38,442] {app.py:1741} ERROR - Exception on /run [POST]
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.InvalidTextRepresentation: invalid input syntax for integer: "manual"
LINE 3: ...ate = 'queued' AND task_instance.queued_by_job_id = 'manual'
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/www/auth.py", line 47, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/www/decorators.py", line 125, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/www/views.py", line 1896, in run
executor.start()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 586, in start
self.clear_not_launched_queued_tasks()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 510, in clear_not_launched_queued_tasks
queued_tis: list[TaskInstance] = query.all()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2773, in all
return self._iter().all()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter
result = self.session.execute(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1714, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.DataError: (psycopg2.errors.InvalidTextRepresentation) invalid input syntax for integer: "manual"
LINE 3: ...ate = 'queued' AND task_instance.queued_by_job_id = 'manual'
^
```
### What you think should happen instead
You should be able to trigger the task manually from the UI.
### How to reproduce
Deploy the main branch with the Kubernetes executor and a Postgres metadata DB.
### Operating System
ubuntu 20
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Python version: 3.10.9
Airflow version: 2.6.0.dev0
helm.sh/chart=postgresql-10.5.3
### Anything else
The issue is caused by this check:
https://github.com/apache/airflow/blob/b263dbcb0f84fd9029591d1447a7c843cb970f15/airflow/executors/kubernetes_executor.py#L505-L507
In `celery_executor` there is a similar check, but I believe it is not called at task instance execution time; also, since it is wrapped in a try/except, the exception is not visible.
https://github.com/apache/airflow/blob/b263dbcb0f84fd9029591d1447a7c843cb970f15/airflow/executors/celery_executor.py#L394-L412
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28391 | https://github.com/apache/airflow/pull/28394 | be0e35321f0bbd7d21c75096cad45dbe20c2359a | 9510043546d1ac8ac56b67bafa537e4b940d68a4 | "2022-12-15T20:37:26Z" | python | "2023-01-24T15:18:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,381 | ["Dockerfile.ci", "airflow/www/extensions/init_views.py", "airflow/www/package.json", "airflow/www/templates/swagger-ui/index.j2", "airflow/www/webpack.config.js", "airflow/www/yarn.lock", "setup.cfg"] | CVE-2019-17495 for swagger-ui | ### Apache Airflow version
2.5.0
### What happened
This issue https://github.com/apache/airflow/issues/18383 still isn't closed. It seems like the underlying swagger-ui bundle has been abandoned by its maintainer, and we should instead point the swagger UI bundle dependency to this version, which is kept up-to-date:
https://github.com/bartsanchez/swagger_ui_bundle
Edit: it seems like this might not be coming from swagger_ui_bundle any more, but perhaps from connexion instead. I'm not familiar with Python dependencies, so forgive me if I'm mis-reporting this.
There are CVE scanner tools that flag https://github.com/advisories/GHSA-c427-hjc3-wrfw against the apache/airflow:2.1.4 image.
The Python deps include swagger-ui-2.2.10 and swagger-ui-3.30.0 as part of the bundle, which is already included at ~/.local/lib/python3.6/site-packages/swagger_ui_bundle:
swagger-ui-2.2.10 swagger-ui-3.30.0
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28381 | https://github.com/apache/airflow/pull/28788 | 35a8ffc55af220b16ea345d770f80f698dcae3fb | 35ad16dc0f6b764322b1eb289709e493fbbb0ae0 | "2022-12-15T13:50:45Z" | python | "2023-01-10T10:24:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,356 | ["airflow/config_templates/default_webserver_config.py"] | CSRF token should be expire with session | ### Apache Airflow version
2.5.0
### What happened
In the default configuration, the CSRF token [expires in one hour](https://pythonhosted.org/Flask-WTF/config.html#forms-and-csrf). This setting leads to frequent errors in the UI – for no good reason.
### What you think should happen instead
A short expiration time for the CSRF token is not the right default in my view, and I [agree with this answer](https://security.stackexchange.com/a/56520/22108) that the CSRF token should basically never expire, instead pegging itself to the current session.
That is, the CSRF token should last as long as the current session. The easiest way to accomplish this is by generating the CSRF token from the session id.
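For reference, Flask-WTF already supports disabling the time-based expiry so the token lives as long as the session; a minimal sketch of that setting in `webserver_config.py` (whether Airflow should ship this as the default is what this issue is about):
```python
# webserver_config.py sketch: keep CSRF protection enabled, but let the token
# stay valid for the life of the session instead of expiring after one hour.
WTF_CSRF_ENABLED = True
WTF_CSRF_TIME_LIMIT = None  # None disables Flask-WTF's time-based expiry
```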
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28356 | https://github.com/apache/airflow/pull/28730 | 04306f18b0643dfed3ed97863bbcf24dc50a8973 | 543e9a592e6b9dc81467c55169725e192fe95e89 | "2022-12-14T10:21:12Z" | python | "2023-01-10T23:25:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,328 | ["airflow/executors/kubernetes_executor.py"] | Scheduler pod hang when K8s API call fail | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow version: `2.3.4`
I have deployed airflow with the official Helm in K8s with `KubernetesExecutor`. Sometimes the scheduler hang when calling K8s API. The log:
``` bash
ERROR - Exception when executing Executor.end
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 752, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 842, in _run_scheduler_loop
self.executor.heartbeat()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/base_executor.py", line 171, in heartbeat
self.sync()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 649, in sync
next_event = self.event_scheduler.run(blocking=False)
File "/usr/local/lib/python3.8/sched.py", line 151, in run
action(*argument, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/event_scheduler.py", line 36, in repeat
action(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 673, in _check_worker_pods_pending_timeout
for pod in pending_pods().items:
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 15697, in list_namespaced_pod
return self.list_namespaced_pod_with_http_info(namespace, **kwargs) # noqa: E501
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 15812, in list_namespaced_pod_with_http_info
return self.api_client.call_api(
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 373, in request
return self.rest_client.GET(url,
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 240, in GET
return self.request("GET", url,
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 213, in request
r = self.pool_manager.request(method, url,
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/request.py", line 74, in request
return self.request_encode_url(
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/request.py", line 96, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/poolmanager.py", line 376, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 815, in urlopen
return self.urlopen(
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 182, in _exit_gracefully
sys.exit(os.EX_OK)
SystemExit: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 773, in _execute
self.executor.end()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 823, in end
self._flush_task_queue()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 776, in _flush_task_queue
self.log.debug('Executor shutting down, task_queue approximate size=%d', self.task_queue.qsize())
File "<string>", line 2, in qsize
File "/usr/local/lib/python3.8/multiprocessing/managers.py", line 835, in _callmethod
kind, result = conn.recv()
File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
```
Then the executor process was killed while the pod was still running, but the scheduler no longer worked.
After restarting, the scheduler worked as usual.
### What you think should happen instead
When this error occurs, the executor should restart automatically, or the scheduler should be killed.
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28328 | https://github.com/apache/airflow/pull/28685 | 57a889de357b269ae104b721e2a4bb78b929cea9 | a3de721e2f084913e853aff39d04adc00f0b82ea | "2022-12-13T07:49:50Z" | python | "2023-01-03T11:53:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,296 | ["airflow/ti_deps/deps/prev_dagrun_dep.py", "tests/models/test_dagrun.py"] | Dynamic task mapping does not correctly handle depends_on_past | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using Airflow 2.4.2.
I've got a task that retrieves some filenames, which then creates dynamically mapped tasks to move the files, one per task.
I'm using a similar task across multiple DAGs. However, task mapping fails on some DAG runs: it inconsistently happens per DAG run, and some DAGs do not seem to be affected at all. These seem to be the DAGs where no task was ever mapped, so that the mapped task instance ended up in a Skipped state.
What happens is that multiple files will be found, but only a single dynamically mapped task will be created. This task never starts and has map_index of -1. It can be found under the "List instances, all runs" menu, but says "No Data found." under the "Mapped Tasks" tab.
When I press the "Run" button when the mapped task is selected, the following error appears:
```
Could not queue task instance for execution, dependencies not met: Previous Dagrun State: depends_on_past is true for this task's DAG, but the previous task instance has not run yet., Task has been mapped: The task has yet to be mapped!
```
The previous task *has* run however. No errors appeared in my Airflow logs.
### What you think should happen instead
The appropriate number of task instances should be created; they should correctly resolve the `depends_on_past` check and then proceed to run correctly.
### How to reproduce
This DAG reliably reproduces the error for me. The first set of mapped tasks succeeds, the subsequent ones do not.
```python
from airflow import DAG
from airflow.decorators import task
import datetime as dt
from airflow.operators.python import PythonOperator
@task
def get_filenames_kwargs():
return [
{"file_name": i}
for i in range(10)
]
def print_filename(file_name):
print(file_name)
with DAG(
dag_id="dtm_test",
start_date=dt.datetime(2022, 12, 10),
default_args={
"owner": "airflow",
"depends_on_past": True,
},
schedule="@daily",
) as dag:
get_filenames_task = get_filenames_kwargs.override(task_id="get_filenames_task")()
print_filename_task = PythonOperator.partial(
task_id="print_filename_task",
python_callable=print_filename,
).expand(op_kwargs=get_filenames_task)
# Perhaps redundant
get_filenames_task >> print_filename_task
```
### Operating System
Amazon Linux 2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28296 | https://github.com/apache/airflow/pull/28379 | a62840806c37ef87e4112c0138d2cdfd980f1681 | 8aac56656d29009dbca24a5948c2a2097043f4f3 | "2022-12-12T07:36:52Z" | python | "2022-12-15T16:43:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,272 | ["airflow/providers/amazon/aws/sensors/s3.py", "tests/providers/amazon/aws/sensors/test_s3_key.py"] | S3KeySensor 'bucket_key' instantiates as a nested list when rendered as a templated_field | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.2.0
### Apache Airflow version
2.5.0
### Operating System
Red Hat Enterprise Linux Server 7.6 (Maipo)
### Deployment
Virtualenv installation
### Deployment details
Simple virtualenv deployment
### What happened
bucket_key is a template_field in S3KeySensor, which means that it is expected to be rendered as a template field.
The supported types for the attribute are both 'str' and 'list'. There is also a [conditional operation in the __init__ function](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/sensors/s3.py#L89) of the class that relies on the type of the input data and converts the attribute to a list of strings. If a list of str is passed in through a Jinja template, **self.bucket_key** ends up as a _**doubly-nested list of strings**_, rather than a list of strings.
This is because the input value of **bucket_key** can only be a string type that represents the template-string when used as a template_field. These template_fields are then converted to their corresponding values when instantiated as a task_instance.
Example log from __init__ function:
` scheduler | DEBUG | type: <class 'list'> | val: ["{{ ti.xcom_pull(task_ids='t1') }}"]`
Example log from poke function:
`poke | DEBUG | type: <class 'list'> | val: [["s3://test_bucket/test_key1", "s3://test_bucket/test_key2"]]`
This leads to the poke function throwing an [exception](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/hooks/s3.py#L172) as each individual key needs to be a string value to parse the url, but is being passed as a list (since self.bucket_key is a nested list).
### What you think should happen instead
Instead of putting the input value of **bucket_key** in a list, we should store the value as-is upon initialization of the class, and just conditionally check the type of the attribute within the poke function.
[def \_\_init\_\_](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/sensors/s3.py#L89)
`self.bucket_key = bucket_key`
(which will store the input values correctly as a str or a list when the task instance is created and the template fields are rendered)
[def poke](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/sensors/s3.py#L127)
```
def poke(self, context: Context):
if isinstance(self.bucket_key, str):
        return self._check_key(self.bucket_key)
else:
return all(self._check_key(key) for key in self.bucket_key)
```
### How to reproduce
1. Use a template field as the bucket_key attribute in S3KeySensor
2. Pass a list of strings as the rendered template input value for the bucket_key attribute in the S3KeySensor task. (e.g. as an XCOM or Variable pulled value)
Example:
```
from airflow import DAG
from airflow.decorators import task
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

with DAG(
...
render_template_as_native_obj=True,
) as dag:
@task(task_id="get_list_of_str", do_xcom_push=True)
def get_list_of_str():
return ["s3://test_bucket/test_key1", "s3://test_bucket/test_key1"]
t = get_list_of_str()
op = S3KeySensor(task_id="s3_key_sensor", bucket_key="{{ ti.xcom_pull(task_ids='get_list_of_str') }}")
t >> op
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28272 | https://github.com/apache/airflow/pull/28340 | 9d9b15989a02042a9041ff86bc7e304bb06caa15 | 381160c0f63a15957a631da9db875f98bb8e9d64 | "2022-12-09T20:17:11Z" | python | "2022-12-14T07:47:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,271 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/variable.py"] | AIP-44 Migrate Variable to Internal API | Link: https://github.com/apache/airflow/blob/main/airflow/models/variable.py
Methods to migrate:
- val
- set
- delete
- update
Note that get_variable_from_secrets should still be executed locally.
It may be better to first close https://github.com/apache/airflow/issues/28267 | https://github.com/apache/airflow/issues/28271 | https://github.com/apache/airflow/pull/28795 | 9c3cd3803f0c4c83b1f8220525e1ac42dd676549 | bea49094be3e9d84243383017ca7d21dda62f329 | "2022-12-09T20:09:08Z" | python | "2023-01-23T11:21:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,270 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/dag_processing/manager.py", "tests/api_internal/endpoints/test_rpc_api_endpoint.py", "tests/api_internal/test_internal_api_call.py", "tests/dag_processing/test_manager.py"] | AIP-44 Migrate DagFileProcessorManager._deactivate_stale_dags to Internal API | null | https://github.com/apache/airflow/issues/28270 | https://github.com/apache/airflow/pull/28476 | c18dbe963ad87c03d49e95dfe189b765cc18fbec | 29a26a810ee8250c30f8ba0d6a72bc796872359c | "2022-12-09T19:55:02Z" | python | "2023-01-25T21:26:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,268 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/dag_processing/processor.py", "airflow/utils/log/logging_mixin.py", "tests/dag_processing/test_processor.py"] | AIP-44 Migrate DagFileProcessor.manage_slas to Internal API | null | https://github.com/apache/airflow/issues/28268 | https://github.com/apache/airflow/pull/28502 | 7e2493e3c8b2dbeb378dba4e40110ab1e4ad24da | 0359a42a3975d0d7891a39abe4395bdd6f210718 | "2022-12-09T19:54:41Z" | python | "2023-01-23T20:54:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,267 | ["airflow/api_internal/internal_api_call.py", "airflow/cli/commands/internal_api_command.py", "airflow/cli/commands/scheduler_command.py", "airflow/www/app.py", "tests/api_internal/test_internal_api_call.py"] | AIP-44 Provide information to internal_api_call decorator about the running component | Scheduler/Webserver should never use Internal API, so calling any method decorated with internal_api_call should still execute them locally | https://github.com/apache/airflow/issues/28267 | https://github.com/apache/airflow/pull/28783 | 50b30e5b92808e91ad9b6b05189f560d58dd8152 | 6046aef56b12331b2bb39221d1935b2932f44e93 | "2022-12-09T19:53:23Z" | python | "2023-02-15T01:37:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,266 | [".pre-commit-config.yaml", "airflow/cli/cli_parser.py", "airflow/cli/commands/internal_api_command.py", "airflow/www/extensions/init_views.py", "tests/cli/commands/test_internal_api_command.py"] | AIP-44 Implement standalone internal-api component | https://github.com/apache/airflow/pull/27892 added Internal API as part of Webserver.
We need to introduce an `airflow internal-api` CLI command that starts the Internal API as an independent component. | https://github.com/apache/airflow/issues/28266 | https://github.com/apache/airflow/pull/28425 | 760c52949ac41ffa7a2357aa1af0cdca163ddac8 | 367e8f135c2354310b67b3469317f15cec68dafa | "2022-12-09T19:51:08Z" | python | "2023-01-20T18:19:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,242 | ["airflow/cli/commands/role_command.py", "airflow/www/extensions/init_appbuilder.py"] | Airflow CLI to list roles is slow | ### Apache Airflow version
2.5.0
### What happened
We're currently running a suboptimal setup where database connectivity is laggy, 125ms roundtrip.
This has interesting consequences. For example, `airflow roles list` is really slow. Turns out that it's doing a lot of individual queries.
### What you think should happen instead
Ideally, listing roles should be a single (perhaps complex) query.
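A hedged sketch of the idea, assuming Flask-AppBuilder's SQLAlchemy security models (this is illustrative, not the actual CLI code):
```python
# Hypothetical sketch: load all roles and their permissions in a single query
# with an eager join, instead of issuing one lazy query per role.
from flask_appbuilder.security.sqla.models import Role
from sqlalchemy.orm import joinedload


def list_roles(session):
    return session.query(Role).options(joinedload(Role.permissions)).all()
```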
### How to reproduce
We're using py-spy to sample program execution:
```bash
$ py-spy record -o spy.svg -i --rate 250 --nonblocking airflow roles list
```
Now, to see the bad behavior, the database should incur significant latency.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28242 | https://github.com/apache/airflow/pull/28244 | 2f5c77b0baa0ab26d2c51fa010850653ded80a46 | e24733662e95ad082e786d4855066cd4d36015c9 | "2022-12-08T22:18:08Z" | python | "2022-12-09T12:47:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,227 | ["airflow/utils/sqlalchemy.py", "tests/utils/test_sqlalchemy.py"] | Scheduler error: 'V1PodSpec' object has no attribute '_ephemeral_containers' | ### Apache Airflow version
2.5.0
### What happened
After upgrading 2.2.5 -> 2.5.0, the scheduler fails with the error:
```
AttributeError: 'V1PodSpec' object has no attribute '_ephemeral_containers'
```
Tried this with no luck:
```
airflow dags reserialize
```
Full Traceback:
```text
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 73, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 43, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 247, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 759, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 889, in _run_scheduler_loop
num_finished_events = self._process_executor_events(session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 705, in _process_executor_events
self.executor.send_callback(request)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/celery_kubernetes_executor.py", line 213, in send_callback
self.callback_sink.send(request)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/callbacks/database_callback_sink.py", line 34, in send
db_callback = DbCallbackRequest(callback=callback, priority_weight=10)
File "<string>", line 4, in __init__
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/state.py", line 480, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/state.py", line 477, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/db_callback_request.py", line 46, in __init__
self.callback_data = callback.to_json()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/callbacks/callback_requests.py", line 91, in to_json
val = BaseSerialization.serialize(self.__dict__, strict=True)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in serialize
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in <dictcomp>
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 450, in serialize
return cls._encode(cls.serialize(var.__dict__, strict=strict), type_=DAT.SIMPLE_TASK_INSTANCE)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in serialize
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in <dictcomp>
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in serialize
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in <dictcomp>
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 412, in serialize
json_pod = PodGenerator.serialize_pod(var)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/kubernetes/pod_generator.py", line 411, in serialize_pod
return api_client.sanitize_for_serialization(pod)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 241, in sanitize_for_serialization
return {key: self.sanitize_for_serialization(val)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 241, in <dictcomp>
return {key: self.sanitize_for_serialization(val)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 237, in sanitize_for_serialization
obj_dict = {obj.attribute_map[attr]: getattr(obj, attr)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 239, in <dictcomp>
if getattr(obj, attr) is not None}
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 397, in ephemeral_containers
return self._ephemeral_containers
AttributeError: 'V1PodSpec' object has no attribute '_ephemeral_containers'
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
AWS EKS
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28227 | https://github.com/apache/airflow/pull/28454 | dc06bb0e26a0af7f861187e84ce27dbe973b731c | 27f07b0bf5ed088c4186296668a36dc89da25617 | "2022-12-08T15:44:30Z" | python | "2022-12-26T07:56:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,195 | ["airflow/providers/common/sql/operators/sql.py", "airflow/providers/common/sql/operators/sql.pyi", "tests/providers/common/sql/operators/test_sql.py"] | SQLTableCheckOperator doesn't correctly handle templated partition clause | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.3.1
### Apache Airflow version
2.5.0
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other Docker-based deployment
### Deployment details
Docker image based on the official image (couple of tools added) deployed on AWS ECS Fargate
### What happened
I have a task which uses the table check operator to run some table stakes data validation. But I don't want it failing every time it runs in our test environments which do not get records every day. So there's a templated switch in the sql:
```python
test_data_changed_yesterday = SQLTableCheckOperator(
task_id="test_data_changed_yesterday",
table="reporting.events",
conn_id="pg_conn",
checks={"changed_record_count": {"check_statement": "count(*) > 0"}},
partition_clause="""
{% if var.value.get('is_test_env', False) %}
modifieddate >= '2015-12-01T01:01:01.000Z'
{% else %}
modifieddate >= '{{ data_interval_start.isoformat() }}' and
modifieddate < '{{ data_interval_end.isoformat() }}'
{% endif %}
""",
)
```
This shows correctly in the rendered field for the task. Not pretty, but it works:
```
('\n'
' \n'
" modifieddate >= '2015-12-01T01:01:01.000Z'\n"
' \n'
' ')
```
However, in the logs I see it's running this query:
```
[2022-12-07, 11:49:34 UTC] {sql.py:364} INFO - Running statement: SELECT check_name, check_result FROM (
SELECT 'changed_record_count' AS check_name, MIN(changed_record_count) AS check_result
FROM (SELECT CASE WHEN count(*) > 0 THEN 1 ELSE 0 END AS changed_record_count
FROM reporting.events WHERE
{% if var.value.get('is_test_env', False) %}
modifieddate >= '2015-12-01T01:01:01.000Z'
{% else %}
modifieddate >= '{{ data_interval_start.isoformat() }}' and
modifieddate < '{{ data_interval_end.isoformat() }}'
{% endif %}
) AS sq
) AS check_table, parameters: None
```
Which unsurprisingly the DB rejects as invalid sql!
### What you think should happen instead
The rendered code should be used in the sql which is run!
I think this error comes about through this line: https://github.com/apache/airflow/blob/main/airflow/providers/common/sql/operators/sql.py#L576 which is run in the `__init__` method of the operator, i.e. before templating is applied in the build-up to calling `execute(context)`.
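For illustration only (this is a hypothetical minimal operator, not the provider code), the underlying mechanism is that Airflow renders `template_fields` after `__init__` and before `execute`, so any SQL assembled inside `__init__` still embeds the raw Jinja:

```python
from airflow.models.baseoperator import BaseOperator


class MyTableCheckOperator(BaseOperator):
    # hypothetical operator, only to show when templating happens
    template_fields = ("partition_clause",)

    def __init__(self, *, partition_clause: str = "", **kwargs):
        super().__init__(**kwargs)
        # here partition_clause is still the raw "{% if ... %}" string,
        # so any SQL built at this point embeds unrendered Jinja
        self.partition_clause = partition_clause

    def execute(self, context):
        # by now Airflow has rendered template_fields, so a WHERE clause
        # built here uses the rendered value
        where = f"WHERE {self.partition_clause}" if self.partition_clause else ""
        self.log.info("SELECT ... FROM my_table %s", where)
```

The actual provider fix would therefore need to defer assembling the check SQL until after this rendering step.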
### How to reproduce
Use the operator with a templated `partition_clause`
### Anything else
Happens every time
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28195 | https://github.com/apache/airflow/pull/28202 | aace30b50cab3c03479fd0c889d145b7435f26a9 | a6cda7cd230ef22f7fe042d6d5e9f78c660c4a75 | "2022-12-07T15:21:29Z" | python | "2022-12-09T23:04:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,171 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/models/abstractoperator.py", "airflow/models/taskinstance.py", "newsfragments/28172.misc.rst"] | Invalid retry date crashes scheduler "OverflowError: date value out of range" | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Our scheduler started failing with this trace:
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1187, in get_failed_dep_statuses
for dep_status in dep.get_dep_statuses(self, session, dep_context):
File "/usr/local/lib/python3.9/site-packages/airflow/ti_deps/deps/base_ti_dep.py", line 95, in get_dep_statuses
yield from self._get_dep_statuses(ti, session, dep_context)
File "/usr/local/lib/python3.9/site-packages/airflow/ti_deps/deps/not_in_retry_period_dep.py", line 47, in _get_dep_statuses
next_task_retry_date = ti.next_retry_datetime()
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1243, in next_retry_datetime
return self.end_date + delay
OverflowError: date value out of range
We found that a dag with a large # of retries and exponential backoff will trigger this date error and take down the entire scheduler. The workaround is to force a max_delay setting.
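For concreteness, a sketch of that workaround on a task (task id and command are made up; `max_retry_delay` is the relevant BaseOperator argument):

```python
from datetime import timedelta
from airflow.operators.bash import BashOperator

flaky = BashOperator(
    task_id="flaky_task",
    bash_command="exit 1",
    retries=100,
    retry_delay=timedelta(minutes=5),
    retry_exponential_backoff=True,
    # caps the backoff so end_date + delay stays a representable datetime
    max_retry_delay=timedelta(hours=2),
)
```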
The bug is here:
https://github.com/apache/airflow/blob/2.3.3/airflow/models/taskinstance.py#L1243
The current version seems to use the same code:
https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1147
### What you think should happen instead
There are a few solutions. Exponential backoff should probably require a max delay value.
At the very least, it shouldn't kill the scheduler.
### How to reproduce
Create dag with exponential delay and force it to retry until it overflows.
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28171 | https://github.com/apache/airflow/pull/28172 | e948b55a087f98d25a6a4730bf58f61689cdb116 | 2cbe5960476b1f444e940d11177145e5ffadf613 | "2022-12-06T19:48:31Z" | python | "2022-12-08T18:51:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,167 | ["airflow/www/.babelrc", "airflow/www/babel.config.js", "airflow/www/jest.config.js", "airflow/www/package.json", "airflow/www/static/js/components/ReactMarkdown.tsx", "airflow/www/static/js/dag/details/NotesAccordion.tsx", "airflow/www/yarn.lock"] | Allow Markdown in Task comments | ### Description
Implement the support for Markdown in Task notes inside Airflow.
### Use case/motivation
It would be helpful to use markdown syntax in Task notes/comments for the following usecases:
- Formatting headers, lists, and tables to allow more complex note-taking.
- Parsing a URL to reference a ticket in an Issue ticketing system (Jira, Pagerduty, etc.)
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28167 | https://github.com/apache/airflow/pull/28245 | 78b72f4fa07cac009ddd6d43d54627381e3e9c21 | 74e82af7eefe1d0d5aa6ea1637d096e4728dea1f | "2022-12-06T16:57:16Z" | python | "2022-12-19T15:32:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,155 | ["airflow/www/views.py"] | Links to dag graph some times display incorrect dagrun | ### Apache Airflow version
2.5.0
### What happened
Open url `dags/gate/graph?dag_run_id=8256-8-1670328803&execution_date=2022-12-06T12%3A13%3A23.174592+00%3A00`
The graph is displaying a completely different dagrun.
![image](https://user-images.githubusercontent.com/89977373/205916845-8acdf310-6073-46f9-aea3-8e144f6e4fba.png)
If you are not careful to review all the content, you might continue looking at the wrong results, or worse cancel a run with Mark failed.
I got the link from one of our users, so I am not 100% sure if it was the original url. I believe there could be something wrong with the url-encoding of the last `+` character. In any case, if there are any inconsistencies between the URL parameters and the found dagruns, it should not display another dagrun, but rather redirect to the grid view or show an error message.
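For reference, this is what the suspected encoding problem looks like (a plain illustration, not Airflow code): a literal `+` in a query string is decoded as a space, so the timezone offset has to be percent-encoded as `%2B`:

```python
from urllib.parse import quote, unquote_plus

execution_date = "2022-12-06T12:13:23.174592+00:00"
print(quote(execution_date, safe=""))
# 2022-12-06T12%3A13%3A23.174592%2B00%3A00  -> round-trips correctly
print(unquote_plus("2022-12-06T12:13:23.174592+00:00"))
# 2022-12-06T12:13:23.174592 00:00          -> the '+' was turned into a space
```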
### What you think should happen instead
* dag_run_id should be the only required parameter, or at least take precedence over execution_date
* The provided dag_run_id should always be the same run id that is displayed in the graph
* Inconsistencies in any parameters should display an error or redirect to the grid view.
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28155 | https://github.com/apache/airflow/pull/29066 | 48cab7cfebf2c7510d9fdbffad5bd06d8f4751e2 | 9dedf81fa18e57755aa7d317f08f0ea8b6c7b287 | "2022-12-06T12:53:33Z" | python | "2023-01-21T03:13:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,146 | ["airflow/models/xcom.py", "tests/models/test_taskinstance.py"] | Dynamic task context fails to be pickled | ### Apache Airflow version
2.5.0
### What happened
When I upgraded to 2.5.0, running a dynamic task test failed.
```py
from airflow.decorators import task, dag
import pendulum as pl
@dag(
    dag_id='test-dynamic-tasks',
    schedule=None,
    start_date=pl.today().add(days=-3),
    tags=['example'])
def test_dynamic_tasks():
    @task.virtualenv(requirements=[])
    def sum_it(values):
        print(values)

    @task.virtualenv(requirements=[])
    def add_one(value):
        return value + 1

    added_values = add_one.expand(value=[1, 2])
    sum_it(added_values)


dag = test_dynamic_tasks()
```
```log
*** Reading local file: /home/andi/airflow/logs/dag_id=test-dynamic-tasks/run_id=manual__2022-12-06T10:07:41.355423+00:00/task_id=sum_it/attempt=1.log
[2022-12-06, 18:07:53 CST] {taskinstance.py:1087} INFO - Dependencies all met for <TaskInstance: test-dynamic-tasks.sum_it manual__2022-12-06T10:07:41.355423+00:00 [queued]>
[2022-12-06, 18:07:53 CST] {taskinstance.py:1087} INFO - Dependencies all met for <TaskInstance: test-dynamic-tasks.sum_it manual__2022-12-06T10:07:41.355423+00:00 [queued]>
[2022-12-06, 18:07:53 CST] {taskinstance.py:1283} INFO -
--------------------------------------------------------------------------------
[2022-12-06, 18:07:53 CST] {taskinstance.py:1284} INFO - Starting attempt 1 of 1
[2022-12-06, 18:07:53 CST] {taskinstance.py:1285} INFO -
--------------------------------------------------------------------------------
[2022-12-06, 18:07:53 CST] {taskinstance.py:1304} INFO - Executing <Task(_PythonVirtualenvDecoratedOperator): sum_it> on 2022-12-06 10:07:41.355423+00:00
[2022-12-06, 18:07:53 CST] {standard_task_runner.py:55} INFO - Started process 25873 to run task
[2022-12-06, 18:07:53 CST] {standard_task_runner.py:82} INFO - Running: ['airflow', 'tasks', 'run', 'test-dynamic-tasks', 'sum_it', 'manual__2022-12-06T10:07:41.355423+00:00', '--job-id', '41164', '--raw', '--subdir', 'DAGS_FOLDER/andi/test-dynamic-task.py', '--cfg-path', '/tmp/tmphudvake2']
[2022-12-06, 18:07:53 CST] {standard_task_runner.py:83} INFO - Job 41164: Subtask sum_it
[2022-12-06, 18:07:53 CST] {task_command.py:389} INFO - Running <TaskInstance: test-dynamic-tasks.sum_it manual__2022-12-06T10:07:41.355423+00:00 [running]> on host sh-dataops-airflow.jinde.local
[2022-12-06, 18:07:53 CST] {taskinstance.py:1511} INFO - Exporting the following env vars:
[email protected]
AIRFLOW_CTX_DAG_OWNER=andi
AIRFLOW_CTX_DAG_ID=test-dynamic-tasks
AIRFLOW_CTX_TASK_ID=sum_it
AIRFLOW_CTX_EXECUTION_DATE=2022-12-06T10:07:41.355423+00:00
AIRFLOW_CTX_TRY_NUMBER=1
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-12-06T10:07:41.355423+00:00
[2022-12-06, 18:07:53 CST] {process_utils.py:179} INFO - Executing cmd: /home/andi/airflow/venv38/bin/python -m virtualenv /tmp/venv7lc4m6na --system-site-packages
[2022-12-06, 18:07:53 CST] {process_utils.py:183} INFO - Output:
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - created virtual environment CPython3.8.0.final.0-64 in 220ms
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - creator CPython3Posix(dest=/tmp/venv7lc4m6na, clear=False, no_vcs_ignore=False, global=True)
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/andi/.local/share/virtualenv)
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - added seed packages: pip==22.2.1, setuptools==63.2.0, wheel==0.37.1
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
[2022-12-06, 18:07:54 CST] {process_utils.py:179} INFO - Executing cmd: /tmp/venv7lc4m6na/bin/pip install -r /tmp/venv7lc4m6na/requirements.txt
[2022-12-06, 18:07:54 CST] {process_utils.py:183} INFO - Output:
[2022-12-06, 18:07:55 CST] {process_utils.py:187} INFO - Looking in indexes: http://pypi:8081
[2022-12-06, 18:08:00 CST] {process_utils.py:187} INFO -
[2022-12-06, 18:08:00 CST] {process_utils.py:187} INFO - [notice] A new release of pip available: 22.2.1 -> 22.3.1
[2022-12-06, 18:08:00 CST] {process_utils.py:187} INFO - [notice] To update, run: python -m pip install --upgrade pip
[2022-12-06, 18:08:00 CST] {taskinstance.py:1772} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/decorators/base.py", line 217, in execute
return_value = super().execute(context)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 356, in execute
return super().execute(context=serializable_context)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 175, in execute
return_value = self.execute_callable()
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 553, in execute_callable
return self._execute_python_callable_in_subprocess(python_path, tmp_path)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 397, in _execute_python_callable_in_subprocess
self._write_args(input_path)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 367, in _write_args
file.write_bytes(self.pickling_library.dumps({"args": self.op_args, "kwargs": self.op_kwargs}))
_pickle.PicklingError: Can't pickle <class 'sqlalchemy.orm.session.Session'>: it's not the same object as sqlalchemy.orm.session.Session
[2022-12-06, 18:08:00 CST] {taskinstance.py:1322} INFO - Marking task as FAILED. dag_id=test-dynamic-tasks, task_id=sum_it, execution_date=20221206T100741, start_date=20221206T100753, end_date=20221206T100800
[2022-12-06, 18:08:00 CST] {warnings.py:109} WARNING - /home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/utils/email.py:120: RemovedInAirflow3Warning: Fetching SMTP credentials from configuration variables will be deprecated in a future release. Please set credentials using a connection instead.
send_mime_email(e_from=mail_from, e_to=recipients, mime_msg=msg, conn_id=conn_id, dryrun=dryrun)
[2022-12-06, 18:08:00 CST] {configuration.py:635} WARNING - section/key [smtp/smtp_user] not found in config
[2022-12-06, 18:08:00 CST] {email.py:229} INFO - Email alerting: attempt 1
[2022-12-06, 18:08:01 CST] {email.py:241} INFO - Sent an alert email to ['[email protected]']
[2022-12-06, 18:08:01 CST] {standard_task_runner.py:100} ERROR - Failed to execute job 41164 for task sum_it (Can't pickle <class 'sqlalchemy.orm.session.Session'>: it's not the same object as sqlalchemy.orm.session.Session; 25873)
[2022-12-06, 18:08:01 CST] {local_task_job.py:159} INFO - Task exited with return code 1
[2022-12-06, 18:08:01 CST] {taskinstance.py:2582} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
I expect this sample run to pass.
### How to reproduce
_No response_
### Operating System
centos 7.9 3.10.0-1160.el7.x86_64
### Versions of Apache Airflow Providers
```
airflow-code-editor==5.2.2
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-microsoft-mssql==3.1.0
apache-airflow-providers-microsoft-psrp==2.0.0
apache-airflow-providers-microsoft-winrm==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-samba==4.0.0
apache-airflow-providers-sftp==3.0.0
autopep8==1.6.0
brotlipy==0.7.0
chardet==3.0.4
pip-chill==1.0.1
pyopenssl==19.1.0
pysocks==1.7.1
python-ldap==3.4.2
requests-credssp==2.0.0
swagger-ui-bundle==0.0.9
tqdm==4.51.0
virtualenv==20.16.2
yapf==0.32.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28146 | https://github.com/apache/airflow/pull/28191 | 84a5faff0de2a56f898b8a02aca578b235cb12ba | e981dfab4e0f4faf1fb932ac6993c3ecbd5318b2 | "2022-12-06T10:40:01Z" | python | "2022-12-15T09:20:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,143 | ["airflow/www/static/js/api/useTaskLog.ts", "airflow/www/static/js/dag/details/taskInstance/Logs/LogBlock.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx"] | Logs tab is automatically scrolling to the bottom while user is reading logs | ### Apache Airflow version
2.5.0
### What happened
Open the logs tab for a task that is currently running.
Scroll up to read things further up the log.
Every 30 seconds or so the log automatically scrolls down to the bottom again.
### What you think should happen instead
If the user has scrolled away from the bottom in the logs-panel, the live tailing of new logs should not scroll the view back to the bottom automatically.
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28143 | https://github.com/apache/airflow/pull/28386 | 5b54e8d21b1801d5e0ccd103592057f0b5a980b1 | 5c80d985a3102a46f198aec1c57a255e00784c51 | "2022-12-06T07:35:40Z" | python | "2022-12-19T01:00:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,121 | ["airflow/providers/sftp/sensors/sftp.py", "tests/providers/sftp/sensors/test_sftp.py"] | SFTP Sensor fails to locate file | ### Apache Airflow version
2.5.0
### What happened
While creating an SFTP sensor I tried to find a file under a directory, but I kept getting a "Time Out, not found" error.
After debugging the code I found that there is an issue with the [poke function](https://airflow.apache.org/docs/apache-airflow-providers-sftp/stable/_modules/airflow/providers/sftp/sensors/sftp.html#SFTPSensor.poke).
After getting a matched file, the sensor tries to find the last modified time of the file using [self.hook.get_mod_time](https://airflow.apache.org/docs/apache-airflow-providers-sftp/stable/_modules/airflow/providers/sftp/hooks/sftp.html#SFTPHook.get_mod_time), which takes the full path (path + filename), but only the filename is passed as the argument.
### What you think should happen instead
I have solved the issue by prepending the path to the filename before calling the [self.hook.get_mod_time](https://airflow.apache.org/docs/apache-airflow-providers-sftp/stable/_modules/airflow/providers/sftp/hooks/sftp.html#SFTPHook.get_mod_time) function.
Here is modified code,
```
def poke(self, context: Context) -> bool:
    self.hook = SFTPHook(self.sftp_conn_id)
    self.log.info("Poking for %s, with pattern %s", self.path, self.file_pattern)

    if self.file_pattern:
        file_from_pattern = self.hook.get_file_by_pattern(self.path, self.file_pattern)
        if file_from_pattern:
            '''actual_file_to_check = file_from_pattern'''
            actual_file_to_check = self.path + file_from_pattern
        else:
            return False
    else:
        actual_file_to_check = self.path

    try:
        mod_time = self.hook.get_mod_time(actual_file_to_check)
        self.log.info("Found File %s last modified: %s", str(actual_file_to_check), str(mod_time))
    except OSError as e:
        if e.errno != SFTP_NO_SUCH_FILE:
            raise e
        return False
    self.hook.close_conn()

    if self.newer_than:
        _mod_time = convert_to_utc(datetime.strptime(mod_time, "%Y%m%d%H%M%S"))
        _newer_than = convert_to_utc(self.newer_than)
        return _newer_than <= _mod_time
    else:
        return True
```
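One side note on the concatenation above (not part of the reported fix): if `path` were configured without a trailing slash, `self.path + file_from_pattern` would drop the separator; `posixpath.join` handles both cases for remote POSIX-style paths:

```python
import posixpath

# hypothetical values, for illustration only
print(posixpath.join("Weekly/11", "report.pdf"))   # Weekly/11/report.pdf
print(posixpath.join("Weekly/11/", "report.pdf"))  # Weekly/11/report.pdf
```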
### How to reproduce
You can reproduce the same issue by creating a DAG like the one below:
```
with DAG(
    dag_id='sftp_sensor_dag',
    max_active_runs=1,
    default_args=default_args,
) as dag:
    file_sensing_task = SFTPSensor(
        task_id='sensor_for_file',
        path="Weekly/11/",
        file_pattern="*pdf*",
        sftp_conn_id='sftp_hook_conn',
        poke_interval=30,
    )
```
### Operating System
Microsoft Windows [Version 10.0.19044.2251]
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28121 | https://github.com/apache/airflow/pull/29467 | 72c3817a44eea5005761ae3b621e8c39fde136ad | 8e24387d6db177c662342245bb183bfd73fb9ee8 | "2022-12-05T15:15:46Z" | python | "2023-02-13T23:12:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,118 | ["airflow/jobs/base_job.py", "tests/jobs/test_base_job.py"] | Scheduler heartbeat warning message in Airflow UI displaying that scheduler is down sometimes incorrect | ### Apache Airflow version
main (development)
### What happened
Steps to reproduce:
1. run 2 replicas of scheduler
2. initiate shut down of one of the schedulers
3. In Airflow UI observe message
<img width="1162" alt="image" src="https://user-images.githubusercontent.com/1017130/205650336-fb1d8e39-2213-4aec-8530-abd1417db426.png">
The 3rd step should be done immediately after the 2nd (refreshing the UI page a few times). The 2nd and 3rd steps might need to be repeated a couple of times in order to reproduce.
### What you think should happen instead
Warning message shouldn't be displayed.
The issue is that for this warning message the most recent scheduler job (the one with the latest heartbeat) is fetched:
https://github.com/apache/airflow/blob/f02a7e9a8292909b369daae6d573f58deed04440/airflow/jobs/base_job.py#L133.
This may point to a job which is not running (state != "running"), and that is why we see the warning message.
The warning message in this case is misleading, as another replica of the scheduler is running in parallel.
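A rough sketch of the direction a fix could take (this is not the actual patch; `session` is an assumed active SQLAlchemy session): only consider jobs that are still running when picking the heartbeat to warn about.

```python
from airflow.jobs.base_job import BaseJob
from airflow.utils.state import State

# sketch only: pick the latest *running* scheduler job instead of the latest job overall
most_recent_running = (
    session.query(BaseJob)
    .filter(BaseJob.job_type == "SchedulerJob", BaseJob.state == State.RUNNING)
    .order_by(BaseJob.latest_heartbeat.desc())
    .first()
)
```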
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28118 | https://github.com/apache/airflow/pull/28119 | 3cd70ffee974c9f345aabb3a365dde4dbcdd84a4 | 56c0871dce2fb2b7ed2252e4b2d1d8d5d0c07c58 | "2022-12-05T13:38:56Z" | python | "2022-12-07T05:48:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,106 | ["airflow/models/dagrun.py"] | IndexError in `airflow dags test` re: scheduling delay stats | ### Apache Airflow version
2.5.0
### What happened
Very simple dag:
```python3
from airflow import DAG
from airflow.operators.bash import BashOperator
from datetime import datetime, timedelta
with DAG(dag_id="hello_world", schedule=timedelta(days=30 * 365), start_date=datetime(1970, 1, 1)) as dag:
    (
        BashOperator(task_id="hello", bash_command="echo hello")
        >> BashOperator(task_id="world", bash_command="echo world")
    )
```
Run it like `airflow dags test hello_world $(date +%Y-%m-%d)`
End of output:
```
[2022-12-04 21:24:02,993] {dagrun.py:606} INFO - Marking run <DagRun hello_world @ 2022-12-04T00:00:00+00:00: manual__2022-12-04T00:00:00+00:00, state:running, queued_at: None. externally triggered: False> successful
[2022-12-04 21:24:03,003] {dagrun.py:657} INFO - DagRun Finished: dag_id=hello_world, execution_date=2022-12-04T00:00:00+00:00, run_id=manual__2022-12-04T00:00:00+00:00, run_start_date=2022-12-04T00:00:00+00:00, run_end_date=2022-12-05 04:24:02.995279+00:00, run_duration=102242.995279, state=success, external_trigger=False, run_type=manual, data_interval_start=2022-12-04T00:00:00+00:00, data_interval_end=2052-11-26T00:00:00+00:00, dag_hash=None
[2022-12-04 21:24:03,004] {dagrun.py:878} WARNING - Failed to record first_task_scheduling_delay metric:
Traceback (most recent call last):
File "/home/matt/2022/12/04/venv/lib/python3.9/site-packages/airflow/models/dagrun.py", line 866, in _emit_true_scheduling_delay_stats_for_finished_state
first_start_date = ordered_tis_by_start_date[0].start_date
IndexError: list index out of range
```
### What you think should happen instead
No warning (or an explanation of what I can do to address whatever it's warning about).
### How to reproduce
_No response_
### Operating System
NixOS 22.11 (gnu/linux)
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28106 | https://github.com/apache/airflow/pull/28138 | 7adf8a53ec8bd08a9c14418bf176574e149780c5 | b3d7e17e72c05fd149a5514e3796d46a241ac4f7 | "2022-12-05T04:28:57Z" | python | "2022-12-06T11:27:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,103 | ["airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py", "tests/providers/amazon/aws/transfers/test_dynamodb_to_s3.py"] | Type Error while using dynamodb_to_s3 operator | ### Discussed in https://github.com/apache/airflow/discussions/28102
<div type='discussions-op-text'>
<sup>Originally posted by **p-madduri** December 1, 2022</sup>
### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
https://github.com/apache/airflow/blob/430e930902792fc37cdd2c517783f7dd544fbebf/airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py#L39
if we use the below function at line 39:

    def _convert_item_to_json_bytes(item: dict[str, Any]) -> bytes:
        return (json.dumps(item) + "\n").encode("utf-8")

it's throwing the below error:

    TypeError: Object of type Decimal is not JSON serializable.

Can we use the following instead?
    class DecimalEncoder(json.JSONEncoder):
        def encode(self, obj):
            if isinstance(obj, Mapping):
                return '{' + ', '.join(f'{self.encode(k)}: {self.encode(v)}' for (k, v) in obj.items()) + '}'
            elif isinstance(obj, Iterable) and (not isinstance(obj, str)):
                return '[' + ', '.join(map(self.encode, obj)) + ']'
            elif isinstance(obj, Decimal):
                # using normalize() gets rid of trailing 0s, using ':f' prevents scientific notation
                return f'{obj.normalize():f}'
            else:
                print(obj)
                return super().encode(obj)
and the code at this line would need to be updated:
https://github.com/apache/airflow/blob/430e930902792fc37cdd2c517783f7dd544fbebf/airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py#L99
This solution is suggested in this article:
https://randomwits.com/blog/export-dynamodb-s3
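For illustration, the change at that call site would then roughly be (a sketch, not a committed fix; `DecimalEncoder` is the class proposed above):

```python
import json
from typing import Any


def _convert_item_to_json_bytes(item: dict[str, Any]) -> bytes:
    # passing the custom encoder makes Decimal values serializable
    return (json.dumps(item, cls=DecimalEncoder) + "\n").encode("utf-8")
```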
Airflow version of MWAA : 2.0.2
### What you think should happen instead
mentioned in what happened section
### How to reproduce
mentioned in what happened section
### Operating System
MAC
### Versions of Apache Airflow Providers
from airflow.providers.amazon.aws.transfers.dynamodb_to_s3 import DynamoDBToS3Operator
### Deployment
MWAA
### Deployment details
n/a
### Anything else
n/a
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/28103 | https://github.com/apache/airflow/pull/28158 | 39f501d4f4e87635c80d97bb599daf61096d23b8 | 0d90c62bac49de9aef6a31ee3e62d02e458b0d33 | "2022-12-05T01:50:23Z" | python | "2022-12-06T21:23:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,071 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Kubernetes logging errors - attempting to adopt taskinstance which was not specified by database | ### Apache Airflow version
2.4.3
### What happened
Using following config
```
executor = CeleryKubernetesExecutor
delete_worker_pods = False
```
1. Start a few dags running in kubernetes, wait for them to complete.
2. Restart Scheduler.
3. Logs are flooded with hundreds of errors like` ERROR - attempting to adopt taskinstance which was not specified by database: TaskInstanceKey(dag_id='xxx', task_id='yyy', run_id='zzz', try_number=1, map_index=-1)`
This is problematic because:
* Our installation has thousands of dags and pods, so this becomes very noisy and the adoption process adds excessive startup time to the scheduler, up to a minute sometimes.
* It's hiding actual errors with resetting orphaned tasks, something that also happens for inexplicable reasons on scheduler restart with the following log: `Reset the following 6 orphaned TaskInstances`, making such errors much harder to debug. The cause of those cannot be easily correlated with the ones that were not specified by the database.
The cause of these logs is that the Kubernetes executor on startup loads all pods (`try_adopt_task_instances`) and then cross-references them with all `RUNNING` TaskInstances loaded via `scheduler_job.adopt_or_reset_orphaned_tasks`.
For all pods where a running TI cannot be found, it logs the error above - but for TIs that were already completed this is not an error, and the pods should not have to be loaded at all.
I have an idea of adding some code in the kubernetes_executor that patches in something like a `completion-acknowledged` label whenever a pod is completed (unless `delete_worker_pods` is set). Then on startup, all pods having this label can be excluded. Is this a good idea or do you see other potential solutions?
Another potential solution is, inside `try_adopt_task_instances`, to only fetch the exact pod id specified in each task instance, instead of listing all pods and cross-referencing them later.
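To make the label idea concrete, a rough sketch (the label name, variables and wiring here are assumptions, not existing executor behaviour):

```python
from kubernetes import client

kube_client = client.CoreV1Api()
namespace = "airflow"            # placeholder
pod_name = "example-worker-pod"  # placeholder

# when a pod completes and delete_worker_pods is off, mark it as handled
kube_client.patch_namespaced_pod(
    name=pod_name,
    namespace=namespace,
    body={"metadata": {"labels": {"completion-acknowledged": "true"}}},
)

# on scheduler startup, only list pods that were never acknowledged
pods = kube_client.list_namespaced_pod(
    namespace=namespace,
    label_selector="completion-acknowledged!=true",
)
```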
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28071 | https://github.com/apache/airflow/pull/28899 | f2bedcbd6722cd43772007eecf7f55333009dc1d | f64ac5978fb3dfa9e40a0e5190ef88e9f9615824 | "2022-12-02T17:46:41Z" | python | "2023-01-18T20:05:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,070 | ["airflow/www/static/js/dag/InstanceTooltip.test.tsx", "airflow/www/static/js/dag/InstanceTooltip.tsx", "airflow/www/static/js/dag/details/dagRun/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Details.tsx", "airflow/www/yarn.lock"] | task duration in grid view is different when viewed at different times. | ### Apache Airflow version
2.4.3
### What happened
I wrote this dag to test the celery executor's ability to tolerate OOMkills:
```python3
import numpy as np
from airflow import DAG
from airflow.decorators import task
from datetime import datetime, timedelta
from airflow.models.variable import Variable
import subprocess
import random
def boom():
    np.ones((1_000_000_000_000))


def maybe_boom(boom_hostname, boom_count, boom_modulus):
    """
    call boom(), but only under certain conditions
    """
    try:
        proc = subprocess.Popen("hostname", shell=True, stdout=subprocess.PIPE)
        hostname = proc.stdout.readline().decode().strip()

        # keep track of which hosts parsed the dag
        parsed = Variable.get("parsed", {}, deserialize_json=True)
        parsed.setdefault(hostname, 0)
        parsed[hostname] = parsed[hostname] + 1
        Variable.set("parsed", parsed, serialize_json=True)

        # only blow up when the caller's condition is met
        print(parsed)
        try:
            count = parsed[boom_hostname]
            if hostname == boom_hostname and count % boom_modulus == boom_count:
                print("boom")
                boom()
        except (KeyError, TypeError):
            pass
        print("no boom")
    except:
        # key errors show up because of so much traffic on the variable
        # don't hold up parsing in those cases
        pass


@task
def do_stuff():
    # tasks randomly OOMkill also
    if random.randint(1, 256) == 13:
        boom()


run_size = 100

with DAG(
    dag_id="oom_on_parse",
    schedule=timedelta(seconds=30),
    start_date=datetime(1970, 1, 1),
    catchup=False,
):
    # OOM part-way through the second run
    # and every 3th run after that
    maybe_boom(
        boom_hostname="airflow-worker-0",
        boom_count=run_size + 50,
        boom_modulus=run_size * 3,
    )

    [do_stuff() for _ in range(run_size)]
```
I'm not surprised that tasks are failing. The dag occasionally tries to allocate 1Tb of memory. That's a good reason to fail. What surprises me is that occasionally, the run durations are reported as 23:59:30 when I've only been running the test for 5 minutes. Also, this number changes if I view it later, behold:
![2022-12-02 09 44 32](https://user-images.githubusercontent.com/5834582/205346230-1173a79e-b6f1-43bd-b232-5cbda29e1d13.gif)
23:55:09 -> 23:55:03 -> 23:55:09, they're decreasing.
### What you think should happen instead
The duration should never be longer than I've had the deployment up, and whatever is reported, it should not change when viewed later on.
### How to reproduce
Using the celery executor, unpause the dag above. Wait for failures to show up. View their duration in the grid view.
This gist includes a script which shows all of the parameters I'm using (e.g. to helm and such): https://gist.github.com/MatrixManAtYrService/6e90a3b8c7c65b8d8b1deaccc8b6f042
### Operating System
k8s / helm / docker / macos
### Versions of Apache Airflow Providers
n/a
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
See script in this gist? https://gist.github.com/MatrixManAtYrService/6e90a3b8c7c65b8d8b1deaccc8b6f042
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28070 | https://github.com/apache/airflow/pull/28395 | 4d0fa01f72ac4a947db2352e18f4721c2e2ec7a3 | 11f30a887c77f9636e88e31dffd969056132ae8c | "2022-12-02T17:10:57Z" | python | "2022-12-16T18:04:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,065 | ["airflow/www/views.py", "tests/www/views/test_views_dagrun.py"] | Queue up new tasks always returns an empty list | ### Apache Airflow version
main (development)
### What happened
Currently, when a new task is added to a dag and, in the grid view, a user selects the top level of a dag run and then clicks on "Queue up new tasks", the list returned by the confirmation box is always empty.
It appears that where the list of tasks is expected to be set, [here](https://github.com/apache/airflow/blob/ada91b686508218752fee176d29d63334364a7f2/airflow/api/common/mark_tasks.py#L516), `res` will always be an empty list.
### What you think should happen instead
The UI should return a list of tasks that will be queued up once the confirmation button is pressed.
### How to reproduce
Create a dag, trigger the dag, allow it to complete.
Add a new task to the dag, click on "Queue up new tasks", the list will be empty.
### Operating System
n/a
### Versions of Apache Airflow Providers
2.3.3 and upwards including main. I've not looked at earlier releases.
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
I have a PR prepared for this issue.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28065 | https://github.com/apache/airflow/pull/28066 | e29d33b89f7deea6eafb03006c37b60692781e61 | af29ff0a8aa133f0476bf6662e6c06c67de21dd5 | "2022-12-02T11:45:05Z" | python | "2022-12-05T18:51:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,010 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/config_templates/default_celery.py", "docs/apache-airflow/core-concepts/executor/celery.rst"] | Airflow does not pass through Celery's support for Redis Sentinel over SSL. | ### Apache Airflow version
2.4.3
### What happened
When configuring Airflow/Celery to use Redis Sentinel as a broker, the following pops up:
```
airflow.exceptions.AirflowException: The broker you configured does not support SSL_ACTIVE to be True. Please use RabbitMQ or Redis if you would like to use SSL for broker.
```
### What you think should happen instead
Celery has supported TLS on Redis Sentinel [for a while](https://docs.celeryq.dev/en/latest/history/whatsnew-5.1.html#support-redis-sentinel-with-ssl) now.
It looks like [this piece of code](https://github.com/apache/airflow/blob/main/airflow/config_templates/default_celery.py#L68-L88) explicitly prevents passing a valid Redis Sentinel TLS configuration through to Celery (since Sentinel broker URLs are prefixed with `sentinel://` instead of `redis://`).
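For reference, Celery accepts the same Redis-style SSL options for `sentinel://` brokers, so a sketch of the Celery-side setting Airflow would need to emit looks like this (the paths are placeholders):

```python
import ssl

# sketch only: Celery's standard Redis SSL options, which also apply to sentinel:// brokers
broker_use_ssl = {
    "ssl_cert_reqs": ssl.CERT_REQUIRED,
    "ssl_ca_certs": "/path/to/ca.pem",
    "ssl_certfile": "/path/to/client.pem",
    "ssl_keyfile": "/path/to/client.key",
}
```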
### How to reproduce
This problem can be reproduced by deploying Airflow using Docker with the following environment variables:
```
AIRFLOW__CELERY__BROKER_URL=sentinel://sentinel1:26379;sentinel://sentinel2:26379;sentinel://sentinel3:26379
AIRFLOW__CELERY__SSL_ACTIVE=true
AIRFLOW__CELERY_BROKER_TRANSPORT_OPTIONS__MASTER_NAME='some-master-name'
AIRFLOW__CELERY_BROKER_TRANSPORT_OPTIONS__PASSWORD='some-password'
AIRFLOW__LOGGING__LOGGING_LEVEL=DEBUG
```
Note that I'm not 100% certain of the syntax for the password environment var. I can't get to the point of testing this because, without TLS, connections to our internal brokers are denied (they require TLS), and, with TLS, Airflow doesn't attempt a connection because of the earlier linked code.
I've verified with the reference `redis-cli` that the settings we use for `master-name` do result in a valid response and the Sentinel set-up works as expected.
### Operating System
Docker (apache/airflow:2.4.3-python3.10)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Deployed using Nomad.
### Anything else
This is my first issue with this open source project. Please let me know if there's more relevant information I can provide to follow through on this issue.
I will try to make some time available soon to see if a simple code change in the earlier mentioned file would work, but as this is my first issue here I would still have to set-up a full development environment.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
If this is indeed a simple fix I'd be willing to look into making a PR. I would like some feedback on the problem first though if possible!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28010 | https://github.com/apache/airflow/pull/30352 | 800ade7da6ae49c52b4fe412c1c5a60ceffb897c | 2c270db714b7693a624ce70d178744ccc5f9e73e | "2022-11-30T15:15:32Z" | python | "2023-05-05T11:55:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,002 | ["airflow/models/dag.py", "airflow/www/views.py"] | Clearing dag run via UI fails on main branch and 2.5.0rc2 | ### Apache Airflow version
main (development)
### What happened
Create a simple dag, allow it to completely run through.
Next, when in grid view, on the left hand side click on the dag run at the top level.
On the right hand side, then click on "Clear existing tasks". This will error with the following on the web server:
```
[2022-11-29 17:55:05,939] {app.py:1742} ERROR - Exception on /dagrun_clear [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/opt/airflow/airflow/www/auth.py", line 47, in decorated
return func(*args, **kwargs)
File "/opt/airflow/airflow/www/decorators.py", line 83, in wrapper
return f(*args, **kwargs)
File "/opt/airflow/airflow/www/views.py", line 2184, in dagrun_clear
confirmed=confirmed,
File "/opt/airflow/airflow/www/views.py", line 2046, in _clear_dag_tis
session=session,
File "/opt/airflow/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "/opt/airflow/airflow/models/dag.py", line 2030, in clear
exclude_task_ids=exclude_task_ids,
File "/opt/airflow/airflow/models/dag.py", line 1619, in _get_task_instances
tis = session.query(TaskInstance)
AttributeError: 'NoneType' object has no attribute 'query'
```
https://github.com/apache/airflow/blob/527fbce462429fc9836837378f801eed4e9d194f/airflow/models/dag.py#L1619
As per issue title, fails on main branch and `2.5.0rc2`. Works fine on `2.3.3` and `2.4.3`.
### What you think should happen instead
Tasks within the dag should be cleared as expected.
### How to reproduce
Run a dag, attempt to clear it within the UI at the top level of the dag.
### Operating System
Ran via breeze
### Versions of Apache Airflow Providers
N/A
### Deployment
Other 3rd-party Helm chart
### Deployment details
Tested via breeze.
### Anything else
Happens every time.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28002 | https://github.com/apache/airflow/pull/28003 | 527fbce462429fc9836837378f801eed4e9d194f | f43f50e3f11fa02a2025b4b68b8770d6456ba95d | "2022-11-30T08:18:26Z" | python | "2022-11-30T10:27:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,000 | ["airflow/providers/amazon/aws/hooks/redshift_sql.py", "docs/apache-airflow-providers-amazon/connections/redshift.rst", "tests/providers/amazon/aws/hooks/test_redshift_sql.py"] | Add IAM authentication to Amazon Redshift Connection by AWS Connection | ### Description
Allow authenticating to Redshift Cluster in `airflow.providers.amazon.aws.hooks.redshift_sql.RedshiftSQLHook` with temporary IAM Credentials.
This might be implemented in the same way as it is already implemented in the PostgreSQL hook - manually obtaining credentials by calling [GetClusterCredentials](https://docs.aws.amazon.com/redshift/latest/APIReference/API_GetClusterCredentials.html) through the Redshift API.
https://github.com/apache/airflow/blob/56b5f3f4eed6a48180e9d15ba9bb9664656077b1/airflow/providers/postgres/hooks/postgres.py#L221-L235
Or by passing the obtained temporary credentials into [redshift-connector](https://github.com/aws/amazon-redshift-python-driver#example-using-iam-credentials).
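A minimal sketch of the first approach (cluster name, region, host and user are placeholders):

```python
import boto3
import redshift_connector

redshift = boto3.client("redshift", region_name="eu-west-1")
creds = redshift.get_cluster_credentials(
    DbUser="airflow_user",
    DbName="dev",
    ClusterIdentifier="my-cluster",
    AutoCreate=False,
)

conn = redshift_connector.connect(
    host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    database="dev",
    user=creds["DbUser"],        # returned as "IAM:<user>"
    password=creds["DbPassword"],
)
```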
### Use case/motivation
This would allow users to connect to a Redshift Cluster by re-using an already existing [Amazon Web Services Connection](https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/connections/aws.html).
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28000 | https://github.com/apache/airflow/pull/28187 | b7e5b47e2794fa0eb9ac2b22f2150d2fdd9ef2b1 | 2f247a2ba2fb7c9f1fe71567a80f0063e21a5f55 | "2022-11-30T05:09:08Z" | python | "2023-05-02T13:58:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,999 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | References to the 'kubernetes' section cause parse errors | ### Apache Airflow version
main (development)
### What happened
Here's a dag:
```python3
from airflow import DAG
from airflow.decorators import task
import airflow.configuration as conf
from datetime import datetime
@task
def print_this(this):
print(this)
with DAG(dag_id="config_ref", schedule=None, start_date=datetime(1970, 1, 1)) as dag:
namespace = conf.get("kubernetes", "NAMESPACE")
print_this(namespace)
```
In 2.4.3 it parses without error, but in main (as of 2e7a4bcb550538283f28550208b01515d348fb51) the reference to the "kubernetes" section breaks. Likely because of this: https://github.com/apache/airflow/pull/26873
```
❯ airflow dags list-import-errors
filepath | error
======================================+========================================================================================================
/Users/matt/2022/11/29/dags/config.py | Traceback (most recent call last):
| File "/Users/matt/src/airflow/airflow/configuration.py", line 595, in get
| return self._get_option_from_default_config(section, key, **kwargs)
| File "/Users/matt/src/airflow/airflow/configuration.py", line 605, in _get_option_from_default_config
| raise AirflowConfigException(f"section/key [{section}/{key}] not found in config")
| airflow.exceptions.AirflowConfigException: section/key not found in config
|
❯ python dags/config.py
[2022-11-29 21:30:05,300] {configuration.py:603} WARNING - section/key [kubernetes/namespace] not found in config
Traceback (most recent call last):
File "/Users/matt/2022/11/29/dags/config.py", line 13, in <module>
namespace = conf.get("kubernetes", "NAMESPACE")
File "/Users/matt/src/airflow/airflow/configuration.py", line 1465, in get
return conf.get(*args, **kwargs)
File "/Users/matt/src/airflow/airflow/configuration.py", line 595, in get
return self._get_option_from_default_config(section, key, **kwargs)
File "/Users/matt/src/airflow/airflow/configuration.py", line 605, in _get_option_from_default_config
raise AirflowConfigException(f"section/key [{section}/{key}] not found in config")
airflow.exceptions.AirflowConfigException: section/key [kubernetes/namespace] not found in config
```
To quote @jedcunningham :
> The backcompat layer only expects you to use the “new” section name.
### What you think should happen instead
The recent section name change should be registered so that the old name still works.
### How to reproduce
See above
### Operating System
Mac OS / venv
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
`pip install -e ~/src/airflow` into a fresh venv
### Anything else
Also, it's kind of weird that the important part of the error message (which section?) is missing from `list-import-errors`. I had to run the dag def like a script to realize that it was the kubernetes section that it was complaining about.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27999 | https://github.com/apache/airflow/pull/28008 | f1c4c27e4aed79eef01f2873fab3a66af2aa3fa0 | 3df03cc9331cb8984f39c5dbf0c9775ac362421e | "2022-11-30T04:36:36Z" | python | "2022-12-01T07:41:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,978 | ["airflow/providers/snowflake/CHANGELOG.rst", "airflow/providers/snowflake/hooks/snowflake.py", "airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/hooks/test_sql.py", "tests/providers/snowflake/operators/test_snowflake_sql.py"] | KeyError: 0 error with common-sql version 1.3.0 | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-apache-hive==4.0.1
apache-airflow-providers-apache-livy==3.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.3.0
apache-airflow-providers-databricks==3.3.0
apache-airflow-providers-dbt-cloud==2.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-snowflake==3.3.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.2.0
```
### Apache Airflow version
2.4.3
### Operating System
Debian Bullseye
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
With the latest version of the common-sql provider, the result of the hook's `get_records` is now an ordinary dictionary, causing this KeyError with SqlSensor:
```
[2022-11-29, 00:39:18 UTC] {taskinstance.py:1851} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/sensors/base.py", line 189, in execute
poke_return = self.poke(context)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/sensors/sql.py", line 98, in poke
first_cell = records[0][0]
KeyError: 0
```
I have only tested with Snowflake, I haven't tested it with other databases. Reverting back to 1.2.0 solves the issue.
### What you think should happen instead
It should return an iterable list as usual with the query.
### How to reproduce
```
from datetime import datetime
from airflow import DAG
from airflow.providers.common.sql.sensors.sql import SqlSensor
with DAG(
dag_id="sql_provider_snowflake_test",
schedule=None,
start_date=datetime(2022, 1, 1),
catchup=False,
):
t1 = SqlSensor(
task_id="snowflake_test",
conn_id="snowflake",
sql="select 0",
fail_on_empty=False,
poke_interval=20,
mode="poke",
timeout=60 * 5,
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27978 | https://github.com/apache/airflow/pull/28006 | 6c62985055e7f9a715c3ae47f6ff584ad8378e2a | d9cefcd0c50a1cce1c3c8e9ecb99cfacde5eafbf | "2022-11-29T00:52:53Z" | python | "2022-12-01T13:53:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,976 | ["airflow/providers/snowflake/CHANGELOG.rst", "airflow/providers/snowflake/hooks/snowflake.py", "airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/hooks/test_sql.py", "tests/providers/snowflake/operators/test_snowflake_sql.py"] | `SQLColumnCheckOperator` failures after upgrading to `common-sql==1.3.0` | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.2.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-salesforce==5.0.0
apache-airflow-providers-slack==5.1.0
apache-airflow-providers-snowflake==3.2.0
Issue:
apache-airflow-providers-common-sql==1.3.0
### Apache Airflow version
2.4.3
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
The problem occurred when upgrading from common-sql==1.2.0 to common-sql==1.3.0.
Getting a `KEY_ERROR` when running a unique_check and null_check on a column.
1.3.0 log:
<img width="1609" alt="Screen Shot 2022-11-28 at 2 01 20 PM" src="https://user-images.githubusercontent.com/15257610/204390144-97ae35b7-1a2c-4ee1-9c12-4f3940047cde.png">
1.2.0 log:
<img width="1501" alt="Screen Shot 2022-11-28 at 2 00 15 PM" src="https://user-images.githubusercontent.com/15257610/204389994-7e8eae17-a346-41ac-84c4-9de4be71af20.png">
### What you think should happen instead
Potential causes:
- seems to be indexing based on the test query column `COL_NAME` instead of the table column `STRIPE_ID`
- the `record` from the test changed types: it went from a tuple to a list of dictionaries.
- no `tolerance` is specified for these tests, so `.get('tolerance')` looks like it will cause an error without a default specified like `.get('tolerance', None)`
Expected behavior:
- these tests continue to pass with the upgrade
- `tolerance` is not a required key.
### How to reproduce
```
from datetime import datetime
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.providers.common.sql.operators.sql import SQLColumnCheckOperator
my_conn_id = "snowflake_default"
default_args={"conn_id": my_conn_id}
with DAG(
dag_id="airflow_providers_example",
schedule=None,
start_date=datetime(2022, 11, 27),
default_args=default_args,
) as dag:
create_table = SnowflakeOperator(
task_id="create_table",
sql=""" CREATE OR REPLACE TABLE testing AS (
SELECT
1 AS row_num,
'not null' AS field
UNION ALL
SELECT
2 AS row_num,
'test' AS field
UNION ALL
SELECT
3 AS row_num,
'test 2' AS field
)""",
)
column_checks = SQLColumnCheckOperator(
task_id="column_checks",
table="testing",
column_mapping={
"field": {"unique_check": {"equal_to": 0}, "null_check": {"equal_to": 0}}
},
)
create_table >> column_checks
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27976 | https://github.com/apache/airflow/pull/28006 | 6c62985055e7f9a715c3ae47f6ff584ad8378e2a | d9cefcd0c50a1cce1c3c8e9ecb99cfacde5eafbf | "2022-11-28T23:03:13Z" | python | "2022-12-01T13:53:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,955 | ["airflow/api/common/mark_tasks.py", "airflow/models/taskinstance.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/log_reader.py", "airflow/utils/state.py", "airflow/www/utils.py", "airflow/www/views.py", "tests/www/views/test_views_grid.py"] | Latest log not shown in grid view for deferred task | ### Apache Airflow version
2.4.3
### What happened
In the grid view I do not see the logs for the latest try number if the task is in deferred state. I do see it in the "old" log view.
Grid view:
![image](https://user-images.githubusercontent.com/3342974/204215464-989e70e3-9dfc-490b-908c-f47f491316d5.png)
"Old" view:
![image](https://user-images.githubusercontent.com/3342974/204215513-05a6b0ac-f3da-44fa-b8c2-e31013e2467c.png)
It could have something to do with the deferred task getting its try_number reduced by 1 - in my example try_number=1 and next_try_number=2.
https://github.com/apache/airflow/blob/3e288abd0bc3e5788dcd7f6d9f6bef26ec4c7281/airflow/models/taskinstance.py#L1617-L1618
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Red Hat
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27955 | https://github.com/apache/airflow/pull/26993 | ad7f8e09f8e6e87df2665abdedb22b3e8a469b49 | f110cb11bf6fdf6ca9d0deecef9bd51fe370660a | "2022-11-28T07:12:45Z" | python | "2023-01-05T16:42:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,952 | ["airflow/sensors/external_task.py", "tests/sensors/test_external_task_sensor.py"] | can not use output of task decorator as input for external_task_ids of ExternalTaskSensor | ### Apache Airflow version
2.4.3, 2.5.0
### What happened
When using the output of a task decorator as the `external_task_ids` parameter of ExternalTaskSensor, it shows this log:
```
Broken DAG: [+++++/airflow/dags/TEST_NEW_PIPELINE.py] Traceback (most recent call last):
File "+++++/env3.10.5/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 408, in apply_defaults
result = func(self, **kwargs, default_args=default_args)
File "+++++/env3.10.5/lib/python3.10/site-packages/airflow/sensors/external_task.py", line 164, in __init__
if external_task_ids and len(external_task_ids) > len(set(external_task_ids)):
TypeError: object of type 'PlainXComArg' has no len()
```
note: +++++ is just a mask for irrelevant information.
### What you think should happen instead
This document https://airflow.apache.org/docs/apache-airflow/stable/tutorial/taskflow.html
shows that we can use it this way without any warning or note about it.
Found a related problem in https://github.com/apache/airflow/issues/27328.
All the checks in `__init__` should be moved into the `poke` method.
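A sketch of what that suggestion would mean (not the actual patch; this is a fragment of a `poke` method, shown in isolation): run the duplicate-id validation once the XComArg has been resolved instead of in `__init__`:

```python
# sketch only: by poke-time, external_task_ids resolved from the upstream
# task's XCom is a real list, so len()/set() work
def poke(self, context):
    if self.external_task_ids and len(self.external_task_ids) > len(set(self.external_task_ids)):
        raise ValueError("Duplicate task_ids passed in external_task_ids parameter")
    # ... existing poke logic continues here
```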
### How to reproduce
```
from datetime import datetime

from airflow.decorators import dag, task
from airflow.operators.python import get_current_context
from airflow.sensors.external_task import ExternalTaskSensor

configure = {
    "dag_id": "test_new_skeleton",
    "schedule": None,
    "start_date": datetime(2022, 1, 1),
}


@task
def preprocess_dependency() -> list:
    return ["random-task-name"]


@dag(**configure)
def pipeline():
    t_preprocess = preprocess_dependency()

    task_dependency = ExternalTaskSensor(
        task_id="Check_Dependency",
        external_dag_id="random-dag-name-that-exist",
        external_task_ids=t_preprocess,
        poke_interval=60,
        mode="reschedule",
        timeout=172800,
        allowed_states=["success"],
        failed_states=["failed", "skipped"],
        check_existence=True,
    )


dag = pipeline()
```
### Operating System
REHL 7
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27952 | https://github.com/apache/airflow/pull/28692 | 3d89797889e43bda89d4ceea37130bdfbc3db32c | 7f18fa96e434c64288d801904caf1fcde18e2cbf | "2022-11-27T15:47:45Z" | python | "2023-01-04T11:39:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,936 | ["airflow/www/static/js/components/Table/Cells.tsx"] | Datasets triggered run modal is not scrollable | ### Apache Airflow version
main (development)
### What happened
The Datasets modal, which is used to display triggered runs, is not scrollable even when there are records.
![2022-11-26 12 03 26](https://user-images.githubusercontent.com/88504849/204077213-dabc2ac2-eac7-47ed-96b4-a20d5b27422d.gif)
### What you think should happen instead
It should be scrollable if there are records to display
### How to reproduce
1. Trigger a datasets DAG with multiple triggered runs
2. Click on Datasets
3. Click on a URI that has multiple triggered runs
DAG-
```
from airflow import Dataset, DAG
from airflow.operators.python import PythonOperator
from datetime import datetime
fan_out = Dataset("fan_out")
fan_in = Dataset("fan_in")
# the leader
with DAG(
dag_id="momma_duck", start_date=datetime(1970, 1, 1), schedule_interval=None
) as leader:
PythonOperator(
task_id="has_outlet", python_callable=lambda: None, outlets=[fan_out]
)
# the many
for i in range(1, 40):
with DAG(
dag_id=f"duckling_{i}", start_date=datetime(1970, 1, 1), schedule=[fan_out]
) as duck:
PythonOperator(
task_id="has_outlet", python_callable=lambda: None, outlets=[fan_in]
)
globals()[f"duck_{i}"] = duck
# the straggler
with DAG(
dag_id="straggler_duck", start_date=datetime(1970, 1, 1), schedule=[fan_in]
) as straggler:
PythonOperator(task_id="has_outlet", python_callable=lambda: None)
```
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27936 | https://github.com/apache/airflow/pull/27965 | a158fbb6bde07cd20003680a4cf5e7811b9eda98 | 5e4f4a3556db5111c2ae36af1716719a8494efc7 | "2022-11-26T07:18:43Z" | python | "2022-11-29T01:16:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,932 | ["airflow/executors/base_executor.py", "airflow/providers/celery/executors/celery_executor.py", "airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "docs/apache-airflow-providers-celery/cli-ref.rst", "docs/apache-airflow-providers-celery/index.rst", "docs/apache-airflow-providers-cncf-kubernetes/cli-ref.rst", "docs/apache-airflow-providers-cncf-kubernetes/index.rst"] | AIP-51 - Executor Specific CLI Commands | ### Overview
Some Executors have their own first-class CLI commands (now that's hardcoding/coupling!) which set up or modify various components related to that Executor.
### Examples
- **5a**) Celery Executor commands: https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/cli/cli_parser.py#L1689-L1734
- **5b**) Kubernetes Executor commands: https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/cli/cli_parser.py#L1754-L1771
- **5c**) Default CLI parser has hardcoded logic for Celery and Kubernetes Executors specifically: https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/cli/cli_parser.py#L63-L99
### Proposal
Update the BaseExecutor interface with a pluggable mechanism to vend CLI `GroupCommands` and parsers. Executor subclasses would then implement these methods, if applicable, which would then be called to fetch commands and parsers from within Airflow Core cli parser code. We would then migrate the existing Executor CLI code from cli_parser to the respective Executor class.
Pseudo-code example for vending `GroupCommand`s:
```python
# Existing code in cli_parser.py
...
airflow_commands: List[CLICommand] = [
GroupCommand(
name='dags',
help='Manage DAGs',
subcommands=DAGS_COMMANDS,
),
...
]
# New code to add groups vended by executor classes
executor_cls, _ = ExecutorLoader.import_executor_cls(conf.get('core', 'EXECUTOR'))
airflow_commands.append(executor_cls.get_cli_group_commands())
...
``` | https://github.com/apache/airflow/issues/27932 | https://github.com/apache/airflow/pull/33081 | bbc096890512ba2212f318558ca1e954ab399657 | 879fd34e97a5343e6d2bbf3d5373831b9641b5ad | "2022-11-25T23:28:44Z" | python | "2023-08-04T17:26:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,929 | ["airflow/executors/base_executor.py", "airflow/executors/celery_kubernetes_executor.py", "airflow/executors/debug_executor.py", "airflow/executors/local_kubernetes_executor.py", "airflow/sensors/base.py", "tests/sensors/test_base.py"] | AIP-51 - Single Threaded Executors | ### Overview
Some Executors, currently a subset of the local Executors, run in a single threaded fashion and have certain limitations and requirements, many of which are hardcoded. To add a new single threaded Executor would require changes to core Airflow code.
Note: This coupling often shows up with SQLite compatibility checks since it does not support multiple connections.
### Examples
- **2a**) SQLite check done in configuration.py: https://github.com/apache/airflow/blob/26f94c5370587f73ebd935cecf208c6a36bdf9b6/airflow/configuration.py#L412-L419
- **2b**) When running in standalone mode SQLite compatibility is checked: https://github.com/apache/airflow/blob/26f94c5370587f73ebd935cecf208c6a36bdf9b6/airflow/cli/commands/standalone_command.py#L160-L165
- **2c**) Sensors in `poke` mode can block execution of DAGs when running with single process Executors, currently hardcoded to DebugExecutor (although should also include SequentialExecutor): https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/sensors/base.py#L243
### Proposal
A static method or attribute on the Executor class which can be checked by core code.
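For illustration, a minimal sketch of what this could look like (the attribute name `is_single_threaded` and the check site are assumptions, not a final design):
```python
class BaseExecutor:
    # Hypothetical attribute; defaults to False for executors that can run tasks in parallel.
    is_single_threaded: bool = False


class SequentialExecutor(BaseExecutor):
    # Runs one task at a time, so it is compatible with SQLite.
    is_single_threaded = True


def check_backend_compatibility(executor_cls: type, sql_conn: str) -> None:
    # Core code could check the attribute instead of hardcoding executor class names.
    if sql_conn.startswith("sqlite") and not getattr(executor_cls, "is_single_threaded", False):
        raise RuntimeError("SQLite only supports single-threaded executors")
```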
There is a precedent already set with the `supports_ad_hoc_ti_run` attribute, see:
https://github.com/apache/airflow/blob/fb741fd87254e235f99d7d67e558dafad601f253/airflow/executors/kubernetes_executor.py#L435 https://github.com/apache/airflow/blob/26f94c5370587f73ebd935cecf208c6a36bdf9b6/airflow/www/views.py#L1735-L1737 | https://github.com/apache/airflow/issues/27929 | https://github.com/apache/airflow/pull/28934 | 0359a42a3975d0d7891a39abe4395bdd6f210718 | e5730364b4eb5a3b30e815ca965db0f0e710edb6 | "2022-11-25T23:28:05Z" | python | "2023-01-23T21:26:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,909 | ["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py"] | Add export_format to template_fields of BigQueryToGCSOperator | ### Description
There might be a use case where the export_format needs to be based on dynamic values, so adding export_format to template_fields will help developers in the future.
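For illustration, if export_format were added to template_fields, a usage like the following would become possible (the project, dataset, table and bucket names here are placeholders):
```python
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator

export_table = BigQueryToGCSOperator(
    task_id="export_table",
    source_project_dataset_table="my-project.my_dataset.my_table",
    destination_cloud_storage_uris=["gs://my-bucket/exports/my_table-*.csv"],
    # Only rendered as a template if export_format is part of template_fields:
    export_format="{{ dag_run.conf.get('export_format', 'CSV') }}",
)
```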
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27909 | https://github.com/apache/airflow/pull/27910 | 3fef6a47834b89b99523db6d97d6aa530657a008 | f0820e8d9e8a36325987278bcda2bd69bd53f3a5 | "2022-11-25T10:10:10Z" | python | "2022-11-25T20:26:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,907 | ["airflow/www/decorators.py"] | Password is not masked in audit logs for connections/variables | ### Apache Airflow version
main (development)
### What happened
Passwords for connections, and the values of variables with "secret" in the name, are not masked in audit logs.
<img width="1337" alt="Screenshot 2022-11-25 at 12 58 59 PM" src="https://user-images.githubusercontent.com/88504849/203932123-c47fd66f-8e63-4bc6-9bf1-b9395cb26675.png">
<img width="1352" alt="Screenshot 2022-11-25 at 12 56 32 PM" src="https://user-images.githubusercontent.com/88504849/203932220-3f02984c-94b5-4773-8767-6f19cb0ceff0.png">
<img width="1328" alt="Screenshot 2022-11-25 at 1 43 40 PM" src="https://user-images.githubusercontent.com/88504849/203933183-e97b2358-9414-45c8-ab8f-d2f913117301.png">
### What you think should happen instead
Password/value should be masked
### How to reproduce
1. Create a connection or a variable (with "secret" in the name, e.g. test_secret)
2. Open audit logs
3. Observe the password
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27907 | https://github.com/apache/airflow/pull/27923 | 5e45cb019995e8b80104b33da1c93eefae12d161 | 1e73b1cea2d507d6d09f5eac6a16b649f8b52522 | "2022-11-25T08:14:51Z" | python | "2022-11-25T21:23:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,864 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | current_state method of TaskInstance fails for mapped task instance | ### Apache Airflow version
2.4.3
### What happened
The `current_state` method on TaskInstance doesn't filter by `map_index`, so calling this method on a mapped task instance fails.
https://github.com/apache/airflow/blob/fb7c6afc8cb7f93909bd2e654ea185eb6abcc1ea/airflow/models/taskinstance.py#L708-L726
### What you think should happen instead
map_index should also be filtered in the query to return single TaskInstance object.
### How to reproduce
```python
with create_session() as session:
print(session.query(TaskInstance).filter(TaskInstance.dag_id == "divide_by_zero",
TaskInstance.map_index == 1,
TaskInstance.run_id == 'scheduled__2022-11-22T00:00:00+00:00')
.scalar().current_state())
---------------------------------------------------------------------------
MultipleResultsFound Traceback (most recent call last)
Input In [7], in <cell line: 1>()
1 with create_session() as session:
----> 2 print(session.query(TaskInstance).filter(TaskInstance.dag_id == "divide_by_zero", TaskInstance.map_index == 1, TaskInstance.run_id == 'scheduled__2022-11-22T00:00:00+00:00').scalar().current_state())
File ~/stuff/python/airflow/airflow/utils/session.py:75, in provide_session.<locals>.wrapper(*args, **kwargs)
73 else:
74 with create_session() as session:
---> 75 return func(*args, session=session, **kwargs)
File ~/stuff/python/airflow/airflow/models/taskinstance.py:725, in TaskInstance.current_state(self, session)
708 @provide_session
709 def current_state(self, session: Session = NEW_SESSION) -> str:
710 """
711 Get the very latest state from the database, if a session is passed,
712 we use and looking up the state becomes part of the session, otherwise
(...)
715 :param session: SQLAlchemy ORM Session
716 """
717 return (
718 session.query(TaskInstance.state)
719 .filter(
720 TaskInstance.dag_id == self.dag_id,
721 TaskInstance.task_id == self.task_id,
722 TaskInstance.run_id == self.run_id,
723
724 )
--> 725 .scalar()
726 )
File ~/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py:2803, in Query.scalar(self)
2801 # TODO: not sure why we can't use result.scalar() here
2802 try:
-> 2803 ret = self.one()
2804 if not isinstance(ret, collections_abc.Sequence):
2805 return ret
File ~/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py:2780, in Query.one(self)
2762 def one(self):
2763 """Return exactly one result or raise an exception.
2764
2765 Raises ``sqlalchemy.orm.exc.NoResultFound`` if the query selects
(...)
2778
2779 """
-> 2780 return self._iter().one()
File ~/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/engine/result.py:1162, in Result.one(self)
1134 def one(self):
1135 # type: () -> Row
1136 """Return exactly one row or raise an exception.
1137
1138 Raises :class:`.NoResultFound` if the result returns no
(...)
1160
1161 """
-> 1162 return self._only_one_row(True, True, False)
File ~/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/engine/result.py:620, in ResultInternal._only_one_row(self, raise_for_second_row, raise_for_none, scalar)
618 if next_row is not _NO_ROW:
619 self._soft_close(hard=True)
--> 620 raise exc.MultipleResultsFound(
621 "Multiple rows were found when exactly one was required"
622 if raise_for_none
623 else "Multiple rows were found when one or none "
624 "was required"
625 )
626 else:
627 next_row = _NO_ROW
MultipleResultsFound: Multiple rows were found when exactly one was required
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27864 | https://github.com/apache/airflow/pull/27898 | c931d888936a958ae40b69077d35215227bf1dff | 51c70a5d6990a6af1188aab080ae2cbe7b935eb2 | "2022-11-23T17:27:58Z" | python | "2022-12-03T16:08:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,842 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator no longer uses field_delimiter or time_partitioning | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
google=8.5.0
### Apache Airflow version
2.4.3
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
The newest version of the Google providers no longer passes the `field_delimiter` or `time_partitioning` fields to the BigQuery job configuration for the GCS-to-BigQuery transfers. Looking at the code, it seems this behavior was removed during the change to support deferrable operation.
### What you think should happen instead
These fields should continue to be provided
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27842 | https://github.com/apache/airflow/pull/27961 | 5cdff505574822ad3d2a226056246500e4adea2f | 2d663df0552542efcef6e59bc2bc1586f8d1c7f3 | "2022-11-22T17:31:55Z" | python | "2022-12-04T19:02:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,838 | ["airflow/providers/common/sql/operators/sql.py"] | apache-airflow-providers-common-sql==1.3.0 breaks BigQuery operators | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
**Airflow version**: 2.3.4 (Cloud Composer 2.0.32)
**Issue**: `apache-airflow-providers-common-sql==1.3.0` breaks all BigQuery operators provided by the `apache-airflow-providers-google==8.4.0` package. The error is as follows:
```python
Broken DAG: [/home/airflow/gcs/dags/test-dag.py] Traceback (most recent call last):
File "/home/airflow/gcs/dags/test-dag.py", line 6, in <module>
from airflow.providers.google.cloud.operators.bigquery import BigQueryExecuteQueryOperator
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 35, in <module>
from airflow.providers.common.sql.operators.sql import (
ImportError: cannot import name '_get_failed_checks' from 'airflow.providers.common.sql.operators.sql' (/opt/python3.8/lib/python3.8/site-packages/airflow/providers/common/sql/operators/sql.py)
```
**Why this issue is tricky**: other providers such as `apache-airflow-providers-microsoft-mssql==3.3.0` and `apache-airflow-providers-oracle==3.5.0` have a dependency on `apache-airflow-providers-common-sql>=1.3.0` and will therefore install it when added to the Composer environment.
**Current mitigation**: downgrade provider packages such that `apache-airflow-providers-common-sql==1.2.0` is installed instead.
### What you think should happen instead
A minor version upgrade of `apache-airflow-providers-common-sql` (1.2.0 to 1.3.0) should not break other providers (e.g. apache-airflow-providers-google==8.4.0)
### How to reproduce
- Deploy fresh deployment of Composer `composer-2.0.32-airflow-2.3.4`
- Install `apache-airflow-providers-common-sql==1.3.0` via Pypi package install feature
- Deploy a dag that uses one of the BigQuery operators, such as
```python
import airflow
from airflow import DAG
from datetime import timedelta
from airflow.providers.google.cloud.operators.bigquery import BigQueryExecuteQueryOperator
default_args = {
'start_date': airflow.utils.dates.days_ago(0),
'retries': 1,
'retry_delay': timedelta(minutes=5)
}
dag = DAG(
'test-dag',
default_args=default_args,
schedule_interval=None,
dagrun_timeout=timedelta(minutes=20))
t1 = BigQueryExecuteQueryOperator(
...
)
```
### Operating System
Ubuntu 18.04.6 LTS
### Versions of Apache Airflow Providers
- apache-airflow-providers-apache-beam @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_apache_beam-4.0.0-py3-none-any.whl
- apache-airflow-providers-cncf-kubernetes @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_cncf_kubernetes-4.4.0-py3-none-any.whl
- apache-airflow-providers-common-sql @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_common_sql-1.3.0-py3-none-any.whl
- apache-airflow-providers-dbt-cloud @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_dbt_cloud-2.2.0-py3-none-any.whl
- apache-airflow-providers-ftp @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_ftp-3.1.0-py3-none-any.whl
- apache-airflow-providers-google @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_google-8.4.0-py3-none-any.whl
- apache-airflow-providers-hashicorp @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_hashicorp-3.1.0-py3-none-any.whl
- apache-airflow-providers-http @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_http-4.0.0-py3-none-any.whl
- apache-airflow-providers-imap @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_imap-3.0.0-py3-none-any.whl
- apache-airflow-providers-mysql @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_mysql-3.2.1-py3-none-any.whl
- apache-airflow-providers-postgres @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_postgres-5.2.2-py3-none-any.whl
- apache-airflow-providers-sendgrid @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_sendgrid-3.0.0-py3-none-any.whl
- apache-airflow-providers-sqlite @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_sqlite-3.2.1-py3-none-any.whl
- apache-airflow-providers-ssh @ file:///usr/local/lib/airflow-pypi-dependencies-2.3.4/python3.8/apache_airflow_providers_ssh-3.2.0-py3-none-any.whl
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27838 | https://github.com/apache/airflow/pull/27843 | 0b0d2990fdb31749396305433b0f8cc54db7aee8 | dbb4b59dcbc8b57243d1588d45a4d2717c3e7758 | "2022-11-22T14:50:19Z" | python | "2022-11-23T10:12:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,837 | ["airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | Databricks - Run job by job name not working with DatabricksRunNowDeferrableOperator | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==3.3.0
### Apache Airflow version
2.4.2
### Operating System
Mac OS 13.0
### Deployment
Virtualenv installation
### Deployment details
Virtualenv deployment with Python 3.10
### What happened
Submitting a Databricks job run by name (`job_name`) with the deferrable version (`DatabricksRunNowDeferrableOperator`) does not actually fill in the `job_id`, and the Databricks API responds with an HTTP 400 Bad Request, since the operator attempts to run a job (POST `https://<databricks-instance>/api/2.1/jobs/run-now`) without an ID specified.
Sample errors from the Airflow logs:
```
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://[subdomain].azuredatabricks.net/api/2.1/jobs/run-now
During handling of the above exception, another exception occurred:
[...truncated message...]
airflow.exceptions.AirflowException: Response: b'{"error_code":"INVALID_PARAMETER_VALUE","message":"Job 0 does not exist."}', Status Code: 400
```
### What you think should happen instead
The deferrable version (`DatabricksRunNowDeferrableOperator`) should maintain the behavior of the parent class (`DatabricksRunNowOperator`) and use the `job_name` to find the `job_id`.
The following logic is missing in the deferrable version:
```
# Sample from the DatabricksRunNowOperator#execute
hook = self._hook
if "job_name" in self.json:
job_id = hook.find_job_id_by_name(self.json["job_name"])
if job_id is None:
raise AirflowException(f"Job ID for job name {self.json['job_name']} can not be found")
self.json["job_id"] = job_id
del self.json["job_name"]
```
### How to reproduce
To reproduce, use a deferrable run now operator with the job name as an argument in an airflow task:
```
from airflow.providers.databricks.operators.databricks import DatabricksRunNowDeferrableOperator
DatabricksRunNowDeferrableOperator(
job_name='some-name',
# Other args
)
```
### Anything else
The problem occurs at every call.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27837 | https://github.com/apache/airflow/pull/32806 | c4b6f06f6e2897b3f1ee06440fc66f191acee9a8 | 58e21c66fdcc8a416a697b4efa852473ad8bd6fc | "2022-11-22T13:54:22Z" | python | "2023-07-25T03:21:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,824 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | DAG Run fails when chaining multiple empty mapped tasks | ### Apache Airflow version
2.4.3
### What happened
A significant fraction of the DAG Runs of a DAG that has 2+ consecutive mapped tasks which are being passed an empty list are marked as failed, even though all tasks are either succeeding or being skipped. This was supposedly fixed with issue #25200, but the problem still persists.
![image](https://user-images.githubusercontent.com/46539900/203193331-db94c793-36e8-4fbd-bc45-29865c44fbfc.png)
### What you think should happen instead
The DAG Run should be marked success.
### How to reproduce
The real world version of this DAG has several mapped tasks that all point to the same list, and that list is frequently empty. I have made a minimal reproducible example.
```
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
with DAG(dag_id="break_mapping", start_date=datetime(2022, 3, 4)) as dag:
@task
def add_one(x: int):
return x + 1
@task
def say_hi():
print("Hi")
@task
def say_bye():
print("Bye")
added_values = add_one.expand(x=[])
added_more_values = add_one.expand(x=[])
added_more_more_values = add_one.expand(x=[])
say_hi() >> say_bye() >> added_values
added_values >> added_more_values >> added_more_more_values
```
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27824 | https://github.com/apache/airflow/pull/27964 | b60006ae26c41e887ec0102bce8b726fce54007d | f89ca94c3e60bfae888dfac60c7472d207f60f22 | "2022-11-22T01:31:41Z" | python | "2022-11-29T07:34:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,818 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Triggering a DAG with the same run_id as a scheduled one causes the scheduler to crash | ### Apache Airflow version
2.5.0
### What happened
A user with access to manually triggering DAGs can trigger a DAG, provide a run_id that matches the pattern used when creating scheduled runs, and cause the scheduler to crash due to a database unique key violation:
```
2022-12-12 12:58:00,793] {scheduler_job.py:776} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 759, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 885, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 956, in _do_scheduling
self._create_dagruns_for_dags(guard, session)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/retries.py", line 78, in wrapped_function
for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
File "/usr/local/lib/python3.8/site-packages/tenacity/__init__.py", line 384, in __iter__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.8/site-packages/tenacity/__init__.py", line 351, in iter
return fut.result()
File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 437, in result
return self.__get_result()
File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/usr/local/lib/python3.8/site-packages/airflow/utils/retries.py", line 87, in wrapped_function
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1018, in _create_dagruns_for_dags
query, dataset_triggered_dag_info = DagModel.dags_needing_dagruns(session)
File "/usr/local/lib/python3.8/site-packages/airflow/models/dag.py", line 3341, in dags_needing_dagruns
for x in session.query(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2773, in all
return self._iter().all()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter
result = self.session.execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1713, in execute
conn = self._connection_for_bind(bind)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind
return self._transaction._connection_for_bind(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind
self._assert_active()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active
raise sa_exc.PendingRollbackError(
sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "dag_run_dag_id_run_id_key"
DETAIL: Key (dag_id, run_id)=(example_branch_dop_operator_v3, scheduled__2022-12-12T12:57:00+00:00) already exists.
[SQL: INSERT INTO dag_run (dag_id, queued_at, execution_date, start_date, end_date, state, run_id, creating_job_id, external_trigger, run_type, conf, data_interval_start, data_interval_end, last_scheduling_decision, dag_hash, log_template_id, updated_at) VALUES (%(dag_id)s, %(queued_at)s, %(execution_date)s, %(start_date)s, %(end_date)s, %(state)s, %(run_id)s, %(creating_job_id)s, %(external_trigger)s, %(run_type)s, %(conf)s, %(data_interval_start)s, %(data_interval_end)s, %(last_scheduling_decision)s, %(dag_hash)s, (SELECT max(log_template.id) AS max_1
FROM log_template), %(updated_at)s) RETURNING dag_run.id]
[parameters: {'dag_id': 'example_branch_dop_operator_v3', 'queued_at': datetime.datetime(2022, 12, 12, 12, 58, 0, 435945, tzinfo=Timezone('UTC')), 'execution_date': DateTime(2022, 12, 12, 12, 57, 0, tzinfo=Timezone('UTC')), 'start_date': None, 'end_date': None, 'state': <DagRunState.QUEUED: 'queued'>, 'run_id': 'scheduled__2022-12-12T12:57:00+00:00', 'creating_job_id': 1, 'external_trigger': False, 'run_type': <DagRunType.SCHEDULED: 'scheduled'>, 'conf': <psycopg2.extensions.Binary object at 0x7f283a82af60>, 'data_interval_start': DateTime(2022, 12, 12, 12, 57, 0, tzinfo=Timezone('UTC')), 'data_interval_end': DateTime(2022, 12, 12, 12, 58, 0, tzinfo=Timezone('UTC')), 'last_scheduling_decision': None, 'dag_hash': '1653a588de69ed25c5b1dcfef928479c', 'updated_at': datetime.datetime(2022, 12, 12, 12, 58, 0, 436871, tzinfo=Timezone('UTC'))}]
(Background on this error at: https://sqlalche.me/e/14/gkpj) (Background on this error at: https://sqlalche.me/e/14/7s2a)
```
Worse yet, the scheduler will keep crashing after a restart with the same exception.
### What you think should happen instead
A user should not be able to crash the scheduler from the UI.
I see 2 alternatives for solving this:
1. Reject a custom run_id that would (or could) collide with a scheduled one, preventing this situation from happening (see the validation sketch after this list).
2. Handle the database error and assign a different run_id to the scheduled run.
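For option 1, a minimal sketch of such a validation (the set of reserved prefixes and the error type are assumptions, not a final design):
```python
import re

# Prefixes Airflow uses for run_ids it generates itself (assumed list, for illustration).
RESERVED_RUN_ID_PREFIXES = re.compile(r"^(scheduled|backfill|dataset_triggered)__")


def validate_manual_run_id(run_id: str) -> None:
    # Reject run_ids that look like ones Airflow would generate for non-manual runs.
    if RESERVED_RUN_ID_PREFIXES.match(run_id):
        raise ValueError(f"run_id {run_id!r} is reserved for Airflow-generated runs")
```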
### How to reproduce
1. Find an unpaused DAG.
2. Trigger DAG w/ config, set the run id to something like scheduled__2022-11-21T12:00:00+00:00 (adjust the time to be in the future where there is no run yet).
3. Let the manual DAG run finish.
4. Wait for the scheduler to try to schedule another DAG run with the same run id.
5. :boom:
6. Attempt to restart the scheduler.
7. :boom:
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-postgres==5.3.1
### Deployment
Docker-Compose
### Deployment details
I'm using a Postgres docker container as a metadata database that is linked via docker networking to the scheduler and the rest of the components. Scheduler, workers and webserver are all running in separate containers (using CeleryExecutor backed by a Redis container), though I do not think it is relevant in this case.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27818 | https://github.com/apache/airflow/pull/28397 | 8fb7be2fb5c64cc2f31a05034087923328b1137a | 7ccbe4e7eaa529641052779a89e34d54c5a20f72 | "2022-11-21T12:38:19Z" | python | "2022-12-22T01:54:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,799 | ["airflow/cli/commands/task_command.py", "airflow/utils/cli.py"] | First task in Quick Start fails | ### Apache Airflow version
2.4.3
### What happened
When I ran
`airflow tasks run example_bash_operator runme_0 2015-01-01`
I got the error
```
(venv) myusername@MacBook-Air airflow % airflow tasks run example_bash_operator runme_0 2015-01-01
[2022-11-14 23:49:19,228] {dagbag.py:537} INFO - Filling up the DagBag from /Users/myusername/airflow/dags
[2022-11-14 23:49:19,228] {cli.py:225} WARNING - Dag '\x1b[01mexample_bash_operator\x1b[22m' not found in path /Users/myusername/airflow/dags; trying path /Users/myusername/airflow/dags
[2022-11-14 23:49:19,228] {dagbag.py:537} INFO - Filling up the DagBag from /Users/myusername/airflow/dags
Traceback (most recent call last):
File "/Users/myusername/airflow/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/utils/cli.py", line 103, in wrapper
return f(*args, **kwargs)
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/cli/commands/task_command.py", line 366, in task_run
dag = get_dag(args.subdir, args.dag_id, include_examples=False)
File "/Users/myusername/airflow/venv/lib/python3.10/site-packages/airflow/utils/cli.py", line 228, in get_dag
raise AirflowException(
airflow.exceptions.AirflowException: Dag 'example_bash_operator' could not be found; either it does not exist or it failed to parse.
```
### What you think should happen instead
Successful completion of the task
### How to reproduce
1. Create a Python venv based on Python 3.10
2. Follow the [Quick Start instructions](https://airflow.apache.org/docs/apache-airflow/stable/start.html) through `airflow tasks run example_bash_operator runme_0 2015-01-01`
3. The error should appear
### Operating System
MacOS 12.5.1
### Versions of Apache Airflow Providers
Not applicable
### Deployment
Other
### Deployment details
This is just on my local machine
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27799 | https://github.com/apache/airflow/pull/27813 | d8dbdccef7cc14af7bacbfd4ebc48d8aabfaf7f0 | b9729d9e469f7822212e0d6d76e10d95411e739a | "2022-11-20T02:41:51Z" | python | "2022-11-21T09:29:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,715 | [".pre-commit-config.yaml", "STATIC_CODE_CHECKS.rst", "dev/breeze/src/airflow_breeze/pre_commit_ids.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_static-checks.svg"] | Add pre-commit rule to validate using `urlsplit` rather than `urlparse` | ### Body
Originally suggested in https://github.com/apache/airflow/pull/27389#issuecomment-1297252026
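A rough sketch of the check script such a pre-commit hook could run (hook wiring, naming, and message text are not decided here):
```python
import re
import sys

URLPARSE_CALL = re.compile(r"\burlparse\(")


def main(files: list[str]) -> int:
    # Flag any use of urlparse( in the given files and fail the hook if found.
    failed = False
    for path in files:
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                if URLPARSE_CALL.search(line):
                    print(f"{path}:{lineno}: use urlsplit() instead of urlparse()")
                    failed = True
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```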
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/27715 | https://github.com/apache/airflow/pull/27841 | cd01650192b74573b49a20803e4437e611a4cf33 | a99254ffd36f9de06feda6fe45773495632e3255 | "2022-11-16T14:49:46Z" | python | "2023-02-20T01:06:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,714 | ["airflow/www/static/js/trigger.js", "airflow/www/templates/airflow/trigger.html", "airflow/www/utils.py", "airflow/www/views.py"] | Re-use recent DagRun JSON-configurations | ### Description
Allow users to re-use recent DagRun configurations upon running a DAG.
This can be achieved by adding a dropdown that contains some information about recent configurations. When a user selects an item, the relevant JSON configuration can be pasted into the "Configuration JSON" textbox.
<img width="692" alt="Screen Shot 2022-11-16 at 16 22 30" src="https://user-images.githubusercontent.com/39705397/202209536-c709ec75-c768-48ab-97d4-82b02af60569.png">
<img width="627" alt="Screen Shot 2022-11-16 at 16 22 38" src="https://user-images.githubusercontent.com/39705397/202209553-08828521-dba2-4e83-8e2a-6dec850086de.png">
<img width="612" alt="Screen Shot 2022-11-16 at 16 38 40" src="https://user-images.githubusercontent.com/39705397/202209755-0946521a-e1a5-44cb-ae74-d43ca3735f31.png">
### Use case/motivation
Commonly, DAGs are triggered using repetitive configurations. Sometimes the same configuration is used for triggering a DAG, and sometimes, the configuration differs by just a few parameters.
This interaction forces users to store the templates they use somewhere on their machine, or to start searching for the configuration they need in `dagrun/list/`, which takes extra time.
It would be handy to offer users an option to select one of the recent configurations when running a DAG.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27714 | https://github.com/apache/airflow/pull/27805 | 7f0332de2d1e57cde2e031f4bb7b4e6844c4b7c1 | e2455d870056391eed13e32e2d0ed571cc7089b4 | "2022-11-16T14:39:23Z" | python | "2022-12-01T22:03:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,698 | ["airflow/kubernetes/pod_template_file_examples/git_sync_template.yaml", "chart/values.schema.json", "chart/values.yaml", "newsfragments/27698.significant.rst"] | Update git-sync with newer version | ### Official Helm Chart version
1.7.0 (latest released)
### What happened
The current git-sync image that is used is coming up on one year old. It is also using the deprecated `--wait` arg.
### What you think should happen instead
In order to stay current, we should update git-sync from 3.4.0 to 3.6.1.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27698 | https://github.com/apache/airflow/pull/27848 | af9143eacdff62738f6064ae7556dd8f4ca8d96d | 98221da0d96b102b009d422870faf7c5d3d931f4 | "2022-11-15T23:01:42Z" | python | "2023-01-21T18:00:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,695 | ["airflow/providers/apache/hive/hooks/hive.py", "tests/providers/apache/hive/hooks/test_hive.py"] | Improve filtering for invalid schemas in Hive hook | ### Description
#27647 has introduced filtering for invalid schemas in Hive hook based on the characters `;` and `!`. I'm wondering if a more generic filtering could be introduced, e.g. one that adheres to the regex `[^a-z0-9_]`, since Hive schemas (and table names) can only contain alphanumeric characters and the character `_`.
Note: since the Hive metastore [stores schemas and tables in lowercase](https://stackoverflow.com/questions/57181316/how-to-keep-column-names-in-camel-case-in-hive/57183048#57183048), checking against `[^a-z0-9_]` is probably better than `[^a-zA-Z0-9_]`.
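For illustration, the check could be as small as the following (the function name and error type are placeholders, not the provider's actual API):
```python
import re

# Anything outside a-z, 0-9 and underscore is invalid for Hive schema/table names.
INVALID_HIVE_NAME = re.compile(r"[^a-z0-9_]")


def validate_hive_identifier(name: str) -> str:
    """Reject Hive schema/table names containing anything other than a-z, 0-9 and underscore."""
    if INVALID_HIVE_NAME.search(name):
        raise ValueError(f"Invalid Hive identifier: {name!r}")
    return name
```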
### Use case/motivation
Ensure that Hive schemas used in `apache-airflow-providers-apache-hive` hooks contain no invalid characters.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27695 | https://github.com/apache/airflow/pull/27808 | 017ed9ac662d50b6e2767f297f36cb01bf79d825 | 2d45f9d6c30aabebce3449eae9f152ba6d2306e2 | "2022-11-15T17:04:45Z" | python | "2022-11-27T13:31:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,645 | ["airflow/www/views.py"] | Calendar view does not load when using CronTriggerTimeTable | ### Apache Airflow version
2.4.2
### What happened
Create a DAG and set the schedule parameter using a CronTriggerTimeTable instance. Enable the DAG so that there is DAG run data. Try to access the Calendar View for the DAG. An ERR_EMPTY_RESPONSE error is displayed instead of the page.
The Calendar View is accessible for other DAGs that have schedule_interval set to a cron string instead.
### What you think should happen instead
The Calendar View should have been displayed.
### How to reproduce
Create a DAG and set the schedule parameter to a CronTriggerTimeTable instance. Enable the DAG and allow some DAG runs to occur. Try to access the Calendar View for the DAG.
### Operating System
Red Hat Enterprise Linux 8.6
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
Airflow 2.4.2 installed via pip with Python 3.9 into a venv using constraints.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27645 | https://github.com/apache/airflow/pull/28411 | 4b3eb77e65748b1a6a31116b0dd55f8295fe8a20 | 467a5e3ab287013db2a5381ef4a642e912f8b45b | "2022-11-13T19:53:24Z" | python | "2022-12-28T05:52:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,622 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | AirflowException Crashing the Scheduler During the scheduling loop (_verify_integrity_if_dag_changed) | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
### Deployment
* Airflow Version: 2.4.1
* Infrastructure: AWS ECS
* Number of DAG: 162
```
Version: [v2.4.1](https://pypi.python.org/pypi/apache-airflow/2.4.1)
Git Version: .release:2.4.1+7b979def75923ba28dd64e31e613043d29f34fce
```
### The issue
We have seen this issue when the Scheduler is trying to schedule **too many DAGs (140+)** around the same time.
```
[2022-11-11T00:15:00.311+0000] {{dagbag.py:196}} WARNING - Serialized DAG mongodb-assistedbreakdown-jobs-processes no longer exists
[2022-11-11T00:15:00.312+0000] {{scheduler_job.py:763}} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 746, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 866, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 948, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1292, in _schedule_dag_run
self._verify_integrity_if_dag_changed(dag_run=dag_run, session=session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1321, in _verify_integrity_if_dag_changed
dag_run.verify_integrity(session=session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 874, in verify_integrity
dag = self.get_dag()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 484, in get_dag
raise AirflowException(f"The DAG (.dag) for {self} needs to be set")
airflow.exceptions.AirflowException: The DAG (.dag) for <DagRun mongodb-assistedbreakdown-jobs-processes @ 2022-11-10 00:10:00+00:00: scheduled__2022-11-10T00:10:00+00:00, state:running, queued_at: 2022-11-11 00:10:09.363852+00:00. externally triggered: False> needs to be set
```
Main Cause
```
raise AirflowException(f"The DAG (.dag) for {self} needs to be set")
```
[We believe this is happening here, airflow github](https://github.com/apache/airflow/blob/7b979def75923ba28dd64e31e613043d29f34fce/airflow/jobs/scheduler_job.py#L1318)
We saw a large number of connections hitting our Airflow database, but CPU was around 60%. Is there any workaround or configuration that can keep the scheduler from crashing when this happens?
### What you think should happen instead
The scheduler should stay up when this happens, or at least reschedule the DAGs that got stuck once it comes back.
### How to reproduce
_No response_
### Operating System
Amazon Linux 2, Fargate deployment using the airflow Image
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
AWS ECS Fargate
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27622 | https://github.com/apache/airflow/pull/27720 | a5d5bd0232b98c6b39e587dd144086f4b7d8664d | 15e842da56d9b3a1c2f47f9dec7682a4230dbc41 | "2022-11-11T15:58:20Z" | python | "2022-11-27T10:51:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,592 | ["airflow/providers/amazon/aws/hooks/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py"] | AWS GlueJobOperator is not updating job config if job exists | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
### Apache Airflow version
2.2.5
### Operating System
Linux Ubuntu
### Deployment
Virtualenv installation
### Deployment details
Airflow deployed on ec2 instance
### What happened
`GlueJobOperator` from the airflow-amazon-provider does not update the job configuration (like its arguments or number of workers, for example) if the job already exists, even if the configuration has changed, for example:
```python
def get_or_create_glue_job(self) -> str:
"""
Creates(or just returns) and returns the Job name
:return:Name of the Job
"""
glue_client = self.get_conn()
try:
get_job_response = glue_client.get_job(JobName=self.job_name)
self.log.info("Job Already exist. Returning Name of the job")
return get_job_response['Job']['Name']
except glue_client.exceptions.EntityNotFoundException:
self.log.info("Job doesn't exist. Now creating and running AWS Glue Job")
...
```
Is there a particular reason for not doing it? Or was it just not done during the implementation of the operator?
### What you think should happen instead
_No response_
### How to reproduce
Create a `GlueJobOperator` with a simple configuration:
```python
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
submit_glue_job = GlueJobOperator(
task_id='submit_glue_job',
    job_name='test_glue_job',
job_desc='test glue job',
script_location='s3://bucket/path/to/the/script/file',
script_args={},
s3_bucket='bucket',
concurrent_run_limit=1,
retry_limit=0,
num_of_dpus=5,
wait_for_completion=False
)
```
Then update one of the initial configuration values, e.g. `num_of_dpus=10`, and validate that the operator does not update the Glue job configuration on AWS when it is run again.
### Anything else
There is `GlueCrawlerOperator`, which is similar to `GlueJobOperator` and does update the configuration:
```python
def execute(self, context: Context):
"""
Executes AWS Glue Crawler from Airflow
:return: the name of the current glue crawler.
"""
crawler_name = self.config['Name']
if self.hook.has_crawler(crawler_name):
self.hook.update_crawler(**self.config)
else:
self.hook.create_crawler(**self.config)
...
```
This behavior could be replicated in the GlueJobOperator if we agree to do it.
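A rough sketch of what the create-or-update logic could look like on the Glue job side (standalone boto3 code for illustration only; `config` is assumed to be the job definition without the `Name` key):
```python
import boto3


def create_or_update_glue_job(job_name: str, config: dict) -> str:
    glue_client = boto3.client("glue")
    try:
        glue_client.get_job(JobName=job_name)
        # Job exists: push the current config so changes (e.g. NumberOfWorkers) take effect.
        glue_client.update_job(JobName=job_name, JobUpdate=config)
    except glue_client.exceptions.EntityNotFoundException:
        # Job doesn't exist yet: create it.
        glue_client.create_job(Name=job_name, **config)
    return job_name
```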
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27592 | https://github.com/apache/airflow/pull/27893 | 4fdfef909e3b9a22461c95e4ee123a84c47186fd | b609ab9001102b67a047b3078dc0b67fbafcc1e1 | "2022-11-10T16:00:05Z" | python | "2022-12-06T14:29:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,556 | ["airflow/providers/amazon/aws/hooks/glue_crawler.py", "airflow/providers/amazon/aws/operators/glue_crawler.py", "tests/providers/amazon/aws/hooks/test_glue_crawler.py", "tests/providers/amazon/aws/operators/test_glue_crawler.py"] | Using GlueCrawlerOperator fails when using tags | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We are using tags on resources in AWS. Setting tags when using `GlueCrawlerOperator` works the first time, when Airflow creates the crawler. However, on subsequent runs it fails because the boto3 `get_crawler()` call does not return the Tags. Hence we get the error below.
```
[2022-11-08, 14:48:49 ] {taskinstance.py:1774} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/glue_crawler.py", line 80, in execute
self.hook.update_crawler(**self.config)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/glue_crawler.py", line 86, in update_crawler
key: value for key, value in crawler_kwargs.items() if current_crawler[key] != crawler_kwargs[key]
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/glue_crawler.py", line 86, in <dictcomp>
key: value for key, value in crawler_kwargs.items() if current_crawler[key] != crawler_kwargs[key]
KeyError: 'Tags'
```
### What you think should happen instead
Ignore tags when checking if the crawler should be updated.
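For illustration, the comparison shown in the traceback above could simply skip keys that `get_crawler()` does not return (sketch only, names simplified):
```python
def changed_crawler_config(current_crawler: dict, crawler_kwargs: dict) -> dict:
    """Return only the changed keys, ignoring keys (like Tags) that get_crawler() does not return."""
    return {
        key: value
        for key, value in crawler_kwargs.items()
        if key in current_crawler and current_crawler[key] != value
    }
```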
### How to reproduce
Use `GlueCrawlerOperator` with Tags like below and trigger the task multiple times. It will fail the second time around.
```
GlueCrawlerOperator(
dag=dag,
task_id="the_task_id",
config={
"Name": "name_of_the_crwaler",
"Role": "some-role",
"DatabaseName": "some_database",
"Targets": {"S3Targets": [{"Path": "s3://..."}]},
"TablePrefix": "a_table_prefix",
"RecrawlPolicy": {
"RecrawlBehavior": "CRAWL_EVERYTHING"
},
"SchemaChangePolicy": {
"UpdateBehavior": "UPDATE_IN_DATABASE",
"DeleteBehavior": "DELETE_FROM_DATABASE"
},
"Tags": {
"TheTag": "value-of-my-tag"
}
    }
)
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
```
apache-airflow-providers-cncf-kubernetes==3.0.0
apache-airflow-providers-google==6.7.0
apache-airflow-providers-amazon==3.2.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-http==2.1.2
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-ssh==2.4.3
apache-airflow-providers-jdbc==2.1.3
```
### Deployment
Other 3rd-party Helm chart
### Deployment details
Airflow v2.2.5
Self-hosted Airflow in Kubernetes.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27556 | https://github.com/apache/airflow/pull/28005 | b3d7e17e72c05fd149a5514e3796d46a241ac4f7 | 3ee5c404b7a0284fc1f3474519b3833975aaa644 | "2022-11-08T14:16:14Z" | python | "2022-12-06T11:37:33Z" |