status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 32,106 | ["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_bigquery_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator and BigQueryToGCSOperator do not respect their project_id arguments | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We experienced this issue on Airflow 2.6.1, but the problem exists in the Google provider rather than in core Airflow, and was introduced with [these changes](https://github.com/apache/airflow/pull/30053/files). We are using version 10.0.0 of the provider.
The [issue](https://github.com/apache/airflow/issues/29958) that resulted in these changes seems to be based on an incorrect understanding of how projects interact in BigQuery -- namely, it misses that the project used for storage and the project used for compute can be separate. The user reporting that issue appears to mistake an error about compute (`User does not have bigquery.jobs.create permission in project {project-A}.`) for an error about storage, and this incorrect diagnosis resulted in a fix that inappropriately defaults the compute project to the project named in the destination/source table (depending on the operator).
The change attempts to allow users to override this (imo incorrect) default, but unfortunately this does not currently work because `self.project_id` gets overridden with the named table's project [here](https://github.com/apache/airflow/pull/30053/files#diff-875bf3d1bfbba7067dc754732c0e416b8ebe7a5b722bc9ac428b98934f04a16fR512) and [here](https://github.com/apache/airflow/pull/30053/files#diff-875bf3d1bfbba7067dc754732c0e416b8ebe7a5b722bc9ac428b98934f04a16fR587).
### What you think should happen instead
I think that the easiest fix would be to revert the change, and return to defaulting the compute project to the one specified in the default google cloud connection. However, since I can understand the desire to override the `project_id`, I think handling it correctly, and clearly distinguishing between the concepts of storage and compute w/r/t projects would also work.
### How to reproduce
Attempt to use any other project for running the job, besides the one named in the source/destination table
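For illustration, a minimal sketch of the kind of setup that hits this (the bucket, table, and project names below are placeholders; the intent is that `project_id` names the project that should run the load job, while the table name carries the storage project):
```
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load = GCSToBigQueryOperator(
    task_id="load_gcs_to_bq",
    bucket="my-bucket",  # placeholder
    source_objects=["data/*.csv"],  # placeholder
    destination_project_dataset_table="storage-project.my_dataset.my_table",
    project_id="compute-project",  # intended compute project, but it gets overridden with
                                   # "storage-project" parsed from the table name
)
```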
### Operating System
debian
### Versions of Apache Airflow Providers
apache-airflow-providers-google==10.0.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32106 | https://github.com/apache/airflow/pull/32232 | b3db4de4985eccb859a30a07a2350499370c6a9a | 2d690de110825ba09b9445967b47c44edd8f151c | "2023-06-23T19:08:10Z" | python | "2023-07-06T23:12:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,091 | ["airflow/jobs/triggerer_job_runner.py", "tests/jobs/test_triggerer_job.py"] | Triggerer intermittent failure when running many triggerers | ### Apache Airflow version
2.6.2
### What happened
We are running a dag with many deferrable tasks using a custom trigger that waits for an Azure Batch task to complete. When many tasks have been deferred, we get an intermittent error in the Triggerer. The logged error message is the following:
```
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/triggerer_job_runner.py", line 457, in run
asyncio.run(self.arun())
File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/triggerer_job_runner.py", line 470, in arun
await self.create_triggers()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/triggerer_job_runner.py", line 492, in create_triggers
dag_id = task_instance.dag_id
AttributeError: 'NoneType' object has no attribute 'dag_id'
```
After this error occurs, the triggerer still reports as healthy, but no events are triggered. Restarting the triggerer fixes the problem.
### What you think should happen instead
The specific error in the trigger should be addressed to prevent the triggerer async thread from crashing.
The triggerer should not perform heartbeat updates when the async triggerer thread has crashed.
### How to reproduce
This occurs intermittently and seems to be the result of running more than one triggerer. Running many deferred tasks eventually ends up with this error occurring.
### Operating System
linux (standard airflow slim images extended with custom code running on kubernetes)
### Versions of Apache Airflow Providers
postgres,celery,redis,ssh,statsd,papermill,pandas,github_enterprise
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Azure Kubernetes and helm chart 1.9.0.
2 replicas of both triggerer and scheduler.
### Anything else
It seems that as triggers fire, the link between the trigger row and the associated task_instance for the trigger is removed before the trigger row is removed. This leaves a small amount of time where the trigger exists without an associated task_instance. The database updates are performed in a synchronous loop inside the triggerer, so with one triggerer, this is not a problem. However, it can be a problem with more than one triggerer.
Also, once the triggerer async loop (that handles the trigger code) fails, the triggers no longer fire. However, the heartbeat is handled by the synchronous loop so the job still reports as healthy.
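For illustration only -- not necessarily the approach taken in the attached PR -- a guard of roughly this shape around the lookup that crashes in the traceback above would let the async loop tolerate the race:
```
from typing import Optional

def resolve_dag_id(trigger_row) -> Optional[str]:
    """Return the dag_id for a trigger row, or None if its task_instance link is already gone."""
    task_instance = trigger_row.task_instance
    if task_instance is None:
        # Another triggerer (or cleanup) may have detached the task instance in the meantime.
        return None
    return task_instance.dag_id
```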
I have included an associated PR to resolve these issues.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32091 | https://github.com/apache/airflow/pull/32092 | 14785bc84c984b8747fa062b84e800d22ddc0477 | e585b588fc49b1b1c73a8952e9b257d7a9e13314 | "2023-06-23T11:08:50Z" | python | "2023-06-27T21:46:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,069 | ["airflow/providers/google/cloud/hooks/dataproc.py", "tests/providers/google/cloud/hooks/test_dataproc.py"] | AioRpcError in DataprocCreateBatchOperator | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow version: 2.3.4 (Composer 2.1.12)
I've been using the DataprocCreateBatchOperator with the deferrable=True option. It worked well for the past few months, but an error started appearing on June 21, 2023, at 16:51 UTC. The error message is as follows:
```
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Request contains an invalid argument."
debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.69.95:443 {grpc_message:"Request contains an invalid argument.", grpc_status:3, created_time:"2023-06-21T16:51:22.992951359+00:00"}"
```
### What you think should happen instead
The name argument in the [hook code](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/hooks/dataproc.py#L1746) follows the format "projects/PROJECT_ID/regions/DATAPROC_REGION/batches/BATCH_ID". However, according to the [Google Cloud DataProc API Reference](https://cloud.google.com/dataproc-serverless/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.GetBatchRequest), it should be in the format "projects/PROJECT_ID/locations/DATAPROC_REGION/batches/BATCH_ID".
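To illustrate the difference (PROJECT_ID, REGION and BATCH_ID are placeholders):
```
PROJECT_ID, REGION, BATCH_ID = "my-project", "us-central1", "batch_test"  # placeholders

built_by_hook = f"projects/{PROJECT_ID}/regions/{REGION}/batches/{BATCH_ID}"      # rejected with INVALID_ARGUMENT
expected_by_api = f"projects/{PROJECT_ID}/locations/{REGION}/batches/{BATCH_ID}"  # per the GetBatchRequest reference
```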
### How to reproduce
just run a Dataproc operator like this (PROJECT_ID, REGION and BATCH_CONFIG are placeholders for our actual values):
```
create_batch = DataprocCreateBatchOperator(
    task_id="create_batch",
    project_id=PROJECT_ID,   # placeholders
    region=REGION,
    batch=BATCH_CONFIG,
    batch_id="batch_test",
    deferrable=True,
)
```
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Google Cloud Composer
### Deployment details
Composer 2.1.12
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32069 | https://github.com/apache/airflow/pull/32070 | 479719297ff4efa8373dc7b6909bfc59a5444c3a | 59d64d8f2ed3c0e7b93d3c07041d47883cabb908 | "2023-06-22T06:42:33Z" | python | "2023-06-22T21:20:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,045 | ["airflow/executors/celery_executor_utils.py", "tests/integration/executors/test_celery_executor.py"] | Celery Executor cannot connect to the database to get information, resulting in a scheduler exit abnormally | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We use the Celery Executor with RabbitMQ as the broker and PostgreSQL as the result backend.
Airflow Version: 2.2.3
Celery Version: 5.2.3
apache-airflow-providers-celery==2.1.0
Below is the error message:
```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/airflow2.2.3/airflow/airflow/jobs/scheduler_job.py", line 672, in _execute
    self._run_scheduler_loop()
  File "/app/airflow2.2.3/airflow/airflow/jobs/scheduler_job.py", line 754, in _run_scheduler_loop
    self.executor.heartbeat()
  File "/app/airflow2.2.3/airflow/airflow/executors/base_executor.py", line 168, in heartbeat
    self.sync()
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 330, in sync
    self.update_all_task_states()
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 442, in update_all_task_states
    state_and_info_by_celery_task_id = self.bulk_state_fetcher.get_many(self.tasks.values())
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 598, in get_many
    result = self._get_many_from_db_backend(async_results)
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 618, in _get_many_from_db_backend
    tasks = session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3373, in all
    return list(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
    return self._execute_and_instances(context)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3556, in _execute_and_instances
    conn = self._get_bind_args(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3571, in _get_bind_args
    return fn(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3550, in _connection_from_session
    conn = self.session.connection(**kw)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1142, in connection
    return self._connection_for_bind(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1150, in _connection_for_bind
    return self.transaction._connection_for_bind(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 433, in _connection_for_bind
    conn = bind._contextual_connect()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2302, in _contextual_connect
    self._wrap_pool_connect(self.pool.connect, None),
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2339, in _wrap_pool_connect
    Connection._handle_dbapi_exception_noconnection(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1583, in _handle_dbapi_exception_noconnection
    util.raise_(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
    return fn()

2023-06-05 16:39:05.069 ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
    return fn()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 364, in connect
    return _ConnectionFairy._checkout(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 778, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 495, in checkout
    rec = pool._do_get()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 241, in _do_get
    return self._create_connection()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 309, in _create_connection
    return _ConnectionRecord(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 440, in __init__
    self.__connect(first_connect_check=True)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 661, in __connect
    pool.logger.debug("Error on connect(): %s", e)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    compat.raise_(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 656, in __connect
    connection = pool._invoke_creator(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
    return dialect.connect(*cargs, **cparams)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 508, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/psycopg2/__init__.py", line 126, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection timed out
    Is the server running on host "xxxxxxxxxx" and accepting TCP/IP connections on port 5432?
```
### What you think should happen instead
I think it may be caused by transient network jitter; adding retries should solve it.
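For illustration only (not the actual patch), the lookup that fails in the traceback above could be wrapped in a retry on transient `OperationalError`s, e.g. with `tenacity`, which Airflow already depends on:
```
from sqlalchemy.exc import OperationalError
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

@retry(
    retry=retry_if_exception_type(OperationalError),
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, max=10),
    reraise=True,
)
def fetch_celery_task_states(session, task_cls, task_ids):
    # Same query that fails above, retried on transient connection errors.
    return session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all()
```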
### How to reproduce
The issue can be reproduced when the CeleryExecutor fails to create a PostgreSQL connection while retrieving metadata information from the result backend.
### Operating System
NAME="RedFlag Asianux" VERSION="7 (Lotus)"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32045 | https://github.com/apache/airflow/pull/31998 | de585f521b5898ba7687072a7717fd3b67fa8c5c | c3df47efc2911706897bf577af8a475178de4b1b | "2023-06-21T08:09:17Z" | python | "2023-06-26T17:01:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,020 | ["airflow/cli/cli_config.py", "airflow/cli/commands/task_command.py", "airflow/utils/cli.py", "tests/cli/commands/test_task_command.py"] | Airflow tasks run -m cli command giving 504 response | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hello,
We are facing a "504 Gateway Time-out" error when running the "tasks run -m" CLI command. We are trying to build a complex DAG and run its tasks from the CLI, and when we run "tasks run -m" we receive the gateway timeout error.
We also observed a high resource spike on the web server when executing this CLI command. Looking into it further, when we run the "tasks run -m" Airflow CLI command, it parses the full list of DAGs and then parses through the task list. Because of this, webserver resource usage spikes and we receive the gateway timeout error.
### What you think should happen instead
We expect that when the "tasks run" CLI command is executed, it should only parse the DAG and task named in the command, and not parse the full DAG list followed by the task list.
### How to reproduce
Please follow the steps below to reproduce this issue.
1. We have 900 DAGs in the Airflow environment.
2. We have created a web login token to access the web server.
3. After that, we tried to run "tasks run" using a Python script (a sketch of this is shown below).
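Roughly, the script does something like the following (a sketch of the standard MWAA CLI-token flow; the environment name, DAG, task and date are placeholders, not our real values):
```
import base64

import boto3
import requests

mwaa = boto3.client("mwaa")
token = mwaa.create_cli_token(Name="my-mwaa-environment")  # placeholder environment name

response = requests.post(
    f"https://{token['WebServerHostname']}/aws_mwaa/cli",
    headers={"Authorization": f"Bearer {token['CliToken']}", "Content-Type": "text/plain"},
    data="tasks run -m my_dag my_task 2023-06-20T00:00:00+00:00",  # placeholder command
    timeout=120,
)
print(base64.b64decode(response.json()["stdout"]).decode())
```
This is the point where we see the 504 response and the webserver resource spike.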
### Operating System
Amazon linux
### Versions of Apache Airflow Providers
Airflow version 2.2.2
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32020 | https://github.com/apache/airflow/pull/32038 | d49fa999a94a2269dd6661fe5eebbb4c768c7848 | 05a67efe32af248ca191ea59815b3b202f893f46 | "2023-06-20T08:51:13Z" | python | "2023-06-23T22:31:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,007 | ["airflow/sensors/external_task.py", "tests/sensors/test_external_task_sensor.py"] | ZeroDivisionError while poking ExternalTaskSensor | ### Apache Airflow version
2.6.2
### What happened
After upgrading to version 2.6.2, we started getting a `ZeroDivisionError` the first time some _ExternalTaskSensor_ tasks were poked.
### What you think should happen instead
The sensor should exit with return code 0, as it did when cleared after the first failure:
```
[2023-06-19, 10:03:57 UTC] {external_task.py:240} INFO - Poking for task_group 'lote2db_etl' in dag 'malha_batch' on 2023-06-16T08:30:00+00:00 ...
[2023-06-19, 10:03:57 UTC] {base.py:255} INFO - Success criteria met. Exiting.
[2023-06-19, 10:03:57 UTC] {taskinstance.py:1345} INFO - Marking task as SUCCESS. dag_id=rotinas_risco, task_id=ets_malha_lote2db_etl, execution_date=20230616T080000, start_date=20230619T100357, end_date=20230619T100357
[2023-06-19, 10:03:57 UTC] {local_task_job_runner.py:225} INFO - Task exited with return code 0
```
### How to reproduce
We have a DAG, called *malha_batch*, whose `schedule` parameter equals `"30 8 * * 1-5"`. We then have another one, called *rotinas_risco*, whose `schedule` parameter equals `"0 8 * * 1-5"`, with four _ExternalTaskSensor_ pointing to *malha_batch*. Below are their definitions:
<details><summary>Excerpt from rotinas_risco.py</summary>
``` python
ets_malha_bbg_post_processing = ExternalTaskSensor(
task_id="ets_malha_bbg_post_processing",
external_dag_id="malha_batch",
external_task_group_id="bloomberg.post_processing",
allowed_states=[State.SUCCESS, State.SKIPPED],
failed_states=[State.FAILED],
execution_delta=timedelta(minutes=-30),
poke_interval=300,
mode="reschedule",
)
ets_malha_bbg_refreshes = ExternalTaskSensor(
task_id="ets_malha_bbg_refreshes",
external_dag_id="malha_batch",
external_task_group_id="bloomberg.refreshes",
allowed_states=[State.SUCCESS, State.SKIPPED],
failed_states=[State.FAILED],
execution_delta=timedelta(minutes=-30),
poke_interval=300,
mode="reschedule",
)
ets_malha_bbg_conversion_factor_to_base = ExternalTaskSensor(
task_id="ets_malha_bbg_conversion_factor_to_base",
external_dag_id="malha_batch",
external_task_id="prices.conversion_factor_to_base",
allowed_states=[State.SUCCESS, State.SKIPPED],
failed_states=[State.FAILED],
execution_delta=timedelta(minutes=-30),
poke_interval=300,
mode="reschedule",
)
ets_malha_lote2db_etl = ExternalTaskSensor(
task_id="ets_malha_lote2db_etl",
external_dag_id="malha_batch",
external_task_group_id="lote2db_etl",
allowed_states=[State.SUCCESS, State.SKIPPED],
failed_states=[State.FAILED],
execution_delta=timedelta(minutes=-30),
poke_interval=300,
mode="reschedule",
)
```
</details>
Out of those four _ExternalTaskSensor_ tasks, just one behaved as expected, while the other three failed on the first poking attempt with the following traceback:
```
[2023-06-19, 08:00:02 UTC] {external_task.py:240} INFO - Poking for task_group 'lote2db_etl' in dag 'malha_batch' on 2023-06-16T08:30:00+00:00 ...
[2023-06-19, 08:00:02 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/sensors/base.py", line 225, in execute
raise e
File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/sensors/base.py", line 212, in execute
poke_return = self.poke(context)
File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/utils/session.py", line 76, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/sensors/external_task.py", line 260, in poke
count_failed = self.get_count(dttm_filter, session, self.failed_states)
File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/sensors/external_task.py", line 369, in get_count
count = (
ZeroDivisionError: division by zero
```
The successful _ExternalTaskSensor_ logged as follows:
```
[2023-06-19, 08:00:02 UTC] {external_task.py:232} INFO - Poking for tasks ['prices.conversion_factor_to_base'] in dag malha_batch on 2023-06-16T08:30:00+00:00 ...
[2023-06-19, 08:00:02 UTC] {taskinstance.py:1784} INFO - Rescheduling task, marking task as UP_FOR_RESCHEDULE
[2023-06-19, 08:00:02 UTC] {local_task_job_runner.py:225} INFO - Task exited with return code 0
[2023-06-19, 08:00:02 UTC] {taskinstance.py:2653} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
I was not able to reproduce the error with a smaller example, but the mere fact that, out of four similarly-defined sensors, three failed and one succeeded, to me, suggests we are facing a bug. Additionally, the problem did not arise with version 2.6.1.
### Operating System
Ubuntu 20.04.6 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==8.1.0
apache-airflow-providers-celery==3.2.0
apache-airflow-providers-common-sql==1.5.1
apache-airflow-providers-ftp==3.4.1
apache-airflow-providers-http==4.4.1
apache-airflow-providers-imap==3.2.1
apache-airflow-providers-postgres==5.5.0
apache-airflow-providers-redis==3.2.0
apache-airflow-providers-sqlite==3.4.1
```
### Deployment
Virtualenv installation
### Deployment details
Just a vanilla setup following https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html.
### Anything else
Please let me know whether additional log files from the scheduler or executor (Celery) should be provided.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32007 | https://github.com/apache/airflow/pull/32009 | c508b8e5310447b302128d8fbcc5c297a3e6e244 | 14eb1d3116ecef15be7be9a8f9d08757e74f981c | "2023-06-19T14:44:04Z" | python | "2023-06-21T09:55:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,005 | ["airflow/www/views.py"] | webserver - MAPPED tasks - rendered-templates page FAIL | ### Apache Airflow version
2.6.2
### What happened
If I want to see the common information shared by the N mapped tasks, I edit the URL manually, because there is no button for this in the current Airflow console:
```url
http://localhost:8090/task?dag_id=kubernetes_dag&task_id=task-one&execution_date=2023-06-18T00%3A00%3A00%2B00%3A00&map_index=0
```
by removing the `&map_index=0` part.
If I then click on `rendered-templates`, it fails:
```log
{app.py:1744} ERROR - Exception on /rendered-templates [GET]
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.11/site-packages/flask/app.py", line 2529, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/auth.py", line 47, in decorated
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/decorators.py", line 125, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/utils/session.py", line 76, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/views.py", line 1354, in rendered_templates
ti.refresh_from_task(raw_task)
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'refresh_from_task'
```
### What you think should happen instead
_No response_
### How to reproduce
[Screencast from 19-06-2023 15:07:02.webm](https://github.com/apache/airflow/assets/10202690/10a3546f-f2ef-4c2d-a704-7eb43573ad83)
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32005 | https://github.com/apache/airflow/pull/32011 | 2c645d59d505a99c8e7507ef05d6f3ecf430d578 | 62a534dbc7fa8ddb4c249ade85c558b64d1630dd | "2023-06-19T13:08:24Z" | python | "2023-06-25T08:06:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,002 | ["setup.cfg", "setup.py"] | log url breaks on login redirect | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
on 2.5.3:
The log URL is https://airflow.hostname.de/log?execution_date=2023-06-19T04%3A00%3A00%2B00%3A00&task_id=task_name&dag_id=dag_name&map_index=-1
This URL works when I am logged in.
If I am logged out, the login screen redirects me to https://airflow.hostname.de/log?execution_date=2023-06-19T04%3A00%3A00+00%3A00&task_id=task_name&dag_id=dag_name&map_index=-1 which shows me an empty log.
The redirect seems to convert the `%2B` back to a `+` in the timezone component of the execution_date, while leaving all other escaped characters untouched.
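A quick way to see the effect with Python's `urllib.parse` (just a demonstration of the encoding mix-up, not Airflow code): once `%2B` has been turned back into a literal `+`, a form-style decode of the query string reads it as a space, which is presumably why the page no longer finds the run and shows an empty log:
```
from urllib.parse import unquote, unquote_plus

encoded = "2023-06-19T04%3A00%3A00%2B00%3A00"
print(unquote(encoded))                           # 2023-06-19T04:00:00+00:00  (correct)
print(unquote_plus("2023-06-19T04:00:00+00:00"))  # 2023-06-19T04:00:00 00:00  (the '+' became a space)
```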
### What you think should happen instead
The log URL should work correctly after the login redirect.
### How to reproduce
Have a log URL with an execution date using a timezone with a positive UTC offset, for example:
https://airflow.hostname.de/log?execution_date=2023-06-19T04%3A00%3A00%2B00%3A00&task_id=task_name&dag_id=dag_name&map_index=-1
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32002 | https://github.com/apache/airflow/pull/32054 | e39362130b8659942672a728a233887f0b02dc8b | 92497fa727a23ef65478ef56572c7d71427c4a40 | "2023-06-19T11:14:59Z" | python | "2023-07-08T19:18:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,986 | ["airflow/providers/google/cloud/operators/vertex_ai/auto_ml.py", "tests/providers/google/cloud/operators/test_vertex_ai.py"] | Unable to find project ID causing authentication failure in AutoML task | ### Apache Airflow version
Other Airflow 2 version (please specify below) : 2.6.1
### What happened
I encountered an issue while running an AutoML task. The task failed with an authentication error due to the inability to find the project ID. Here are the details of the error:
```
[2023-06-17T18:42:48.916+0530] {taskinstance.py:1308} INFO - Starting attempt 1 of 1
[2023-06-17T18:42:48.931+0530] {taskinstance.py:1327} INFO - Executing <Task(CreateAutoMLTabularTrainingJobOperator): auto_ml_tabular_task> on 2023-06-17 13:12:33+00:00
[2023-06-17T18:42:48.964+0530] {standard_task_runner.py:57} INFO - Started process 12974 to run task
[2023-06-17T18:42:48.971+0530] {standard_task_runner.py:84} INFO - Running: ['airflow', 'tasks', 'run', 'vi_create_auto_ml_tabular_training_job_dag', 'auto_ml_tabular_task', 'manual__2023-06-17T13:12:33+00:00', '--job-id', '175', '--raw', '--subdir', 'DAGS_FOLDER/vi_create_model_train.py', '--cfg-path', '/tmp/tmprijpfzql']
[2023-06-17T18:42:48.974+0530] {standard_task_runner.py:85} INFO - Job 175: Subtask auto_ml_tabular_task
[2023-06-17T18:42:49.043+0530] {logging_mixin.py:149} INFO - Changing /mnt/d/projects/airflow/logs/dag_id=vi_create_auto_ml_tabular_training_job_dag/run_id=manual__2023-06-17T13:12:33+00:00/task_id=auto_ml_tabular_task permission to 509
[2023-06-17T18:42:49.044+0530] {task_command.py:410} INFO - Running <TaskInstance: vi_create_auto_ml_tabular_training_job_dag.auto_ml_tabular_task manual__2023-06-17T13:12:33+00:00 [running]> on host DESKTOP-EIFUHU2.localdomain
[2023-06-17T18:42:49.115+0530] {taskinstance.py:1545} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='airflow' AIRFLOW_CTX_DAG_ID='vi_create_auto_ml_tabular_training_job_dag' AIRFLOW_CTX_TASK_ID='auto_ml_tabular_task' AIRFLOW_CTX_EXECUTION_DATE='2023-06-17T13:12:33+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='manual__2023-06-17T13:12:33+00:00'
[2023-06-17T18:42:49.120+0530] {base.py:73} INFO - Using connection ID 'gcp_conn' for task execution.
[2023-06-17T18:42:52.123+0530] {_metadata.py:141} WARNING - Compute Engine Metadata server unavailable on attempt 1 of 3. Reason: timed out
[2023-06-17T18:42:55.125+0530] {_metadata.py:141} WARNING - Compute Engine Metadata server unavailable on attempt 2 of 3. Reason: timed out
[2023-06-17T18:42:58.128+0530] {_metadata.py:141} WARNING - Compute Engine Metadata server unavailable on attempt 3 of 3. Reason: timed out
[2023-06-17T18:42:58.129+0530] {_default.py:340} WARNING - Authentication failed using Compute Engine authentication due to unavailable metadata server.
[2023-06-17T18:42:58.131+0530] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/initializer.py", line 244, in project
self._set_project_as_env_var_or_google_auth_default()
File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/initializer.py", line 81, in _set_project_as_env_var_or_google_auth_default
credentials, project = google.auth.default()
File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/auth/_default.py", line 692, in default
raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)
google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/vertex_ai/auto_ml.py", line 359, in execute
dataset=datasets.TabularDataset(dataset_name=self.dataset_id),
File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/datasets/dataset.py", line 77, in __init__
super().__init__(
File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/base.py", line 925, in __init__
VertexAiResourceNoun.__init__(
File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/base.py", line 507, in __init__
self.project = project or initializer.global_config.project
File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/initializer.py", line 247, in project
raise GoogleAuthError(project_not_found_exception_str) from exc
google.auth.exceptions.GoogleAuthError: Unable to find your project. Please provide a project ID by:
- Passing a constructor argument
- Using aiplatform.init()
- Setting project using 'gcloud config set project my-project'
- Setting a GCP environment variable
[2023-06-17T18:42:58.139+0530] {taskinstance.py:1345} INFO - Marking task as FAILED. dag_id=vi_create_auto_ml_tabular_training_job_dag, task_id=auto_ml_tabular_task, execution_date=20230617T131233, start_date=20230617T131248, end_date=20230617T131258
[2023-06-17T18:42:58.152+0530] {standard_task_runner.py:104} ERROR - Failed to execute job 175 for task auto_ml_tabular_task (Unable to find your project. Please provide a project ID by:
- Passing a constructor argument
- Using aiplatform.init()
- Setting project using 'gcloud config set project my-project'
- Setting a GCP environment variable; 12974)
[2023-06-17T18:42:58.166+0530] {local_task_job_runner.py:225} INFO - Task exited with return code 1
[2023-06-17T18:42:58.183+0530] {taskinstance.py:2651} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
Expected Behavior:
The AutoML task should execute successfully, using the appropriate project ID and credentials for authentication, as given by the `gcp_conn_id` provided in the DAG.
Actual Behavior:
The task fails with an authentication error due to the inability to find the project ID and default credentials.
### How to reproduce
To reproduce the issue and execute the CreateAutoMLTabularTrainingJobOperator task in Apache Airflow, follow these steps:
Ensure that Apache Airflow is installed. If not, run the following command to install it:
```
pip install apache-airflow
```
Create an instance of the CreateAutoMLTabularTrainingJobOperator within the DAG context:
```
create_auto_ml_tabular_training_job = CreateAutoMLTabularTrainingJobOperator(
gcp_conn_id='gcp_conn',
task_id="auto_ml_tabular_task",
display_name=TABULAR_DISPLAY_NAME,
optimization_prediction_type="regression",
optimization_objective="minimize-rmse",
#column_transformations=COLUMN_TRANSFORMATIONS,
dataset_id=tabular_dataset_id, # Get this //
target_column="mean_temp",
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
model_display_name='your-model-display-name',
disable_early_stopping=False,
region=REGION,
project_id=PROJECT_ID
)
```
Start the Apache Airflow scheduler and webserver. Open a terminal or command prompt and run the following commands:
```
# Start the scheduler
airflow scheduler
# Start the webserver
airflow webserver
```
Access the Apache Airflow web UI by opening a web browser and navigating to http://localhost:8080. Ensure that the scheduler and webserver are running without any errors.
Navigate to the DAGs page in the Airflow UI and locate the vi_create_auto_ml_tabular_training_job_dag DAG. Trigger the DAG manually, either by clicking the "Trigger DAG" button or using the Airflow CLI command.
Monitor the DAG execution status and check if the auto_ml_tabular_task completes successfully or encounters any errors.
### Operating System
DISTRIB_ID=Ubuntu DISTRIB_RELEASE=20.04 DISTRIB_CODENAME=focal DISTRIB_DESCRIPTION="Ubuntu 20.04 LTS"
### Versions of Apache Airflow Providers
$ pip freeze
aiofiles==23.1.0
aiohttp==3.8.4
aiosignal==1.3.1
alembic==1.11.1
anyio==3.7.0
apache-airflow==2.6.1
apache-airflow-providers-common-sql==1.5.1
apache-airflow-providers-ftp==3.4.1
apache-airflow-providers-google==10.1.1
apache-airflow-providers-http==4.4.1
apache-airflow-providers-imap==3.2.1
apache-airflow-providers-sqlite==3.4.1
apispec==5.2.2
argcomplete==3.1.1
asgiref==3.7.2
async-timeout==4.0.2
attrs==23.1.0
Babel==2.12.1
backoff==2.2.1
blinker==1.6.2
cachelib==0.9.0
cachetools==5.3.1
cattrs==23.1.2
certifi==2023.5.7
cffi==1.15.1
chardet==5.1.0
charset-normalizer==3.1.0
click==8.1.3
clickclick==20.10.2
colorama==0.4.6
colorlog==4.8.0
ConfigUpdater==3.1.1
connexion==2.14.2
cron-descriptor==1.4.0
croniter==1.4.1
cryptography==41.0.1
db-dtypes==1.1.1
Deprecated==1.2.14
dill==0.3.6
dnspython==2.3.0
docutils==0.20.1
email-validator==1.3.1
exceptiongroup==1.1.1
Flask==2.2.5
Flask-AppBuilder==4.3.0
Flask-Babel==2.0.0
Flask-Caching==2.0.2
Flask-JWT-Extended==4.5.2
Flask-Limiter==3.3.1
Flask-Login==0.6.2
flask-session==0.5.0
Flask-SQLAlchemy==2.5.1
Flask-WTF==1.1.1
frozenlist==1.3.3
future==0.18.3
gcloud-aio-auth==4.2.1
gcloud-aio-bigquery==6.3.0
gcloud-aio-storage==8.2.0
google-ads==21.2.0
google-api-core==2.11.1
google-api-python-client==2.89.0
google-auth==2.20.0
google-auth-httplib2==0.1.0
google-auth-oauthlib==1.0.0
google-cloud-aiplatform==1.26.0
google-cloud-appengine-logging==1.3.0
google-cloud-audit-log==0.2.5
google-cloud-automl==2.11.1
google-cloud-bigquery==3.11.1
google-cloud-bigquery-datatransfer==3.11.1
google-cloud-bigquery-storage==2.20.0
google-cloud-bigtable==2.19.0
google-cloud-build==3.16.0
google-cloud-compute==1.11.0
google-cloud-container==2.24.0
google-cloud-core==2.3.2
google-cloud-datacatalog==3.13.0
google-cloud-dataflow-client==0.8.3
google-cloud-dataform==0.5.1
google-cloud-dataplex==1.5.0
google-cloud-dataproc==5.4.1
google-cloud-dataproc-metastore==1.11.0
google-cloud-dlp==3.12.1
google-cloud-kms==2.17.0
google-cloud-language==2.10.0
google-cloud-logging==3.5.0
google-cloud-memcache==1.7.1
google-cloud-monitoring==2.15.0
google-cloud-orchestration-airflow==1.9.0
google-cloud-os-login==2.9.1
google-cloud-pubsub==2.17.1
google-cloud-redis==2.13.0
google-cloud-resource-manager==1.10.1
google-cloud-secret-manager==2.16.1
google-cloud-spanner==3.36.0
google-cloud-speech==2.20.0
google-cloud-storage==2.9.0
google-cloud-tasks==2.13.1
google-cloud-texttospeech==2.14.1
google-cloud-translate==3.11.1
google-cloud-videointelligence==2.11.2
google-cloud-vision==3.4.2
google-cloud-workflows==1.10.1
google-crc32c==1.5.0
google-resumable-media==2.5.0
googleapis-common-protos==1.59.1
graphviz==0.20.1
greenlet==2.0.2
grpc-google-iam-v1==0.12.6
grpcio==1.54.2
grpcio-gcp==0.2.2
grpcio-status==1.54.2
gunicorn==20.1.0
h11==0.14.0
httpcore==0.17.2
httplib2==0.22.0
httpx==0.24.1
idna==3.4
importlib-metadata==4.13.0
importlib-resources==5.12.0
inflection==0.5.1
itsdangerous==2.1.2
Jinja2==3.1.2
json-merge-patch==0.2
jsonschema==4.17.3
lazy-object-proxy==1.9.0
limits==3.5.0
linkify-it-py==2.0.2
lockfile==0.12.2
looker-sdk==23.10.0
Mako==1.2.4
Markdown==3.4.3
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.19.0
marshmallow-enum==1.5.1
marshmallow-oneofschema==3.0.1
marshmallow-sqlalchemy==0.26.1
mdit-py-plugins==0.4.0
mdurl==0.1.2
multidict==6.0.4
numpy==1.24.3
oauthlib==3.2.2
ordered-set==4.1.0
packaging==23.1
pandas==2.0.2
pandas-gbq==0.19.2
pathspec==0.9.0
pendulum==2.1.2
pkgutil-resolve-name==1.3.10
pluggy==1.0.0
prison==0.2.1
proto-plus==1.22.2
protobuf==4.23.3
psutil==5.9.5
pyarrow==12.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.21
pydantic==1.10.9
pydata-google-auth==1.8.0
Pygments==2.15.1
PyJWT==2.7.0
pyOpenSSL==23.2.0
pyparsing==3.0.9
pyrsistent==0.19.3
python-daemon==3.0.1
python-dateutil==2.8.2
python-nvd3==0.15.0
python-slugify==8.0.1
pytz==2023.3
pytzdata==2020.1
PyYAML==6.0
requests==2.31.0
requests-oauthlib==1.3.1
requests-toolbelt==1.0.0
rfc3339-validator==0.1.4
rich==13.4.2
rich-argparse==1.1.1
rsa==4.9
setproctitle==1.3.2
Shapely==1.8.5.post1
six==1.16.0
sniffio==1.3.0
SQLAlchemy==1.4.48
sqlalchemy-bigquery==1.6.1
SQLAlchemy-JSONField==1.0.1.post0
SQLAlchemy-Utils==0.41.1
sqlparse==0.4.4
tabulate==0.9.0
tenacity==8.2.2
termcolor==2.3.0
text-unidecode==1.3
typing-extensions==4.6.3
tzdata==2023.3
uc-micro-py==1.0.2
unicodecsv==0.14.1
uritemplate==4.1.1
urllib3==2.0.3
Werkzeug==2.3.6
wrapt==1.15.0
WTForms==3.0.1
yarl==1.9.2
zipp==3.15.0
### Deployment
Virtualenv installation
### Deployment details
$ airflow info
Apache Airflow
version | 2.6.1
executor | SequentialExecutor
task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler
sql_alchemy_conn | sqlite:////home/test1/airflow/airflow.db
dags_folder | /mnt/d/projects/airflow/dags
plugins_folder | /home/test1/airflow/plugins
base_log_folder | /mnt/d/projects/airflow/logs
remote_base_log_folder |
System info
OS | Linux
architecture | x86_64
uname | uname_result(system='Linux', node='DESKTOP-EIFUHU2', release='4.4.0-19041-Microsoft', version='#1237-Microsoft Sat Sep 11 14:32:00 PST
| 2021', machine='x86_64', processor='x86_64')
locale | ('en_US', 'UTF-8')
python_version | 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0]
python_location | /mnt/d/projects/tvenv/bin/python3
Tools info
git | git version 2.25.1
ssh | OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020
kubectl | NOT AVAILABLE
gcloud | NOT AVAILABLE
cloud_sql_proxy | NOT AVAILABLE
mysql | NOT AVAILABLE
sqlite3 | NOT AVAILABLE
psql | NOT AVAILABLE
Paths info
airflow_home | /home/test1/airflow
system_path | /mnt/d/projects/tvenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Program Files
| (x86)/Microsoft SDKs/Azure/CLI2/wbin:/mnt/c/Program Files/Python39/Scripts/:/mnt/c/Program
| Files/Python39/:/mnt/c/Windows/system32:/mnt/c/Windows:/mnt/c/Windows/System32/Wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0/:/mnt/c/W
| indows/System32/OpenSSH/:/mnt/c/Users/ibrez/AppData/Roaming/nvm:/mnt/c/Program Files/nodejs:/mnt/c/Program
| Files/dotnet/:/mnt/c/Windows/system32/config/systemprofile/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/test1/AppData/Local/Microsoft/Wi
| ndowsApps:/snap/bin
python_path | /mnt/d/projects/tvenv/bin:/usr/lib/python38.zip:/usr/lib/python3.8:/usr/lib/python3.8/lib-dynload:/mnt/d/projects/tvenv/lib/python3.8/site-p
| ackages:/mnt/d/projects/airflow/dags:/home/test1/airflow/config:/home/test1/airflow/plugins
airflow_on_path | True
Providers info
apache-airflow-providers-common-sql | 1.5.1
apache-airflow-providers-ftp | 3.4.1
apache-airflow-providers-google | 10.1.1
apache-airflow-providers-http | 4.4.1
apache-airflow-providers-imap | 3.2.1
apache-airflow-providers-sqlite | 3.4.1
### Anything else
To me it seems like the issue is at the line
```
dataset=datasets.TabularDataset(dataset_name=self.dataset_id),
```
[here](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/vertex_ai/auto_ml.py)
The details like the project ID and credentials are not being passed to the TabularDataset class, which causes issues down the line for
```
GoogleAuthError(project_not_found_exception_str)
```
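A possible direction for a fix (my assumption, not a confirmed patch): the aiplatform dataset classes accept explicit `project`, `location` and `credentials` arguments, so the operator could forward what it already resolves from the connection, along the lines of:
```
dataset=datasets.TabularDataset(
    dataset_name=self.dataset_id,
    project=self.project_id,  # currently not forwarded
    credentials=self.hook.get_credentials(),  # assumption: reuse the hook's resolved credentials here
),
```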
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31986 | https://github.com/apache/airflow/pull/31991 | 10aa704e3d87ce951cb79f28492eed916bc18fe3 | f2ebc292fe63d2ddd0686d90c3acc0630f017a07 | "2023-06-17T22:25:21Z" | python | "2023-06-19T03:53:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,957 | ["airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | Airflow Observability Improvement Request | ### Description
The scheduler has housekeeping work (adopt_or_reset_orphaned_tasks, check_trigger_timeouts, _emit_pool_metrics, _find_zombies, clear_not_launched_queued_tasks and _check_worker_pods_pending_timeout) that runs at a certain frequency. Right now, we don't have any latency metrics for this housekeeping work, even though it can impact the scheduler heartbeat. It would be a good idea to capture these latency metrics in order to identify problems and tune the Airflow configuration.
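As an example of the kind of instrumentation meant here (a sketch only -- the metric name is made up), each routine could be wrapped in a `Stats.timer` so its latency is emitted to the configured metrics backend:
```
from airflow.stats import Stats

def timed_adopt_or_reset_orphaned_tasks(job_runner):
    # "scheduler.adopt_or_reset_orphaned_tasks.duration" is a hypothetical metric name
    with Stats.timer("scheduler.adopt_or_reset_orphaned_tasks.duration"):
        return job_runner.adopt_or_reset_orphaned_tasks()
```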
### Use case/motivation
As we run Airflow at a large scale, we have found that the adopt_or_reset_orphaned_tasks and clear_not_launched_queued_tasks functions can take several minutes (> 5 minutes). This delays the heartbeat of the scheduler and leads to the scheduler instance being restarted/killed. In order to detect these latency issues, we need better metrics to capture these latencies.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31957 | https://github.com/apache/airflow/pull/35579 | 5a6dcfd8655c9622f3838a0e66948dc3091afccb | cd296d2068b005ebeb5cdc4509e670901bf5b9f3 | "2023-06-16T10:19:09Z" | python | "2023-11-12T17:41:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,949 | ["airflow/www/static/js/dag/details/graph/Node.tsx", "airflow/www/static/js/types/index.ts", "airflow/www/static/js/utils/graph.ts"] | Support Operator User Interface Elements in new graph view | ### Description
The new graph UI looks good but currently doesn't support the color options mentioned here https://airflow.apache.org/docs/apache-airflow/stable/howto/custom-operator.html#user-interface
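For reference, the options in question are the per-operator UI attributes from that doc, e.g. (the colors here are just examples):
```
from airflow.models.baseoperator import BaseOperator

class MyCustomOperator(BaseOperator):
    ui_color = "#e8f7e4"    # node background color
    ui_fgcolor = "#000000"  # node label color
```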
### Use case/motivation
It would be great for these options to be supported in the new graph view as they are in the old one.
### Related issues
[slack](https://apache-airflow.slack.com/archives/CCPRP7943/p1686866630874739?thread_ts=1686865767.351809&cid=CCPRP7943)
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31949 | https://github.com/apache/airflow/pull/32822 | 12a760f6df831c1d53d035e4d169a69887e8bb26 | 3bb63f1087176b24e9dc8f4cc51cf44ce9986d34 | "2023-06-15T22:54:54Z" | python | "2023-08-03T09:06:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,907 | ["dev/breeze/src/airflow_breeze/commands/testing_commands.py", "dev/breeze/src/airflow_breeze/commands/testing_commands_config.py", "images/breeze/output-commands-hash.txt", "images/breeze/output-commands.svg", "images/breeze/output_testing.svg", "images/breeze/output_testing_tests.svg"] | Add `--use-airflow-version` option to `breeze testing tests` command | ### Description
The option `--use-airflow-version` is available under the command `start-airflow` in `Breeze`. As an example, this is used when testing a release candidate as specified in [documentation](https://github.com/apache/airflow/blob/main/dev/README_RELEASE_AIRFLOW.md#verify-release-candidates-by-contributors): `breeze start-airflow --use-airflow-version <VERSION>rc<X> --python 3.8 --backend postgres`.
The idea I have is to add that option as well under the command `testing tests` in `Breeze`.
### Use case/motivation
Having the option `--use-airflow-version` available under the command `testing tests` in `Breeze` would make it possible to run system tests against a specific version of Airflow and of a provider. This could be helpful when releasing new versions of Airflow and Airflow providers. As such, providers could run all system tests of their provider package on demand and share the results (somehow -- a dashboard? another way?) with the community/release manager. This would not replace the manual testing already in place for releasing such new versions, but it would give more information/pointers to the release manager.
Before submitting a PR, I first wanted to get some feedback about this idea. Maybe it is not possible? Maybe it is not a good idea? Maybe it is not useful at all?
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31907 | https://github.com/apache/airflow/pull/31914 | 518b93c24fda6e7a1df0acf0f4dd1921967dc8f6 | b07a26523fad4f17ceb4e3a2f88e043dcaff5e53 | "2023-06-14T18:35:02Z" | python | "2023-06-14T23:44:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,902 | ["airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"] | MappedOperator doesn't allow `operator_extra_links` instance property | ### Apache Airflow version
2.6.1
### What happened
When doing `.partial()` and `.expand()` on any operator which has an instance property (e.g. `@property`) of `operator_extra_links` the `MappedOperator` does not handle it properly, causing the dag to fail to import.
The `BatchOperator` from `airflow.providers.amazon.aws.operators.batch` is one example of an operator which defines `operator_extra_links` on a per-instance basis.
### What you think should happen instead
The dag should not fail to import (especially when using the AWS `BatchOperator`!) Either:
- If per-instance `operator_extra_links` is deemed disallowed behaviour
- `MappedOperator` should detect it's a property and give a more helpful error message
- `BatchOperator` from the AWS provider should be changed. If I need to open another ticket elsewhere for that please let me know
- If per-instance `operator_extra_links` is allowed
- `MappedOperator` needs to be adjusted to account for that
### How to reproduce
```
import pendulum
from airflow.models.baseoperator import BaseOperator
from airflow.decorators import dag, task


class BadOperator(BaseOperator):
    def __init__(self, *args, some_argument: str, **kwargs):
        super().__init__(*args, **kwargs)
        self.some_argument = some_argument

    @property
    def operator_extra_links(self):
        # !PROBLEMATIC FUNCTION!
        # ... Some code to create a collection of `BaseOperatorLink`s dynamically
        return tuple()


@dag(
    schedule=None,
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    catchup=False,
    tags=["example"],
)
def airflow_is_bad():
    """
    Example to demonstrate issue with airflow API
    """

    @task
    def create_arguments():
        return [1, 2, 3, 4]

    bad_operator_test_group = BadOperator.partial(
        task_id="bad_operator_test_group",
    ).expand(some_argument=create_arguments())


dag = airflow_is_bad()
```
Put this in your dags folder, Airflow will fail to import the dag with error
```
Broken DAG: [<USER>/airflow/dags/airflow_is_bad_minimal_example.py] Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 825, in _serialize_node
serialize_op["_operator_extra_links"] = cls._serialize_operator_extra_links(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 1165, in _serialize_operator_extra_links
for operator_extra_link in operator_extra_links:
TypeError: 'property' object is not iterable
```
Commenting out the `operator_extra_links` from the `BadOperator` in the example will allow the dag to be imported fine
### Operating System
macOS Ventura 13.4
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sqlite==3.3.2
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
I found that adjusting `operator_extra_links` on `BatchOperator` to be `operator_extra_links = (BatchJobDetailsLink(), BatchJobDefinitionLink(), BatchJobQueueLink(), CloudWatchEventsLink())` solved my issue and made it run fine; however, I've no idea if that's safe or generalises, because I'm not sure what `operator_extra_links` is actually used for internally.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31902 | https://github.com/apache/airflow/pull/31904 | 225e3041d269698d0456e09586924c1898d09434 | 3318212482c6e11ac5c2e2828f7e467bca5b7245 | "2023-06-14T14:37:22Z" | python | "2023-07-06T05:50:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,891 | ["docs/apache-airflow-providers-google/api-auth-backend/google-openid.rst"] | Incorrect audience argument in Google OpenID authentication doc | ### What do you see as an issue?
I followed the [Google OpenID authentication doc](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/api-auth-backend/google-openid.html) and got this error:
```
$ ID_TOKEN="$(gcloud auth print-identity-token "--audience=${AUDIENCE}")"
ERROR: (gcloud.auth.print-identity-token) unrecognized arguments: --audience=759115288429-c1v16874eprg4455kt1di902b3vkjho2.apps.googleusercontent.com (did you mean '--audiences'?)
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS
```
Perhaps the gcloud CLI parameter changed since this doc was written.
### Solving the problem
Update the CLI argument in the doc (the error message suggests `--audiences`, e.g. `--audiences=${AUDIENCE}`).
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31891 | https://github.com/apache/airflow/pull/31893 | ca13c7b77ea0e7d37bfe893871bab565d26884d0 | fa07812d1013f964a4736eade3ba3e1a60f12692 | "2023-06-14T09:05:50Z" | python | "2023-06-23T10:23:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,877 | ["airflow/providers/databricks/operators/databricks.py"] | DatabricksSubmitRunOperator libraries parameter has incorrect type | ### Apache Airflow version
2.6.1
### What happened
The type of the libraries field in the DatabricksSubmitRunOperator is incorrect. According to the Databricks docs, the values should look more like:
```python
[
{"pypi": {"package": "simplejson"}},
{"pypi": {"package": "Faker"}},
]
```
as opposed to what the type hint implies:
```python
[
{"pypi": "simplejson"},
{"pypi": "Faker"},
]
```
https://github.com/apache/airflow/blob/providers-databricks/4.2.0/airflow/providers/databricks/operators/databricks.py#L306
### What you think should happen instead
_No response_
### How to reproduce
n/a
### Operating System
macOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31877 | https://github.com/apache/airflow/pull/31888 | f2ebc292fe63d2ddd0686d90c3acc0630f017a07 | 69bc90b82403b705b3c30176cc3d64b767f2252e | "2023-06-13T14:59:17Z" | python | "2023-06-19T07:22:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,873 | ["airflow/models/variable.py", "tests/models/test_variable.py"] | KubernetesPodOperator doesn't mask variables in Rendered Template that are used as arguments | ### Apache Airflow version
2.6.1
### What happened
I am pulling a variable from Google Secret Manager and using it as an argument in a KubernetesPodOperator task. I've also tried it with the KubernetesPodOperatorAsync operator and I get the same behaviour.
The variable value is not masked on the Rendered Template page. If I use the exact same variable in a different operator, like the HttpSensorAsync, it is properly masked. That is quite critical, and I can't deploy the DAG to production.
### What you think should happen instead
The variable in the KubernetesPodOperator should be masked, and only '***' should be shown on the Rendered Template page.
### How to reproduce
Here's an example of code where I use the exact same variable in two different operators: it appears in the arguments of the KubernetesPodOperator, and then in a different operator right after.
```
my_changeset = KubernetesPodOperator(
task_id='my_load',
namespace=kubernetes_namespace,
service_account_name=service_account_name,
image='my-feed:latest',
name='changeset_load',
in_cluster=in_cluster,
cluster_context='docker-desktop', # is ignored when in_cluster is set to True
is_delete_operator_pod=True,
get_logs=True,
image_pull_policy=image_pull_policy,
arguments=[
'-k{{ var.json.faros_api_key.faros_api_key }}',
],
container_resources=k8s.V1ResourceRequirements(requests=requests, limits=limits),
volumes=volumes,
volume_mounts=volume_mounts,
log_events_on_failure=True,
startup_timeout_seconds=60 * 5,
)
test_var = HttpSensorAsync(
task_id=f'wait_for_my_file',
http_conn_id='my_paymentreports_http',
endpoint='{{ var.json.my_paymentreports_http.client_id }}/report',
headers={'user-agent': 'King'},
request_params={
'access_token': '{{ var.json.faros_api_key.faros_api_key }}',
},
response_check=lambda response: True if response.status_code == 200 else False,
extra_options={'check_response': False},
timeout=60 * 60 * 8,
)
```
The same `{{ var.json.faros_api_key.faros_api_key }}` is used in both operators, but it is only masked in the HttpSensorAsync operator.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow==2.6.1+astro.3
apache-airflow-providers-amazon==8.1.0
apache-airflow-providers-celery==3.2.0
apache-airflow-providers-cncf-kubernetes==7.0.0
apache-airflow-providers-common-sql==1.5.1
apache-airflow-providers-datadog==3.3.0
apache-airflow-providers-elasticsearch==4.5.0
apache-airflow-providers-ftp==3.4.1
apache-airflow-providers-github==2.3.0
apache-airflow-providers-google==10.0.0
apache-airflow-providers-hashicorp==3.4.0
apache-airflow-providers-http==4.4.1
apache-airflow-providers-imap==3.2.1
apache-airflow-providers-microsoft-azure==6.1.1
apache-airflow-providers-mysql==5.1.0
apache-airflow-providers-postgres==5.5.0
apache-airflow-providers-redis==3.2.0
apache-airflow-providers-samba==4.2.0
apache-airflow-providers-sendgrid==3.2.0
apache-airflow-providers-sftp==4.3.0
apache-airflow-providers-slack==7.3.0
apache-airflow-providers-sqlite==3.4.1
apache-airflow-providers-ssh==3.7.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31873 | https://github.com/apache/airflow/pull/31964 | fc0e5a4d42ee882ca5bc20ea65be38b2c739644d | e22ce9baed19ddf771db59b7da1d25e240430625 | "2023-06-13T11:25:23Z" | python | "2023-06-16T19:05:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,851 | ["airflow/cli/cli_config.py", "airflow/cli/commands/connection_command.py", "airflow/cli/commands/variable_command.py", "airflow/cli/utils.py"] | Allow variables to be printed to STDOUT | ### Description
Currently, the `airflow variables export` command requires an explicit file path and does not support output to stdout. However, connections can be printed to stdout using `airflow connections export -`. This inconsistency between the two export commands can lead to confusion and limits the flexibility of the variables export command.
### Use case/motivation
To bring some consistency with connections, variables should also be printable to STDOUT by using `-` instead of a filename.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31851 | https://github.com/apache/airflow/pull/33279 | bfa09da1380f0f1e0727dbbc9f1878bd44eb848d | 09d478ec671f8017294d4e15d75db1f40b8cc404 | "2023-06-12T02:56:23Z" | python | "2023-08-11T09:02:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,834 | ["airflow/providers/microsoft/azure/log/wasb_task_handler.py", "airflow/providers/redis/log/__init__.py", "airflow/providers/redis/log/redis_task_handler.py", "airflow/providers/redis/provider.yaml", "airflow/utils/log/file_task_handler.py", "docs/apache-airflow-providers-redis/index.rst", "docs/apache-airflow-providers-redis/logging/index.rst", "tests/providers/redis/log/__init__.py", "tests/providers/redis/log/test_redis_task_handler.py"] | Redis task handler for logs | ### Discussed in https://github.com/apache/airflow/discussions/31832
<div type='discussions-op-text'>
<sup>Originally posted by **michalc** June 10, 2023</sup>
Should something like the below be in the codebase? It's a simple handler for storing Airflow task logs in Redis, enforcing a maximum number of entries per try and an expiry time for the logs.
Happy to raise a PR (and I guessed a lot at how things should be... so I suspect it can be improved upon...)
```python
import logging

import redis

from airflow.utils.log.file_task_handler import FileTaskHandler
from airflow.utils.log.logging_mixin import LoggingMixin


class RedisHandler(logging.Handler):
def __init__(self, client, key):
super().__init__()
self.client = client
self.key = key
def emit(self, record):
p = self.client.pipeline()
p.rpush(self.key, self.format(record))
p.ltrim(self.key, start=-10000, end=-1)
p.expire(self.key, time=60 * 60 * 24 * 28)
p.execute()
class RedisTaskHandler(FileTaskHandler, LoggingMixin):
"""
RedisTaskHandler is a python log handler that handles and reads
task instance logs. It extends airflow FileTaskHandler and
uploads to and reads from Redis.
"""
trigger_should_wrap = True
def __init__(self, base_log_folder: str, redis_url):
super().__init__(base_log_folder)
self.handler = None
self.client = redis.Redis.from_url(redis_url)
def _read(
self,
ti,
try_number,
metadata=None,
):
log_str = b"\n".join(
self.client.lrange(self._render_filename(ti, try_number), start=0, end=-1)
).decode("utf-8")
return log_str, {"end_of_log": True}
def set_context(self, ti):
super().set_context(ti)
self.handler = RedisHandler(
self.client, self._render_filename(ti, ti.try_number)
)
self.handler.setFormatter(self.formatter)
```</div> | https://github.com/apache/airflow/issues/31834 | https://github.com/apache/airflow/pull/31855 | 6362ba5ab45a38008814616df4e17717cc3726c3 | 42b4b43c4c2ccf0b6e7eaa105c982df495768d01 | "2023-06-10T17:38:26Z" | python | "2023-07-23T06:43:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,819 | ["docs/apache-airflow/authoring-and-scheduling/deferring.rst"] | Improve the docs around deferrable mode for Sensors | ### Body
With Operators the use case of deferrable operators is pretty clear.
However, with Sensors, new questions are raised.
All Sensors inherit from `BaseSensorOperator` which adds [mode](https://github.com/apache/airflow/blob/a98621f4facabc207b4d6b6968e6863845e1f90f/airflow/sensors/base.py#L93) parameter a question comes to mind what is the difference between:
`SomeSensor(..., mode='reschedule')`
to:
`SomeSensor(..., deferrable=True)`
Thus, unlike Operators, when working with Sensors (and assuming the sensor has defer implemented) users have a choice of what to use, and both options can be explained as "something is not ready, let's wait without consuming resources".
The docs should clarify the difference and compare between the two options that might look the same but are different.
We should explain it on the following fronts:
1. What happens in Airflow for each one of the options (task state in `defer` mode vs `up_for_reschedule`) etc...
2. What is the motivation/justification for each one. pros and cons.
3. Do we have some kind of general recommendation as "always prefer X over Y" or "In executor X better to use one of the options" etc...
I am also wondering what `SomeSensor(..., mode='reschedule', deferrable=True)` means and whether we are protected against such usage.
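For illustration, a minimal sketch contrasting the two behaviours using the core `DateTimeSensor` / `DateTimeSensorAsync` pair (many provider sensors expose the same choice through a `deferrable=True` argument rather than a separate class):
```python
import pendulum

from airflow import DAG
from airflow.sensors.date_time import DateTimeSensor, DateTimeSensorAsync

with DAG(
    "sensor_modes_example",
    start_date=pendulum.datetime(2023, 6, 1, tz="UTC"),
    schedule=None,
):
    # Frees the worker slot between pokes; the task shows up as up_for_reschedule.
    wait_rescheduled = DateTimeSensor(
        task_id="wait_rescheduled",
        target_time="{{ data_interval_end.add(hours=1) }}",
        mode="reschedule",
    )

    # Hands the wait over to the triggerer; the task shows up in the deferred state.
    wait_deferred = DateTimeSensorAsync(
        task_id="wait_deferred",
        target_time="{{ data_interval_end.add(hours=1) }}",
    )
```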
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/31819 | https://github.com/apache/airflow/pull/31840 | 371833e076d033be84f109cce980a6275032833c | 0db0ff14da449dc3dbfe9577ccdb12db946b9647 | "2023-06-09T18:33:27Z" | python | "2023-06-24T16:40:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,818 | ["airflow/cli/cli_config.py", "airflow/cli/commands/db_command.py", "tests/cli/commands/test_db_command.py"] | Add retry + timeout to Airflow db check | ### Description
In my company's usage of Airflow, developmental instances of Airflow run against containerized PostgreSQL instances that are spawned at the same time as the Airflow container. Before the Airflow container runs its initialization scripts, it needs to make sure that the PostgreSQL instance can be reached, for which `airflow db check` is a great option.
However, there is a non-deterministic race condition between the PostgreSQL container and the Airflow container (it is not certain which will reach readiness first, or by how much), so calling the `airflow db check` command once is not sufficient, and implementing a retry/timeout loop in a shell script is feasible but unpleasant.
It would be great if the `airflow db check` command could take two additional optional arguments: `--retry` and `--retry-delay` (just like `curl`), so that the database connection can be checked repeatedly, up to a specified number of times. The command should exit with a `0` exit code if any of the retries succeeds, and `1` if all of the retries fail.
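In the meantime, a minimal workaround sketch in Python (retry count and delay are assumed values; the point of this request is to make such a loop unnecessary):
```python
# Retry `airflow db check` until the metadata database becomes reachable.
import subprocess
import sys
import time

RETRIES = 30       # assumed value, tune per deployment
RETRY_DELAY = 5    # seconds

for attempt in range(1, RETRIES + 1):
    if subprocess.run(["airflow", "db", "check"]).returncode == 0:
        sys.exit(0)
    print(f"Database not reachable yet (attempt {attempt}/{RETRIES}); sleeping {RETRY_DELAY}s")
    time.sleep(RETRY_DELAY)
sys.exit(1)
```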
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31818 | https://github.com/apache/airflow/pull/31836 | a81ac70b33a589c58b59864df931d3293fada382 | 1b35a077221481e9bf4aeea07d1264973e7f3bf6 | "2023-06-09T18:07:59Z" | python | "2023-06-15T08:54:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,811 | ["airflow/providers/microsoft/azure/hooks/wasb.py", "docs/apache-airflow-providers-microsoft-azure/connections/wasb.rst", "tests/providers/microsoft/azure/hooks/test_wasb.py"] | Airflow Connection Type Azure Blob Storage - Shared Access Key field | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow Version 2.5.2
Issue: Create a connection of type azure blob storage using the method #3 described in [OSS docs](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/connections/wasb.html).
When we use the storage account name and the access key as in the below screenshot, the connection works just fine.
<img width="637" alt="connection-blob-storage-access-key" src="https://github.com/apache/airflow/assets/75730393/6cf76b44-f65f-40c0-8279-32c58b6d57ba">
Then what is the purpose of the extra field called `Blob Storage Shared Access Key (Optional)`? When I tried to put the access key in this field, the connection fails with the below error on testing:
```
invalid url http://
```
Code reference: https://github.com/apache/airflow/blob/main/airflow/providers/microsoft/azure/hooks/wasb.py#L190-L192
### What you think should happen instead
_No response_
### How to reproduce
- Create an Azure Storage account
- Go to Access Control (IAM) and copy the key
- Spin up a new local Airflow environment using the Astro CLI with runtime version 7.4.1
- Go to Airflow UI -> Admin -> Connections -> create a new connection of type Azure Blob Storage
- Enter the name of the storage account in Blob Storage Login and the key copied from Azure in Blob Storage Shared Access Key (Optional), then click on Test connection.
### Operating System
Astro runtime 7.4.1 image on MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31811 | https://github.com/apache/airflow/pull/32082 | 0bc689ee6d4b6967d7ae99a202031aac14d181a2 | 46ee1c2c8d3d0e5793f42fd10bcd80150caa538b | "2023-06-09T06:40:54Z" | python | "2023-06-27T23:00:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,795 | ["airflow/providers/apache/kafka/triggers/await_message.py"] | AwaitMessageTriggerFunctionSensor not firing all eligble messages | ### Apache Airflow version
2.6.1
### What happened
The AwaitMessageTriggerFunctionSensor is showing some buggy behaviour.
When consuming from a topic, it correctly applies the apply_function in order to yield a TriggerEvent.
However, it is consuming multiple messages at a time and not yielding a trigger for the correct number of messages that would be eligible (i.e. that return a value from the apply_function). The observed behaviour is as follows:
- Sensor is deferred and messages start getting consumed
- Multiple eligible messages trigger a single TriggerEvent instead of multiple TriggerEvents.
- The sensor returns to a deferred state, repeating the cycle.
The event_triggered_function is being called correctly. However, due to the issue in consuming and correctly generating the appropriate TriggerEvents some of them are missed.
### What you think should happen instead
Each eligible message should create an individual TriggerEvent to be consumed by the event_triggered_function.
### How to reproduce
- Use a producer DAG to produce a set amount of messages on your Kafka topic
- Use a listener DAG to consume this topic, screening for eligible messages (apply_function), and use the event_triggered_function to monitor the number of events that are being consumed.
### Operating System
Kubernetes cluster - Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-kafka==1.1.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
helm chart version 1.9.0
### Anything else
Every time (independent of topic, message content, apply_function and event_triggered_function)
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31795 | https://github.com/apache/airflow/pull/31803 | ead2530d3500dd27df54383a0802b6c94828c359 | 1b599c9fbfb6151a41a588edaa786745f50eec38 | "2023-06-08T14:24:33Z" | python | "2023-06-30T09:26:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,769 | ["airflow/serialization/serde.py", "tests/serialization/test_serde.py"] | Deserialization of old xcom data fails after upgrade to 2.6.1 from 2.5.2 when calling /xcom/list/ [GET] | ### Apache Airflow version
2.6.1
### What happened
After upgrading from airflow 2.5.2 to 2.6.1 calling the endpoint `xcom/list/` we get the following exception:
```
[2023-06-07T12:16:50.050+0000] {app.py:1744} ERROR - Exception on /xcom/list/ [GET]
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 2529, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/security/decorators.py", line 139, in wraps
return f(self, *args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/views.py", line 554, in list
widgets = self._list()
File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/baseviews.py", line 1177, in _list
widgets = self._get_list_widget(
File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/baseviews.py", line 1076, in _get_list_widget
count, lst = self.datamodel.query(
File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/models/sqla/interface.py", line 500, in query
query_results = query.all()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2773, in all
return self._iter().all()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 1476, in all
return self._allrows()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 401, in _allrows
rows = self._fetchall_impl()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 1389, in _fetchall_impl
return self._real_result._fetchall_impl()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 1813, in _fetchall_impl
return list(self.iterator)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/loading.py", line 151, in chunks
rows = [proc(row) for row in fetch]
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/loading.py", line 151, in <listcomp>
rows = [proc(row) for row in fetch]
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/loading.py", line 984, in _instance
state.manager.dispatch.load(state, context)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/event/attr.py", line 334, in __call__
fn(*args, **kw)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/mapper.py", line 3702, in _event_on_load
instrumenting_mapper._reconstructor(state.obj())
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 128, in init_on_load
self.value = self.orm_deserialize_value()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 677, in orm_deserialize_value
return BaseXCom._deserialize_value(self, True)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 659, in _deserialize_value
return json.loads(result.value.decode("UTF-8"), cls=XComDecoder, object_hook=object_hook)
File "/usr/local/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/local/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.10/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 126, in orm_object_hook
return deserialize(dct, False)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 209, in deserialize
o = _convert(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 273, in _convert
return {CLASSNAME: old[OLD_TYPE], VERSION: DEFAULT_VERSION, DATA: old[OLD_DATA][OLD_DATA]}
KeyError: '__var'
```
Some xcom entries from previous airflow versions seem to be incompatible with the new refactored serialization from https://github.com/apache/airflow/pull/28067
### What you think should happen instead
xcom entries should be displayed
### How to reproduce
Add an entry to your xcom table where value contains:
` [{"__classname__": "airflow.datasets.Dataset", "__version__": 1, "__data__": {"__var": {"uri": "bq://google_cloud_default@?table=table_name&schema=schema_name", "extra": null}, "__type": "dict"}}]`
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-apache-kafka==1.1.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-docker==3.6.0
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.1.1
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==6.0.0
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.5
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
The double `old[OLD_DATA][OLD_DATA]` looks suspicious to me in https://github.com/apache/airflow/blob/58fca5eb3c3521e3fa1b3beeb066acb15629deeb/airflow/serialization/serde.py#L273
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31769 | https://github.com/apache/airflow/pull/31866 | 779226706c1d64e0fe1e19c5f077ead9c9b4914a | bd32467ede1a5a197e09456803f7cebaee9f9b77 | "2023-06-07T15:08:32Z" | python | "2023-06-29T20:37:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,761 | ["BREEZE.rst"] | Update the troubleshoot section in breeze for pip running for long time. | ### What do you see as an issue?
Add an update to the troubleshooting section of BREEZE.rst for the case where pip takes a significant amount of time and fails with the following error:
```
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
```
### Solving the problem
If pip is taking a significant amount of time and your internet connection is causing pip to be unable to download the libraries within the default timeout, it is advisable to increase the default timeout as follows and run Breeze again.
```
export PIP_DEFAULT_TIMEOUT=1000
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31761 | https://github.com/apache/airflow/pull/31760 | 07ea574fed5d56ca9405ee9e47828841289e3a3c | b9efbf513d8390b66d01ee380ccc43cd60d3c88b | "2023-06-07T11:47:11Z" | python | "2023-06-07T11:51:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,753 | ["airflow/providers/databricks/operators/databricks_sql.py", "tests/providers/databricks/operators/test_databricks_sql.py"] | AttributeError exception when returning result to XCom | ### Apache Airflow version
2.6.1
### What happened
When I use _do_xcom_push=True_ in **DatabricksSqlOperator**, an exception with the following stack trace is thrown:
```
[2023-06-06, 08:52:24 UTC] {sql.py:375} INFO - Running statement: SELECT cast(max(id) as STRING) FROM prod.unified.sessions, parameters: None
[2023-06-06, 08:52:25 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2354, in xcom_push
XCom.set(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 237, in set
value = cls.serialize_value(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 632, in serialize_value
return json.dumps(value, cls=XComEncoder).encode("UTF-8")
File "/usr/local/lib/python3.10/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 102, in encode
o = self.default(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 91, in default
return serialize(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 144, in serialize
return encode(classname, version, serialize(data, depth + 1))
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in serialize
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 132, in serialize
qn = qualname(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/module_loading.py", line 47, in qualname
return f"{o.__module__}.{o.__name__}"
File "/home/airflow/.local/lib/python3.10/site-packages/databricks/sql/types.py", line 161, in __getattr__
raise AttributeError(item)
AttributeError: __name__. Did you mean: '__ne__'?
```
### What you think should happen instead
In _process_output(), if self._output_path is False, a list of tuples is returned:
```
def _process_output(self, results: list[Any], descriptions: list[Sequence[Sequence] | None]) -> list[Any]:
if not self._output_path:
return list(zip(descriptions, results))
```
I suspect this breaks the serialization somehow, which might be related to my own meta database (Postgres).
Replacing the Databricks SQL Operator with a simple **PythonOperator** and **DatabricksSqlHook** works just fine:
```
from airflow.providers.databricks.hooks.databricks_sql import DatabricksSqlHook


def get_max_id(ti):
hook = DatabricksSqlHook(databricks_conn_id=databricks_sql_conn_id, sql_endpoint_name='sql_endpoint')
sql = "SELECT cast(max(id) as STRING) FROM prod.unified.sessions"
return str(hook.get_first(sql)[0])
```
### How to reproduce
```
get_max_id_task = DatabricksSqlOperator(
databricks_conn_id=databricks_sql_conn_id,
sql_endpoint_name='sql_endpoint',
task_id='get_max_id',
sql="SELECT cast(max(id) as STRING) FROM prod.unified.sessions",
do_xcom_push=True
)
```
### Operating System
Debian GNU/Linux 11 (bullseye) docker image, python 3.10
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.5.1
databricks-sql-connector==2.5.2
apache-airflow-providers-databricks==4.2.0
### Deployment
Docker-Compose
### Deployment details
Using extended Airflow image, LocalExecutor, Postgres 13 meta db as container in the same stack.
docker-compose version 1.29.2, build 5becea4c
Docker version 23.0.5, build bc4487a
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31753 | https://github.com/apache/airflow/pull/31780 | 1aa9e803c26b8e86ab053cfe760153fc286e177c | 049c6184b730a7ede41db9406654f054ddc8cc5f | "2023-06-07T06:44:13Z" | python | "2023-06-08T10:49:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,750 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"] | BaseSQLToGCSOperator creates row group for each rows during parquet generation | ### Apache Airflow version
Other Airflow 2 version (please specify below)
Airflow 2.4.2
### What happened
BaseSQLToGCSOperator creates a row group for each row during parquet generation, which causes compression to be ineffective and increases the file size.
![image](https://github.com/apache/airflow/assets/51909776/bf256065-c130-4354-81c7-8ca2ed4e8d93)
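A quick way to confirm the symptom on an exported file (sketch only; the file name is a placeholder for a local copy downloaded from GCS):
```python
import pyarrow.parquet as pq

meta = pq.ParquetFile("exported_from_gcs.parquet").metadata
# One row group per row reproduces the report and defeats columnar compression.
print(meta.num_row_groups, meta.num_rows)
```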
### What you think should happen instead
_No response_
### How to reproduce
```python
OracleToGCSOperator(
    task_id='oracle_to_gcs_parquet_test',
    gcp_conn_id=GCP_CONNECTION,
    oracle_conn_id=ORACLE_CONNECTION,
    sql='',
    bucket=GCS_BUCKET_NAME,
    filename='',
    export_format='parquet',
)
```
### Operating System
CentOS Linux 7
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-hive 2.1.0
apache-airflow-providers-apache-sqoop 2.0.2
apache-airflow-providers-celery 3.0.0
apache-airflow-providers-common-sql 1.2.0
apache-airflow-providers-ftp 3.1.0
apache-airflow-providers-google 8.4.0
apache-airflow-providers-http 4.0.0
apache-airflow-providers-imap 3.0.0
apache-airflow-providers-mysql 3.0.0
apache-airflow-providers-oracle 2.1.0
apache-airflow-providers-salesforce 5.3.0
apache-airflow-providers-sqlite 3.2.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31750 | https://github.com/apache/airflow/pull/31831 | ee83a2fbd1a65e6a5c7d550a39e1deee49856270 | b502e665d633262f3ce52d9c002c0a25e6e4ec9d | "2023-06-07T03:06:11Z" | python | "2023-06-14T12:05:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,745 | ["airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | Add a process_line callback to KubernetesPodOperator | ### Description
Add a process_line callback to KubernetesPodOperator
Like https://github.com/apache/airflow/blob/main/airflow/providers/apache/beam/operators/beam.py#LL304C36-L304C57 the `BeamRunPythonPipelineOperator`, which allows the user to add stateful plugins based on the logging from docker job
### Use case/motivation
I could add a plugin based on the logging, and also allow cleanup in on_kill based on the job creation log; a sketch of the requested interface is shown below.
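A purely hypothetical sketch of the requested interface — `process_line_callback` does not exist on the operator today, so this will not run against current provider versions; it only illustrates the idea:
```python
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator


def track_job_id(line: str) -> None:
    # Inspect each streamed log line, e.g. to remember an external job id for on_kill cleanup.
    if "job_id=" in line:
        print(f"captured: {line}")


run_job = KubernetesPodOperator(
    task_id="run_job",
    name="run-job",
    image="busybox",
    cmds=["sh", "-c", "echo job_id=123"],
    process_line_callback=track_job_id,  # proposed parameter, not part of the operator yet
)
```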
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31745 | https://github.com/apache/airflow/pull/34153 | d800a0de5194bb1ef3cfad44c874abafcc78efd6 | b5057e0e1fc6b7a47e38037a97cac862706747f0 | "2023-06-06T18:40:42Z" | python | "2023-09-09T18:08:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,726 | ["airflow/models/taskinstance.py", "airflow/www/extensions/init_wsgi_middlewares.py", "tests/www/test_app.py"] | redirect to same url after set base_url | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
```
$ curl localhost:8080/airflow/
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="http://localhost:8080/airflow/">http://localhost:8080/airflow/</a>. If not, click the link.
```
### What you think should happen instead
At the very least, it should not redirect to the same URL in a circle.
### How to reproduce
generate yaml:
```
helm template --name-template=airflow ~/downloads/airflow > airflow.yaml
```
add base_url in the webserver section and remove the health and readiness checks on the webserver (to keep the pod alive):
```
[webserver]
enable_proxy_fix = True
rbac = True
base_url = http://my.domain.com/airflow/
```
### Operating System
Ubuntu 22.04.2 LTS
### Versions of Apache Airflow Providers
apache-airflow==2.5.3
apache-airflow-providers-amazon==7.3.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-docker==3.5.1
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==8.11.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.0
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==5.2.1
apache-airflow-providers-mysql==4.0.2
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.4
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.5.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31726 | https://github.com/apache/airflow/pull/31833 | 69bc90b82403b705b3c30176cc3d64b767f2252e | fe4a6c843acd97c776d5890116bfa85356a54eee | "2023-06-06T02:39:47Z" | python | "2023-06-19T07:29:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,720 | ["airflow/jobs/triggerer_job_runner.py", "tests/jobs/test_triggerer_job.py"] | Add a log message when a trigger is canceled for timeout | ### Body
The _trigger_ log doesn't show that a trigger timed out when it is canceled due to timeout.
We should try to see if we can add a log message that would show up in the right place. If we emit it from the trigger process, it might show up out of order.
But then again, if we ultimately don't need to go back to the task, that would not be a problem.
Additionally if we ultimately can "log from anywhere" then again, this would provide a clean solution.
This came up in PR discussion here: https://github.com/apache/airflow/pull/30853#discussion_r1187018026
The relevant trigger code is here: https://github.com/apache/airflow/blob/main/airflow/jobs/triggerer_job_runner.py#L598-L619
I think we could add logic so that when we receive a cancelled error (which could be for a few different reasons) we log the reason for the cancellation. I think we could just add an `except CancelledError` and then log the reason. We might also need to update the code in the location where we actually _initiate_ the cancellation to add sufficient information for the log message.
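A rough sketch of the idea (not the actual triggerer code — the real change would live in `triggerer_job_runner.py` and would need access to the cancellation reason):
```python
import asyncio
import logging

log = logging.getLogger(__name__)


async def run_trigger_and_log_cancellation(trigger, timeout=None):
    try:
        async for event in trigger.run():
            yield event
    except asyncio.CancelledError:
        # Cancellation can have several causes (timeout, trigger removed, shutdown);
        # the code that initiates the cancellation would need to record which one applies.
        log.error("Trigger was cancelled%s", "; task timeout reached" if timeout else "")
        raise
```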
cc @syedahsn @phanikumv @jedcunningham @pankajastro
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/31720 | https://github.com/apache/airflow/pull/31757 | 6becb7031618867bc253aefc9e3e216629575d2d | a60429eadfffb5fb0f867c220a6cecf628692dcf | "2023-06-05T18:55:20Z" | python | "2023-06-16T08:31:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,668 | ["docs/apache-airflow/core-concepts/dags.rst"] | Schedule "@daily" is wrongly declared in the "DAG/Core Concepts" | ### What do you see as an issue?
I found a small bug in the DAG Core Concepts documentation regarding the `@daily` schedule:
https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html#running-dags
DAGs do not require a schedule, but it’s very common to define one. You define it via the `schedule` argument, like this:
```python
with DAG("my_daily_dag", schedule="@daily"):
...
```
The `schedule` argument takes any value that is a valid [Crontab](https://en.wikipedia.org/wiki/Cron) schedule value, so you could also do:
```python
with DAG("my_daily_dag", schedule="0 * * * *"):
...
```
If I'm not mistaken, the daily crontab notation should be `0 0 * * *` instead of `0 * * * *`, otherwise the DAG would run every hour.
The second `0` of course needs to be replaced with the hour at which the DAG should run daily.
### Solving the problem
I would change the documentation at the marked location:
The `schedule` argument takes any value that is a valid [Crontab](https://en.wikipedia.org/wiki/Cron) schedule value, so for a daily run at 00:00, you could also do:
```python
with DAG("my_daily_dag", schedule="0 0 * * *"):
...
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31668 | https://github.com/apache/airflow/pull/31666 | 4ebf1c814c6e382169db00493a897b11c680e72b | 6a69fbb10c08f30c0cb22e2ba68f56f3a5d465aa | "2023-06-01T12:35:33Z" | python | "2023-06-01T14:36:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,656 | ["airflow/decorators/base.py", "tests/decorators/test_setup_teardown.py"] | Param on_failure_fail_dagrun should be overridable through `task.override` | Currently when you define a teardown
e.g.
```
@teardown(on_failure_fail_dagrun=True)
def my_teardown():
...
```
You cannot change this when you instantiate the task,
e.g. with
```
my_teardown.override(on_failure_fail_dagrun=True)()
```
I don't think this is good because if you define a reusable taskflow function then it might depend on the context.
| https://github.com/apache/airflow/issues/31656 | https://github.com/apache/airflow/pull/31665 | 29d2a31dc04471fc92cbfb2943ca419d5d8a6ab0 | 8dd194493d6853c2de80faee60d124b5d54ec3a6 | "2023-05-31T21:45:59Z" | python | "2023-06-02T05:26:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,636 | ["airflow/providers/amazon/aws/operators/ecs.py", "airflow/providers/amazon/aws/triggers/ecs.py", "airflow/providers/amazon/aws/utils/task_log_fetcher.py", "airflow/providers/amazon/provider.yaml", "tests/providers/amazon/aws/operators/test_ecs.py", "tests/providers/amazon/aws/triggers/test_ecs.py", "tests/providers/amazon/aws/utils/test_task_log_fetcher.py"] | Add deferrable mode to EcsRunTaskOperator | ### Description
I would greatly appreciate it if the `EcsRunTaskOperator` could incorporate the `deferrable` mode. Currently, this operator significantly affects the performance of my workers, and running multiple instances simultaneously proves to be inefficient. I have noticed that the `KubernetesPodOperator` already supports this functionality, so having a similar feature available for ECS would be a valuable addition.
Note: This feature request relates to the `amazon` provider.
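A sketch of the requested usage (the `deferrable` flag is the proposed addition and does not exist on the operator yet; cluster and task definition names are placeholders):
```python
from airflow.providers.amazon.aws.operators.ecs import EcsRunTaskOperator

run_task = EcsRunTaskOperator(
    task_id="run_task",
    cluster="my-ecs-cluster",              # placeholder
    task_definition="my-task-definition",  # placeholder
    launch_type="FARGATE",
    overrides={"containerOverrides": []},
    deferrable=True,  # proposed parameter, mirroring KubernetesPodOperator
)
```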
### Use case/motivation
Reduce resource utilisation of my worker when running multiple EcsRunTaskOperator tasks in parallel.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31636 | https://github.com/apache/airflow/pull/31881 | e4ca68818eec0f29ef04a1a5bfec3241ea03bf8c | 415e0767616121854b6a29b3e44387f708cdf81e | "2023-05-31T09:40:58Z" | python | "2023-06-23T17:13:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,612 | ["airflow/providers/presto/provider.yaml", "generated/provider_dependencies.json"] | [airflow 2.4.3] presto queries returning none following upgrade to common.sql provider | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
After upgrading apache-airflow-providers-common-sql from 1.2.0 to anything above 1.3.0, Presto queries using the `get_records()` and/or `get_first()` functions return `None`.
using the same query -- `select 1`:
1.2.0: `Done. Returned value was: [[1]]`
1.3.0 and above:
```
Running statement: select 1, parameters: None
[2023-05-30, 11:57:37 UTC] {{python.py:177}} INFO - Done. Returned value was: None
```
### What you think should happen instead
I would expect that running the query `select 1` on Presto would provide the same result whether the environment is running apache-airflow-providers-common-sql 1.2.0 or apache-airflow-providers-common-sql 1.5.1.
### How to reproduce
Run the following query: `PrestoHook(conn_id).get_records("select 1")`.
Ensure that the requirements are as labelled below.
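A minimal repro sketch (assumes a working Presto connection named `presto_default`):
```python
from airflow.providers.presto.hooks.presto import PrestoHook

records = PrestoHook(presto_conn_id="presto_default").get_records("select 1")
print(records)  # [[1]] on common-sql 1.2.0; None observed on 1.3.0 and above
```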
### Operating System
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora"
### Versions of Apache Airflow Providers
apache-airflow==2.4.3
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-common-sql==1.5.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-jenkins==3.0.0
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-presto==5.1.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-snowflake==3.3.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-trino==4.1.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31612 | https://github.com/apache/airflow/pull/35132 | 789222cb1378079e2afd24c70c1a6783b57e27e6 | 8ef2a9997d8b6633ba04dd9f752f504a2ce93e25 | "2023-05-30T12:19:40Z" | python | "2023-10-23T15:40:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,604 | ["airflow/utils/task_group.py", "tests/decorators/test_task_group.py", "tests/utils/test_task_group.py"] | Override default_args between Nested TaskGroups | ### What do you see as an issue?
Hello!
I don't know if this is intended, but `default_args` is not overridden when using nested TaskGroups.
```python
def callback_in_dag(context: Context):
print("DAG!")
def callback_in_task_group(context: Context):
print("Parent TaskGroup!")
with DAG(
"some_dag_id",
default_args={
"on_failure_callback": callback_in_dag
},
schedule=None,
start_date=datetime(2023, 1, 1)
) as dag:
with TaskGroup(
"parent_tg",
default_args={
"on_failure_callback": callback_in_task_group
}
) as parent_tg:
with TaskGroup("child_tg") as child_tg:
BashOperator(task_id="task_1", bash_command="nooooo_command")
```
I want the result to be "Parent TaskGroup!", but I get "DAG!".
```
[2023-05-30, 10:38:52 KST] {logging_mixin.py:137} INFO - DAG!
```
### Solving the problem
Add `_update_default_args` like [link](https://github.com/apache/airflow/blob/f6bb4746efbc6a94fa17b6c77b31d9fb17305ffc/airflow/models/baseoperator.py#L139)
#### [airflow/utils/task_group.py](https://github.com/apache/airflow/blob/main/airflow/utils/task_group.py)
```python
...
class TaskGroup(DAGNode):
def __init__(...):
...
self.default_args = copy.deepcopy(default_args or {})
# Call 'self._update_default_args' when exists parent_group
if parent_group is not None:
self._update_default_args(parent_group)
...
...
# Update self.default_args
    def _update_default_args(self, parent_group: TaskGroup):
if parent_group.default_args:
self.default_args.update(parent_group.default_args)
...
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31604 | https://github.com/apache/airflow/pull/31608 | efe8473385426bf8c1e23a845f1ba26482843855 | 9e8627faa71e9d2047816b291061c28585809508 | "2023-05-30T01:51:38Z" | python | "2023-05-30T14:31:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,584 | ["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py", "tests/providers/google/cloud/triggers/test_bigquery.py"] | BigQueryInsertJobOperator not exiting deferred state | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using Apache Airflow 2.4.3 and apache airflow google provider 8.4 (also tried with 10.1.0).
We have a query that in production should run for a long time, so we wanted to make the BigQueryInsertJobOperator deferrable.
Making the operator deferrable runs the job, but the UI and triggerer process don't seem to be notified that the operator has finished.
I have validated that the query is actually run as the data appears in the table, but the operator gets stuck in a deferred state.
### What you think should happen instead
After the BigQuery job is finished, the operator should exit its deferred state.
### How to reproduce
Skeleton of the code used
```
with DAG(
dag_id="some_dag_id",
schedule="@daily",
catchup=False,
start_date=pendulum.datetime(2023, 5, 8),
):
extract_data = BigQueryInsertJobOperator(
task_id="extract_data",
        impersonation_chain=GCP_ASTRO_TEAM_SA.get(),
params={"dst_table": _DST_TABLE, "lookback_days": _LOOKBACK_DAYS},
configuration={
"query": {
"query": "{% include 'sql/sql_file.sql' %}",
"useLegacySql": False,
}
},
outlets=DATASET,
execution_timeout=timedelta(hours=2, minutes=30),
deferrable=True,
)
```
### Operating System
Mac OS Ventura 13.3.1
### Versions of Apache Airflow Providers
apache airflow google provider 8.4 (also tried with 10.1.0).
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
We're using astro and on local the Airflow environment is started using `astro dev start`. The issue appears when running the DAG locally.
An entire other issue (may be unrelated) appears on Sandbox deployment.
### Anything else
Every time the operator is marked as deferrable.
I noticed this a few days ago (week started with Monday May 22nd 2023).
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31584 | https://github.com/apache/airflow/pull/31591 | fcbbf47864c251046de108aafdad394d66e1df23 | 81b85ebcbd241e1909793d7480aabc81777b225c | "2023-05-28T13:15:37Z" | python | "2023-07-29T07:33:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,573 | ["airflow/providers/hashicorp/_internal_client/vault_client.py", "tests/providers/hashicorp/_internal_client/test_vault_client.py", "tests/providers/hashicorp/hooks/test_vault.py"] | Vault AWS Login not working | ### Apache Airflow version
2.6.1
### What happened
Trying to connect to Vault using `auth_type` = `aws_iam`. I am receiving the following error: `AttributeError: 'Client' has no attribute 'auth_aws_iam'`. Looking through [the code](https://github.com/apache/airflow/blob/main/airflow/providers/hashicorp/_internal_client/vault_client.py#L303), you are using `client.auth_aws_iam`, but the [HVAC docs](https://hvac.readthedocs.io/en/stable/usage/auth_methods/aws.html#caveats-for-non-default-aws-regions) use `client.auth.aws.iam_login`. You should also add support for `boto3.Session()` to use role based access.
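For reference, a sketch of the hvac call named above, per the linked hvac docs (the URL, credentials and role are illustrative placeholders):
```python
import hvac

client = hvac.Client(url="http://127.0.0.1:8200")
client.auth.aws.iam_login(
    access_key="AKIA...",    # placeholder
    secret_key="...",        # placeholder
    role="my-vault-role",    # placeholder
)
```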
### What you think should happen instead
Airflow should authenticate to Vault using AWS auth.
### How to reproduce
Setup a secrets backend of `airflow.providers.hashicorp.secrets.vault.VaultBackend` and set `auth_type` = `aws_iam`. It will error out saying that the client doesn't have an attribute named `auth_aws_iam`.
### Operating System
Docker
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31573 | https://github.com/apache/airflow/pull/31593 | ec18db170745a8b1df0bb75569cd22e69892b3e2 | 41ea700cbdce99cddd0f7b51b33b9fab51b993af | "2023-05-26T18:20:56Z" | python | "2023-05-30T12:25:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,551 | ["airflow/providers/amazon/aws/hooks/redshift_sql.py", "tests/providers/amazon/aws/hooks/test_redshift_sql.py"] | Redshift connection breaking change with IAM authentication | ### Apache Airflow version
2.6.1
### What happened
This [PR](https://github.com/apache/airflow/pull/28187) introduced the get_iam_token method in `redshift_sql.py`. This is a breaking change, as it introduces a check for `iam` in extras, which is set to False by default.
Error log:
```
self = <airflow.providers.amazon.aws.hooks.redshift_sql.RedshiftSQLHook object at 0x7f29f7c208e0>
conn = redshift_default
def get_iam_token(self, conn: Connection) -> tuple[str, str, int]:
"""
Uses AWSHook to retrieve a temporary ***word to connect to Redshift.
Port is required. If none is provided, default is used for each service
"""
port = conn.port or 5439
# Pull the custer-identifier from the beginning of the Redshift URL
# ex. my-cluster.ccdre4hpd39h.us-east-1.redshift.amazonaws.com returns my-cluster
> cluster_identifier = conn.extra_dejson.get("cluster_identifier", conn.host.split(".")[0])
E AttributeError: 'NoneType' object has no attribute 'split'
.nox/test-3-8-airflow-2-6-0/lib/python3.8/site-packages/airflow/providers/amazon/aws/hooks/redshift_sql.py:107: AttributeError
```
### What you think should happen instead
It should have backward compatibility
### How to reproduce
Run an example DAG for redshift with the AWS IAM profile given at hook initialization to retrieve a temporary password to connect to Amazon Redshift.
### Operating System
mac-os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31551 | https://github.com/apache/airflow/pull/31567 | 0f1cef27a5a19dd56e6b07ab0bf9868fb850421a | 5b3382f63898e497d482870636ed156ce861afbc | "2023-05-25T17:07:51Z" | python | "2023-05-30T18:18:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,547 | ["airflow/www/views.py"] | Tag filter doesn't sort tags alphabetically | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow v: 2.6.0
This has been an issue since 2.4.0 for us at least. We recently did a refactor of many of our 160+ DAGs and part of that process was to remove some tags that we didn't want anymore. Unfortunately, the old tags were still left behind when we deployed our new image with the updated DAGs (been a consistent thing across several Airflow versions for us). There is also the issue that the tag filter doesn't sort our tags alphabetically.
I tried to truncate the dag_tag table, and that did help to get rid of the old tags. However, the sorting issue remains. Example:
![image](https://github.com/apache/airflow/assets/102953522/a43194a6-90f3-40dd-887e-fdfad043f200)
On one of our dev environments, we have just about 10 DAGs with a similar sorting problem, and the dag_tag table had 18 rows. I took a backup of it and truncated the dag_tag table, which was almost instantly refilled (I guess logs are DEBUG level on that, so I saw nothing). This did not initially fix the sorting problem, but after a couple of truncates, things got weird, and all the tags were sorted as expected, and the row count in the dag_tag table was now just 15, so 3 rows were removed in all. We also added a new DAG in there with a tag "arjun", which also got listed first - so all sorted on that environment.
Summary:
1. Truncating of the dag_tag table got rid of the old tags that we no longer have in our DAGs.
2. The tags are still sorted incorrectly in the filter (see image).
It seems that the logic here is contained in `www/static/js/dags.js`. I am willing to submit a PR if I can get some guidance :)
### What you think should happen instead
_No response_
### How to reproduce
N/A
### Operating System
debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31547 | https://github.com/apache/airflow/pull/31553 | 6f86b6cd070097dafca196841c82de91faa882f4 | 24e52f92bd9305bf534c411f9455460060515ea7 | "2023-05-25T16:08:43Z" | python | "2023-05-26T16:31:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,526 | ["airflow/models/skipmixin.py", "airflow/operators/python.py", "airflow/operators/subdag.py", "tests/operators/test_python.py", "tests/operators/test_subdag_operator.py"] | Short circuit task in expanded task group fails when it returns false | ### Apache Airflow version
2.6.1
### What happened
I have a short circuit task which is in a task group that is expanded. The task works correctly when it returns true, but the task fails when it returns false with the following error:
```
sqlalchemy.exc.IntegrityError: (psycopg2.errors.ForeignKeyViolation) insert or update on table "xcom" violates foreign key constraint "xcom_task_instance_fkey"
DETAIL: Key (dag_id, task_id, run_id, map_index)=(pipeline_output_to_s3, transfer_output_file.already_in_manifest_short_circuit, manual__2023-05-24T20:21:35.420606+00:00, -1) is not present in table "task_instance".
```
It looks like it sets the map-index to -1 when false is returned which is causing the issue.
If one task fails this way in the task group, all other mapped tasks fail as well, even if the short circuit returns true.
When this error occurs, all subsequent DAG runs will be stuck indefinitely in the running state unless the DAG is deleted.
### What you think should happen instead
Returning false from the short circuit operator should skip downstream tasks without affecting other task groups / map indexes or subsequent DAG runs.
### How to reproduce
include a short circuit operator in a mapped task group and have it return false.
### Operating System
Red Hat Enterprise Linux 7
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31526 | https://github.com/apache/airflow/pull/31541 | c356e4fc22abc77f05aa136700094a882f2ca8c0 | e2da3151d49dae636cb6901f3d3e124a49cbf514 | "2023-05-24T20:37:27Z" | python | "2023-05-30T10:42:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,522 | ["airflow/api/common/airflow_health.py", "airflow/api_connexion/endpoints/health_endpoint.py", "airflow/www/views.py", "tests/api/__init__.py", "tests/api/common/__init__.py", "tests/api/common/test_airflow_health.py"] | `/health` endpoint missed when adding triggerer health status reporting | ### Apache Airflow version
main (development)
### What happened
https://github.com/apache/airflow/pull/27755 added the triggerer to the REST API health endpoint, but not to the main one served on `/health`.
### What you think should happen instead
As documented [here](https://airflow.apache.org/docs/apache-airflow/2.6.1/administration-and-deployment/logging-monitoring/check-health.html#webserver-health-check-endpoint), the `/health` endpoint should include triggerer info like shown on `/api/v1/health`.
### How to reproduce
Compare `/api/v1/health` and `/health` responses.
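For example (a sketch assuming a local webserver on port 8080; both endpoints are reachable without auth in a default setup):
```python
import requests

for path in ("/health", "/api/v1/health"):
    # Only /api/v1/health currently includes the triggerer block; /health does not.
    print(path, requests.get(f"http://localhost:8080{path}").json())
```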
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31522 | https://github.com/apache/airflow/pull/31529 | afa9ead4cea767dfc4b43e6f301e6204f7521e3f | f048aba47e079e0c81417170a5ac582ed00595c4 | "2023-05-24T20:08:34Z" | python | "2023-05-26T20:22:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,509 | ["airflow/cli/commands/user_command.py"] | Unable to delete user via CI | ### Apache Airflow version
2.6.1
### What happened
I am unable to delete users via the "delete command".
I am trying to create a new user and delete the default admin user, so I tried running the command `airflow users delete -u admin`. Running this command gave the following error output:
```
Feature not implemented,tasks route disabled
/usr/local/lib/python3.10/site-packages/flask_limiter/extension.py:293 UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
/usr/local/lib/python3.10/site-packages/astronomer/airflow/version_check/update_checks.py:440 UserWarning: The setup method 'app_context_processor' can no longer be called on the blueprint 'UpdateAvailableView'. It has already been registered at least once, any changes will not be applied consistently.
Make sure all imports, decorators, functions, etc. needed to set up the blueprint are done before registering it.
This warning will become an exception in Flask 2.3.
/usr/local/lib/python3.10/site-packages/flask/blueprints.py:673 UserWarning: The setup method 'record_once' can no longer be called on the blueprint 'UpdateAvailableView'. It has already been registered at least once, any changes will not be applied consistently.
Make sure all imports, decorators, functions, etc. needed to set up the blueprint are done before registering it.
This warning will become an exception in Flask 2.3.
/usr/local/lib/python3.10/site-packages/flask/blueprints.py:321 UserWarning: The setup method 'record' can no longer be called on the blueprint 'UpdateAvailableView'. It has already been registered at least once, any changes will not be applied consistently.
Make sure all imports, decorators, functions, etc. needed to set up the blueprint are done before registering it.
This warning will become an exception in Flask 2.3.
/usr/local/lib/python3.10/site-packages/airflow/www/fab_security/sqla/manager.py:151 SAWarning: Object of type <Role> not in session, delete operation along 'User.roles' won't proceed
[2023-05-24T12:34:19.438+0000] {manager.py:154} ERROR - Remove Register User Error: (psycopg2.errors.ForeignKeyViolation) update or delete on table "ab_user" violates foreign key constraint "ab_user_role_user_id_fkey" on table "ab_user_role"
DETAIL: Key (id)=(4) is still referenced from table "ab_user_role".
[SQL: DELETE FROM ab_user WHERE ab_user.id = %(id)s]
[parameters: {'id': 4}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
Failed to delete user
```
Deleting via the UI works fine.
The error also occurs for users that have a different role, such as Viewer.
### What you think should happen instead
No error should occur and the specified user should be deleted.
### How to reproduce
Run a Dag with a task that is a BashOperator that deletes the user e.g.:
```python
remove_default_admin_user_task = BashOperator(task_id="remove_default_admin_user",
bash_command="airflow users delete -u admin")
```
### Operating System
Docker containers run in Debian 10
### Versions of Apache Airflow Providers
Astronomer
### Deployment
Astronomer
### Deployment details
I use Astronomer 8.0.0
### Anything else
It always occurs. It also happens when run inside the webserver Docker container or when run via the astro CLI with the command `astro dev run airflow users delete -u admin`.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31509 | https://github.com/apache/airflow/pull/31539 | 0fd42ff015be02d1a6a6c2e1a080f8267194b3a5 | 3ec66bb7cc686d060ff728bb6bf4d4e70e387ae3 | "2023-05-24T12:40:00Z" | python | "2023-05-25T19:45:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,499 | ["airflow/providers/databricks/operators/databricks_sql.py", "tests/providers/databricks/operators/test_databricks_sql.py"] | XCom - Attribute Error when serializing output of `Merge Into` databricks sql command. | ### Apache Airflow version
2.6.1
### What happened
After upgrading from Airflow 2.5.3 to 2.6.1, the DAG started to fail, and it's related to XCom serialization.
I noticed that something has changed with regard to serializing XCom:
key | Value Version 2.5.3 | Value Version 2.6.1 | result
-- | -- | -- | --
return_value | [[['Result', 'string', None, None, None, None, None]], []] | (['(num_affected_rows,bigint,None,None,None,None,None)', '(num_inserted_rows,bigint,None,None,None,None,None)'],[]) | ✅
return_value | [[['Result', 'string', None, None, None, None, None]], []] | (['(Result,string,None,None,None,None,None)'],[]) | ✅
return_value | [[['num_affected_rows', 'bigint', None, None, None, None, None], ['num_updated_rows', 'bigint', None, None, None, None, None], ['num_deleted_rows', 'bigint', None, None, None, None, None], ['num_inserted_rows', 'bigint', None, None, None, None, None]], [[1442, 605, 0, 837]]] | `AttributeError: __name__. Did you mean: '__ne__'?` | ❌
Query syntax that produced the error: `MERGE INTO` (https://docs.databricks.com/sql/language-manual/delta-merge-into.html)
Stacktrace included below:
```
[2023-05-24, 01:12:43 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2354, in xcom_push
XCom.set(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 237, in set
value = cls.serialize_value(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 632, in serialize_value
return json.dumps(value, cls=XComEncoder).encode("UTF-8")
File "/usr/local/lib/python3.10/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 102, in encode
o = self.default(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 91, in default
return serialize(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 144, in serialize
return encode(classname, version, serialize(data, depth + 1))
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in serialize
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in serialize
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 132, in serialize
qn = qualname(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/module_loading.py", line 47, in qualname
return f"{o.__module__}.{o.__name__}"
File "/home/airflow/.local/lib/python3.10/site-packages/databricks/sql/types.py", line 161, in __getattr__
raise AttributeError(item)
AttributeError: __name__. Did you mean: '__ne__'?
```
### What you think should happen instead
Serialization should finish without an exception raised.
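A possible stop-gap (a sketch of a workaround, not a fix for the serialization itself) is to disable the XCom push so the Databricks `Row` objects never reach the serializer; `do_xcom_push` is a standard BaseOperator argument:
```python
from airflow.providers.databricks.operators.databricks_sql import DatabricksSqlOperator

task = DatabricksSqlOperator(
    task_id="merge_task",
    databricks_conn_id="databricks_conn_id",
    sql_endpoint_name="name",
    sql="file.sql",
    do_xcom_push=False,  # skip serializing the MERGE result into XCom
)
```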
### How to reproduce
1. A DAG file with the operator declared:
```python
task = DatabricksSqlOperator(
    task_id="task",
    databricks_conn_id="databricks_conn_id",
    sql_endpoint_name="name",
    sql="file.sql",
)
```
2. `file.sql` (schematic MERGE statement):
```sql
MERGE INTO table_name
USING source_table
ON condition
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *
```
https://docs.databricks.com/sql/language-manual/delta-merge-into.html
Query output is a table
num_affected_rows | num_updated_rows | num_deleted_rows | num_inserted_rows
-- | -- | -- | --
0 | 0 | 0 | 0
EDIT: It also happens for a select command
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-databricks==4.1.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31499 | https://github.com/apache/airflow/pull/31780 | 1aa9e803c26b8e86ab053cfe760153fc286e177c | 049c6184b730a7ede41db9406654f054ddc8cc5f | "2023-05-24T08:32:41Z" | python | "2023-06-08T10:49:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,480 | ["airflow/providers/amazon/aws/links/emr.py", "tests/providers/amazon/aws/links/test_links.py"] | Missing LogUri from emr describe-cluster API when executing EmrCreateJobFlowOperator | ### Apache Airflow version
main (development)
### What happened
I encountered the following error when executing `EmrCreateJobFlowOperator`:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/airflow/providers/amazon/aws/operators/emr.py", line 695, in execute
log_uri=get_log_uri(emr_client=self._emr_hook.conn, job_flow_id=self._job_flow_id),
File "/usr/local/lib/python3.10/site-packages/airflow/providers/amazon/aws/links/emr.py", line 61, in get_log_uri
log_uri = S3Hook.parse_s3_url(response["Cluster"]["LogUri"])
KeyError: 'LogUri'
```
According to [this document](https://docs.aws.amazon.com/cli/latest/reference/emr/describe-cluster.html), `["Cluster"]["LogUri"]` is not always present in the response, so we hit this error after the change made for https://github.com/apache/airflow/issues/31322.
### What you think should happen instead
The `EmrCreateJobFlowOperator` should finish execution without error.
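One possible direction (a rough sketch only, not the provider's actual code) is to treat `LogUri` as optional in `get_log_uri`:
```python
from typing import Optional

from airflow.providers.amazon.aws.hooks.s3 import S3Hook


def get_log_uri(*, cluster=None, emr_client=None, job_flow_id=None) -> Optional[str]:
    # Sketch: LogUri is optional in the describe_cluster response, so return None
    # instead of raising KeyError when the cluster has no logging configured.
    response = cluster or emr_client.describe_cluster(ClusterId=job_flow_id)
    log_uri = response["Cluster"].get("LogUri")
    if log_uri is None:
        return None
    return "/".join(S3Hook.parse_s3_url(log_uri))
```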
### How to reproduce
1. `git clone https://github.com/apache/airflow.git`
2. `cd airflow`
3. `git checkout c082aec089405ed0399cfee548011b0520be0011` (the main branch when I found this issue)
4. Add the following DAG to `files/dags/` and name it as `example_emr.py`
```python
import os
from datetime import datetime, timedelta
from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrCreateJobFlowOperator, EmrTerminateJobFlowOperator
JOB_FLOW_OVERRIDES = {
"Name": "example_emr_sensor_cluster",
"ReleaseLabel": "emr-5.29.0",
"Applications": [{"Name": "Spark"}],
"Instances": {
"InstanceGroups": [
{
"Name": "Primary node",
"Market": "ON_DEMAND",
"InstanceRole": "MASTER",
"InstanceType": "m4.large",
"InstanceCount": 1,
},
],
"KeepJobFlowAliveWhenNoSteps": False,
"TerminationProtected": False,
},
"JobFlowRole": "EMR_EC2_DefaultRole"
"ServiceRole": "EMR_DefaultRole",
}
DEFAULT_ARGS = {
"execution_timeout": timedelta(hours=6),
"retries": 2,
"retry_delay": 60,
}
with DAG(
dag_id="example_emr_sensor",
schedule=None,
start_date=datetime(2022, 1, 1),
default_args=DEFAULT_ARGS,
catchup=False,
) as dag:
create_job_flow = EmrCreateJobFlowOperator(
task_id="create_job_flow",
job_flow_overrides=JOB_FLOW_OVERRIDES,
aws_conn_id="aws_default",
)
remove_job_flow = EmrTerminateJobFlowOperator(
task_id="remove_job_flow",
job_flow_id=create_job_flow.output,
aws_conn_id="aws_default",
trigger_rule="all_done",
)
create_job_flow >> remove_job_flow
```
5. `breeze --python 3.8 --backend sqlite start-airflow`
6. Trigger the DAG from web UI
### Operating System
mac OS 13.4
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
https://github.com/apache/airflow/issues/31322#issuecomment-1556876252
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31480 | https://github.com/apache/airflow/pull/31482 | 27001a23718d6b8b5118eb130be84713af9a4477 | a8c45b088e088a5f1d9c924f9efb660c80c0ce12 | "2023-05-23T14:52:59Z" | python | "2023-05-31T10:38:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,476 | ["airflow/cli/commands/kubernetes_command.py", "tests/cli/commands/test_kubernetes_command.py"] | cleanup-pod CLI command fails due to incorrect host | ### Apache Airflow version
2.6.1
### What happened
When running `airflow kubernetes cleanup-pods`, the API call to delete a pod fails. A snippet of the log is below:
```
urllib3.exceptions.MaxRetryError:
HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /api/v1/namespaces/airflow/pods/my-task-avd79fq1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f52f9aebfd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
[The Kubernetes client provisioned in _delete_pod](https://github.com/apache/airflow/blob/main/airflow/cli/commands/kubernetes_command.py#L151) incorrectly has the host as `http://localhost`. On the scheduler pod, if I start a Python interpreter, I can see that the configuration differs from the `get_kube_client()` configuration:
```
>>> get_kube_client().api_client.configuration.host
'https://172.20.0.1:443'
>>> client.CoreV1Api().api_client.configuration.host
'http://localhost/'
```
On Airflow 2.5.3 these two clients have the same configuration.
It's possible I have some mistake in my configuration but I'm not sure what it could be. The above fails on 2.6.0 also.
### What you think should happen instead
Pods should clean up without error
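One possible direction for a fix (a sketch only, not the actual patch) is for the cleanup command to reuse the Airflow-configured client instead of instantiating a bare `client.CoreV1Api()`:
```python
from kubernetes import client

from airflow.kubernetes.kube_client import get_kube_client


def _delete_pod(name: str, namespace: str) -> None:
    # get_kube_client() returns a CoreV1Api built from Airflow's [kubernetes_executor]
    # settings, so its host matches the real API server instead of http://localhost.
    api = get_kube_client()
    api.delete_namespaced_pod(name=name, namespace=namespace, body=client.V1DeleteOptions())
```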
### How to reproduce
Run the following from a Kubernetes deployment of Airflow:
```python
from airflow.kubernetes.kube_client import get_kube_client
from kubernetes import client
print(get_kube_client().api_client.configuration.host)
print(client.CoreV1Api().api_client.configuration.host)
```
Alternatively run `airflow kubernetes cleanup-pods` with pods available for cleanup
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Using `in_cluster` configuration for KubernetesExecutor
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31476 | https://github.com/apache/airflow/pull/31477 | 739e6b5d775412f987a3ff5fb71c51fbb7051a89 | adf0cae48ad4e87612c00fe9facffca9b5728e7d | "2023-05-23T14:14:39Z" | python | "2023-05-24T09:45:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,460 | ["airflow/models/connection.py", "tests/models/test_connection.py"] | Add capability in Airflow connections to validate host | ### Apache Airflow version
2.6.1
### What happened
While creating connections in Airflow, the form doesn't check the correctness of the format of the host provided. For instance, we can proceed providing something like this, which is not a valid URL: `spark://k8s://100.68.0.1:443?deploy-mode=cluster`. It won't instantly fail, but it will return faulty hosts and other details if called later.
Motivation:
https://github.com/apache/airflow/pull/31376#discussion_r1200112106
### What you think should happen instead
The Connection form could have a validator that checks for these scenarios and reports an issue early, saving developers and users time later.
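A rough sketch of the kind of check the form could run (the function name and rule are illustrative only):
```python
def looks_like_valid_host(host: str) -> bool:
    # Reject values that embed a second scheme, e.g. "spark://k8s://100.68.0.1:443?...",
    # since they will not round-trip through the connection's URI handling as expected.
    remainder = host.split("://", 1)[-1]
    return "://" not in remainder


assert looks_like_valid_host("spark://100.68.0.1:443") is True
assert looks_like_valid_host("spark://k8s://100.68.0.1:443?deploy-mode=cluster") is False
```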
### How to reproduce
1. Go to airflow connections form
2. Fill in connection host as: `spark://k8s://100.68.0.1:443?deploy-mode=cluster`, other details can be anything
3. Create the connection
4. Run `airflow connections get <name>`
5. The host and schema will be wrong
### Operating System
Macos
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31460 | https://github.com/apache/airflow/pull/31465 | 232771869030d708c57f840aea735b18bd4bffb2 | 0560881f0eaef9c583b11e937bf1f79d13e5ac7c | "2023-05-22T09:50:46Z" | python | "2023-06-19T09:32:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,440 | ["airflow/example_dags/example_params_ui_tutorial.py", "airflow/www/static/js/trigger.js", "airflow/www/templates/airflow/trigger.html", "docs/apache-airflow/core-concepts/params.rst"] | Multi-Select, Text Proposals and Value Labels forTrigger Forms | ### Description
After the release of Airflow 2.6.0 I was integrating some forms into our setup and was missing some selection options, as well as some nice features to make selections user friendly.
I'd like to contribute a few features to the user forms (see the sketch after this list for the current `enum` behavior):
* A select box where proposals are made but the user is not limited to a hard `enum` list (`enum` restricts user input to only the options provided)
* A multi-option pick list (because sometimes a single selection is just not enough)
* Labels, so that the technical values used to control the DAG can differ from what is presented as options to the user
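For context, a rough sketch of the current (2.6) `enum`-based select that these features would extend; the param name and values are illustrative:
```python
from airflow.models.param import Param

params = {
    # Today: `enum` hard-restricts input to these values, only a single value can be
    # picked, and the technical value is also the label shown to the user.
    "environment": Param(
        "dev",
        type="string",
        enum=["dev", "staging", "prod"],
        description="Pick exactly one of the allowed values.",
    ),
}
```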
### Use case/motivation
After the initial release of UI trigger forms, add more features (incrementally).
### Related issues
Relates or potentially has a conflict with #31299, so this should be merged before.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31440 | https://github.com/apache/airflow/pull/31441 | 1ac35e710afc6cf5ea4466714b18efacdc44e1f7 | c25251cde620481592392e5f82f9aa8a259a2f06 | "2023-05-20T15:31:28Z" | python | "2023-05-22T14:33:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,432 | ["airflow/providers/google/cloud/operators/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py", "tests/providers/google/cloud/triggers/test_bigquery.py"] | `BigQueryGetDataOperator`'s query job is bugged in deferrable mode | ### Apache Airflow version
main (development)
### What happened
1. When not providing `project_id` to `BigQueryGetDataOperator` in deferrable mode (`project_id=None`), the query generated by the `generate_query` method is broken, e.g.:
````sql
from `None.DATASET.TABLE_ID` limit ...
````
2. The `as_dict` param does not work in `BigQueryGetDataOperator`.
### What you think should happen instead
1. When `project_id` is `None`, it should be removed from the query along with the trailing dot (see the sketch after this list), e.g.:
````sql
from `DATASET.TABLE_ID` limit ...
````
2. `as_dict` should be added to the serialization method of `BigQueryGetDataTrigger`.
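A rough sketch (not the operator's real code; attribute names follow the operator's documented parameters) of a `generate_query` that only joins the parts that are actually set:
```python
def generate_query(self) -> str:
    # Sketch: skip the project prefix entirely when project_id is None, instead of
    # rendering the literal string "None." into the query.
    selected_fields = self.selected_fields or "*"
    table_ref = ".".join(p for p in (self.project_id, self.dataset_id, self.table_id) if p)
    return f"select {selected_fields} from `{table_ref}` limit {self.max_results}"
```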
### How to reproduce
1. Create a DAG file with `BigQueryGetDataOperator` defined as follows:
```python
BigQueryGetDataOperator(
task_id="bq_get_data_op",
# project_id="PROJECT_ID", <-- Not provided
dataset_id="DATASET",
table_id="TABLE",
use_legacy_sql=False,
deferrable=True
)
```
2. Create a DAG file with `BigQueryGetDataOperator` defined as follows:
```python
BigQueryGetDataOperator(
task_id="bq_get_data_op",
project_id="PROJECT_ID",
dataset_id="DATASET",
table_id="TABLE",
use_legacy_sql=False,
deferrable=True,
as_dict=True
)
```
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
The `generate_query` method is not unit tested (a test would have prevented this in the first place); it would be better to add one.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31432 | https://github.com/apache/airflow/pull/31433 | 0e8bff9c4ec837d086dbe49b3d583a8d23f49e0e | 0d6e626b050a860462224ad64dc5e9831fe8624d | "2023-05-19T18:20:45Z" | python | "2023-05-22T18:20:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,431 | ["airflow/migrations/versions/0125_2_6_2_add_onupdate_cascade_to_taskmap.py", "airflow/migrations/versions/0126_2_7_0_add_index_to_task_instance_table.py", "airflow/models/taskmap.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst", "tests/models/test_taskinstance.py"] | Clearing a task flow function executed earlier with task changed to mapped task crashes scheduler | ### Apache Airflow version
main (development)
### What happened
Clearing a task flow function that was executed earlier, after the task has been changed to a mapped task, crashes the scheduler. It seems the stored TaskMap has a foreign key reference by map_index which needs to be cleared before execution.
```
airflow scheduler
/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/cli_config.py:1001 DeprecationWarning: The namespace option in [kubernetes] has been moved to the namespace option in [kubernetes_executor] - the old setting has been used, but please update your config.
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:196 DeprecationWarning: The '[celery] task_adoption_timeout' config option is deprecated. Please update your config to use '[scheduler] task_queued_timeout' instead.
/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:201 DeprecationWarning: The worker_pods_pending_timeout option in [kubernetes] has been moved to the worker_pods_pending_timeout option in [kubernetes_executor] - the old setting has been used, but please update your config.
/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:206 DeprecationWarning: The '[kubernetes_executor] worker_pods_pending_timeout' config option is deprecated. Please update your config to use '[scheduler] task_queued_timeout' instead.
[2023-05-19T23:41:07.907+0530] {executor_loader.py:114} INFO - Loaded executor: SequentialExecutor
[2023-05-19 23:41:07 +0530] [15527] [INFO] Starting gunicorn 20.1.0
[2023-05-19 23:41:07 +0530] [15527] [INFO] Listening at: http://[::]:8793 (15527)
[2023-05-19 23:41:07 +0530] [15527] [INFO] Using worker: sync
[2023-05-19 23:41:07 +0530] [15528] [INFO] Booting worker with pid: 15528
[2023-05-19T23:41:07.952+0530] {scheduler_job_runner.py:789} INFO - Starting the scheduler
[2023-05-19T23:41:07.952+0530] {scheduler_job_runner.py:796} INFO - Processing each file at most -1 times
[2023-05-19T23:41:07.954+0530] {scheduler_job_runner.py:1542} INFO - Resetting orphaned tasks for active dag runs
[2023-05-19 23:41:07 +0530] [15529] [INFO] Booting worker with pid: 15529
[2023-05-19T23:41:08.567+0530] {scheduler_job_runner.py:853} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 836, in _execute
self._run_scheduler_loop()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 970, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1052, in _do_scheduling
callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 90, in wrapped_function
for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __iter__
do = self.iter(retry_state=retry_state)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 349, in iter
return fut.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 99, in wrapped_function
return func(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1347, in _schedule_all_dag_runs
callback_tuples = [(run, self._schedule_dag_run(run, session=session)) for run in dag_runs]
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2811, in __iter__
return self._iter().__iter__()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2818, in _iter
result = self.session.execute(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1669, in execute
conn = self._connection_for_bind(bind, close_with_result=True)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1519, in _connection_for_bind
return self._transaction._connection_for_bind(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind
self._assert_active()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active
raise sa_exc.PendingRollbackError(
sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.ForeignKeyViolation) update or delete on table "task_instance" violates foreign key constraint "task_map_task_instance_fkey" on table "task_map"
DETAIL: Key (dag_id, task_id, run_id, map_index)=(bash_simple, get_command, manual__2023-05-18T13:54:01.345016+00:00, -1) is still referenced from table "task_map".
[SQL: UPDATE task_instance SET map_index=%(map_index)s, updated_at=%(updated_at)s WHERE task_instance.dag_id = %(task_instance_dag_id)s AND task_instance.task_id = %(task_instance_task_id)s AND task_instance.run_id = %(task_instance_run_id)s AND task_instance.map_index = %(task_instance_map_index)s]
[parameters: {'map_index': 0, 'updated_at': datetime.datetime(2023, 5, 19, 18, 11, 8, 90512, tzinfo=Timezone('UTC')), 'task_instance_dag_id': 'bash_simple', 'task_instance_task_id': 'get_command', 'task_instance_run_id': 'manual__2023-05-18T13:54:01.345016+00:00', 'task_instance_map_index': -1}]
(Background on this error at: http://sqlalche.me/e/14/gkpj) (Background on this error at: http://sqlalche.me/e/14/7s2a)
[2023-05-19T23:41:08.572+0530] {scheduler_job_runner.py:865} INFO - Exited execute loop
Traceback (most recent call last):
File "/home/karthikeyan/stuff/python/airflow/.env/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/cli_config.py", line 51, in command
return func(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/cli.py", line 112, in wrapper
return f(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 77, in scheduler
_run_scheduler_job(job_runner, skip_serve_logs=args.skip_serve_logs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 42, in _run_scheduler_job
run_job(job=job_runner.job, execute_callable=job_runner._execute)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/session.py", line 76, in wrapper
return func(*args, session=session, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/job.py", line 284, in run_job
return execute_job(job, execute_callable=execute_callable)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/job.py", line 313, in execute_job
ret = execute_callable()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 836, in _execute
self._run_scheduler_loop()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 970, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1052, in _do_scheduling
callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 90, in wrapped_function
for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __iter__
do = self.iter(retry_state=retry_state)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 349, in iter
return fut.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 99, in wrapped_function
return func(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1347, in _schedule_all_dag_runs
callback_tuples = [(run, self._schedule_dag_run(run, session=session)) for run in dag_runs]
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2811, in __iter__
return self._iter().__iter__()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2818, in _iter
result = self.session.execute(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1669, in execute
conn = self._connection_for_bind(bind, close_with_result=True)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1519, in _connection_for_bind
return self._transaction._connection_for_bind(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind
self._assert_active()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active
raise sa_exc.PendingRollbackError(
sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.ForeignKeyViolation) update or delete on table "task_instance" violates foreign key constraint "task_map_task_instance_fkey" on table "task_map"
DETAIL: Key (dag_id, task_id, run_id, map_index)=(bash_simple, get_command, manual__2023-05-18T13:54:01.345016+00:00, -1) is still referenced from table "task_map".
[SQL: UPDATE task_instance SET map_index=%(map_index)s, updated_at=%(updated_at)s WHERE task_instance.dag_id = %(task_instance_dag_id)s AND task_instance.task_id = %(task_instance_task_id)s AND task_instance.run_id = %(task_instance_run_id)s AND task_instance.map_index = %(task_instance_map_index)s]
[parameters: {'map_index': 0, 'updated_at': datetime.datetime(2023, 5, 19, 18, 11, 8, 90512, tzinfo=Timezone('UTC')), 'task_instance_dag_id': 'bash_simple', 'task_instance_task_id': 'get_command', 'task_instance_run_id': 'manual__2023-05-18T13:54:01.345016+00:00', 'task_instance_map_index': -1}]
(Background on this error at: http://sqlalche.me/e/14/gkpj) (Background on this error at: http://sqlalche.me/e/14/7s2a)
```
### What you think should happen instead
_No response_
### How to reproduce
1. Create the dag with `command = get_command(1, 1)` and trigger a dagrun waiting for it to complete
2. Now change this to `command = get_command.partial(arg1=[1]).expand(arg2=[1, 2, 3, 4])` so that the task is now mapped.
3. Clear the existing task, which causes the scheduler to crash.
```python
import datetime, time
from airflow.operators.bash import BashOperator
from airflow import DAG
from airflow.decorators import task
with DAG(
dag_id="bash_simple",
start_date=datetime.datetime(2022, 1, 1),
schedule=None,
catchup=False,
) as dag:
@task
def get_command(arg1, arg2):
for i in range(10):
time.sleep(1)
print(i)
return ["echo hello"]
command = get_command(1, 1)
# command = get_command.partial(arg1=[1]).expand(arg2=[1, 2, 3, 4])
t1 = BashOperator.partial(task_id="bash").expand(bash_command=command)
if __name__ == "__main__":
dag.test()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31431 | https://github.com/apache/airflow/pull/31445 | adf0cae48ad4e87612c00fe9facffca9b5728e7d | f6bb4746efbc6a94fa17b6c77b31d9fb17305ffc | "2023-05-19T18:12:39Z" | python | "2023-05-24T10:54:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,420 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | allow other states than success/failed in tasks by REST API | ### Apache Airflow version
main (development)
### What happened
see also: https://github.com/apache/airflow/issues/25463
From the conversation there, it sounds like it's intended to be possible to set a task to "skipped" via REST API, but it's not.
Instead the next best thing we have is marking as success & adding a note.
### What you think should happen instead
I see no reason for users to not just be able to set tasks to "skipped". I could imagine reasons to avoid "queued", and some other more "internal" states, but skipped makes perfect sense to me for
- tasks that failed and got fixed externally
- tasks that failed but are now irrelevant because of a newer run
in case you still want to be able to see those in the future (actually, a custom state would be even better)
### How to reproduce
```python
# task is just a dictionary with the right values for below
r = requests.patch(
f"{base_url}/api/v1/dags/{task['dag_id']}/dagRuns/{task['dag_run_id']}/taskInstances/{task['task_id']}",
json={
"new_state": "skipped",
},
headers={"Authorization": token},
)
```
-> r.json() gives
`{'detail': "'skipped' is not one of ['success', 'failed'] - 'new_state'", 'status': 400, 'title': 'Bad Request', 'type': 'http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest'}`
### Operating System
/
### Versions of Apache Airflow Providers
/
### Deployment
Astronomer
### Deployment details
/
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31420 | https://github.com/apache/airflow/pull/31421 | 233663046d5210359ce9f4db2fe3db4f5c38f6ee | fba6f86ed7e59c166d0cf7717f1734ae30ba4d9c | "2023-05-19T14:28:31Z" | python | "2023-06-08T20:57:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,409 | ["airflow/sensors/base.py", "tests/sensors/test_base.py"] | ZeroDivisionError in BaseSensorOperator with `exponential_backoff=True` and `poke_interval=1` | ### Apache Airflow version
2.6.1
### What happened
The sensor fails with a ZeroDivisionError if it is set up with `mode="reschedule"`, `exponential_backoff=True` and `poke_interval=1`:
```
ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/sensors/base.py", line 244, in execute
next_poke_interval = self._get_next_poke_interval(started_at, run_duration, try_number)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/sensors/base.py", line 274, in _get_next_poke_interval
modded_hash = min_backoff + run_hash % min_backoff
ZeroDivisionError: integer division or modulo by zero
```
### What you think should happen instead
Throw an human-readable exception about corner values for `poke_interval` or allow to set up this value less then 2 (if it's `>=` 2 - rescheduling works fine.)
### How to reproduce
```
from airflow.sensors.base import BaseSensorOperator
from airflow import DAG
import datetime
class TestSensor(BaseSensorOperator):
def poke(self, context):
return False
with DAG("test_dag", start_date=datetime.datetime.today(), schedule=None):
sensor = TestSensor(task_id="sensor", mode="reschedule", poke_interval=1, exponential_backoff=True, max_wait=5)
```
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31409 | https://github.com/apache/airflow/pull/31412 | ba220b091c9fe9ba530533a71e88a9f5ca35d42d | a98621f4facabc207b4d6b6968e6863845e1f90f | "2023-05-19T09:28:27Z" | python | "2023-05-23T10:13:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,407 | ["airflow/jobs/scheduler_job_runner.py"] | Future DagRun rarely triggered by Race Condition when max_active_runs has reached its upper limit | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
In rare cases, the scheduler triggers a DagRun that is scheduled to be executed in the future.
Here are the conditions as I understand them.
- max_active_runs is set and upper limit is reached
- The preceding DagRun completes very slightly earlier than the following DagRun
Details in "Anything else".
### What you think should happen instead
DagRun should wait until scheduled
### How to reproduce
I have confirmed reproduction in Airflow 2.2.2 with the following code.
I reproduced it in my environment after running it for about half a day.
``` python
import copy
import logging
import time
from datetime import datetime, timedelta
import pendulum
from airflow import DAG, AirflowException
from airflow.sensors.python import PythonSensor
from airflow.utils import timezone
logger = logging.getLogger(__name__)
# very small min_file_process_interval may help to reproduce more. e.g. min_file_process_interval=3
def create_dag(interval):
with DAG(
dag_id=f"example_reproduce_{interval:0>2}",
schedule_interval=f"*/{interval} * * * *",
start_date=datetime(2021, 1, 1),
catchup=False,
max_active_runs=2,
tags=["example_race_condition"],
) as dag:
target_s = 10
def raise_if_future(context):
now = timezone.utcnow() - timedelta(seconds=30)
if context["data_interval_start"] > now:
raise AirflowException("DagRun supposed to be triggered in the future triggered")
def wait_sync():
now_dt = pendulum.now()
if now_dt.minute % (interval * 2) == 0:
# wait until target time to synchronize end time with the preceding job
target_dt = copy.copy(now_dt).replace(second=target_s + 2)
wait_sec = (target_dt - now_dt).total_seconds()
logger.info(f"sleep {now_dt} -> {target_dt} in {wait_sec} seconds")
if wait_sec > 0:
time.sleep(wait_sec)
return True
PythonSensor(
task_id="t2",
python_callable=wait_sync,
# To avoid getting stuck in SequentialExecutor, try to re-poke after the next job starts
poke_interval=interval * 60 + target_s,
mode="reschedule",
pre_execute=raise_if_future,
)
return dag
for i in [1, 2]:
globals()[i] = create_dag(i)
```
### Operating System
Amazon Linux 2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
MWAA 2.2.2
### Anything else
The assumed flow and the associated actual query logs for the case max_active_runs=2 are shown below.
**The assumed flow**
1. The first DagRun (DR1) starts
1. The subsequent DagRun (DR2) starts
1. DR2 completes; The scheduler set `next_dagrun_create_after=null` if max_active_runs is exceeded
- https://github.com/apache/airflow/blob/2.2.2/airflow/jobs/scheduler_job.py#L931
1. DR1 completes; The scheduler calls dag_model.calculate_dagrun_date_fields() in SchedulerJobRunner._schedule_dag_run(). The session is NOT committed yet
- note: the result of `calculate_dagrun_date_fields` is the old DR1-based value from `dag.get_run_data_interval(DR"2")`.
- https://github.com/apache/airflow/blob/2.2.2/airflow/jobs/scheduler_job.py#L1017
1. DagFileProcessorProcess modifies next_dagrun_create_after
- note: the dag record fetched in step 4 are not locked, so the `Processor` can select it and update it.
- https://github.com/apache/airflow/blob/2.2.2/airflow/dag_processing/processor.py#L646
1. The scheduler reflects the calculation result of DR1 to DB by `guard.commit()`
- note: Only the `next_dagrun_create_after` column set to null in step 2 is updated because sqlalchemy only updates the difference between the record retrieved in step 4 and the calculation result
- https://github.com/apache/airflow/blob/2.2.2/airflow/jobs/scheduler_job.py#L795
1. The scheduler triggers a future DagRun because the current time satisfies next_dagrun_create_after updated in step 6
**The associated query log**
``` sql
bb55c5b0bdce: /# grep "future_dagrun_00" /var/lib/postgresql/data/log/postgresql-2023-03-08_210056.log | grep "next_dagrun"
2023-03-08 22: 00: 01.678 UTC[57378] LOG: statement: UPDATE dag SET next_dagrun_create_after = NULL WHERE dag.dag_id = 'future_dagrun_0' # set in step 3
2023-03-08 22: 00: 08.162 UTC[57472] LOG: statement: UPDATE dag SET last_parsed_time = '2023-03-08T22:00:07.683786+00:00':: timestamptz, next_dagrun = '2023-03-08T22:00:00+00:00':: timestamptz, next_dagrun_data_interval_start = '2023-03-08T22:00:00+00:00':: timestamptz, next_dagrun_data_interval_end = '2023-03-08T23:00:00+00:00':: timestamptz, next_dagrun_create_after = '2023-03-08T23:00:00+00:00'::timestamptz WHERE dag.dag_id = 'future_dagrun_00' # set in step 5
2023-03-08 22: 00: 09.137 UTC[57475] LOG: statement: UPDATE dag SET next_dagrun_create_after = '2023-03-08T22:00:00+00:00'::timestamptz WHERE dag.dag_id = 'future_dagrun_00' # set in step 6
2023-03-08 22: 00: 10.418 UTC[57479] LOG: statement: UPDATE dag SET next_dagrun = '2023-03-08T23:00:00+00:00':: timestamptz, next_dagrun_data_interval_start = '2023-03-08T23:00:00+00:00':: timestamptz, next_dagrun_data_interval_end = '2023-03-09T00:00:00+00:00':: timestamptz, next_dagrun_create_after = '2023-03-09T00:00:00+00:00'::timestamptz WHERE dag.dag_id = 'future_dagrun_00' # set in step 7
```
From what I've read of the relevant code in the latest v2.6.1, I believe the problem continues.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31407 | https://github.com/apache/airflow/pull/31414 | e43206eb2e055a78814fcff7e8c35c6fd9c11e85 | b53e2aeefc1714d306f93e58d211ad9d52356470 | "2023-05-19T09:07:10Z" | python | "2023-08-08T12:22:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,399 | ["airflow/example_dags/example_params_ui_tutorial.py", "airflow/www/templates/airflow/trigger.html"] | Trigger UI Form Dropdowns with enums do not set default correct | ### Apache Airflow version
2.6.1
### What happened
When playing around with the form features as interoduced in AIP-50 and using the select list option via `enum` I realized that the default value will not be correctly picked when the form is loaded. Instead the first value always will be pre-selected.
### What you think should happen instead
Default value should be respected
### How to reproduce
Modify the `airflow/example_dags/example_params_ui_tutorial.py:68` and change the default to some other value. Load the form and see that `value 1` is still displayed on form load.
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
not relevant
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
not relevant
### Anything else
Workaround: Make your default currently top of the list.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31399 | https://github.com/apache/airflow/pull/31400 | 3bc0e3296abc9601dcaf7d77835e80e5fea43def | 58aab1118a95ef63ba00784760fd13730dd46501 | "2023-05-18T20:52:33Z" | python | "2023-05-21T17:15:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,387 | ["airflow/providers/google/cloud/operators/kubernetes_engine.py", "tests/providers/google/cloud/operators/test_kubernetes_engine.py"] | GKEStartPodOperator cannot connect to Private IP after upgrade to 2.6.x | ### Apache Airflow version
2.6.1
### What happened
After upgrading to 2.6.1, GKEStartPodOperator stopped creating pods. According with release notes we created a specific gcp connection. But connection defaults to GKE Public endpoint (in error message masked as XX.XX.XX.XX) instead of private IP which is best since our cluster do not have public internet access.
[2023-05-17T07:02:33.834+0000] {connectionpool.py:812} WARNING - Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f0e47049ba0>, 'Connection to XX.XX.XX.XX timed out. (connect timeout=None)')': /api/v1/namespaces/airflow/pods?labelSelector=dag_id%3Dmytask%2Ckubernetes_pod_operator%3DTrue%2Crun_id%3Dscheduled__2023-05-16T0700000000-8fb0e9fa9%2Ctask_id%3Dmytask%2Calready_checked%21%3DTrue%2C%21airflow-sa
It seems that with this change "use_private_ip" has been deprecated; what would be the workaround in this case to connect using the private endpoint?
Also, the doc has not been updated to reflect this change in behaviour: https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/kubernetes_engine.html#using-with-private-cluster
### What you think should happen instead
There should still be an option to connect using the previous method with the "--private-ip" option, so that API calls to Kubernetes use the private endpoint of the GKE cluster.
### How to reproduce
1. Create a DAG file with `GKEStartPodOperator` (a minimal sketch follows below).
2. Deploy said DAG in an environment with no access to the public internet.
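A minimal sketch of such a DAG (project, cluster, and connection names are placeholders, not the reporter's values):
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.kubernetes_engine import GKEStartPodOperator

with DAG(dag_id="gke_private_cluster_example", start_date=datetime(2023, 1, 1), schedule=None):
    GKEStartPodOperator(
        task_id="echo",
        project_id="my-gcp-project",
        location="europe-west1",
        cluster_name="my-private-cluster",
        gcp_conn_id="google_cloud_default",
        namespace="airflow",
        image="alpine:3.16.2",
        cmds=["sh", "-c", "echo hello"],
        name="gke-private-test",
    )
```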
### Operating System
cos_containerd
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow-providers-google==8.11.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31387 | https://github.com/apache/airflow/pull/31391 | 45b6cfa138ae23e39802b493075bd5b7531ccdae | c082aec089405ed0399cfee548011b0520be0011 | "2023-05-18T13:53:30Z" | python | "2023-05-23T11:40:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,384 | ["dev/breeze/src/airflow_breeze/utils/run_utils.py"] | Breeze asset compilation causing OOM on dev-mode | ### Apache Airflow version
main (development)
### What happened
The asset compilation background thread is not killed when running `stop_airflow` or `breeze stop`.
The webpack process takes a lot of memory, and each `start-airflow` starts 4-5 of them.
After a few Breeze starts, we end up with 15+ webpack background processes that take more than 20 GB of RAM.
### What you think should happen instead
`run_compile_www_assets` should stop when running `stop_airflow` from tmux. It looks like it spawns a `compile-www-assets-dev` pre-commit in a subprocess, which doesn't get killed when stopping Breeze.
### How to reproduce
```
breeze start-airflow --backend postgres --python 3.8 --dev-mode
# Wait for tmux session to start
breeze_stop
breeze start-airflow --backend postgres --python 3.8 --dev-mode
# Wait for tmux session to start
breeze_stop
# do a couple more if needed
```
Open tmux and monitor your memory, and specifically webpack processes.
### Operating System
Ubuntu 20.04.6 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31384 | https://github.com/apache/airflow/pull/31403 | c63b7774cdba29394ec746b381f45e816dcb0830 | ac00547512f33b1222d699c7857108360d99b233 | "2023-05-18T11:42:08Z" | python | "2023-05-19T09:58:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,365 | ["airflow/www/templates/airflow/dags.html"] | The `Next Run` column name and tooltip is misleading | ### Description
> Expected date/time of the next DAG Run, or for dataset triggered DAGs, how many datasets have been updated since the last DAG Run
"Expected date/time of the next DAG Run" to me sounds like Run After.
Should the tooltip indicate something along the lines of "start interval of the next dagrun" or maybe the header Next Run is outdated? Something like "Next Data Interval"?
In the same vein, "Last Run" is equally confusing. The header could be "Last Data Interval", in addition to a tooltip that describes it as the data interval start of the last dagrun.
### Use case/motivation
Users confused "Next Run" as when the next dagrun will be queued and ran and does not interpret it as the next dagrun's data interval start.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31365 | https://github.com/apache/airflow/pull/31467 | f1d484c46c18a83e0b8bc010044126dafe4467bc | 7db42fe6655c28330e80b8a062ef3e07968d6e76 | "2023-05-17T17:56:37Z" | python | "2023-06-01T15:54:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,351 | ["airflow/models/taskinstance.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Changing task from unmapped to mapped task with task instance note and task reschedule | ### Apache Airflow version
main (development)
### What happened
Changing a non-mapped task with task instance note and task reschedule to a mapped task crashes scheduler when the task is cleared for rerun. Related commit where a similar fix was done.
commit a770edfac493f3972c10a43e45bcd0e7cfaea65f
Author: Ephraim Anierobi <[email protected]>
Date: Mon Feb 20 20:45:25 2023 +0100
Fix Scheduler crash when clear a previous run of a normal task that is now a mapped task (#29645)
The fix was to clear the db references of the taskinstances in XCom, RenderedTaskInstanceFields
and TaskFail. That way, we are able to run the mapped tasks
### What you think should happen instead
_No response_
### How to reproduce
1. Create below dag file with BashOperator non-mapped.
2. Schedule a dag run and wait for it to finish.
3. Add a task instance note to bash operator.
4. Change t1 to ` t1 = BashOperator.partial(task_id="bash").expand(bash_command=command)` and return `["echo hello"]` from get_command.
5. Restart the scheduler and clear the task.
6. scheduler crashes on trying to use map_index though foreign key reference exists to task instance note and task reschedule.
```python
import datetime
from airflow.operators.bash import BashOperator
from airflow import DAG
from airflow.decorators import task
with DAG(dag_id="bash_simple", start_date=datetime.datetime(2022, 1, 1), schedule=None, catchup=False) as dag:
@task
def get_command(arg1, arg2):
return "echo hello"
command = get_command(1, 1)
t1 = BashOperator(task_id="bash", bash_command=command)
if __name__ == '__main__':
dag.test()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31351 | https://github.com/apache/airflow/pull/31352 | b1ea3f32f9284c6f53bab343bdf79ab3081276a8 | f82246abe9491a49701abdb647be001d95db7e9f | "2023-05-17T11:59:30Z" | python | "2023-05-31T03:08:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,337 | ["airflow/providers/google/cloud/hooks/gcs.py"] | GCSHook support for cacheControl | ### Description
When a file is uploaded to GCS, [by default](https://cloud.google.com/storage/docs/metadata#cache-control), public files will get `Cache-Control: public, max-age=3600`.
I've tried setting `cors` for the whole bucket (didn't work) and setting `Cache-Control` on an individual file (it disappears on file re-upload from Airflow).
Setting `metadata` in GCSHook targets a different field (it can't be used to set cache-control).
### Use case/motivation
Allow GCSHook to set cache control rather than overriding the `upload` function
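In the meantime, a possible workaround is to patch the object's `cache_control` after upload via the storage client exposed by the hook (a sketch, not the proposed API; the bucket, object and connection names are placeholders):
```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook


def set_cache_control(bucket_name: str, object_name: str, value: str = "no-cache") -> None:
    """Set Cache-Control on an already-uploaded GCS object."""
    client = GCSHook(gcp_conn_id="google_cloud_default").get_conn()
    blob = client.bucket(bucket_name).blob(object_name)
    blob.cache_control = value
    blob.patch()  # push the metadata change to GCS
```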
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31337 | https://github.com/apache/airflow/pull/31338 | ba3665f76a2205bad4553ba00537026a1346e9ae | 233663046d5210359ce9f4db2fe3db4f5c38f6ee | "2023-05-17T04:51:33Z" | python | "2023-06-08T20:51:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,335 | ["airflow/providers/cncf/kubernetes/triggers/pod.py", "tests/providers/cncf/kubernetes/triggers/test_pod.py", "tests/providers/google/cloud/triggers/test_kubernetes_engine.py"] | KPO deferable "random" false fail | ### Apache Airflow version
2.6.1
### What happened
With the KPO, and only in deferrable mode, I see "random" false failures.
The DAG:
```python
from pendulum import today
from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
dag = DAG(
    dag_id="kubernetes_dag",
    schedule_interval="0 0 * * *",
    start_date=today("UTC").add(days=-1)
)

with dag:
    cmd = "echo toto && sleep 22 && echo finish"
    KubernetesPodOperator.partial(
        task_id="task-one",
        namespace="default",
        kubernetes_conn_id="kubernetes_default",
        config_file="/opt/airflow/include/.kube/config",  # bug of deferrable corrected in 6.2.0
        name="airflow-test-pod",
        image="alpine:3.16.2",
        cmds=["sh", "-c", cmd],
        is_delete_operator_pod=True,
        deferrable=True,
        get_logs=True,
    ).expand(env_vars=[{"a": "a"} for _ in range(8)])
```
![Screenshot from 2023-05-17 02-01-03](https://github.com/apache/airflow/assets/10202690/1a189f78-e4d2-4aac-a621-4346d7e178c4)
The log of the task in error:
[dag_id=kubernetes_dag_run_id=scheduled__2023-05-16T00_00_00+00_00_task_id=task-one_map_index=2_attempt=1.log](https://github.com/apache/airflow/files/11492973/dag_id.kubernetes_dag_run_id.scheduled__2023-05-16T00_00_00%2B00_00_task_id.task-one_map_index.2_attempt.1.log)
### What you think should happen instead
The KPO should not fail (in deferrable mode) if the container ran successfully in K8S.
### How to reproduce
If I remove the **sleep 22** from the cmd, then I no longer see any random task failures.
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==6.1.0
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31335 | https://github.com/apache/airflow/pull/31348 | 57b7ba16a3d860268f03cd2619e5d029c7994013 | 8f5de83ee68c28100efc085add40ae4702bc3de1 | "2023-05-17T00:06:22Z" | python | "2023-06-29T14:55:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,311 | ["chart/files/pod-template-file.kubernetes-helm-yaml", "tests/charts/airflow_aux/test_pod_template_file.py"] | Worker pod template file doesn't have option to add priorityClassName | ### Apache Airflow version
2.6.0
### What happened
Worker pod template file doesn't have an option to add priorityClassName.
### What you think should happen instead
The Airflow workers deployment, however, has the option to add it via the `airflow.workers.priorityClassName` override. We should reuse this for the worker pod template file too.
### How to reproduce
Trying to add a priorityClassName for the Airflow worker pod doesn't work, unless we override the whole worker pod template with our own. But that is not preferable, as we would need to duplicate a lot of the existing template file.
### Operating System
Rhel8
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31311 | https://github.com/apache/airflow/pull/31328 | fbb095605ab009869ef021535c16b62a3d18a562 | 2c9ce803d744949674e4ec9ac88f73ad0a361399 | "2023-05-16T07:33:59Z" | python | "2023-06-01T00:27:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,304 | ["docs/apache-airflow/administration-and-deployment/logging-monitoring/logging-tasks.rst"] | Outdated 'airflow info' output in Logging for Tasks page | ### What do you see as an issue?
https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/logging-tasks.html#troubleshooting
The referenced `airflow info` format is very outdated.
### Solving the problem
Current output format is something like this:
```
Apache Airflow
version | 2.7.0.dev0
executor | LocalExecutor
task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler
sql_alchemy_conn | postgresql+psycopg2://postgres:airflow@postgres/airflow
dags_folder | /files/dags
plugins_folder | /root/airflow/plugins
base_log_folder | /root/airflow/logs
remote_base_log_folder |
System info
OS | Linux
architecture | arm
uname | uname_result(system='Linux', node='fe54afd888cd', release='5.15.68-0-virt', version='#1-Alpine SMP Fri, 16 Sep
| 2022 06:29:31 +0000', machine='aarch64', processor='')
locale | ('en_US', 'UTF-8')
python_version | 3.7.16 (default, May 3 2023, 09:44:48) [GCC 10.2.1 20210110]
python_location | /usr/local/bin/python
Tools info
git | git version 2.30.2
ssh | OpenSSH_8.4p1 Debian-5+deb11u1, OpenSSL 1.1.1n 15 Mar 2022
kubectl | NOT AVAILABLE
gcloud | NOT AVAILABLE
cloud_sql_proxy | NOT AVAILABLE
mysql | mysql Ver 15.1 Distrib 10.5.19-MariaDB, for debian-linux-gnu (aarch64) using EditLine wrapper
sqlite3 | 3.34.1 2021-01-20 14:10:07 10e20c0b43500cfb9bbc0eaa061c57514f715d87238f4d835880cd846b9ealt1
psql | psql (PostgreSQL) 15.2 (Debian 15.2-1.pgdg110+1)
Paths info
airflow_home | /root/airflow
system_path | /files/bin/:/opt/airflow/scripts/in_container/bin/:/root/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:
| /usr/sbin:/usr/bin:/sbin:/bin:/opt/airflow
python_path | /usr/local/bin:/opt/airflow:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynl
| oad:/usr/local/lib/python3.7/site-packages:/files/dags:/root/airflow/config:/root/airflow/plugins
airflow_on_path | True
Providers info
[too long to include]
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31304 | https://github.com/apache/airflow/pull/31336 | 6d184d3a589b988c306aa3614e0f83e514b3f526 | fc4f37b105ca0f03de7cc49ab4f00751287ae145 | "2023-05-16T01:46:27Z" | python | "2023-05-18T07:44:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,303 | ["airflow/cli/cli_config.py"] | airflow dags list-jobs missing state metavar and choices keyword arguments | ### What do you see as an issue?
The `--state` CLI flag on `airflow dags list-jobs` does not tell the user what arguments it can take.
https://github.com/apache/airflow/blob/1bd538be8c5b134643a6c5eddd06f70e6f0db2e7/airflow/cli/cli_config.py#L280
It probably needs some keyword args similar to the following:
```python
metavar="(table, json, yaml, plain)",
choices=("table", "json", "yaml", "plain"),
```
### Solving the problem
The problem can be solved by adding those keyword arguments so that the user gets a suggestion for what state arguments can be passed in.
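For example, something along these lines might work (paralleling the snippet above), though the exact set of valid job states is still the open question noted below:
```python
# hypothetical values - the real list of valid job states needs to be confirmed
metavar="(running, success, failed)",
choices=("running", "success", "failed"),
```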
### Anything else
Any suggestions on what can be a valid state would be much appreciated. Otherwise, I'll find some time to read through the code and/or docs and figure it out.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31303 | https://github.com/apache/airflow/pull/31308 | fc4f37b105ca0f03de7cc49ab4f00751287ae145 | 8e296a09fc5c49188a129356caca8c3ea5eee000 | "2023-05-15T23:55:24Z" | python | "2023-05-18T11:05:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,238 | ["airflow/providers/discord/notifications/__init__.py", "airflow/providers/discord/notifications/discord.py", "airflow/providers/discord/provider.yaml", "tests/providers/discord/notifications/__init__.py", "tests/providers/discord/notifications/test_discord.py"] | Discord notification | ### Description
The new [Slack notification](https://airflow.apache.org/docs/apache-airflow-providers-slack/stable/notifications/slack_notifier_howto_guide.html) feature allows users to send messages to a slack channel using the various [on_*_callbacks](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/callbacks.html) at both the DAG level and Task level.
However, this solution needs a little more implementation ([Webhook](https://airflow.apache.org/docs/apache-airflow-providers-discord/stable/_api/airflow/providers/discord/hooks/discord_webhook/index.html)) to perform notifications on Discord.
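A rough sketch of what such a notifier could look like, assuming it mirrors the Slack notifier and reuses the existing `DiscordWebhookHook` (the class name and arguments here are illustrative, not an existing API):
```python
from airflow.notifications.basenotifier import BaseNotifier
from airflow.providers.discord.hooks.discord_webhook import DiscordWebhookHook


class DiscordNotifier(BaseNotifier):
    """Illustrative Discord notifier built on top of DiscordWebhookHook."""

    template_fields = ("text",)

    def __init__(self, discord_conn_id: str = "discord_default", text: str = "", username: str = "Airflow"):
        super().__init__()
        self.discord_conn_id = discord_conn_id
        self.text = text
        self.username = username

    def notify(self, context):
        # the webhook endpoint is taken from the Discord connection's extra field
        DiscordWebhookHook(
            http_conn_id=self.discord_conn_id,
            message=self.text,
            username=self.username,
        ).execute()
```
It could then be passed to `on_failure_callback` / `on_success_callback` in the same way the Slack notifier is.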
### Use case/motivation
Send Task/DAG status or other messages as notifications to a discord channel.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31238 | https://github.com/apache/airflow/pull/31273 | 3689cee485215651bdb5ef434f24ab8774995a37 | bdfebad5c9491234a78453856bd8c3baac98f75e | "2023-05-12T03:31:07Z" | python | "2023-06-16T05:49:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,236 | ["docs/apache-airflow/core-concepts/dags.rst"] | The @task.branch inside the dags.html seems to be incorrect | ### What do you see as an issue?
In the documentation page [https://airflow.apache.org/docs/apache-airflow/2.6.0/core-concepts/dags.html#branching)](https://airflow.apache.org/docs/apache-airflow/2.6.0/core-concepts/dags.html#branching), there is an incorrect usage of the @task.branch decorator.
``` python
@task.branch(task_id="branch_task")
def branch_func(ti):
    xcom_value = int(ti.xcom_pull(task_ids="start_task"))
    if xcom_value >= 5:
        return "continue_task"
    elif xcom_value >= 3:
        return "stop_task"
    else:
        return None


branch_op = branch_func()
```
This code snippet is incorrect, as it attempts to call branch_func() without providing the required parameter.
### Solving the problem
The correct version may be something like this:
```python
def branch_func(**kwargs):
    ti: TaskInstance = kwargs["ti"]
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31236 | https://github.com/apache/airflow/pull/31265 | d6051fd10a0949264098af23ce74c76129cfbcf4 | d59b0533e18c7cf0ff17f8af50731d700a2e4b4d | "2023-05-12T01:53:07Z" | python | "2023-05-13T12:21:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,200 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/jobs/job.py", "airflow/jobs/scheduler_job_runner.py", "newsfragments/31277.significant.rst", "tests/jobs/test_base_job.py"] | Constant "The scheduler does not appear to be running" warning on the UI following 2.6.0 upgrade | ### Apache Airflow version
2.6.0
### What happened
Ever since we upgraded to Airflow 2.6.0 from 2.5.2, we have seen that there is a warning stating "The scheduler does not appear to be running" intermittently.
This warning goes away by simply refreshing the page, which matches our finding that the scheduler has not been down at all, at any point. By calling the /health endpoint repeatedly, we can get it to show an "unhealthy" status:
These are just approx. 6 seconds apart:
```
{"metadatabase": {"status": "healthy"}, "scheduler": {"latest_scheduler_heartbeat": "2023-05-11T07:42:36.857007+00:00", "status": "healthy"}}
{"metadatabase": {"status": "healthy"}, "scheduler": {"latest_scheduler_heartbeat": "2023-05-11T07:42:42.409344+00:00", "status": "unhealthy"}}
```
This causes no operational issues, but it is misleading for end-users. What could be causing this?
### What you think should happen instead
The warning should not be shown unless the last heartbeat was at least 30 sec earlier (default config).
### How to reproduce
There are no concrete steps to reproduce it, but the warning appears in the UI after a few seconds of browsing around, or by simply refreshing the /health endpoint repeatedly.
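A minimal polling sketch that surfaces the flapping status (the base URL is a placeholder for your webserver):
```python
import time

import requests

BASE_URL = "http://localhost:8080"  # placeholder - point this at your Airflow webserver

for _ in range(20):
    payload = requests.get(f"{BASE_URL}/health", timeout=10).json()
    scheduler = payload["scheduler"]
    print(scheduler["latest_scheduler_heartbeat"], scheduler["status"])
    time.sleep(3)
```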
### Operating System
Debian GNU/Linux 11
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-docker==3.6.0
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==6.0.0
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-microsoft-psrp==2.2.0
apache-airflow-providers-microsoft-winrm==3.0.0
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-oracle==3.0.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.5
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Deployed on AKS with helm
### Anything else
None more than in the description above.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31200 | https://github.com/apache/airflow/pull/31277 | 3193857376bc2c8cd2eb133017be1e8cbcaa8405 | f366d955cd3be551c96ad7f794e0b8525900d13d | "2023-05-11T07:51:57Z" | python | "2023-05-15T08:31:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,186 | ["airflow/www/static/js/dag/details/FilterTasks.tsx", "airflow/www/static/js/dag/details/dagRun/ClearRun.tsx", "airflow/www/static/js/dag/details/dagRun/MarkRunAs.tsx", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/taskActions/ClearInstance.tsx"] | Problems after redesign grid view | ### Apache Airflow version
2.6.0
### What happened
The changes in #30373 have had some unintended consequences.
- The clear task button can now go off screen if the dag / task name is long enough. This is rather unfortunate, since it is by far the most important button for fixing issues (hence the reason it takes up a lot of real estate).
- The above issue is exacerbated by the fact that the task name can also push the grid off screen. I now have dags where I can see either the grid or the clear state button, but not both.
- Downstream and Recursive don't seem to be selected by default anymore for the clear task button. For some reason Recursive is only selected for the latest task (maybe this was already the case?).
The first two are an annoyance; the last one is preventing us from updating to 2.6.0.
### What you think should happen instead
- Downstream should be selected by default again. (and possibly Recursive)
- The clear task button should *always* be visible, no matter how implausibly small the viewport is.
- Ideally, long task names should no longer hide the grid.
### How to reproduce
To reproduce, just make a dag with a long name and some tasks with long names, then open the grid view on a small screen.
### Operating System
unix
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31186 | https://github.com/apache/airflow/pull/31232 | d1fe67184da26fb0bca2416e26f321747fa4aa5d | 03b04a3d54c0c2aff9873f88de116fad49f90600 | "2023-05-10T15:26:49Z" | python | "2023-05-12T14:27:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,183 | ["airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_spark_kubernetes.py"] | SparkKubernetesSensor: 'None' has no attribute 'metadata' | ### Apache Airflow version
2.6.0
### What happened
After upgrading to version 2.6.0, pipelines with SparkKubernetesOperator -> SparkKubernetesSensor stopped working correctly.
[this PR](https://github.com/apache/airflow/pull/29977) introduces some enhancements into the Spark Kubernetes logic: SparkKubernetesOperator now receives the log from spark pods (which is great), but it doesn't monitor the status of the pod, which means that if the spark application fails, the task in Airflow finishes successfully.
On the other hand, using the previous pipelines (Operator + Sensor) is impossible now, because SparkKubernetesSensor fails with `jinja2.exceptions.UndefinedError: 'None' has no attribute 'metadata'`, as SparkKubernetesOperator is no longer pushing info to xcom.
### What you think should happen instead
Old pipelines should be compatible with Airflow 2.6.0, even though the log would be retrieved in two places - operator and sensor.
OR remove the sensor completely and implement all the functionality in the operator (log + status)
### How to reproduce
Create a DAG with two operators
```
t1 = SparkKubernetesOperator(
    kubernetes_conn_id='common/kubernetes_default',
    task_id=f"task-submit",
    namespace="namespace",
    application_file="spark-applications/app.yaml",
    do_xcom_push=True,
    dag=dag,
)
t2 = SparkKubernetesSensor(
    task_id=f"task-sensor",
    namespace="namespace",
    application_name=f"{{{{ task_instance.xcom_pull(task_ids='task-submit')['metadata']['name'] }}}}",
    dag=dag,
    attach_log=True,
)
```
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-apache-spark==4.0.1
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-docker==3.6.0
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==6.0.0
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-microsoft-psrp==2.2.0
apache-airflow-providers-microsoft-winrm==3.1.1
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-oracle==3.6.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.5
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
apache-airflow-providers-telegram==4.0.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31183 | https://github.com/apache/airflow/pull/31798 | 771362af4784f3d913d6c3d3b44c78269280a96e | 6693bdd72d70989f4400b5807e2945d814a83b85 | "2023-05-10T11:42:40Z" | python | "2023-06-27T20:55:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,180 | ["docs/apache-airflow/administration-and-deployment/listeners.rst"] | Plugin for listeners - on_dag_run_running hook ignored | ### Apache Airflow version
2.6.0
### What happened
I created a plugin for custom listeners; the task-level listeners work fine, but the dag-level listeners are not triggered.
The [docs](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/listeners.html) state that listeners defined in `airflow/listeners/spec` should be supported.
```
@hookimpl
def on_task_instance_failed(previous_state: TaskInstanceState, task_instance: TaskInstance, session):
    """
    This method is called when task state changes to FAILED.
    Through callback, parameters like previous_task_state, task_instance object can be accessed.
    This will give more information about current task_instance that has failed its dag_run,
    task and dag information.
    """
    print("This works fine")


@hookimpl
def on_dag_run_failed(dag_run: DagRun, msg: str):
    """
    This method is called when dag run state changes to FAILED.
    """
    print("This is not called!")
```
### What you think should happen instead
The dag-level specs defined in `airflow/listeners/spec/dagrun.py` should be working.
### How to reproduce
Create a plugin and add the two hooks above into a listener module, as sketched below.
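For completeness, registering the listener module through a plugin looks roughly like this (module and class names are placeholders):
```python
from airflow.plugins_manager import AirflowPlugin

import my_listeners  # the module containing the @hookimpl functions above


class MyListenerPlugin(AirflowPlugin):
    name = "my_listener_plugin"
    listeners = [my_listeners]
```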
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31180 | https://github.com/apache/airflow/pull/32269 | bc3b2d16d3563d5b9bccd283db3f9e290d1d823d | ab2c861dd8a96f22b0fda692368ce9b103175322 | "2023-05-10T09:41:08Z" | python | "2023-07-04T20:57:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,156 | ["setup.cfg", "setup.py"] | Searching task instances by state doesn't work | ### Apache Airflow version
2.6.0
### What happened
After specifying a state such as "Equal to" "failed", the search doesn't return anything; it just resets the whole page (the specified filter is gone).
https://github.com/apache/airflow/assets/14293802/5fb7f550-c09f-4040-963f-76dc0a2c1a53
### What you think should happen instead
_No response_
### How to reproduce
Go to "Browse" tab -> click "Task Instances" -> "Add Filter" -> "State" -> "Use anything (equal to, contains, etc)" -> Click "Search"
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31156 | https://github.com/apache/airflow/pull/31203 | d59b0533e18c7cf0ff17f8af50731d700a2e4b4d | 1133035f7912fb2d2612c7cee5017ebf01f8ec9d | "2023-05-09T14:40:44Z" | python | "2023-05-13T13:13:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,109 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | Add support for standard SQL in `BigQueryGetDataOperator` | ### Description
Currently, the BigQueryGetDataOperator always utilizes legacy SQL when submitting jobs (set as the default by the BQ API). This approach may cause problems when using standard SQL features, such as names for projects, datasets, or tables that include hyphens (which is very common nowadays). We would like to make it configurable, so users can set a flag in the operator to enable the use of standard SQL instead.
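A sketch of how the flag could look from the DAG author's side, assuming a `use_legacy_sql` parameter analogous to the one on `BigQueryCheckOperator` (the parameter is the proposal here, not an existing option; dataset and table names are placeholders):
```python
from airflow.providers.google.cloud.operators.bigquery import BigQueryGetDataOperator

get_data = BigQueryGetDataOperator(
    task_id="get_data",
    dataset_id="my_dataset",
    table_id="my_table",
    max_results=10,
    selected_fields="value,name",
    use_legacy_sql=False,  # proposed flag, not available before this change
)
```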
### Use case/motivation
When implementing #30887 to address #24460, I encountered some unusual errors, which were later determined to be related to the usage of hyphens in the GCP project ID name.
### Related issues
- #24460
- #28522 (PR) adds this parameter to `BigQueryCheckOperator`
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31109 | https://github.com/apache/airflow/pull/31190 | 24532312b694242ba74644fdd43a487e93122235 | d1fe67184da26fb0bca2416e26f321747fa4aa5d | "2023-05-06T14:04:34Z" | python | "2023-05-12T14:13:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,099 | ["airflow/providers/amazon/aws/operators/emr.py", "tests/providers/amazon/aws/operators/test_emr_serverless.py"] | Can't cancel EMR Serverless task | ### Apache Airflow version
2.6.0
### What happened
When marking an EMR Serverless job as failed, the job continues to run.
### What you think should happen instead
The job should be cancelled. Looking at the [EMR Serverless Operator](https://github.com/apache/airflow/blob/a6be96d92828a86e982b53646a9e2eeca00a5463/airflow/providers/amazon/aws/operators/emr.py#L939), I don't see an `on_kill` method, so assuming we just need to add that.
I'm not sure how to handle the `EmrServerlessCreateApplicationOperator` operator, though - if the workflow has a corresponding `EmrServerlessDeleteApplicationOperator`, we'd probably want to delete the application if the job is cancelled.
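A rough sketch of what an `on_kill` could do for the start-job operator, assuming the hook exposes the boto3 `emr-serverless` client and that the operator keeps the application and job run ids around (the attribute names are assumptions):
```python
from airflow.providers.amazon.aws.operators.emr import EmrServerlessStartJobOperator


class CancellableEmrServerlessStartJobOperator(EmrServerlessStartJobOperator):
    """Illustrative subclass adding cancellation when the task is killed."""

    def on_kill(self) -> None:
        # application_id / job_run_id are assumed to be set by execute()
        job_run_id = getattr(self, "job_run_id", None)
        if job_run_id:
            self.hook.conn.cancel_job_run(
                applicationId=self.application_id,
                jobRunId=job_run_id,
            )
```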
### How to reproduce
- Start an EMR Serverless DAG
- Mark the job as failed in the Airflow UI
- See that the EMR Serverless job continues to run
### Operating System
n/a
### Versions of Apache Airflow Providers
`apache-airflow-providers-amazon==8.0.0`
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31099 | https://github.com/apache/airflow/pull/31169 | 761c0da723799c3c37d9eb2cadaa9c4fa256d13a | d6051fd10a0949264098af23ce74c76129cfbcf4 | "2023-05-05T17:01:29Z" | python | "2023-05-12T20:00:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,087 | ["Dockerfile.ci", "scripts/docker/entrypoint_ci.sh"] | Latest Botocore breaks SQS tests | ### Body
Our tests are broken in main due to the latest botocore failing SQS tests.
Example here: https://github.com/apache/airflow/actions/runs/4887737387/jobs/8724954226
```
E botocore.exceptions.ClientError: An error occurred (400) when calling the SendMessage operation:
<ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><Error><Type>Sender</Type>
<Code>AWS.SimpleQueueService.NonExistentQueue</Code><Message>The specified queue does not exist for this wsdl version.</Message><Detail/></Error>
<RequestId>ETDUP0OoJOXmn0WS6yWmB0dOhgYtpdVJCVwFWA28lYLKLmGJLAGu</RequestId></ErrorResponse>
```
The problem seems to come from botocore not recognizing a just-added queue:
```
QUEUE_NAME = "test-queue"
QUEUE_URL = f"https://{QUEUE_NAME}"
```
Even if we replace it with the full queue name that gets returned by the "create_queue" API call to `moto`, it still does not work with latest botocore:
```
QUEUE_URL = f"https://sqs.us-east-1.amazonaws.com/123456789012/{QUEUE_NAME}"
```
Which indicates this is likely a real botocore issue.
## How to reproduce:
1. Get a working venv with the `[amazon]` extra of Airflow (or breeze). It should look like this (when constraints from main are used):
```
root@b0c430d9a328:/opt/airflow# pip list | grep botocore
aiobotocore 2.5.0
botocore 1.29.76
```
2. `pip uninstall aiobotocore`
3. `pip install --upgrade botocore`
```
root@b0c430d9a328:/opt/airflow# pip list | grep botocore
botocore 1.29.127
```
4. `pytest tests/providers/amazon/aws/sensors/test_sqs.py`
Result:
```
===== 4 failed, 7 passed in 2.43s ===
```
----------------------------------
Comparing it to "success case":
When you run it in breeze (with the current constrained botocore):
```
root@b0c430d9a328:/opt/airflow# pip list | grep botocore
aiobotocore 2.5.0
botocore 1.29.76
```
1. `pytest tests/providers/amazon/aws/sensors/test_sqs.py`
Result:
```
============ 11 passed in 4.57s =======
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/31087 | https://github.com/apache/airflow/pull/31103 | 41c87464428d8d31ba81444b3adf457bc968e11d | 49cc213919a7e2a5d4bdc9f952681fa4ef7bf923 | "2023-05-05T11:31:51Z" | python | "2023-05-05T20:32:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,084 | ["docs/docker-stack/build.rst", "docs/docker-stack/docker-examples/extending/add-airflow-configuration/Dockerfile"] | Changing configuration as part of the custom airflow docker image | ### What do you see as an issue?
https://airflow.apache.org/docs/docker-stack/build.html
These docs don't share information on how to edit the airflow.cfg file for Airflow installed via Docker. Adding this to the docs would give a better idea of how to edit the configuration file.
### Solving the problem
Add more details to the "build your docker image" docs about editing the airflow.cfg file.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31084 | https://github.com/apache/airflow/pull/31842 | 7a786de96ed178ff99aef93761d82d100b29bdf3 | 9cc72bbaec0d7d6041ecd53541a524a2f1e523d0 | "2023-05-05T10:43:52Z" | python | "2023-06-11T18:12:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,080 | ["airflow/providers/common/sql/operators/sql.py", "airflow/providers/common/sql/provider.yaml", "airflow/providers/databricks/operators/databricks_sql.py", "airflow/providers/databricks/provider.yaml", "generated/provider_dependencies.json", "tests/providers/databricks/operators/test_databricks_sql.py"] | SQLExecuteQueryOperator AttributeError exception when returning result to XCom | ### Apache Airflow version
2.6.0
### What happened
I am using DatabricksSqlOperator, which writes the result to a file. When the task finishes, it writes all the data correctly to the file, then throws the following exception:
> [2023-05-05, 07:56:22 UTC] {taskinstance.py:1847} ERROR - Task failed with exception
> Traceback (most recent call last):
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 73, in wrapper
> return func(*args, **kwargs)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2377, in xcom_push
> XCom.set(
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 73, in wrapper
> return func(*args, **kwargs)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/xcom.py", line 237, in set
> value = cls.serialize_value(
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/xcom.py", line 632, in serialize_value
> return json.dumps(value, cls=XComEncoder).encode("UTF-8")
> File "/usr/local/lib/python3.9/json/__init__.py", line 234, in dumps
> return cls(
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/json.py", line 102, in encode
> o = self.default(o)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/json.py", line 91, in default
> return serialize(o)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 144, in serialize
> return encode(classname, version, serialize(data, depth + 1))
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in serialize
> return [serialize(d, depth + 1) for d in o]
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
> return [serialize(d, depth + 1) for d in o]
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in serialize
> return [serialize(d, depth + 1) for d in o]
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
> return [serialize(d, depth + 1) for d in o]
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 132, in serialize
> qn = qualname(o)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/module_loading.py", line 47, in qualname
> return f"{o.__module__}.{o.__name__}"
> File "/home/airflow/.local/lib/python3.9/site-packages/databricks/sql/types.py", line 161, in __getattr__
> raise AttributeError(item)
> AttributeError: __name__
I found that **SQLExecuteQueryOperator** always returns the result (so pushing XCom) from its execute() method, except when the parameter **do_xcom_push** is set to **False**. But if do_xcom_push is False, then the method _process_output() is not executed and DatabricksSqlOperator won't write the results to a file.
### What you think should happen instead
I am not sure if the problem should be fixed in DatabricksSqlOperator or in SQLExecuteQueryOperator. In any case, setting do_xcom_push shouldn't automatically prevent the execution of _process_output():
```
if not self.do_xcom_push:
    return None
if return_single_query_results(self.sql, self.return_last, self.split_statements):
    # For simplicity, we pass always list as input to _process_output, regardless if
    # single query results are going to be returned, and we return the first element
    # of the list in this case from the (always) list returned by _process_output
    return self._process_output([output], hook.descriptions)[-1]
return self._process_output(output, hook.descriptions)
```
What happens now is that I end up with the big result both in a file AND in XCom at the same time.
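One possible direction (a rough sketch of the suggestion above, rearranging the quoted fragment, not the actual fix) would be to run _process_output unconditionally and only gate what gets returned to XCom:
```python
# sketch: always let _process_output run (so files still get written),
# and only skip the XCom return when do_xcom_push is False
if return_single_query_results(self.sql, self.return_last, self.split_statements):
    processed = self._process_output([output], hook.descriptions)[-1]
else:
    processed = self._process_output(output, hook.descriptions)
if not self.do_xcom_push:
    return None
return processed
```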
### How to reproduce
I suspect that the actual exception is related to writing the XCom to the meta database, and it might not fail in other scenarios.
### Operating System
Debian GNU/Linux 11 (bullseye) docker image
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-apache-spark==4.0.1
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-databricks==4.1.0
apache-airflow-providers-docker==3.6.0
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==6.0.0
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-oracle==3.6.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-samba==4.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.5
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
apache-airflow-providers-telegram==4.0.0
### Deployment
Docker-Compose
### Deployment details
Using extended Airflow image, LocalExecutor, Postgres 13 meta db as container in the same stack.
docker-compose version 1.29.2, build 5becea4c
Docker version 23.0.5, build bc4487a
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31080 | https://github.com/apache/airflow/pull/31136 | 521dae534dd0b906e4dd9a7446c6bec3f9022ac3 | edd7133a1336c9553d77ba13c83bc7f48d4c63f0 | "2023-05-05T08:16:58Z" | python | "2023-05-09T11:11:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,067 | ["setup.py"] | [BUG] apache.hive extra is referencing incorrect provider name | ### Apache Airflow version
2.6.0
### What happened
When creating a docker image with airflow 2.6.0, I receive the error: `ERROR: No matching distribution found for apache-airflow-providers-hive>=5.1.0; extra == "apache.hive"`
After which, I see that the package name should be `apache-airflow-providers-apache-hive` and not `apache-airflow-providers-hive`.
### What you think should happen instead
We should change this line to say `apache-airflow-providers-apache-hive` and not `apache-airflow-providers-hive`; this will reference a provider which exists.
### How to reproduce
Build image for airflow 2.6.0 with the dependency `apache.hive`. Such as `pip3 install apache-airflow[apache.hive]==2.6.0 --constraint https://raw.githubusercontent.com/apache/airflow/constraints-2.6.0/constraints-3.8.txt`.
### Operating System
ubuntu:22.04
### Versions of Apache Airflow Providers
Image does not build.
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31067 | https://github.com/apache/airflow/pull/31068 | da61bc101eba0cdb17554f5b9ae44998bb0780d3 | 9e43d4aee3b86134b1b9a42f988fb9d3975dbaf7 | "2023-05-04T17:37:49Z" | python | "2023-05-05T15:39:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,059 | ["airflow/utils/log/file_task_handler.py", "tests/providers/amazon/aws/log/test_s3_task_handler.py", "tests/utils/test_log_handlers.py"] | Logs no longer shown after task completed CeleryExecutor | ### Apache Airflow version
2.6.0
### What happened
Stream logging works as long as the task is running. Once the task finishes, no logs are printed to the UI (only the hostname of the worker is printed)
<img width="1657" alt="image" src="https://user-images.githubusercontent.com/16529101/236212701-aecf6cdc-4d87-4817-a685-0778b94d182b.png">
### What you think should happen instead
Expected to see the complete log of a task
### How to reproduce
Start an airflow task. You should be able to see the logs coming in as a stream; once it finishes, the logs are gone.
### Operating System
CentOS 7
### Versions of Apache Airflow Providers
airflow-provider-great-expectations==0.1.5
apache-airflow==2.6.0
apache-airflow-providers-airbyte==3.1.0
apache-airflow-providers-apache-hive==4.0.1
apache-airflow-providers-apache-spark==3.0.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-oracle==3.4.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sqlite==3.3.2
### Deployment
Virtualenv installation
### Deployment details
Celery with 4 worker nodes/VMs. Scheduler and Webserver on a different VM.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31059 | https://github.com/apache/airflow/pull/31101 | 10dda55e8b0fed72e725b369c17cb5dfb0d77409 | 672ee7f0e175dd7edb041218850d0cd556d62106 | "2023-05-04T13:07:47Z" | python | "2023-05-08T21:51:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,039 | ["airflow/dag_processing/processor.py", "tests/dag_processing/test_processor.py"] | Packaged DAGs not getting loaded in Airflow 2.6.0 (ValueError: source code string cannot contain null bytes) | ### Apache Airflow version
2.6.0
### What happened
I am trying to upgrade Airflow from version 2.3.1 to 2.6.0. I have a zip file with a few test DAGs which used to get loaded correctly in 2.3.1 but after the upgrade I get the following error in the scheduler logs.
```
Process ForkProcess-609:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/local/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 259, in _run_processor_manager
processor_manager.start()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 493, in start
return self._run_parsing_loop()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 572, in _run_parsing_loop
self.start_new_processes()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 1092, in start_new_processes
processor.start()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/processor.py", line 196, in start
for module in iter_airflow_imports(self.file_path):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/file.py", line 389, in iter_airflow_imports
parsed = ast.parse(Path(file_path).read_bytes())
File "/usr/local/lib/python3.7/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
ValueError: source code string cannot contain null bytes
```
Single .py files are getting loaded without issues.
### What you think should happen instead
Packaged file should be parsed and the DAGs inside it should be available in the DagBag.
### How to reproduce
- Setup Airflow 2.6.0 using the official Docker image and helm chart.
- Create a folder and place the python file below inside it.
- Create a packaged DAG using command `zip -r test_dags.zip ./*` from within the folder.
- Place the `test_dags.zip` file in `/opt/airflow/dags` directory.
```python
import time
from datetime import timedelta
from textwrap import dedent
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.utils.dates import days_ago
default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email': ['[email protected]'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 3,
    'retry_delay': timedelta(minutes=1),
}


def test_function(task_name):
    print(f"{task_name}: Test function invoked")
    print(f"{task_name}: Sleeping 10 seconds")
    time.sleep(10)
    print(f"{task_name}: Exiting")


with DAG(
    'airflow2_test_dag_1',
    default_args=default_args,
    description='DAG for testing airflow 2.0',
    schedule_interval=timedelta(days=1),
    start_date=days_ago(1)
) as dag:

    t1 = PythonOperator(
        task_id='first_task',
        python_callable=test_function,
        op_kwargs={"task_name": "first_task"},
        dag=dag,
    )

    t2 = PythonOperator(
        task_id='second_task',
        python_callable=test_function,
        op_kwargs={"task_name": "second_task"},
        dag=dag,
    )

    t3 = PythonOperator(
        task_id='third_task',
        python_callable=test_function,
        op_kwargs={"task_name": "third_task"},
        dag=dag,
        queue='kubernetes'
    )

    t1 >> t2 >> t3
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==7.3.0
apache-airflow-providers-apache-hive==2.3.2
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-docker==3.5.1
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==5.2.1
apache-airflow-providers-mysql==4.0.2
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-snowflake==4.0.4
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.5.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
The issue only occurs with `2.6.0`. I used the `2.5.3` docker image with everything else remaining the same, and packaged DAGs loaded with no issues.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31039 | https://github.com/apache/airflow/pull/31061 | 91e18bfc3e53002c191b33dbbfd017e152b23935 | 34b6230f3c7815b8ae7e99443e45a56921059d3f | "2023-05-03T13:02:13Z" | python | "2023-05-04T17:18:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,027 | ["airflow/config_templates/default_celery.py"] | Airflow doesn't recognize `rediss:...` url to point to a Redis broker | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow 2.5.3
Redis is attached using a `rediss:...` url. While deploying the instance, Airflow/Celery downgrades `rediss` to `redis` with the warning `[2023-05-02 18:38:30,377: WARNING/MainProcess] Secure redis scheme specified (rediss) with no ssl options, defaulting to insecure SSL behaviour.`
Adding `AIRFLOW__CELERY__SSL_ACTIVE=True` as an environment variable (the same as `ssl_active = true` in the `[celery]` section of the `airflow.cfg` file) fails with the error
`airflow.exceptions.AirflowException: The broker you configured does not support SSL_ACTIVE to be True. Please use RabbitMQ or Redis if you would like to use SSL for broker.`
<img width="1705" alt="Screenshot 2023-05-12 at 12 07 45 PM" src="https://github.com/apache/airflow/assets/94494788/b56cf054-d122-4baf-b6e9-75effe804731">
### What you think should happen instead
It seems that Airflow doesn't recognize the `rediss:...` url as pointing to a Redis broker.
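Illustratively, the broker-type detection probably needs to treat all Redis URL schemes alike; a sketch of such a check (not the actual Airflow code):
```python
from urllib.parse import urlsplit


def is_redis_broker(broker_url: str) -> bool:
    # treat both plain and TLS Redis schemes (and Sentinel) as a Redis broker
    return urlsplit(broker_url).scheme in ("redis", "rediss", "sentinel")
```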
### How to reproduce
Airflow 2.5.3
Python 3.10.9
Redis 4.0.14 (url starts with `rediss:...`)
![Screenshot 2023-05-12 at 12 07 29 PM](https://github.com/apache/airflow/assets/94494788/04226516-cd29-4fe8-8ecc-aca2e1bb5045)
You need to add `AIRFLOW__CELERY__SSL_ACTIVE=True` as an environment variable, or `ssl_active = true` to the `[celery]` section of the `airflow.cfg` file, and deploy the instance
![Screenshot 2023-05-12 at 12 07 15 PM](https://github.com/apache/airflow/assets/94494788/214c7485-1718-4835-b921-a140e8e6311a)
### Operating System
Ubuntu 22.04.2 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Heroku platform, heroku-22 stack, python 3.10.9
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31027 | https://github.com/apache/airflow/pull/31028 | d91861d3bdbde18c937978c878d137d6c758e2c6 | 471fdacd853a5bcb190e1ffc017a4e650097ed69 | "2023-05-02T20:10:11Z" | python | "2023-06-07T17:09:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,025 | ["airflow/www/static/js/dag/details/graph/Node.tsx", "airflow/www/static/js/dag/details/graph/utils.ts", "airflow/www/static/js/utils/graph.ts"] | New graph view renders incorrectly when prefix_group_id=false | ### Apache Airflow version
2.6.0
### What happened
If a task_group in a dag has `prefix_group_id=false` in its config, the new graph won't render correctly. When the group is collapsed, nothing is shown and there is an error in the console. When the group is expanded, the nodes will render but their edges become disconnected. As reported in https://github.com/apache/airflow/issues/29852#issuecomment-1531766479
This is because we use the prefix to determine where an edge is supposed to be rendered. We shouldn't make that assumption and actually iterate through the nodes to find where an edge belongs.
### What you think should happen instead
It renders like any other task group
### How to reproduce
Add `prefix_group_id=false` to a task group
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31025 | https://github.com/apache/airflow/pull/32764 | 53c6305bd0a914738074821d5f5f233e3ed5bee5 | 3e467ba510d29e912d89115769726111b8bce891 | "2023-05-02T18:15:05Z" | python | "2023-07-22T10:23:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,014 | ["airflow/www/static/js/trigger.js", "airflow/www/templates/airflow/trigger.html", "docs/apache-airflow/core-concepts/params.rst", "tests/www/views/test_views_trigger_dag.py"] | Exception when manually triggering dags via UI with `params` defined. | ### Apache Airflow version
2.6.0
### What happened
When clicking "Trigger DAG w/ config" in the DAG UI, I receive a 500 "Oops" page when `params` are defined for the DAG.
The Airflow webserver logs show this:
```
2023-05-02T13:02:50 - [2023-05-02T12:02:50.249+0000] {app.py:1744} ERROR - Exception on /trigger [GET]
2023-05-02T13:02:50 - Traceback (most recent call last):
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 2529, in wsgi_app
2023-05-02T13:02:50 - response = self.full_dispatch_request()
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1825, in full_dispatch_request
2023-05-02T13:02:50 - rv = self.handle_user_exception(e)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request
2023-05-02T13:02:50 - rv = self.dispatch_request()
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1799, in dispatch_request
2023-05-02T13:02:50 - return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/auth.py", line 47, in decorated
2023-05-02T13:02:50 - return func(*args, **kwargs)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/decorators.py", line 125, in wrapper
2023-05-02T13:02:50 - return f(*args, **kwargs)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 76, in wrapper
2023-05-02T13:02:50 - return func(*args, session=session, **kwargs)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/views.py", line 1967, in trigger
2023-05-02T13:02:50 - return self.render_template(
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/views.py", line 640, in render_template
2023-05-02T13:02:50 - return super().render_template(
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/baseviews.py", line 339, in render_template
2023-05-02T13:02:50 - return render_template(
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/templating.py", line 147, in render_template
2023-05-02T13:02:50 - return _render(app, template, context)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/templating.py", line 130, in _render
2023-05-02T13:02:50 - rv = template.render(context)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/jinja2/environment.py", line 1301, in render
2023-05-02T13:02:50 - self.environment.handle_exception()
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/jinja2/environment.py", line 936, in handle_exception
2023-05-02T13:02:50 - raise rewrite_traceback_stack(source=source)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/templates/airflow/trigger.html", line 106, in top-level template code
2023-05-02T13:02:50 - <span class="help-block">{{ form_details.description }}</span>
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/templates/airflow/main.html", line 21, in top-level template code
2023-05-02T13:02:50 - {% from 'airflow/_messages.html' import show_message %}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
2023-05-02T13:02:50 - {% import 'appbuilder/baselib.html' as baselib %}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 42, in top-level template code
2023-05-02T13:02:50 - {% block body %}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 19, in block 'body'
2023-05-02T13:02:50 - {% block content %}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/templates/airflow/trigger.html", line 162, in block 'content'
2023-05-02T13:02:50 - {{ form_element(form_key, form_details) }}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/jinja2/runtime.py", line 777, in _invoke
2023-05-02T13:02:50 - rv = self._func(*arguments)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/templates/airflow/trigger.html", line 83, in template
2023-05-02T13:02:50 - {%- for txt in form_details.value -%}
2023-05-02T13:02:50 - TypeError: 'NoneType' object is not iterable
```
### What you think should happen instead
No error is shown (worked in 2.5.2)
### How to reproduce
Create a DAG with the following config defined for parameters:
```
params={
    "delete_actions": Param(
        False,
        description="Whether to delete actions after execution.",
        type="boolean",
    ),
    "dates": Param(
        None,
        description="An explicit list of date strings to run on.",
        type=["null", "array"],
        minItems=1,
    ),
    "start_date_inclusive": Param(
        None,
        description="An inclusive start-date used to generate a list of dates to run on.",
        type=["null", "string"],
        pattern="^[0-9]{4}[-/][0-9]{2}[-/][0-9]{2}$",
    ),
    "end_date_exclusive": Param(
        None,
        description="An exclusive end-date used to generate a list of dates to run on.",
        type=["null", "string"],
        pattern="^[0-9]{4}[-/][0-9]{2}[-/][0-9]{2}$",
    ),
    "actions_bucket_name": Param(
        None,
        description='An S3 bucket to read batch actions from. Set as "ACTIONS_BUCKET".',
        type=["null", "string"],
    ),
    "actions_path_prefix": Param(
        None,
        description='An S3 bucket to read batch actions from. Prefixes "ACTIONS_PATH".',
        type=["null", "string"],
        pattern="^.+/$",
    ),
    "sns_output_topic_name": Param(
        None,
        description='An SNS output topic ARN to set as "DATA_READY_TO_INDEX_OUTPUT_TOPIC."',
        type=["null", "string"],
    ),
},
# required to convert params to their correct types
render_template_as_native_obj=True,
```
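For completeness, a minimal self-contained DAG that embeds a couple of the params above might look like the sketch below (DAG and task names are illustrative, not from the original report):
```python
# Hypothetical minimal wrapper; only two of the params listed above are repeated here.
import pendulum

from airflow.decorators import dag, task
from airflow.models.param import Param


@dag(
    start_date=pendulum.datetime(2023, 1, 1, tz="UTC"),
    schedule=None,
    catchup=False,
    # required to convert params to their correct types
    render_template_as_native_obj=True,
    params={
        "delete_actions": Param(False, type="boolean"),
        # A null default combined with a non-scalar allowed type exercises the failing form path.
        "dates": Param(None, type=["null", "array"], minItems=1),
    },
)
def trigger_form_repro():
    @task
    def noop():
        print("the params only matter for the trigger form, not for this task")

    noop()


trigger_form_repro()
```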
Deploy the DAG, click the manual trigger button.
### Operating System
Debian
### Versions of Apache Airflow Providers
N/A
### Deployment
Other Docker-based deployment
### Deployment details
Amazon ECS
Python version: 3.10.11
Airflow version: 2.6.0 (official docker image as base)
### Anything else
Occurs every time.
Does *NOT* occur when `params` are not defined on the DAG.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31014 | https://github.com/apache/airflow/pull/31078 | 49cc213919a7e2a5d4bdc9f952681fa4ef7bf923 | b8b18bd74b72edc4b40e91258fccc54cf3aff3c1 | "2023-05-02T12:08:03Z" | python | "2023-05-06T12:20:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,984 | ["airflow/models/dagrun.py", "airflow/models/taskinstance.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Unable to remove DagRun and TaskInstance with note | ### Apache Airflow version
2.6.0
### What happened
Hi, I'm unable to remove a DagRun or TaskInstance when it has a note attached.
### What you think should happen instead
Should be able to remove DagRuns or TaskInstances with or without notes.
Also, the note should be removed when the parent entity is removed.
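As background, the expected behavior (child note rows disappearing together with their parent) is what a delete cascade on the relationship gives you in SQLAlchemy. The sketch below uses toy models, not Airflow's actual ORM classes, and only illustrates the mechanism:
```python
# Toy models only; names and columns are illustrative, not Airflow's schema.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class Run(Base):
    __tablename__ = "run"
    id = Column(Integer, primary_key=True)
    # "delete-orphan" makes the note row go away when the parent run is deleted,
    # instead of SQLAlchemy trying to blank out the note's primary-key FK.
    note = relationship("Note", uselist=False, cascade="all, delete, delete-orphan")


class Note(Base):
    __tablename__ = "note"
    run_id = Column(Integer, ForeignKey("run.id"), primary_key=True)
    content = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Run(id=1, note=Note(content="hello")))
    session.commit()

    session.delete(session.get(Run, 1))  # without the cascade this raises the AssertionError above
    session.commit()
    print(session.query(Note).count())  # 0
```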
### How to reproduce
1. Create a note on a DagRun or TaskInstance
2. Try to remove the row the note was added to by clicking the delete record icon. This displays the alert `General Error <class 'AssertionError'>` in the UI.
3. Select the checkbox of a DagRun containing a note, click the `Actions` dropdown and select `Delete`. This won't display anything in the UI.
### Operating System
OSX 12.6
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sqlite==3.3.1
### Deployment
Virtualenv installation
### Deployment details
Deployed using PostgreSQL 13.9 and SQLite 3
### Anything else
DagRun deletion Log
```
[2023-05-01T13:06:42.125+0700] {interface.py:790} ERROR - Delete record error: Dependency rule tried to blank-out primary key column 'dag_run_note.dag_run_id' on instance '<DagRunNote at 0x1125afa00>'
Traceback (most recent call last):
File "/opt/airflow/.venv/lib/python3.10/site-packages/flask_appbuilder/models/sqla/interface.py", line 775, in delete
self.session.commit()
File "<string>", line 2, in commit
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
self._transaction.commit(_to_root=self.future)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3446, in flush
self._flush(objects)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3585, in _flush
with util.safe_reraise():
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3546, in _flush
flush_context.execute()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 577, in execute
self.dependency_processor.process_deletes(uow, states)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 552, in process_deletes
self._synchronize(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 610, in _synchronize
sync.clear(dest, self.mapper, self.prop.synchronize_pairs)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/sync.py", line 86, in clear
raise AssertionError(
AssertionError: Dependency rule tried to blank-out primary key column 'dag_run_note.dag_run_id' on instance '<DagRunNote at 0x1125afa00>'
```
TaskInstance deletion Log
```
[2023-05-01T13:06:42.125+0700] {interface.py:790} ERROR - Delete record error: Dependency rule tried to blank-out primary key column 'task_instance_note.task_id' on instance '<TaskInstanceNote at 0x1126ba770>'
Traceback (most recent call last):
File "/opt/airflow/.venv/lib/python3.10/site-packages/flask_appbuilder/models/sqla/interface.py", line 775, in delete
self.session.commit()
File "<string>", line 2, in commit
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
self._transaction.commit(_to_root=self.future)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3446, in flush
self._flush(objects)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3585, in _flush
with util.safe_reraise():
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3546, in _flush
flush_context.execute()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 577, in execute
self.dependency_processor.process_deletes(uow, states)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 552, in process_deletes
self._synchronize(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 610, in _synchronize
sync.clear(dest, self.mapper, self.prop.synchronize_pairs)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/sync.py", line 86, in clear
raise AssertionError(
AssertionError: Dependency rule tried to blank-out primary key column 'task_instance_note.task_id' on instance '<TaskInstanceNote at 0x1126ba770>'
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30984 | https://github.com/apache/airflow/pull/30987 | ec7674f111177c41c02e5269ad336253ed9c28b4 | 0212b7c14c4ce6866d5da1ba9f25d3ecc5c2188f | "2023-05-01T06:29:36Z" | python | "2023-05-01T21:14:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,947 | ["BREEZE.rst"] | BREEZE: add troubleshooting section to cover ETIMEDOUT during start-airflow | ### What do you see as an issue?
The BREEZE troubleshooting section does not cover the ETIMEDOUT error or a potential fix for when it happens:
https://github.com/apache/airflow/blob/main/BREEZE.rst#troubleshooting
### Solving the problem
BREEZE.rst can be improved by describing ways to troubleshoot and fix the ETIMEDOUT error when running `start-airflow`, which seems to be one of the common problems when using BREEZE.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30947 | https://github.com/apache/airflow/pull/30949 | 783aa9cecbf47b4d0e5509d1919f644b9689b6b3 | bd542fdf51ad9550e5c4348f11e70b5a6c9adb48 | "2023-04-28T18:03:00Z" | python | "2023-04-28T20:37:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,932 | ["airflow/models/baseoperator.py", "tests/models/test_mappedoperator.py"] | Tasks created using "dynamic task mapping" ignore the Task Group passed as argument | ### Apache Airflow version
main (development)
### What happened
When creating a DAG with Task Groups and a Mapped Operator, if the Task Group is passed as an argument to the Mapped Operator's `partial` method, it is ignored and the operator is not added to the group.
### What you think should happen instead
The Mapped Operator should be added to the Task Group passed as an argument.
### How to reproduce
Create a DAG with source code like the following:
```python
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.empty import EmptyOperator
from airflow.utils import timezone
from airflow.utils.task_group import TaskGroup
with DAG("dag", start_date=timezone.datetime(2016, 1, 1)) as dag:
start = EmptyOperator(task_id="start")
finish = EmptyOperator(task_id="finish")
group = TaskGroup("test-group")
commands = ["echo a", "echo b", "echo c"]
mapped = BashOperator.partial(task_id="task_2", task_group=group).expand(bash_command=commands)
start >> group >> finish
# assert mapped.task_group == group
```
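A possible workaround (not part of the original report) is to create the mapped task inside the TaskGroup's context manager instead of passing `task_group=` explicitly; the enclosing context does attach the task to the group:
```python
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.empty import EmptyOperator
from airflow.utils import timezone
from airflow.utils.task_group import TaskGroup

with DAG("dag_ctx", start_date=timezone.datetime(2016, 1, 1)) as dag_ctx:
    start = EmptyOperator(task_id="start")
    finish = EmptyOperator(task_id="finish")

    with TaskGroup("test-group") as group:
        commands = ["echo a", "echo b", "echo c"]
        # No task_group= argument needed here; the context manager assigns the group.
        mapped = BashOperator.partial(task_id="task_2").expand(bash_command=commands)

    start >> group >> finish
```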
### Operating System
macOS 13.2.1 (22D68)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30932 | https://github.com/apache/airflow/pull/30933 | 1d4b1410b027c667d4e2f51f488f98b166facf71 | 4ee2de1e38a85abb89f9f313a3424c7368e12d1a | "2023-04-27T23:34:38Z" | python | "2023-04-29T21:27:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,900 | ["airflow/api_connexion/endpoints/dag_endpoint.py", "tests/api_connexion/endpoints/test_dag_endpoint.py"] | REST API, order_by parameter in dags list is not taken into account | ### Apache Airflow version
2.5.3
### What happened
It seems that the `order_by` parameter is ignored when listing DAGs via the REST API.
The following two commands return the same results, which should not be possible, because one is ascending and the other descending:
```
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=dag_id&only_active=true' -H 'accept: application/json'
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=-dag_id&only_active=true' -H 'accept: application/json'
```
By the way, giving an incorrect field name doesn't throw an error either.
### What you think should happen instead
_No response_
### How to reproduce
The following two commands return the same results:
```
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=-dag_id&only_active=true' -H 'accept: application/json'
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=dag_id&only_active=true' -H 'accept: application/json'
```
The same problem is visible with the Swagger UI.
### Operating System
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)" NAME="Debian GNU/Linux" VERSION_ID="11" VERSION="11 (bullseye)" VERSION_CODENAME=bullseye ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/"
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-mysql==4.0.2
apache-airflow-providers-oracle==3.6.0
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-vertica==3.3.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30900 | https://github.com/apache/airflow/pull/30926 | 36fe6d0377d37b5f6be8ea5659dcabb44b4fc233 | 1d4b1410b027c667d4e2f51f488f98b166facf71 | "2023-04-27T10:10:57Z" | python | "2023-04-29T16:07:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,884 | ["airflow/jobs/dag_processor_job_runner.py"] | DagProcessor Performance Regression | ### Apache Airflow version
2.5.3
### What happened
Upgrading from `2.4.3` to `2.5.3` caused a significant increase in DAG processing time on the standalone dag processor (from ~1-2s to ~60s):
```
/opt/airflow/dags/ecco_airflow/dags/image_processing/product_image_load.py 0 -1 56.68s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/known_consumers/known_consumers.py 0 -1 56.64s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/monitoring/row_counts.py 0 -1 56.67s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/omnichannel/base.py 0 -1 56.66s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_data.py 0 -1 56.67s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_stream.py 0 -1 56.52s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/reporting/reporting_data_foundation.py 0 -1 56.63s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/retail_analysis/retail_analysis_dbt.py 0 -1 56.66s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/rfm_segments/rfm_segments.py 0 -1 56.02s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/utils/airflow.py 0 -1 56.65s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/aad_users_listing.py 1 0 55.51s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/funnel_io.py 1 0 56.13s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/iar_param.py 1 0 56.50s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/sfmc_copy.py 1 0 56.59s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/us_legacy_datawarehouse.py 1 0 55.15s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_auditing.py 1 0 56.54s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_budget_daily_phasing.py 1 0 56.63s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_gold_rm_tests.py 1 0 55.00s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/consumer_entity_matching/graph_entity_matching.py 1 0 56.67s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/data_backup/data_backup.py 1 0 56.69s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/hive/adhoc_entity_publish.py 1 0 55.33s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/image_regression/train.py 1 0 56.63s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/maintenance/db_maintenance.py 1 0 56.58s 2023-04-26T12:56:15
```
Also seeing messages like these
```
[2023-04-26T12:56:15.322+0000] {manager.py:979} DEBUG - Processor for /opt/airflow/dags/ecco_airflow/dags/bronze/us_legacy_datawarehouse.py finished
[2023-04-26T12:56:15.323+0000] {processor.py:296} DEBUG - Waiting for <ForkProcess name='DagFileProcessor68-Process' pid=116 parent=7 stopped exitcode=0>
[2023-04-26T12:56:15.323+0000] {manager.py:979} DEBUG - Processor for /opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_gold_rm_tests.py finished
[2023-04-26T12:56:15.323+0000] {processor.py:296} DEBUG - Waiting for <ForkProcess name='DagFileProcessor69-Process' pid=122 parent=7 stopped exitcode=0>
[2023-04-26T12:56:15.324+0000] {manager.py:979} DEBUG - Processor for /opt/airflow/dags/ecco_airflow/dags/bronze/streaming/sap_inventory_feed.py finished
[2023-04-26T12:56:15.324+0000] {processor.py:314} DEBUG - Waiting for <ForkProcess name='DagFileProcessor70-Process' pid=128 parent=7 stopped exitcode=-SIGKILL>
[2023-04-26T12:56:15.324+0000] {manager.py:986} ERROR - Processor for /opt/airflow/dags/ecco_airflow/dags/bronze/streaming/sap_inventory_feed.py exited with return code -9.
```
In `2.4.3`:
```
/opt/airflow/dags/ecco_airflow/dags/image_regression/train.py 1 0 1.34s 2023-04-26T14:19:08
/opt/airflow/dags/ecco_airflow/dags/known_consumers/known_consumers.py 1 0 1.12s 2023-04-26T14:19:00
/opt/airflow/dags/ecco_airflow/dags/maintenance/db_maintenance.py 1 0 0.63s 2023-04-26T14:18:27
/opt/airflow/dags/ecco_airflow/dags/monitoring/row_counts.py 1 0 3.74s 2023-04-26T14:18:45
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_data.py 1 0 1.21s 2023-04-26T14:18:47
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_stream.py 1 0 1.22s 2023-04-26T14:18:30
/opt/airflow/dags/ecco_airflow/dags/reporting/reporting_data_foundation.py 1 0 1.39s 2023-04-26T14:19:08
/opt/airflow/dags/ecco_airflow/dags/retail_analysis/retail_analysis_dbt.py 1 0 1.32s 2023-04-26T14:18:51
/opt/airflow/dags/ecco_airflow/dags/rfm_segments/rfm_segments.py 1 0 1.20s 2023-04-26T14:18:34
```
### What you think should happen instead
Dag processing time remains unchanged
### How to reproduce
Provision Airflow with the following settings:
## Airflow 2.5.3
- K8s 1.25.6
- Kubernetes executor
- Postgres backend (Postgres 11.0)
- Deploy using Airflow Helm **v1.9.0** with image **2.5.3-python3.9**
- pgbouncer enabled
- standalone dag processor with 3500m cpu / 4000Mi memory, single replica
- dags and logs mounted from RWM volume (Azure files)
## Airflow 2.4.3
- K8s 1.25.6
- Kubernetes executor
- Postgres backend (Postgres 11.0)
- Deploy using Airflow Helm **v1.7.0** with image **2.4.3-python3.9**
- pgbouncer enabled
- standalone dag processor with 2500m cpu / 2000Mi memory, single replica
- dags and logs mounted from RWM volume (Azure files)
## Image modifications
We use an image built from `apache/airflow:2.4.3-python3.9`, with some dependencies added or reinstalled with different versions.
### Poetry dependency spec:
For `2.5.3`:
```
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
authlib = "~1.0.1"
adapta = { version = "==2.2.3", extras = ["azure", "storage"] }
numpy = "==1.23.3"
db-dtypes = "~1.0.4"
gevent = "^21.12.0"
sqlalchemy = ">=1.4,<2.0"
snowflake-sqlalchemy = ">=1.4,<2.0"
esd-services-api-client = "~0.6.0"
apache-airflow-providers-common-sql = "~1.3.1"
apache-airflow-providers-databricks = "~3.1.0"
apache-airflow-providers-google = "==8.4.0"
apache-airflow-providers-microsoft-azure = "~5.2.1"
apache-airflow-providers-datadog = "~3.0.0"
apache-airflow-providers-snowflake = "~3.3.0"
apache-airflow = "==2.5.3"
dataclasses-json = ">=0.5.7,<0.6"
```
For `2.4.3`:
```
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
authlib = "~1.0.1"
adapta = { version = "==2.2.3", extras = ["azure", "storage"] }
numpy = "==1.23.3"
db-dtypes = "~1.0.4"
gevent = "^21.12.0"
sqlalchemy = ">=1.4,<2.0"
snowflake-sqlalchemy = ">=1.4,<2.0"
esd-services-api-client = "~0.6.0"
apache-airflow-providers-common-sql = "~1.3.1"
apache-airflow-providers-databricks = "~3.1.0"
apache-airflow-providers-google = "==8.4.0"
apache-airflow-providers-microsoft-azure = "~5.2.1"
apache-airflow-providers-datadog = "~3.0.0"
apache-airflow-providers-snowflake = "~3.3.0"
apache-airflow = "==2.4.3"
dataclasses-json = ">=0.5.7,<0.6"
```
### Operating System
Container OS: Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-databricks==3.1.0
apache-airflow-providers-datadog==3.0.0
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==5.2.1
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-snowflake==3.3.0
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.2.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
See How-to-reproduce section
### Anything else
Occurs when upgrading the helm chart installation from 1.7.0/2.4.3 to 1.9.0/2.5.3.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30884 | https://github.com/apache/airflow/pull/30899 | 7ddad1a24b1664cef3827b06d9c71adbc558e9ef | 00ab45ffb7dee92030782f0d1496d95b593fd4a7 | "2023-04-26T14:47:31Z" | python | "2023-04-27T11:27:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,838 | ["airflow/www/templates/airflow/dags.html", "airflow/www/views.py"] | Sort Dag List by Last Run Date | ### Description
It would be helpful if I could see the most recently run DAGs and their health in the Airflow UI. Right now many fields are sortable, but not the last run.
The solution here would likely build off the previous work from this issue: https://github.com/apache/airflow/issues/8459
### Use case/motivation
When my team updates a docker image, we want to confirm our DAGs are still running healthy. One way to do that would be to pop open the Airflow UI, look at our team's DAGs (using the label tag), and confirm the most recently run jobs are still healthy.
### Related issues
I think it would build off of https://github.com/apache/airflow/issues/8459
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30838 | https://github.com/apache/airflow/pull/31234 | 7ebda3898db2eee72d043a9565a674dea72cd8fa | 3363004450355582712272924fac551dc1f7bd56 | "2023-04-24T13:41:07Z" | python | "2023-05-17T15:11:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,797 | ["airflow/serialization/serde.py", "tests/utils/test_json.py"] | Deserialization of nested dict failing | ### Apache Airflow version
2.6.0b1
### What happened
When returning nested dictionary data from Task A and passing the returned value to Task B, deserialization fails if the data is a nested dictionary containing a non-primitive or iterable type.
```
Traceback (most recent call last):
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/abstractoperator.py", line 570, in _do_render_template_fields
rendered_content = self.render_template(
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/template/templater.py", line 162, in render_template
return tuple(self.render_template(element, context, jinja_env, oids) for element in value)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/template/templater.py", line 162, in <genexpr>
return tuple(self.render_template(element, context, jinja_env, oids) for element in value)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/template/templater.py", line 158, in render_template
return value.resolve(context)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/utils/session.py", line 76, in wrapper
return func(*args, session=session, **kwargs)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/xcom_arg.py", line 342, in resolve
result = ti.xcom_pull(
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2454, in xcom_pull
return XCom.deserialize_value(first)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/xcom.py", line 666, in deserialize_value
return BaseXCom._deserialize_value(result, False)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/xcom.py", line 659, in _deserialize_value
return json.loads(result.value.decode("UTF-8"), cls=XComDecoder, object_hook=object_hook)
File "/opt/homebrew/Cellar/[email protected]/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/opt/homebrew/Cellar/[email protected]/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/homebrew/Cellar/[email protected]/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/utils/json.py", line 122, in object_hook
val = deserialize(dct)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/serialization/serde.py", line 212, in deserialize
return {str(k): deserialize(v, full) for k, v in o.items()}
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/serialization/serde.py", line 212, in <dictcomp>
return {str(k): deserialize(v, full) for k, v in o.items()}
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/serialization/serde.py", line 206, in deserialize
raise TypeError()
```
The way we deserialize is by adding a [custom decoder](https://docs.python.org/3/library/json.html#encoders-and-decoders) for JSON and overriding the `object_hook` as shown below.
https://github.com/apache/airflow/blob/ebe2f2f626ffee4b9d0f038fe5b89c322125a49b/airflow/utils/json.py#L107-L126
But if you try to run the code below:
```
import json
def object_hook(dct: dict) -> dict:
print("dct : ", dct)
return dct
if __name__ == "__main__":
val = json.dumps({"a": {"a-1": 1, "a-2": {"a-2-1": 1, "a-2-2": 2}}, "b": {"b-1": 1, "b-2": 2}, "c": {"c-1": 1, "c-2": 2}})
print("val : ", val, "\n\n")
return_val = json.loads(val, object_hook=object_hook)
```
Output:
```
val : {"a": {"a-1": 1, "a-2": {"a-2-1": 1, "a-2-2": 2}}, "b": {"b-1": 1, "b-2": 2}, "c": {"c-1": 1, "c-2": 2}}
dct : {'a-2-1': 1, 'a-2-2': 2}
dct : {'a-1': 1, 'a-2': {'a-2-1': 1, 'a-2-2': 2}}
dct : {'b-1': 1, 'b-2': 2}
dct : {'c-1': 1, 'c-2': 2}
dct : {'a': {'a-1': 1, 'a-2': {'a-2-1': 1, 'a-2-2': 2}}, 'b': {'b-1': 1, 'b-2': 2}, 'c': {'c-1': 1, 'c-2': 2}}
```
`object_hook` is called with every decoded value. Because of this, `deserialize` gets called even on already-deserialized data, causing this issue.
The `deserialize` function code:
https://github.com/apache/airflow/blob/ebe2f2f626ffee4b9d0f038fe5b89c322125a49b/airflow/serialization/serde.py#L174
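One way to think about a guard (a rough sketch only; the `"__classname__"` marker checked here is an assumption about the serde payload format, and this is not necessarily the change that was merged) is to only hand dicts that look like serialized payloads to `deserialize()`:
```python
from airflow.serialization.serde import deserialize


def object_hook(dct: dict) -> dict:
    # Plain dicts produced while decoding nested values pass through untouched
    # instead of being fed back into deserialize().
    if "__classname__" in dct:  # assumed marker key of a serde payload
        return deserialize(dct)
    return dct
```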
### What you think should happen instead
Airflow should be able to serialize/deserialize this data without any issue.
### How to reproduce
Refer to https://github.com/apache/airflow/pull/30798.
Run the code below:
```
import pandas as pd

from airflow import DAG
from airflow.decorators import task  # needed for the @task decorator below
from airflow.utils import timezone
with DAG("random-string", start_date=timezone.datetime(2016, 1, 1), catchup=False):
@task
def taskA():
return {"foo": 1, "bar": 2, "baz": pd.DataFrame({"numbers": [1, 2, 3], "Colors": ["red", "white", "blue"]})}
@task
def taskB(x):
print(x)
v = taskA()
taskB(v)
```
### Operating System
macOS Ventura
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30797 | https://github.com/apache/airflow/pull/30819 | cbaea573b3658dd941335e21c5f29118b31cb6d8 | 58e26d9df42f10e4e2b46cd26c6832547945789b | "2023-04-21T16:47:31Z" | python | "2023-04-23T10:38:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,796 | ["docs/apache-airflow/authoring-and-scheduling/plugins.rst"] | Tasks forked by the Local Executor are loading stale modules when the modules are also referenced by plugins | ### Apache Airflow version
2.5.3
### What happened
After upgrading from Airflow 2.4.3 to 2.5.3, tasks forked by the `Local Executor` can run with outdated module imports if those modules are also imported by plugins. It seems as though tasks will reuse imports that were first loaded when the scheduler boots, and any subsequent updates to those shared modules do not get reflected in new tasks.
I verified this issue occurs for all patch versions of 2.5.
### What you think should happen instead
Given that the plugin documentation states:
> if you make any changes to plugins and you want the webserver or scheduler to use that new code you will need to restart those processes.
this behavior may be intended. But it's not clear that this affects the code for forked tasks as well, so if this is not actually a bug then perhaps the documentation can be updated.
### How to reproduce
Given a plugin file like:
```python
from airflow.models.baseoperator import BaseOperatorLink
from src.custom_operator import CustomOperator
class CustomerOperatorLink(BaseOperatorLink):
operators = [CustomOperator]
```
And a DAG file like:
```
from src.custom_operator import CustomOperator
...
```
Any updates to the `CustomOperator` will not be reflected in new running tasks after the scheduler boots.
### Operating System
Debian bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
Workarounds
- Set `execute_tasks_new_python_interpreter` to `False`
- In my case of using Operator Links, I can alternatively set the Operator Link on my custom operator via `operator_extra_links`, which wouldn't require importing the operator from the plugin file (a sketch of this is shown below).
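A rough sketch of that second workaround (hypothetical names; the keyword-only `get_link` signature shown is the one used by recent Airflow versions):
```python
# Attach the link directly on the operator so the plugin file no longer has to import it.
from airflow.models.baseoperator import BaseOperator, BaseOperatorLink


class CustomOperatorLink(BaseOperatorLink):
    name = "Custom link"

    def get_link(self, operator, *, ti_key):
        return "https://example.com/some/dashboard"


class CustomOperator(BaseOperator):
    operator_extra_links = (CustomOperatorLink(),)

    def execute(self, context):
        pass
```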
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30796 | https://github.com/apache/airflow/pull/31781 | ab8c9ec2545caefb232d8e979b18b4c8c8ad3563 | 18f2b35c8fe09aaa8d2b28065846d7cf1e85cae2 | "2023-04-21T15:35:10Z" | python | "2023-06-08T18:50:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,689 | ["airflow/sensors/external_task.py", "tests/sensors/test_external_task_sensor.py"] | ExternalTaskSensor waits forever for TaskGroup containing mapped tasks | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
If you have an `ExternalTaskSensor` that uses `external_task_group_id` to wait on a `TaskGroup`, and if that `TaskGroup` contains any [mapped tasks](https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html), the sensor will be stuck waiting forever even after the task group is successful.
### What you think should happen instead
`ExternalTaskSensor` should be able to wait on `TaskGroup`s, regardless of whether or not that group contains mapped tasks.
### How to reproduce
```
#!/usr/bin/env python3
import datetime
import logging
from airflow.decorators import dag, task
from airflow.operators.empty import EmptyOperator
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.utils.task_group import TaskGroup
logger = logging.getLogger(__name__)
@dag(
schedule_interval='@daily',
start_date=datetime.datetime(2023, 4, 17),
)
def task_groups():
with TaskGroup(group_id='group'):
EmptyOperator(task_id='operator1') >> EmptyOperator(task_id='operator2')
with TaskGroup(group_id='mapped_tasks'):
@task
def get_tasks():
return [1, 2, 3]
@task
def process(x):
print(x)
process.expand(x=get_tasks())
ExternalTaskSensor(
task_id='wait_for_normal_task_group',
external_dag_id='task_groups',
external_task_group_id='group',
poke_interval=3,
check_existence=True,
)
ExternalTaskSensor(
task_id='wait_for_mapped_task_group',
external_dag_id='task_groups',
external_task_group_id='mapped_tasks',
poke_interval=3,
check_existence=True,
)
task_groups()
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
I think the bug is [here](https://github.com/apache/airflow/blob/731ef3d692fc7472e245f39f3e3e42c2360cb769/airflow/sensors/external_task.py#L364):
```
elif self.external_task_group_id:
external_task_group_task_ids = self.get_external_task_group_task_ids(session)
count = (
self._count_query(TI, session, states, dttm_filter)
.filter(TI.task_id.in_(external_task_group_task_ids))
.scalar()
) / len(external_task_group_task_ids)
```
If the group contains mapped tasks, `external_task_group_task_ids` only contains a list of task names (not expanded to include mapped task indices), but `count` counts all mapped task instances. This returns a larger value than the calling function expects when it checks `count_allowed == len(dttm_filter)`, so `poke` always returns `False`.
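A quick back-of-the-envelope check using the example DAG above (the variable values below are illustrative):
```python
# "mapped_tasks" has 2 task ids but 4 task instances once process expands over [1, 2, 3].
external_task_group_task_ids = ["mapped_tasks.get_tasks", "mapped_tasks.process"]
matching_task_instances = 1 + 3               # get_tasks + three mapped process instances
dttm_filter = ["2023-04-17T00:00:00"]         # a single logical date being poked

count = matching_task_instances / len(external_task_group_task_ids)  # 2.0
print(count == len(dttm_filter))              # False, so poke() never succeeds
```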
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30689 | https://github.com/apache/airflow/pull/30742 | ae3a61775a79a3000df0a8bdf50807033f4e3cdc | 3c30e54de3b8a6fe793b0ff1ed8225562779d96c | "2023-04-17T21:23:39Z" | python | "2023-05-18T07:38:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,673 | ["airflow/providers/openlineage/utils/utils.py"] | Open-Lineage type-ignore in OpenLineageRedactor likely hides some problem | ### Body
The new `attrs` package 23.1 released on 16th of April (11 hours ago) added typing information to "attrs.asdict" method and it mypy tests started to fail with
```
airflow/providers/openlineage/utils/utils.py:345: error: Argument 1 to "asdict"
has incompatible type "Type[AttrsInstance]"; expected "AttrsInstance"
[arg-type]
... for dict_key, subval in attrs.asdict(item, recurse=False)....
^
airflow/providers/openlineage/utils/utils.py:345: note: ClassVar protocol member AttrsInstance.__attrs_attrs__ can never be matched by a class object
```
The nature of this error (receiving a Type where an instance is expected) indicates that there is a somewhat serious issue here.
Especially since there was a `type: ignore` one line above, which would indicate that something is quite wrong here (when we ignore a typing issue, we usually comment why and ignore a very specific error, for example `type: ignore[attr-defined]`, when we have a good reason to ignore it).
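For readers less familiar with the message, a tiny illustration of the distinction mypy is flagging (a toy attrs class, not the provider code):
```python
import attrs


@attrs.define
class Point:
    x: int
    y: int


print(attrs.asdict(Point(1, 2)))  # fine: asdict() wants an instance -> {'x': 1, 'y': 2}
# attrs.asdict(Point)             # passing the class object itself is what the error warns about
```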
Since open-lineage is not yet released/functional and partially in progress, this is not an issue to be solved immediately, but soon (cc: @mobuchowski).
For now I am working around this by adding another `type: ignore` to stop the static checks from failing (they fail only for PRs that are updating dependencies) and to allow upgrading to attrs 23.1.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/30673 | https://github.com/apache/airflow/pull/30677 | 2557c07aa5852d415a647679180d4dbf81a5d670 | 6a6455ad1c2d76eaf9c60814c2b0a0141ad29da0 | "2023-04-16T21:32:52Z" | python | "2023-04-17T13:56:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,635 | ["airflow/providers/google/cloud/operators/bigquery.py"] | `BigQueryGetDataOperator` does not respect project_id parameter | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.11.0
google-cloud-bigquery==2.34.4
### Apache Airflow version
2.5.2+astro.2
### Operating System
OSX
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
When setting a `project_id` parameter for `BigQueryGetDataOperator`, the default project from the environment is not overridden. Maybe something broke after it was added in https://github.com/apache/airflow/pull/25782?
### What you think should happen instead
Passing the project in as a parameter should take precedence over reading it from the environment.
### How to reproduce
Part 1
```py
from airflow.providers.google.cloud.operators.bigquery import BigQueryGetDataOperator
bq = BigQueryGetDataOperator(
task_id=f"my_test_query_task_id",
gcp_conn_id="bigquery",
table_id="mytable",
dataset_id="mydataset",
project_id="my_non_default_project",
)
f2 = bq.execute(None)
```
In the environment I have set:
```sh
AIRFLOW_CONN_BIGQUERY=gcpbigquery://
GOOGLE_CLOUD_PROJECT=my_primary_project
GOOGLE_APPLICATION_CREDENTIALS=/usr/local/airflow/gcloud/application_default_credentials.json
```
The credentials JSON file doesn't have a project set.
Part 2
Unsetting GOOGLE_CLOUD_PROJECT and rerunning results in:
```sh
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 886, in execute
schema: dict[str, list] = hook.get_schema(
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/common/hooks/base_google.py", line 463, in inner_wrapper
raise AirflowException(
airflow.exceptions.AirflowException: The project id must be passed either as keyword project_id parameter or as project_id extra in Google Cloud connection definition. Both are not set!
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30635 | https://github.com/apache/airflow/pull/30651 | d3aeb4db0c539f2151ef395300cb2b5efc6dce08 | 4eab616e9f0a89c1a6268d5b5eaba526bfa9be6d | "2023-04-14T01:00:24Z" | python | "2023-04-15T00:39:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,613 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "tests/providers/amazon/aws/hooks/test_dynamodb.py"] | DynamoDBHook - not able to registering a custom waiter | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon=7.4.1
### Apache Airflow version
airflow=2.5.3
### Operating System
Mac
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
We can register a custom waiter by adding a JSON file under the path `airflow/airflow/providers/amazon/aws/waiters/`. The file should be named `<client_type>.json`, in this case `dynamodb.json`. Once registered, we can use the custom waiter.
Content of the file `airflow/airflow/providers/amazon/aws/waiters/dynamodb.json`:
```
{
"version": 2,
"waiters": {
"export_table": {
"operation": "ExportTableToPointInTime",
"delay": 30,
"maxAttempts": 60,
"acceptors": [
{
"matcher": "path",
"expected": "COMPLETED",
"argument": "ExportDescription.ExportStatus",
"state": "success"
},
{
"matcher": "path",
"expected": "FAILED",
"argument": "ExportDescription.ExportStatus",
"state": "failure"
},
{
"matcher": "path",
"expected": "IN_PROGRESS",
"argument": "ExportDescription.ExportStatus",
"state": "retry"
}
]
}
}
}
```
Getting the below error after running the test case:
```
class TestCustomDynamoDBServiceWaiters:
"""Test waiters from ``amazon/aws/waiters/dynamodb.json``."""
STATUS_COMPLETED = "COMPLETED"
STATUS_FAILED = "FAILED"
STATUS_IN_PROGRESS = "IN_PROGRESS"
@pytest.fixture(autouse=True)
def setup_test_cases(self, monkeypatch):
self.client = boto3.client("dynamodb", region_name="eu-west-3")
monkeypatch.setattr(DynamoDBHook, "conn", self.client)
@pytest.fixture
def mock_export_table_to_point_in_time(self):
"""Mock ``DynamoDBHook.Client.export_table_to_point_in_time`` method."""
with mock.patch.object(self.client, "export_table_to_point_in_time") as m:
yield m
def test_service_waiters(self):
assert os.path.exists('/Users/utkarsharma/sandbox/airflow-sandbox/airflow/airflow/providers/amazon/aws/waiters/dynamodb.json')
hook_waiters = DynamoDBHook(aws_conn_id=None).list_waiters()
assert "export_table" in hook_waiters
```
## Error
```
tests/providers/amazon/aws/waiters/test_custom_waiters.py:273 (TestCustomDynamoDBServiceWaiters.test_service_waiters)
'export_table' != ['table_exists', 'table_not_exists']
Expected :['table_exists', 'table_not_exists']
Actual :'export_table'
self = <tests.providers.amazon.aws.waiters.test_custom_waiters.TestCustomDynamoDBServiceWaiters object at 0x117f085e0>
    def test_service_waiters(self):
        assert os.path.exists('/Users/utkarsharma/sandbox/airflow-sandbox/airflow/airflow/providers/amazon/aws/waiters/dynamodb.json')
        hook_waiters = DynamoDBHook(aws_conn_id=None).list_waiters()
>       assert "export_table" in hook_waiters
E       AssertionError: assert 'export_table' in ['table_exists', 'table_not_exists']
test_custom_waiters.py:277: AssertionError
```
### What you think should happen instead
It should register the custom waiter and the test case should pass.
### How to reproduce
Add the file mentioned above to Airflow's code base and try running the test case provided.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30613 | https://github.com/apache/airflow/pull/30595 | cb5a2c56b99685305eecdd3222b982a1ef668019 | 7c2d3617bf1be0781e828d3758ee6d9c6490d0f0 | "2023-04-13T04:27:21Z" | python | "2023-04-14T16:43:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,600 | ["airflow/dag_processing/manager.py", "airflow/models/dag.py", "tests/dag_processing/test_job_runner.py"] | DAGs deleted from zips aren't deactivated | ### Apache Airflow version
2.5.3
### What happened
When a DAG is removed from a zip in the DAGs directory, but the zip file remains, it is not marked correctly as inactive. It is still visible in the UI, and attempting to open the DAG results in a `DAG "mydag" seems to be missing from DagBag.` error in the UI.
The DAG is removed from the SerializedDag table, resulting in the scheduler repeatedly erroring with `[2023-04-12T12:43:51.165+0000] {scheduler_job.py:1063} ERROR - DAG 'mydag' not found in serialized_dag table`.
I have done some minor investigating and it appears that [this piece of code](https://github.com/apache/airflow/blob/2.5.3/airflow/dag_processing/manager.py#L748-L772) may be the cause.
`dag_filelocs` provides the path to a specific python file within a zip, so `SerializedDagModel.remove_deleted_dags` is able to remove the missing DAG.
However, `self._file_paths` only contains the top-level zip name, so `DagModel.deactivate_deleted_dags` will only deactivate DAGs where the zip they are contained in is deleted, regardless of whether the DAG is still inside the zip.
I can see there are [other methods that handle DAG deactivation](https://github.com/apache/airflow/blob/2.5.3/airflow/models/dag.py#L2945-L2968) and I'm not sure how these all interact but this does seem to cause this specific issue.
### What you think should happen instead
DAGs that are no longer in the DagBag are marked as inactive.
### How to reproduce
Running airflow locally with docker-compose:
- Create a zipfile with 2 DAG py files in in ./dags
- Wait for the DAGs to be parsed by the scheduler and appear in the UI
- Overwrite the existing DAG zip, with a new zip containing only 1 of the original DAG py files
- Wait for scheduler loop to parse the new zip
- Attempt to open the removed DAG in the UI, you will see an error
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
If I replace the docker image in the docker compose with an image built from this Dockerfile:
```
FROM apache/airflow:2.5.3
RUN sed -i '772s/self._file_paths/dag_filelocs/' /home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py
RUN sed -i '3351s/correct_maybe_zipped(dag_model.fileloc)/dag_model.fileloc/' /home/airflow/.local/lib/python3.7/site-packages/airflow/models/dag.py
```
The DAG is deactivated as expected
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30600 | https://github.com/apache/airflow/pull/30608 | 0f3b6579cb67d3cf8bd9fa8f9abd502fc774201a | 7609021ce93d61f2101f5e8cdc126bb8369d334b | "2023-04-12T14:05:38Z" | python | "2023-04-13T04:10:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,593 | ["airflow/jobs/dag_processor_job_runner.py"] | After upgrading to 2.5.3, Dag Processing time increased dramatically. | ### Apache Airflow version
2.5.3
### What happened
I upgraded my Airflow cluster from 2.5.2 to 2.5.3, after which strange things started happening.
I'm currently using a standalone dag processor, and parsing time that used to be about 2 seconds has suddenly increased to about 10 seconds.
This seems strange because I haven't made any changes other than the version upgrade. Is there something I can look into? Thanks in advance! 🙇🏼
![image](https://user-images.githubusercontent.com/16011260/231323427-e0d95506-c752-4a2b-93fc-9880b18814f3.png)
### What you think should happen instead
I believe that the time it takes to parse a DAG should be roughly constant, or at most vary a little, and shouldn't take as long as it does now.
### How to reproduce
If you cherry-pick [this commit](https://github.com/apache/airflow/pull/30079) into the 2.5.2 stable code, the issue will recur.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
- Kubernetes 1.21 Cluster
- 1.7.0 helm chart
- standalone dag processor
- using kubernetes executor
- using mysql database
### Anything else
This issue still persists, and restarting the Dag Processor has not resolved the issue.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30593 | https://github.com/apache/airflow/pull/30899 | 7ddad1a24b1664cef3827b06d9c71adbc558e9ef | 00ab45ffb7dee92030782f0d1496d95b593fd4a7 | "2023-04-12T01:28:37Z" | python | "2023-04-27T11:27:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,562 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/utils/db.py", "tests/utils/test_db.py"] | alembic Logging | ### Apache Airflow version
2.5.3
### What happened
When I call the airflow `initdb` function, it outputs these lines to the log:
```
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
```
### What you think should happen instead
There should be a mechanism to disable these logs, or they should just be set to WARN by default
### How to reproduce
Set up a new postgres connection and call:
```python
from airflow.utils.db import initdb

initdb()
```
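A possible interim workaround (not an official Airflow setting, and it may not survive any logging reconfiguration done by the migration environment itself) is to raise the alembic logger level before calling `initdb`, since alembic uses the standard `logging` module:
```python
import logging

from airflow.utils.db import initdb

# Silence alembic's INFO-level migration chatter for this call.
logging.getLogger("alembic").setLevel(logging.WARNING)
initdb()
```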
### Operating System
MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30562 | https://github.com/apache/airflow/pull/31415 | c5597d1fabe5d8f3a170885f6640344d93bf64bf | e470d784627502f171819fab072e0bbab4a05492 | "2023-04-10T11:25:58Z" | python | "2023-05-23T01:33:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,414 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | Cannot clear tasking instances on "List Task Instance" page with User role | ### Apache Airflow version
main (development)
### What happened
Only users with the `Admin` role are allowed to use the clear action on the TaskInstance list view.
### What you think should happen instead
Users with the `User` role should be able to clear task instances on the Task Instance page.
### How to reproduce
Try to clear a task instance while logged in as a user with the `User` role.
### Operating System
Fedora 37
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30414 | https://github.com/apache/airflow/pull/30415 | 22bef613678e003dde9128ac05e6c45ce934a50c | b140c4473335e4e157ff2db85148dd120c0ed893 | "2023-04-01T11:20:33Z" | python | "2023-04-22T17:10:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,407 | [".github/workflows/ci.yml", "BREEZE.rst", "dev/breeze/src/airflow_breeze/commands/testing_commands.py", "dev/breeze/src/airflow_breeze/commands/testing_commands_config.py", "dev/breeze/src/airflow_breeze/utils/selective_checks.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_testing_tests.svg"] | merge breeze's --test-type and --test-types options | ### Description
Using `breeze testing tests` recently, I noticed that the way to specify which tests to run is very confusing:
* `--test-type` supports specifying one type only (or `All`), allows specifying in detail which provider tests to run, and is ignored if `--run-in-parallel` is provided (from what I saw)
* `--test-types` (note the `s` at the end) supports a list of types, does not allow selecting specific provider tests, and is ignored if `--run-in-parallel` is NOT specified.
I _think_ that the two are mutually exclusive (i.e. there is no situation where both are taken into account at the same time), so it'd make sense to merge them.
Definition of Done:
- --test-type or --test-types can be used interchangeably, whether the tests are running in parallel or not (it'd be a bit like how `kubectl` allows using singular or plural for some actions, like `k get pod` == `k get pods`)
- When using the type `Providers`, specific provider tests can be selected between square brackets using the current syntax (e.g. `Providers[airbyte,http]`)
- several types can be specified, separated by a space (e.g. `"WWW CLI"`)
- the two bullet points above can be combined (e.g. `--test-type "Always Providers[airbyte,http] WWW"`)
### Use case/motivation
Having different behavior for a very similar option depending on whether we are running in parallel or not is confusing, and from a user perspective there is no benefit to having those as separate options.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30407 | https://github.com/apache/airflow/pull/30424 | 90ba6fe070d903bca327b52b2f61468408d0d96a | 20606438c27337c20aa9aff8397dfa6f286f03d3 | "2023-03-31T22:12:56Z" | python | "2023-04-04T11:30:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,400 | ["airflow/executors/kubernetes_executor.py"] | ERROR - Unknown error in KubernetesJobWatcher | ### Official Helm Chart version
1.7.0
### Apache Airflow version
2.4.0
### Kubernetes Version
K3s Kubernetes Version: v1.24.2+k3s2
### Helm Chart configuration
_No response_
### Docker Image customizations
_No response_
### What happened
Same as https://github.com/apache/airflow/issues/12229
```
[2023-03-31T17:47:08.304+0000] {kubernetes_executor.py:112} ERROR - Unknown error in KubernetesJobWatcher. Failing
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 103, in run
self.resource_version = self._run(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 148, in _run
for event in list_worker_pods():
File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/watch/watch.py", line 182, in stream
raise client.rest.ApiException(
kubernetes.client.exceptions.ApiException: (410)
Reason: Expired: too old resource version: 202541353 (202544371)
Process KubernetesJobWatcher-3:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 103, in run
self.resource_version = self._run(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 148, in _run
for event in list_worker_pods():
File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/watch/watch.py", line 182, in stream
raise client.rest.ApiException(
kubernetes.client.exceptions.ApiException: (410)
Reason: Expired: too old resource version: 202541353 (202544371)
[2023-03-31T17:47:08.832+0000] {kubernetes_executor.py:291} ERROR - Error while health checking kube watcher process. Process died for unknown reasons
```
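For context (a generic kubernetes-client pattern, not Airflow's actual fix), a watch loop normally has to treat HTTP 410 as a signal to drop the stale `resource_version` and restart the watch:
```python
# Minimal sketch assuming in-cluster credentials and a plain pod watch.
from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException


def watch_pods(namespace: str) -> None:
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    resource_version = None
    while True:
        try:
            for event in watch.Watch().stream(
                v1.list_namespaced_pod, namespace, resource_version=resource_version
            ):
                resource_version = event["object"].metadata.resource_version
        except ApiException as exc:
            if exc.status == 410:  # "too old resource version"
                resource_version = None  # restart the watch from the current state
            else:
                raise
```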
### What you think should happen instead
No errors in the logs.
### How to reproduce
The errors appear soon after an Airflow scheduler pod restart.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30400 | https://github.com/apache/airflow/pull/30425 | cce9b2217b86a88daaea25766d0724862577cc6c | 9e5fabecb05e83700688d940d31a0fbb49000d64 | "2023-03-31T18:13:24Z" | python | "2023-04-13T13:56:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,382 | ["airflow/providers/amazon/aws/transfers/sql_to_s3.py", "docs/apache-airflow-providers-amazon/transfer/sql_to_s3.rst", "tests/providers/amazon/aws/transfers/test_sql_to_s3.py", "tests/system/providers/amazon/aws/example_sql_to_s3.py"] | SqlToS3Operator not able to write data with partition_cols provided. | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
I am using the standard operator version which comes with apache/airflow:2.5.2.
### Apache Airflow version
2.5.2
### Operating System
Ubuntu 22.04.2 LTS
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
I have used a simple Docker Compose setup and am running it locally.
### What happened
I am using SqlToS3Operator in my DAG and need to store the data partitioned by a column (`partition_cols`). The operator writes the data to a single temporary file, but when `partition_cols` is used the output must be a directory. I get the error below:
```
[2023-03-31, 03:47:57 UTC] {sql_to_s3.py:175} INFO - Writing data to temp file
[2023-03-31, 03:47:57 UTC] {taskinstance.py:1775} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/transfers/sql_to_s3.py", line 176, in execute
getattr(data_df, file_options.function)(tmp_file.name, **self.pd_kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/util/_decorators.py", line 207, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/frame.py", line 2685, in to_parquet
**kwargs,
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/io/parquet.py", line 423, in to_parquet
**kwargs,
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/io/parquet.py", line 190, in write
**kwargs,
File "/home/airflow/.local/lib/python3.7/site-packages/pyarrow/parquet/__init__.py", line 3244, in write_to_dataset
max_rows_per_group=row_group_size)
File "/home/airflow/.local/lib/python3.7/site-packages/pyarrow/dataset.py", line 989, in write_dataset
min_rows_per_group, max_rows_per_group, create_dir
File "pyarrow/_dataset.pyx", line 2775, in pyarrow._dataset._filesystemdataset_write
File "pyarrow/error.pxi", line 113, in pyarrow.lib.check_status
NotADirectoryError: [Errno 20] Cannot create directory '/tmp/tmp3z4dpv_p.parquet/application_createdAt=2020-06-05 11:47:44.000000000'. Detail: [errno 20] Not a directory
```
### What you think should happen instead
The operator should support `partition_cols` as well.
### How to reproduce
I am using the code snippet below to reproduce the issue.
```
# imports needed to make this snippet self-contained (DAG definition omitted)
from airflow.models import Variable
from airflow.providers.amazon.aws.transfers.sql_to_s3 import SqlToS3Operator

sql_to_s3_task = SqlToS3Operator(
    task_id="sql_to_s3_task",
    sql_conn_id="mysql_con",
    query=sql,  # `sql` holds the SELECT statement to export
    s3_bucket=Variable.get("AWS_S3_BUCKET"),
    aws_conn_id="aws_con",
    file_format="parquet",
    s3_key="Fact_applications",
    pd_kwargs={
        "partition_cols": ["application_createdAt"],
    },
    replace=True,
)
```
This can be used to reproduce the issue.
### Anything else
I believe [this](https://github.com/apache/airflow/blob/6e751812d2e48b743ae1bc375e3bebb8414b4a0e/airflow/providers/amazon/aws/transfers/sql_to_s3.py#L173) logic should be updated to support `partition_cols` (writing to a temporary directory when needed).
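A minimal sketch of the idea (assuming pandas + pyarrow; this is not the operator's actual code), showing why `partition_cols` needs a temporary *directory* rather than a single temporary file — pandas/pyarrow write one parquet file per partition under a directory tree:
```
import os
import tempfile

import pandas as pd

df = pd.DataFrame(
    {"id": [1, 2], "application_createdAt": ["2020-06-05", "2020-06-06"]}
)
with tempfile.TemporaryDirectory() as tmp_dir:
    out_dir = os.path.join(tmp_dir, "export")
    # writes export/application_createdAt=<value>/<part>.parquet per partition
    df.to_parquet(out_dir, partition_cols=["application_createdAt"])
    # the operator would then need to upload every file under out_dir to S3
```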
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30382 | https://github.com/apache/airflow/pull/30460 | 372a0881d9591f6d69105b1ab6709f5f42560fb6 | d7cef588d6f6a749bd5e8fbf3153a275f4120ee8 | "2023-03-31T04:14:10Z" | python | "2023-04-18T23:19:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,365 | ["airflow/cli/cli_config.py", "airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | Need an REST API or/and Airflow CLI to fetch last parsed time of a given DAG | ### Description
We need to access the time at which a given DAG was parsed last.
Airflow version: 2.2.2 and above.
### Use case/motivation
End users want to run a given DAG after applying changes to it, which means the DAG must have been re-parsed after those edits. Right now the last parsed time is only available by querying the Airflow metadata database directly, which is not a good solution. Ideally, Airflow should expose an API (and/or CLI command) that end users can consume to get the last parsed time of a given DAG.
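For context, a minimal sketch of the current workaround (not an official API): reading `last_parsed_time` straight from the metadata database through Airflow's ORM models, assuming it runs where the metadata DB is reachable:
```
from airflow.models import DagModel
from airflow.utils.session import provide_session

@provide_session
def get_last_parsed_time(dag_id, session=None):
    # DagModel.last_parsed_time is the timestamp of the last successful parse
    dag = session.query(DagModel).filter(DagModel.dag_id == dag_id).one_or_none()
    return dag.last_parsed_time if dag else None

# example usage: get_last_parsed_time("my_dag")
```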
### Related issues
Not aware of any.
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30365 | https://github.com/apache/airflow/pull/30432 | c5b685e88dd6ecf56d96ef4fefa6c409f28e2b22 | 7074167d71c93b69361d24c1121adc7419367f2a | "2023-03-30T08:34:47Z" | python | "2023-04-14T17:14:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,341 | ["airflow/providers/amazon/aws/transfers/s3_to_redshift.py", "tests/providers/amazon/aws/transfers/test_s3_to_redshift.py"] | S3ToRedshiftOperator does not support default values on upsert | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
7.2.1
### Apache Airflow version
2.5.1
### Operating System
Ubuntu
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
I am trying to use the `S3ToRedshiftOperator` to copy data into an existing table that has a column defined as NOT NULL with a DEFAULT value.
The copy fails with the following error:
```
redshift_connector.error.ProgrammingError: {'S': 'ERROR', 'C': '42601', 'M': 'NOT NULL column without DEFAULT must be included in column list', 'F': '../src/pg/src/backend/commands/commands_copy.c', 'L': '2727', 'R': 'DoTheCopy'}
```
This is happening because when using the `UPSERT` method, the operator first creates a temporary table with this statement ([here](https://github.com/apache/airflow/blob/47cf233ccd612a68bea1ad3898f06b91c63c1964/airflow/providers/amazon/aws/transfers/s3_to_redshift.py#L173)):
```
CREATE TABLE #bar (LIKE foo.bar);
```
And then attempts to copy data into this temporary table. By default, `CREATE TABLE ... LIKE` does not include default values: _The default behavior is to exclude default expressions, so that all columns of the new table have null defaults._ (from the [docs](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html)).
### What you think should happen instead
We should be able to include default values when creating the temporary table.
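A hedged sketch of the idea (not the operator's actual code): Redshift's `CREATE TABLE ... LIKE` accepts `INCLUDING DEFAULTS`, which carries the column default expressions over into the staging table used for the upsert. A helper like this (the function name and flag are hypothetical) could build the DDL:
```
def build_staging_table_sql(schema: str, table: str, include_defaults: bool = True) -> str:
    # with include_defaults=True the staging table keeps column DEFAULT expressions
    defaults_clause = " INCLUDING DEFAULTS" if include_defaults else ""
    return f"CREATE TABLE #{table} (LIKE {schema}.{table}{defaults_clause});"

# build_staging_table_sql("foo", "bar")
#   -> "CREATE TABLE #bar (LIKE foo.bar INCLUDING DEFAULTS);"
```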
### How to reproduce
* Create a table with a column defined as non-null with default value
* Use the operator to copy data into it using the `UPSERT` method
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30341 | https://github.com/apache/airflow/pull/32558 | bf68e1060b0214ee195c61f9d7be992161e25589 | 145b16caaa43f0c42bffd97344df916c602cddde | "2023-03-28T06:30:34Z" | python | "2023-07-13T06:29:07Z" |