1. Summary of Issues After the Multi-Worker K8S Installation
1.1 Cannot Log In After Uninstalling and Reinstalling
Remember install.sh? I had disabled default-user creation there, and I had also dropped the airflow_plus database. The setting below stops the default user from being created, which is why login failed. Set it to true (--set webserver.defaultUser.enabled=true) and re-run the install:
# Disable the default-user creation job
--set webserver.defaultUser.enabled=false \
1.2 Worker Node Restarts Causing Task Failures
The cause was a version conflict introduced by custom-installed dependencies. The events of the restarting worker's pod showed the following:
"/home/airflow/.local/lib/python3.7/site-packages/kombu/entity.py", line 7, in <module> from .serialization import prepare_accept_content File "/home/airflow/.local/lib/python3.7/site-packages/kombu/serialization.py", line 440, in <module> for ep, args in entrypoints('kombu.serializers'): # pragma: no cover File "/home/airflow/.local/lib/python3.7/site-packages/kombu/utils/compat.py", line 82, in entrypoints for ep in importlib_metadata.entry_points().get(namespace, []) AttributeError: 'EntryPoints' object has no attribute 'get'
Check the dependencies and pin the offending package:
importlib-metadata==4.13.0
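For context, the crash comes from an API change: importlib-metadata 5.0 removed the dict-like interface that older kombu releases rely on. A minimal sketch of the two APIs (the group name is taken from the traceback; the rest is illustrative):
import importlib_metadata

# importlib-metadata < 5.0: entry_points() returned a dict-like mapping,
# which is why kombu's compat layer calls .get() on it:
#   importlib_metadata.entry_points().get("kombu.serializers", [])
#
# importlib-metadata >= 5.0: entry_points() returns an EntryPoints object
# with no .get(); selection is done with keyword arguments instead:
eps = importlib_metadata.entry_points(group="kombu.serializers")
print(list(eps))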
1.3 Installing Dependencies Dynamically
- For a single worker node, you can run an install routine before each task; it works, but it is clumsy and not really practical:
import logging
import os

log = logging.getLogger(__name__)

def install():
    log.info("begin install requirements")
    os.system("pip install -r /opt/airflow/dags/repo/dags/requirements.txt")
    os.system("pip install -I my_utils")
    log.info("finish install requirements")
Import it at the top of the file executed by the DAG's first task, so the install output shows up in the WebServer task log:
import sys
sys.path.insert(0, '/opt/airflow/dags/repo')
import dags.install as install
install.install()
- For multiple worker nodes there is no clean fix: successive tasks in a run may land on different workers, so a per-task install does not carry over. The simplest option is to install the dependencies manually on every worker.
1.4 Triggerer Stuck Starting, Scheduled Runs Not Firing
{triggerer_job.py:101} INFO - Starting the triggerer
[2023-03-17T13:47:30.947+0000] {triggerer_job.py:348} ERROR - Triggerer's async thread was blocked for 0.36 seconds, likely by a badly-written trigger. Set PYTHONASYNCIODEBUG=1 to get more information on overrunning coroutines.
[2023-03-17T22:15:17.394+0000] {triggerer_job.py:348} ERROR - Triggerer's async thread was blocked for 0.27 seconds, likely by a badly-written trigger. Set PYTHONASYNCIODEBUG=1 to get more information on overrunning coroutines.
This appeared to be a version problem. I changed the version to 2.2.1, cleared the database, uninstalled, and reinstalled (this one took a long time to track down). Note that the version must go in values.yaml and be removed from install.sh, otherwise the values.yaml setting does not take effect:
airflowVersion: 2.2.1
defaultAirflowTag: 2.2.1
config:
  core:
    dags_folder: /opt/airflow/dags/repo/dags
    hostname_callable: airflow.utils.net.get_host_ip_address
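As an aside, the hostname_callable override makes components advertise their pod IP rather than a hostname that other pods may not be able to resolve. Roughly what the built-in callable does (a sketch of the stdlib calls it wraps, not Airflow's exact source):
import socket

def get_host_ip_address() -> str:
    # Resolve this pod's fully-qualified name to an IP address.
    return socket.gethostbyname(socket.getfqdn())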
1.5 The .airflowignore File
The .airflowignore file is used to skip non-DAG files during DAG parsing. It goes in dags_folder, which in my values.yaml is /opt/airflow/dags/repo/dags, so that is where I placed it.
For example, to ignore the non-DAG .py files under merge:
jh/merge/*
With this entry, files in the jh directory must not be named like merge_dag.py, or the DAG will not appear in the WebServer: .airflowignore entries are treated as regular expressions searched anywhere in the path, so jh/merge/* ("jh/merge" followed by zero or more slashes) also matches jh/merge_dag.py. We standardized on a dag_ prefix to sidestep this, as the check below shows.
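A quick way to see the matching behavior (an illustrative sketch; it assumes this Airflow version searches each entry as a regex in the file path):
import re

pattern = re.compile(r"jh/merge/*")
print(bool(pattern.search("jh/merge/helper.py")))  # True  - intended ignore
print(bool(pattern.search("jh/merge_dag.py")))     # True  - accidental ignore
print(bool(pattern.search("jh/dag_merge.py")))     # False - prefix rename avoids it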
1.6 Mount Issues
I tried mounting a CephFS directory into the worker nodes. The mount itself succeeded, but the workers then blocked indefinitely, so I abandoned the CSV intermediate-table approach and wrote intermediate results to ClickHouse instead.
1.7 Scheduler 401 Authorization Errors
Make sure the Scheduler deployment's executor label is CeleryKubernetesExecutor; any other value triggers this error.
1.8 Scheduler and Triggerer Health-Check Failures
Liveness probe failed: No alive jobs found
Update the liveness-probe configuration in values.yaml and reinstall:
scheduler:
  livenessProbe:
    command: ["bash", "-c", "airflow jobs check --job-type SchedulerJob --allow-multiple --limit 100"]
triggerer:
  livenessProbe:
    command: ["bash", "-c", "airflow jobs check --job-type TriggererJob --allow-multiple --limit 100"]
2. Custom Images with Runtime Parameters
This section first walks through my trial and error with DockerOperator; the KubernetesPodOperator approach at the end is by far the easiest.
2.1 Error on the Connections Page
Clicking Admin -> Connections produced the following error:
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/connection.py", line 238, in get_password
return fernet.decrypt(bytes(self._password, 'utf-8')).decode()
File "/home/airflow/.local/lib/python3.7/site-packages/cryptography/fernet.py", line 194, in decrypt
raise InvalidToken
cryptography.fernet.InvalidToken
Root cause: re-running install.sh revealed the command below. Every redeploy generates a new fernet key, so passwords encrypted under the previous key can no longer be decrypted.
[root@sha-216 airflow-plus]# echo Fernet Key: $(kubectl get secret --namespace airflow-plus airflow-fernet-key -o jsonpath="{.data.fernet-key}" | base64 --decode)
Fernet Key: ekc3MHl6VmSSSNnY0pjSSSSZk94WkVxl5T3E2OUtXT0s=
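The failure is easy to reproduce outside Airflow (a minimal sketch; the keys and password here are made up):
from cryptography.fernet import Fernet, InvalidToken

old_key, new_key = Fernet.generate_key(), Fernet.generate_key()
token = Fernet(old_key).encrypt(b"connection-password")

try:
    Fernet(new_key).decrypt(token)  # the redeploy rotated the key
except InvalidToken:
    print("InvalidToken: token was encrypted under the previous fernet key")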
Fix: pin the keys in install.sh so they are not regenerated on every install:
--set webserverSecretKey=ekc3MHl6VmSSSNnY0pjSSSSZk94WkVxl5T3E2OUtXT0s=
--set fernetKey=ekc3MHl6VmNnY0pjZk94WkVxRVRmUkl5T3E2OUtXT0s=
Then find the connection table in the airflow-plus PostgreSQL database, back up any manually created connections so they can be re-entered, and truncate the table; no system data lives there, so nothing else is lost. Re-run install.sh and recreate the manual connections, and later uninstall/reinstall cycles will no longer hit this problem.
2.2 Configuring the Docker Connection
This connection sets the Conn ID to docker_repo (used below) and holds the private registry address, username, and password.
2.3 Writing a DAG That Runs the Image
from airflow.providers.docker.operators.docker import DockerOperator
import pendulum
from airflow import DAG

with DAG(
    dag_id="comm_add_dag",
    schedule_interval=None,
    start_date=pendulum.datetime(2023, 5, 8, tz="Asia/Shanghai"),
    catchup=True,
    description="automated residential-community ingestion"
) as dag:
    t1 = DockerOperator(
        # The Conn ID defined above
        docker_conn_id="docker_repo",
        # The Version shown for Docker under Admin -> Providers in the
        # WebServer; "auto" may raise errors
        api_version="2.2.0",
        image='comm-add:1.0.1',
        # Our runtime parameter; the image's entrypoint parses it with click:
        #   @click.command()
        #   @click.option('--city', default="all", help='city to compute')
        command='python main.py --city {{ dag_run.conf["city"] if "city" in dag_run.conf else "all" }}',
        task_id='comm_add'
    )
    t1
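Note that the original template had "else all" outside the braces, which renders a broken command line. The corrected expression can be sanity-checked outside Airflow with plain Jinja2, the engine Airflow uses for templated fields; a small sketch:
from types import SimpleNamespace
from jinja2 import Template

tpl = Template('python main.py --city {{ dag_run.conf["city"] if "city" in dag_run.conf else "all" }}')
print(tpl.render(dag_run=SimpleNamespace(conf={"city": "shanghai"})))  # ... --city shanghai
print(tpl.render(dag_run=SimpleNamespace(conf={})))                    # ... --city all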
2.4 Triggering with Parameters
Enter a JSON config such as {"city": "<city name>"} and click Trigger.
2.5 Errors and Fixes
requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
[2023-05-10, 01:46:33 UTC] {local_task_job.py:154} INFO - Task exited with return code 1
[2023-05-10, 01:46:33 UTC] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
DockerOperator talks to the Docker daemon through /var/run/docker.sock, which does not exist inside the worker pod, hence the FileNotFoundError. Mount the socket via the workers section of the values.yaml discussed earlier:
workers:
  resources:
    #limits:
    #  cpu: 4000m
    #  memory: 12288Mi
    requests:
      cpu: 100m
      memory: 1024Mi
  extraVolumes:
    - name: dockersock
      hostPath:
        path: /var/run/docker.sock
  extraVolumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
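To verify the mount from inside a worker pod, something like the following works (a sketch; it assumes the docker Python SDK is installed in the worker image):
import docker

client = docker.DockerClient(base_url="unix://var/run/docker.sock")
print(client.ping())  # True if the daemon is reachable and permissions allow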
After mounting and reinstalling, a permission error appeared instead:
File "/home/airflow/.local/lib/python3.7/site-packages/docker/transport/unixconn.py", line 30, in connect
sock.connect(self.unix_socket)
PermissionError: [Errno 13] Permission denied
After a lot of fiddling I had no fix for this (presumably the airflow user inside the container lacks permission on the host's docker.sock); if you have experience here, let's discuss.
2.6 Running Docker Images via KubernetesPodOperator
This approach is far more convenient; it took just minutes to get working!
Here image_pull_secrets references a Secret configured in K8S that grants pull access to the private registry; creating it is not covered in detail here.
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
import pendulum
from airflow import DAG
from kubernetes.client import models as k8s

with DAG(
    dag_id="comm_add_dag",
    schedule_interval=None,
    start_date=pendulum.datetime(2023, 5, 8, tz="Asia/Shanghai"),
    catchup=True,
    description="automated ingestion"
) as dag:
    production_task = KubernetesPodOperator(
        namespace='airflow-plus',
        image="comm-add:1.0.1",
        image_pull_secrets=[k8s.V1LocalObjectReference("nexus-registry-secret")],
        cmds=["python", "main.py"],
        arguments=["--city", '{{ dag_run.conf["city"] if "city" in dag_run.conf else "all" }}'],
        name="comm_add",
        task_id="comm_add",
        get_logs=True
    )
    production_task
I hope this helps; feel free to follow the WeChat official account 算法小生.