This practice is based on Flink 1.13.2.
It covers the two native Flink on Kubernetes deployment modes: session mode and application mode.
Step 1: Prepare the Kubernetes environment
1. Create a namespace
kubectl create namespace flink-session-cluster-test-1213
2. Create a serviceaccount used to submit Flink jobs
kubectl create serviceaccount flink -n flink-session-cluster-test-1213
3. Bind the serviceaccount to the edit cluster role
kubectl create clusterrolebinding flink-role-binding-flink-session-cluster-test-1213_flink \
--clusterrole=edit --serviceaccount=flink-session-cluster-test-1213:flink
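As an optional sanity check, kubectl auth can-i can confirm that the serviceaccount is allowed to create pods in the namespace (this only spot-checks one verb/resource pair):
kubectl auth can-i create pods \
--as=system:serviceaccount:flink-session-cluster-test-1213:flink \
-n flink-session-cluster-test-1213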
Step 2: Prepare the image
HDFS is used as Flink's checkpoint storage, so the Hadoop uber jar needs to be added to Flink's lib directory.
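For reference, checkpointing to HDFS is typically configured in flink-conf.yaml (or via equivalent -D options at submission time); the namenode address and paths below are placeholders, not values taken from this environment:
state.backend: filesystem
state.checkpoints.dir: hdfs://namenode-host:8020/flink/checkpoints
state.savepoints.dir: hdfs://namenode-host:8020/flink/savepoints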
Create a Dockerfile with the following content:
vi Dockerfile
FROM flink:1.13.2-scala_2.11-java8
COPY ./flink-shaded-hadoop-2-uber-2.7.5-10.0.jar $FLINK_HOME/lib/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar
Build the image:
docker build -t native_realtime:1.0.3 .
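If the Kubernetes nodes pull images from a private registry rather than a local Docker daemon, the image also needs to be tagged and pushed; the registry address below is a placeholder:
docker tag native_realtime:1.0.3 registry.example.com/flink/native_realtime:1.0.3
docker push registry.example.com/flink/native_realtime:1.0.3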
Both the session and application mode walkthroughs that follow use this image.
To deal with hosts mappings, user-defined jars, and similar needs, a pod template YAML is used:
vi flink-template.yaml
apiVersion: v1
kind: Pod
metadata:
  name: flink-pod-template
spec:
  initContainers:
    - name: artifacts-fetcher
      image: native_realtime:1.0.3
      # Fetch the user jar and configuration files needed at runtime
      command: ["/bin/sh","-c"]
      args: ["wget http://xxxxxx:8082/flinkhistory/1.13.2/tt.sql -O /opt/flink/usrHome/taa.sql ; wget http://xxxx:8082/flinkhistory/1.13.2/realtime-dw-service-1.0.1-SNAPSHOT.jar -O /opt/flink/usrHome/realtime-dw-service-1.0.1.jar"]
      volumeMounts:
        - mountPath: /opt/flink/usrHome
          name: flink-usr-home
  hostAliases:
    - ip: 10.1.1.103
      hostnames:
        - "cdh103"
    - ip: 10.1.1.104
      hostnames:
        - "cdh104"
    - ip: 10.1.1.105
      hostnames:
        - "cdh105"
    - ip: 10.1.1.106
      hostnames:
        - "cdh106"
  containers:
    # Do not change the main container name
    - name: flink-main-container
      resources:
        requests:
          ephemeral-storage: 2048Mi
        limits:
          ephemeral-storage: 2048Mi
      volumeMounts:
        - mountPath: /opt/flink/usrHome
          name: flink-usr-home
  volumes:
    - name: flink-usr-home
      hostPath:
        path: /tmp
        type: Directory
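For the session flavor, a cluster reusing the same namespace, serviceaccount, image, and pod template can be started with kubernetes-session.sh roughly as follows (the cluster-id is an arbitrary illustrative name):
/data/flink-1.13.0/bin/kubernetes-session.sh \
-Dkubernetes.namespace=flink-session-cluster-test-1213 \
-Dkubernetes.cluster-id=flink-session-test \
-Dkubernetes.container.image=native_realtime:1.0.3 \
-Dkubernetes.jobmanager.service-account=flink \
-Dkubernetes.pod-template-file=flink-template.yaml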
Submit the job in application mode:
/data/flink-1.13.0/bin/flink run-application \
--target kubernetes-application \
-Dresourcemanager.taskmanager-timeout=345600 \
-Dkubernetes.namespace=fli
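For reference, a complete application-mode submission that ties together the namespace, serviceaccount, image, and pod template from the previous steps could look roughly like this; the cluster-id, memory sizes, and slot count are illustrative placeholders, and the jar path matches where the init container downloads the user jar:
/data/flink-1.13.0/bin/flink run-application \
--target kubernetes-application \
-Dresourcemanager.taskmanager-timeout=345600 \
-Dkubernetes.namespace=flink-session-cluster-test-1213 \
-Dkubernetes.cluster-id=realtime-dw-service \
-Dkubernetes.container.image=native_realtime:1.0.3 \
-Dkubernetes.jobmanager.service-account=flink \
-Dkubernetes.pod-template-file=flink-template.yaml \
-Djobmanager.memory.process.size=1600m \
-Dtaskmanager.memory.process.size=2048m \
-Dtaskmanager.numberOfTaskSlots=2 \
local:///opt/flink/usrHome/realtime-dw-service-1.0.1.jar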