Flink on k8s (minikube): session-mode deployment (HA)

Install kubectl

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl

1. Install minikube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
  && chmod +x minikube

2. Start minikube

minikube start --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.15.0

Common k8s commands

kubectl get pods --all-namespaces
kubectl get pods -A

kubectl describe pod ${podName}
# enter a pod's container
kubectl exec -ti <your-pod-name> -n <your-namespace> -- /bin/sh

# list pods in a given namespace
kubectl get pod -n flink
# list the created services
kubectl get service -n flink
# edit the configuration of a created service
kubectl edit svc -n ding-flink-test flink-jobmanager
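When scripting the commands above, it helps to resolve pod names from a label selector instead of copying them by hand. A minimal sketch; `first_pod` is a hypothetical helper and assumes kubectl is already configured for the cluster:

```shell
# Hypothetical helper: print the name of the first pod matching a label
# selector in a namespace, so exec/describe can be scripted.
first_pod() {
  kubectl get pod -n "$1" -l "$2" -o jsonpath='{.items[0].metadata.name}'
}

# Example on a real cluster (uncomment once the Flink pods exist):
# kubectl exec -ti "$(first_pod flink component=jobmanager)" -n flink -- /bin/sh
```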

3. Deploy the Flink cluster

Prepare minikube (without this, Flink components cannot reference themselves through a Kubernetes service):

minikube ssh 'sudo ip link set docker0 promisc on'

Create a namespace

kubectl create -f namespace.yaml
namespace/flink created

where namespace.yaml is:

kind: Namespace
apiVersion: v1
metadata:
  name: flink
  labels:
    name: flink

List the minikube cluster's namespaces:

# kubectl get namespaces
NAME          STATUS    AGE
flink         Active    1m
kube-public   Active    254d
kube-system   Active    254d

Create the Flink ConfigMap, JobManager, and TaskManager resources
(full YAML in the appendix):

kubectl create -f flink-configuration-configmap.yaml
kubectl create -f jobmanager-service.yaml
kubectl create -f jobmanager-deployment.yaml
kubectl create -f taskmanager-deployment.yaml
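Before moving on, it is worth waiting until both deployments are fully rolled out. A sketch; `wait_for_flink` is a hypothetical helper and assumes kubectl points at the minikube cluster created above:

```shell
# Block until the JobManager and TaskManager deployments report ready.
wait_for_flink() {
  kubectl rollout status deployment/flink-jobmanager --timeout=120s &&
  kubectl rollout status deployment/flink-taskmanager --timeout=120s
}

# wait_for_flink   # uncomment on a real cluster
```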

4. Port-forward the JobManager service to localhost

kubectl port-forward service/flink-jobmanager 8081:8081

Check the service status

kubectl get svc

You can access the Flink UI in several ways:

  • With the port-forward above in place, submit a job to the cluster directly:

./bin/flink run -m localhost:8081 ./examples/streaming/WordCount.jar

  • Create a NodePort service on the JobManager's REST port:
    1. Run kubectl create -f jobmanager-rest-service.yaml to create a NodePort service for the JobManager. An example jobmanager-rest-service.yaml can be found in the appendix.
    2. Run kubectl get svc flink-jobmanager-rest to find the node-port of the service, then navigate to http://<public-node-ip>:<node-port> in your browser.
    3. Similarly to the port-forward solution, you can also submit jobs to the cluster with:
./bin/flink run -m <public-node-ip>:<node-port> ./examples/streaming/WordCount.jar
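For the NodePort route, the UI URL is just a node IP plus the assigned node port. A minimal sketch; `flink_ui_url` is a hypothetical helper, and the IP and port passed below are placeholders:

```shell
# Compose the JobManager UI URL from a node IP and a node port.
flink_ui_url() {
  echo "http://$1:$2"
}

# On a real cluster the inputs would come from:
#   node_ip=$(minikube ip)
#   node_port=$(kubectl get svc flink-jobmanager-rest -o jsonpath='{.spec.ports[0].nodePort}')
flink_ui_url 192.168.99.100 30080   # prints http://192.168.99.100:30080
```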

5. Teardown:

kubectl delete -f jobmanager-deployment.yaml
kubectl delete -f taskmanager-deployment.yaml
kubectl delete -f jobmanager-service.yaml
kubectl delete -f flink-configuration-configmap.yaml

Appendix: full YAML for creating and starting Flink

flink-configuration-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config
  labels:
    app: flink
data:
  flink-conf.yaml: |+
    jobmanager.rpc.address: flink-jobmanager
    taskmanager.numberOfTaskSlots: 1
    blob.server.port: 6124
    jobmanager.rpc.port: 6123
    taskmanager.rpc.port: 6122
    jobmanager.heap.size: 1024m
    taskmanager.heap.size: 1024m
  log4j.properties: |+
    log4j.rootLogger=INFO, file
    log4j.logger.akka=INFO
    log4j.logger.org.apache.kafka=INFO
    log4j.logger.org.apache.hadoop=INFO
    log4j.logger.org.apache.zookeeper=INFO
    log4j.appender.file=org.apache.log4j.FileAppender
    log4j.appender.file.file=${log.file}
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, file
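The title mentions HA, but the flink-conf.yaml above is a minimal single-JobManager configuration. For ZooKeeper-based HA, Flink 1.8 additionally needs keys along these lines (a sketch; the quorum hosts and storage path below are placeholders for your environment):

```yaml
# Additional flink-conf.yaml entries for ZooKeeper-based HA (placeholders).
high-availability: zookeeper
high-availability.zookeeper.quorum: cdh-master:2181,cdh-slave1:2181,cdh-slave2:2181
high-availability.storageDir: hdfs:///flink/ha/
high-availability.cluster-id: /flink-session
```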

jobmanager-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
      - name: jobmanager
        image: flink:1.8.2
        workingDir: /opt/flink
        command: ["/bin/bash", "-c", "$FLINK_HOME/bin/jobmanager.sh start;\
          while :;
          do
            if [[ -f $(find log -name '*jobmanager*.log' -print -quit) ]];
              then tail -f -n +1 log/*jobmanager*.log;
            fi;
          done"]
        ports:
        - containerPort: 6123
          name: rpc
        - containerPort: 6124
          name: blob
        - containerPort: 8081
          name: ui
        livenessProbe:
          tcpSocket:
            port: 6123
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
      hostAliases:
      - ip: "192.168.66.192"
        hostnames:
        - "cdh-master"
      - ip: "192.168.66.193"
        hostnames:
        - "cdh-slave1"
      - ip: "192.168.66.194"
        hostnames:
        - "cdh-slave2"
      - ip: "192.168.66.195"
        hostnames:
        - "cdh-slave3"

taskmanager-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: flink:1.8.2
        workingDir: /opt/flink
        command: ["/bin/bash", "-c", "$FLINK_HOME/bin/taskmanager.sh start; \
          while :;
          do
            if [[ -f $(find log -name '*taskmanager*.log' -print -quit) ]];
              then tail -f -n +1 log/*taskmanager*.log;
            fi;
          done"]
        ports:
        - containerPort: 6122
          name: rpc
        livenessProbe:
          tcpSocket:
            port: 6122
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf/
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
      hostAliases:
      - ip: "192.168.66.192"
        hostnames:
        - "cdh-master"
      - ip: "192.168.66.193"
        hostnames:
        - "cdh-slave1"
      - ip: "192.168.66.194"
        hostnames:
        - "cdh-slave2"
      - ip: "192.168.66.195"
        hostnames:
        - "cdh-slave3"

jobmanager-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager
spec:
  type: ClusterIP
  ports:
  - name: rpc
    port: 6123
  - name: blob
    port: 6124
  - name: ui
    port: 8081
    protocol: TCP
    targetPort: 8081
  selector:
    app: flink
    component: jobmanager

jobmanager-rest-service.yaml

(Optional service that exposes the JobManager REST port as a public Kubernetes node port.)

apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager-rest
spec:
  type: NodePort
  ports:
  - name: rest
    port: 8081
    targetPort: 8081
  selector:
    app: flink
    component: jobmanager

host-edit.yaml

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod # pod name
spec:
  hostAliases:
  - ip: "192.168.66.192"
    hostnames:
    - "cdh-master"
  - ip: "192.168.66.193"
    hostnames:
    - "cdh-slave1"
  - ip: "192.168.66.194"
    hostnames:
    - "cdh-slave2"
  - ip: "192.168.66.195"
    hostnames:
    - "cdh-slave3"
  containers:
  - name: cat-hosts
    image: flink:1.8.2
    command:
    - cat
    args:
    - "/etc/hosts"