Deploying a Kafka Cluster on Kubernetes

Installing Kafka with Helm

I. Deploying Kafka with Helm

helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator

[root@k8s-centos7-master-150 kafka]# helm repo list
NAME            URL
minio           https://helm.min.io/
bitnami         https://charts.bitnami.com/bitnami
incubator       http://mirror.azure.cn/kubernetes/charts-incubator

1. Create the StorageClass

First, create a StorageClass for local storage:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain

Run:

kubectl apply -f local-storage.yaml 
storageclass.storage.k8s.io/local-storage created
[root@k8s-centos7-master-150 kafka]# kubectl get sc --all-namespaces -o wide
NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage   kubernetes.io/no-provisioner   Retain          WaitForFirstConsumer   false                  83m

2. Create the Kafka PVs

Since 2 Kafka broker pods will be deployed on the two k8s nodes k8s-centos7-node3-151 and k8s-centos7-node4-152, first create a Local PV for each of the 2 Kafka brokers on those two nodes.

kafka-local-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv-0
spec:
  capacity:
    storage: 5Gi 
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/kafka/data-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-centos7-node3-151
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv-1
spec:
  capacity:
    storage: 5Gi 
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/kafka/data-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-centos7-node4-152

Run:

kubectl apply -f kafka-local-pv.yaml

Create the directory /data/kafka/data-0 on k8s-centos7-node3-151.

Create the directory /data/kafka/data-1 on k8s-centos7-node4-152.
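The two directories back the Local PVs defined above; on each node this is just (run as root, and the path must match the PV's `local.path` exactly):

```shell
# Create the local-PV backing directories; run each mkdir on the node
# named in the corresponding PV's nodeAffinity.
mkdir -p /data/kafka/data-0   # on k8s-centos7-node3-151
mkdir -p /data/kafka/data-1   # on k8s-centos7-node4-152
```

A local volume is not created for you: if the directory is missing on the node, the pod that claims the PV will stay stuck at ContainerCreating with a mount error.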

Check the result (this listing was captured after step 3 below had also been applied, so the Zookeeper PVs appear too; note that with volumeBindingMode: WaitForFirstConsumer the scheduler decides which PV a PVC binds to, which is why the kafka-* PVs ended up bound to the Zookeeper PVCs and vice versa):

[root@k8s-centos7-master-150 kafka]# kubectl get pv,pvc --all-namespaces
NAME                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS    REASON   AGE
persistentvolume/kafka-pv-0      5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-1   local-storage            53m
persistentvolume/kafka-pv-1      5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-0   local-storage            53m
persistentvolume/kafka-zk-pv-0   5Gi        RWO            Retain           Bound    default/datadir-kafka-0          local-storage            52m
persistentvolume/kafka-zk-pv-1   5Gi        RWO            Retain           Bound    default/datadir-kafka-1          local-storage            52m

NAMESPACE   NAME                                           STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS    AGE
default     persistentvolumeclaim/data-kafka-zookeeper-0   Bound    kafka-pv-1      5Gi        RWO            local-storage   39m
default     persistentvolumeclaim/data-kafka-zookeeper-1   Bound    kafka-pv-0      5Gi        RWO            local-storage   26m
default     persistentvolumeclaim/datadir-kafka-0          Bound    kafka-zk-pv-0   5Gi        RWO            local-storage   39m
default     persistentvolumeclaim/datadir-kafka-1          Bound    kafka-zk-pv-1   5Gi        RWO            local-storage   20m

3. Create the Zookeeper PVs

Since 2 Zookeeper pods will be deployed on the two k8s nodes k8s-centos7-node3-151 and k8s-centos7-node4-152, first create a Local PV for each of the 2 Zookeeper nodes on those two nodes.

zookeeper-local-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-zk-pv-0
spec:
  capacity:
    storage: 5Gi 
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/kafka/zkdata-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-centos7-node3-151
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-zk-pv-1
spec:
  capacity:
    storage: 5Gi 
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/kafka/zkdata-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-centos7-node4-152

Run:

kubectl apply -f zookeeper-local-pv.yaml

Likewise, create the corresponding directories (/data/kafka/zkdata-0 and /data/kafka/zkdata-1) on the respective nodes; the steps are the same as above.

Check:

[root@k8s-centos7-master-150 kafka]# kubectl get pv,pvc --all-namespaces
NAME                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS    REASON   AGE
persistentvolume/kafka-pv-0      5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-1   local-storage            57m
persistentvolume/kafka-pv-1      5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-0   local-storage            57m
persistentvolume/kafka-zk-pv-0   5Gi        RWO            Retain           Bound    default/datadir-kafka-0          local-storage            56m
persistentvolume/kafka-zk-pv-1   5Gi        RWO            Retain           Bound    default/datadir-kafka-1          local-storage            56m

NAMESPACE   NAME                                           STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS    AGE
default     persistentvolumeclaim/data-kafka-zookeeper-0   Bound    kafka-pv-1      5Gi        RWO            local-storage   42m
default     persistentvolumeclaim/data-kafka-zookeeper-1   Bound    kafka-pv-0      5Gi        RWO            local-storage   30m
default     persistentvolumeclaim/datadir-kafka-0          Bound    kafka-zk-pv-0   5Gi        RWO            local-storage   42m
default     persistentvolumeclaim/datadir-kafka-1          Bound    kafka-zk-pv-1   5Gi        RWO            local-storage   24m

4. Deploy Kafka

kafka-values.yaml

replicas: 2
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: PreferNoSchedule
persistence:
  storageClass: local-storage
  size: 5Gi
zookeeper:
  persistence:
    enabled: true
    storageClass: local-storage
    size: 5Gi
  replicaCount: 2
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule

Run:

helm install kafka -f kafka-values.yaml incubator/kafka

5. Results


# pod,svc
[root@k8s-centos7-master-150 kafka]# kubectl get po,svc -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE                    NOMINATED NODE   READINESS GATES
pod/kafka-0             1/1     Running   2          33m   10.244.182.130   k8s-centos7-node3-151   <none>           <none>
pod/kafka-1             1/1     Running   0          26m   10.244.216.4     k8s-centos7-node4-152   <none>           <none>
pod/kafka-zookeeper-0   1/1     Running   0          33m   10.244.216.3     k8s-centos7-node4-152   <none>           <none>
pod/kafka-zookeeper-1   1/1     Running   0          32m   10.244.182.131   k8s-centos7-node3-151   <none>           <none>

NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
service/kafka                      ClusterIP   10.106.236.254   <none>        9092/TCP                     33m   app.kubernetes.io/component=kafka-broker,app.kubernetes.io/instance=kafka,app.kubernetes.io/name=kafka
service/kafka-headless             ClusterIP   None             <none>        9092/TCP                     33m   app.kubernetes.io/component=kafka-broker,app.kubernetes.io/instance=kafka,app.kubernetes.io/name=kafka
service/kafka-zookeeper            ClusterIP   10.96.218.152    <none>        2181/TCP                     33m   app=zookeeper,release=kafka
service/kafka-zookeeper-headless   ClusterIP   None             <none>        2181/TCP,3888/TCP,2888/TCP   33m   app=zookeeper,release=kafka
service/kubernetes                 ClusterIP   10.96.0.1        <none>        443/TCP                      18h   <none>
    
# statefulset
[root@k8s-centos7-master-150 kafka]# kubectl get statefulset
NAME              READY   AGE
kafka             2/2     35m
kafka-zookeeper   2/2     35m
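To smoke-test the brokers from inside the cluster, one option is a throwaway client Pod (a sketch; the image and tag are assumptions, any image shipping the Kafka CLI tools will do):

```yaml
# kafka-client.yaml -- disposable pod carrying the Kafka CLI tools
apiVersion: v1
kind: Pod
metadata:
  name: kafka-client
spec:
  restartPolicy: Never
  containers:
  - name: kafka-client
    image: confluentinc/cp-kafka:5.0.1  # assumed tag; pick one matching your broker version
    command: ["sleep", "infinity"]
```

After `kubectl apply -f kafka-client.yaml`, `kubectl exec -ti kafka-client -- kafka-console-producer --broker-list kafka:9092 --topic test` sends messages through the `kafka` ClusterIP service shown above, and `kafka-console-consumer --bootstrap-server kafka:9092 --topic test --from-beginning` in a second shell confirms round-trip delivery.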

6. Kafka Visualization: Installing Kafka Manager

The official Helm stable repo already provides a Kafka Manager chart.

kafka-manager-values.yaml

image:
  repository: zenko/kafka-manager
  tag: 1.3.3.22
zkHosts: kafka-zookeeper:2181
basicAuth:
  enabled: true
  username: admin
  password: admin
ingress:
  enabled: true
  hosts: 
   - km.hongda.com
  tls:
    - secretName: hongda-com-tls-secret
      hosts:
      - km.hongda.com
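Note that the ingress section references a TLS secret that must already exist in the namespace. A sketch of its shape (the base64 payloads are placeholders you must supply from your own certificate):

```yaml
# TLS secret referenced by kafka-manager-values.yaml above
apiVersion: v1
kind: Secret
metadata:
  name: hongda-com-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>  # placeholder
  tls.key: <base64-encoded private key>  # placeholder
```

Equivalently, `kubectl create secret tls hongda-com-tls-secret --cert=tls.crt --key=tls.key` builds the same object directly from PEM files.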

Run:

helm install kafka-manager --set service.type=NodePort -f kafka-manager-values.yaml stable/kafka-manager

After installation completes, confirm that the kafka-manager Pod has started normally:

[root@k8s-centos7-master-150 kafka]# kubectl get pod -l app=kafka-manager
NAME                             READY   STATUS    RESTARTS   AGE
kafka-manager-5979b5b6c8-spmzz   1/1     Running   0          6m36s

Then, in the Kafka Manager UI, set Cluster Zookeeper Hosts to kafka-zookeeper:2181 to bring the Kafka cluster deployed above under Kafka Manager's management.
