[Cloud Native] A Detailed Guide to Deploying ZooKeeper + Kafka on K8s

I. Overview

  • Apache ZooKeeper is a centralized service for maintaining configuration information, naming, and providing distributed synchronization and group services. The ZooKeeper project develops and maintains an open-source server that enables highly reliable distributed coordination. You can also think of it as a distributed database with a special, tree-shaped structure. Official documentation:

https://zookeeper.apache.org/doc/r3.8.0/

  • Kafka, originally developed by LinkedIn, is a distributed, partitioned, replicated message system that relies on ZooKeeper for coordination. Official documentation:

    https://kafka.apache.org/documentation/

II. ZooKeeper on k8s Deployment

1) Add the Helm repository

Chart address:

https://artifacthub.io/packages/helm/bitnami/zookeeper

helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/zookeeper
tar -xf zookeeper-10.2.1.tgz
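
By default, helm pull fetches the latest chart version in the repository. To reproduce the exact version used in this guide, the chart version can be pinned (10.2.1 matches the tarball above):

# Optional: pin the chart version so the tarball name matches this guide
helm pull bitnami/zookeeper --version 10.2.1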

2) Modify the configuration

  • Edit zookeeper/values.yaml

image:
  # If you use a private image registry, set its address here
  registry: myharbor.com
  repository: bigdata/zookeeper
  tag: 3.8.0-debian-11-r36
...

replicaCount: 3

...

service:
  type: NodePort
  nodePorts:
    # The default NodePort range is 30000-32767
    client: "32181"
    tls: "32182"

...

persistence:
  storageClass: "zookeeper-local-storage"
  size: "10Gi"
  # These directories must be created on the hosts in advance (see the sketch after this list)
  local:
    - name: zookeeper-0
      host: "k8s-master"
      path: "/opt/bigdata/servers/zookeeper/data/data1"
    - name: zookeeper-1
      host: "k8s-node-1"
      path: "/opt/bigdata/servers/zookeeper/data/data1"
    - name: zookeeper-2
      host: "k8s-node-2"
      path: "/opt/bigdata/servers/zookeeper/data/data1"

...

# Enable Prometheus to access ZooKeeper metrics endpoint
metrics:
  enabled: true
  • Add zookeeper/templates/pv.yaml

{{- range .Values.persistence.local }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .name }}
  labels:
    name: {{ .name }}
spec:
  storageClassName: {{ $.Values.persistence.storageClass }}
  capacity:
    storage: {{ $.Values.persistence.size }}
  accessModes:
    - ReadWriteOnce
  local:
    path: {{ .path }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - {{ .host }}
---
{{- end }}
  • Add zookeeper/templates/storage-class.yaml (skip this step if a suitable StorageClass already exists in the cluster)

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ .Values.persistence.storageClass }}
provisioner: kubernetes.io/no-provisioner
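
The local PVs above bind to fixed host paths, so those directories must exist before the pods are scheduled. A minimal sketch, assuming the hostnames and paths configured in values.yaml:

# Run on k8s-master, k8s-node-1 and k8s-node-2 respectively
mkdir -p /opt/bigdata/servers/zookeeper/data/data1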

3) Install

# Prepare the images first
docker pull docker.io/bitnami/zookeeper:3.8.0-debian-11-r36
docker tag docker.io/bitnami/zookeeper:3.8.0-debian-11-r36 myharbor.com/bigdata/zookeeper:3.8.0-debian-11-r36
docker push myharbor.com/bigdata/zookeeper:3.8.0-debian-11-r36

# Install the chart
helm install zookeeper ./zookeeper -n zookeeper --create-namespace

NOTES 

NAME: zookeeper
LAST DEPLOYED: Sun Sep 18 18:24:03 2022
NAMESPACE: zookeeper
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 10.2.1
APP VERSION: 3.8.0

** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:

    zookeeper.zookeeper.svc.cluster.local

To connect to your ZooKeeper server run the following commands:

    export POD_NAME=$(kubectl get pods --namespace zookeeper -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:

    export NODE_IP=$(kubectl get nodes --namespace zookeeper -o jsonpath="{.items[0].status.addresses[0].address}")
    export NODE_PORT=$(kubectl get --namespace zookeeper -o jsonpath="{.spec.ports[0].nodePort}" services zookeeper)
    zkCli.sh $NODE_IP:$NODE_PORT

Check pod status:

kubectl get pods,svc -n zookeeper -owide

4) Test and verify

# Check the status of each ZooKeeper server
kubectl exec -it zookeeper-0 -n zookeeper -- zkServer.sh status
kubectl exec -it zookeeper-1 -n zookeeper -- zkServer.sh status
kubectl exec -it zookeeper-2 -n zookeeper -- zkServer.sh status

kubectl exec -it zookeeper-0 -n zookeeper -- bash
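
Beyond checking roles with zkServer.sh status, a quick read/write round-trip with zkCli.sh confirms the ensemble actually serves requests. A minimal sketch (the znode name /smoke-test is arbitrary):

# Inside any zookeeper pod: create, read and delete a test znode
zkCli.sh -server localhost:2181 create /smoke-test "hello"
zkCli.sh -server localhost:2181 get /smoke-test
zkCli.sh -server localhost:2181 delete /smoke-test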

5) Prometheus Monitoring

Prometheus: https://prometheus.k8s.local/targets?search=zookeeper

Edit values.yaml:

# Modify the configuration as follows:
serviceMonitor:
  enabled: true                         # enable the ServiceMonitor
  namespace: zookeeper                  # namespace to deploy it into
  interval: 10s                         # scrape interval
  selector:                             # label selector for the metrics endpoints
      app.kubernetes.io/name: zookeeper
      
---
prometheusRule:
  enabled: true
  namespace: zookeeper
  rules:
    - alert: ZooKeeperSyncedFollowers
      annotations:
        message: The number of synced followers for the leader node in ZooKeeper deployment zookeeper is less than 2. This usually means that some of the ZooKeeper nodes aren't communicating properly. If it doesn't resolve itself you can try killing the pods (one by one).
      expr: max(synced_followers{service="zookeeper-metrics"}) < 2
      for: 5m
      labels:
        severity: critical
    - alert: ZooKeeperOutstandingRequests
      annotations:
        message: The number of outstanding requests for ZooKeeper pod {{ $labels.pod }} is greater than 10. This can indicate a performance issue with the pod or the cluster as a whole.
      expr: outstanding_requests{service="zookeeper-metrics"} > 10
      for: 5m
      labels:
        severity: critical

Upgrade the ZooKeeper release with Helm so the changes take effect:

helm upgrade zookeeper -n zookeeper ./zookeeper

Check that the resources were created:

kubectl get servicemonitor,prometheusrule -n zookeeper
NAME                                             AGE
servicemonitor.monitoring.coreos.com/zookeeper   14m

NAME                                             AGE
prometheusrule.monitoring.coreos.com/zookeeper   14m

Add a label so these resources are picked up by kube-prometheus-stack (the Prometheus distribution used in my environment):

kubectl edit servicemonitors.monitoring.coreos.com -n zookeeper zookeeper
# Add the following:
  labels:
    release: kube-prometheus-stack

kubectl edit prometheusrules.monitoring.coreos.com -n zookeeper zookeeper
# Add the following:
  labels:
    release: kube-prometheus-stack
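
To confirm the labels landed before checking the Prometheus UI (a sketch; it assumes kube-prometheus-stack discovers objects by this release label):

kubectl -n zookeeper get servicemonitor zookeeper -o jsonpath='{.metadata.labels}{"\n"}'
kubectl -n zookeeper get prometheusrule zookeeper -o jsonpath='{.metadata.labels}{"\n"}'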

Log into the Prometheus UI and check that the targets are up:

You can also query the scraped metrics directly:

# Query each metrics endpoint from a host that can reach these IPs
curl http://10.164.140.90:9141/metrics
curl http://10.161.201.39:9141/metrics
curl http://10.172.82.223:9141/metrics
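
If those IPs are not directly reachable from your workstation, a port-forward against the metrics Service works too. A sketch, assuming the Service follows the chart's <release>-metrics naming convention:

# Forward the metrics port locally, then query it
kubectl -n zookeeper port-forward svc/zookeeper-metrics 9141:9141 &
curl -s http://localhost:9141/metrics | grep -E 'synced_followers|outstanding_requests'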

Grafana: https://grafana.k8s.local/ (username admin; get the password with the command below)

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Import the Grafana dashboard for cluster resource monitoring: ID 10465.
Official dashboards can be downloaded from:

https://grafana.com/grafana/dashboards/

6) Uninstall

helm uninstall zookeeper -n zookeeper

kubectl delete pod -n zookeeper `kubectl get pod -n zookeeper|awk 'NR>1{print $1}'` --force
kubectl patch ns zookeeper -p '{"metadata":{"finalizers":null}}'
kubectl delete ns zookeeper --force
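
Note that while helm uninstall removes the PV objects created by the chart templates, the data written to each node's disk stays behind. A minimal cleanup sketch, assuming the host paths from values.yaml:

# Run on each node that hosted a ZooKeeper PV
rm -rf /opt/bigdata/servers/zookeeper/data/data1/*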

III. Kafka on k8s Deployment

1) Add the Helm repository

Chart address:

https://artifacthub.io/packages/helm/bitnami/kafka

helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/kafka
tar -xf kafka-18.4.2.tgz

2) Modify the configuration

  • Edit kafka/values.yaml

image:
  registry: myharbor.com
  repository: bigdata/kafka
  tag: 3.2.1-debian-11-r16

...

replicaCount: 3

...

service:
  type: NodePort
  nodePorts:
    client: "30092"
    external: "30094"

...

externalAccess:
  enabled: true
  service:
    type: NodePort
    nodePorts:
      - 30001
      - 30002
      - 30003
    useHostIPs: true

...

persistence:
  storageClass: "kafka-local-storage"
  size: "10Gi"
  # These directories must be created on the hosts in advance (see the sketch after this list)
  local:
    - name: kafka-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/kafka/data/data1"
    - name: kafka-1
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/kafka/data/data1"
    - name: kafka-2
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/kafka/data/data1"

...

metrics:
  kafka:
    enabled: true
    image:
      registry: myharbor.com
      repository: bigdata/kafka-exporter
      tag: 1.6.0-debian-11-r8
  jmx:
    enabled: true
    image:
      registry: myharbor.com
      repository: bigdata/jmx-exporter
      tag: 0.17.1-debian-11-r1
    annotations:
      prometheus.io/path: "/metrics"

...

zookeeper:
  enabled: false

...

externalZookeeper:
  servers:
    - zookeeper-0.zookeeper-headless.zookeeper
    - zookeeper-1.zookeeper-headless.zookeeper
    - zookeeper-2.zookeeper-headless.zookeeper
  • Add kafka/templates/pv.yaml

{{- range .Values.persistence.local }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .name }}
  labels:
    name: {{ .name }}
spec:
  storageClassName: {{ $.Values.persistence.storageClass }}
  capacity:
    storage: {{ $.Values.persistence.size }}
  accessModes:
    - ReadWriteOnce
  local:
    path: {{ .path }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - {{ .host }}
---
{{- end }}
  • Add kafka/templates/storage-class.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ .Values.persistence.storageClass }}
provisioner: kubernetes.io/no-provisioner
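
As with ZooKeeper, the local PV directories referenced in values.yaml must exist on every node before installing. A minimal sketch, assuming the hosts and paths above:

# Run on local-168-182-110, local-168-182-111 and local-168-182-112 respectively
mkdir -p /opt/bigdata/servers/kafka/data/data1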

3) Install

# Prepare the images first
docker pull docker.io/bitnami/kafka:3.2.1-debian-11-r16
docker tag docker.io/bitnami/kafka:3.2.1-debian-11-r16 myharbor.com/bigdata/kafka:3.2.1-debian-11-r16
docker push myharbor.com/bigdata/kafka:3.2.1-debian-11-r16

# kafka-exporter
docker pull docker.io/bitnami/kafka-exporter:1.6.0-debian-11-r8
docker tag docker.io/bitnami/kafka-exporter:1.6.0-debian-11-r8 myharbor.com/bigdata/kafka-exporter:1.6.0-debian-11-r8
docker push myharbor.com/bigdata/kafka-exporter:1.6.0-debian-11-r8

# JMX exporter
docker pull docker.io/bitnami/jmx-exporter:0.17.1-debian-11-r1
docker tag docker.io/bitnami/jmx-exporter:0.17.1-debian-11-r1 myharbor.com/bigdata/jmx-exporter:0.17.1-debian-11-r1
docker push myharbor.com/bigdata/jmx-exporter:0.17.1-debian-11-r1

# Install the chart
helm install kafka ./kafka -n kafka --create-namespace

NOTES

NAME: kafka
LAST DEPLOYED: Sun Sep 18 20:57:02 2022
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 18.4.2
APP VERSION: 3.2.1
---------------------------------------------------------------------------------------------
 WARNING

    By specifying "serviceType=LoadBalancer" and not configuring the authentication
    you have most likely exposed the Kafka service externally without any
    authentication mechanism.

    For security reasons, we strongly suggest that you switch to "ClusterIP" or
    "NodePort". As alternative, you can also configure the Kafka authentication.

---------------------------------------------------------------------------------------------

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.kafka.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.kafka.svc.cluster.local:9092
    kafka-1.kafka-headless.kafka.svc.cluster.local:9092
    kafka-2.kafka-headless.kafka.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.2.1-debian-11-r16 --namespace kafka --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace kafka -- bash

    PRODUCER:
        kafka-console-producer.sh \

            --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092,kafka-1.kafka-headless.kafka.svc.cluster.local:9092,kafka-2.kafka-headless.kafka.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \

            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

To connect to your Kafka server from outside the cluster, follow the instructions below:

    Kafka brokers domain: You can get the external node IP from the Kafka configuration file with the following commands (Check the EXTERNAL listener)

        1. Obtain the pod name:

        kubectl get pods --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka"

        2. Obtain pod configuration:

        kubectl exec -it KAFKA_POD -- cat /opt/bitnami/kafka/config/server.properties | grep advertised.listeners

    Kafka brokers port: You will have a different node port for each Kafka broker. You can get the list of configured node ports using the command below:

        echo "$(kubectl get svc --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"

Check pod status:

kubectl get pods,svc -n kafka -owide

4) Test and verify

# Log into a kafka pod
kubectl exec -it kafka-0 -n kafka -- bash

1. Create a topic (one partition, one replica)

Flags:

  --create: create a topic
  --topic: name of the new topic
  --bootstrap-server: Kafka connection address
  --config: topic-level configuration overrides; see the Topic-level configuration docs for the full list
  --partitions: number of partitions to create (default 1)
  --replication-factor: replication factor for each partition (default 1)

kafka-topics.sh --create --topic test001 --bootstrap-server kafka.kafka:9092 --partitions 1 --replication-factor 1
# Describe the topic
kafka-topics.sh --describe --bootstrap-server kafka.kafka:9092  --topic test001
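
The --config flag described above is not exercised by the command; a hedged example of a topic with per-topic overrides (the topic name test002 and the retention value are arbitrary):

# Create a topic with 3 partitions, 2 replicas and 1-hour retention
kafka-topics.sh --create --topic test002 --bootstrap-server kafka.kafka:9092 \
  --partitions 3 --replication-factor 2 \
  --config retention.ms=3600000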

2. List topics

kafka-topics.sh --list --bootstrap-server kafka.kafka:9092

3. Producer/consumer test

Producer:

kafka-console-producer.sh --broker-list kafka.kafka:9092 --topic test001

{"id":"1","name":"n1","age":"20"}
{"id":"2","name":"n2","age":"21"}
{"id":"3","name":"n3","age":"22"}

Consumer:

# Consume from the beginning
kafka-console-consumer.sh --bootstrap-server kafka.kafka:9092 --topic test001 --from-beginning
# Consume from a given offset in a given partition; only partition 0 is shown here,
# so repeat the command (or loop) for the other partitions if needed
kafka-console-consumer.sh --bootstrap-server kafka.kafka:9092 --topic test001 --partition 0 --offset 100 --group test001

4. Check consumer lag

kafka-consumer-groups.sh --bootstrap-server kafka.kafka:9092 --describe --group test001
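
If the lag shown by --describe should be discarded, kafka-consumer-groups.sh can also rewind or advance the group's offsets. A sketch (preview with --dry-run, then apply; the group must have no active consumers):

# Preview an offset reset to the earliest position for group test001
kafka-consumer-groups.sh --bootstrap-server kafka.kafka:9092 --group test001 \
  --topic test001 --reset-offsets --to-earliest --dry-run
# Replace --dry-run with --execute to actually apply the reset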

5. Delete the topic

kafka-topics.sh --delete --topic test001 --bootstrap-server kafka.kafka:9092

5) Prometheus Monitoring

Prometheus:

https://prometheus.k8s.local/targets?search=kafka

You can query the scraped metrics directly:

# Query the kafka-exporter endpoint (pod IP from kubectl get pods -owide)
curl http://10.244.2.165:9308/metrics
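
If the pod IP is not reachable from your workstation, a port-forward against the kafka-exporter Service works too. A sketch, assuming the Service follows the chart's <release>-metrics naming convention:

kubectl -n kafka port-forward svc/kafka-metrics 9308:9308 &
curl -s http://localhost:9308/metrics | grep kafka_brokers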

Grafana: https://grafana.k8s.local/ (username admin; get the password with the command below)

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Import the Grafana dashboard for cluster resource monitoring: ID 11962.
Official dashboards can be downloaded from:

https://grafana.com/grafana/dashboards/

6) Uninstall

helm uninstall kafka -n kafka

kubectl delete pod -n kafka `kubectl get pod -n kafka|awk 'NR>1{print $1}'` --force
kubectl patch ns kafka  -p '{"metadata":{"finalizers":null}}'
kubectl delete ns kafka  --force
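
As with ZooKeeper, the data written to the local PV paths stays on disk after uninstalling. A minimal cleanup sketch, assuming the host paths from values.yaml:

# Run on each node that hosted a Kafka PV
rm -rf /opt/bigdata/servers/kafka/data/data1/*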