Setting up ZooKeeper and Kafka clusters on Kubernetes

I. ZooKeeper cluster setup

1. The original plan was to run three replicas under a single Deployment. That doesn't work: the three replicas of one Deployment share an identical spec, while each node in a ZooKeeper ensemble needs its own myid, so three separate single-replica workloads are created instead (StatefulSets below, so each pod gets a stable name).
2. The cluster nodes need to communicate with each other, which calls for Services; but traffic has to reach a specific pod directly rather than go through the Service's load balancing. There are two ways to achieve this:

  • 1) Make the Service headless; the pod can then be reached through the Service name alone, and configuring server.1=zookeeper1:2888:3888;2181 is enough, where zookeeper1 is the Service name.

  • 2) Use an ordinary Service and address the pod as server.1=zookeeper1-0.zookeeper1.kafka.svc.cluster.local:2888:3888;2181.

3. For simplicity, NFS is not used here; add your own volume mounts if persistence is needed.
4. As an aside, a list of what ZooKeeper is used for, as a memory aid:

  1. Data publish/subscribe (configuration center)
  2. Load balancing
  3. Naming service
  4. Distributed notification/coordination
  5. Cluster management and master election
  6. Distributed locks
  7. Distributed queues
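The two addressing styles from point 2 can be generated mechanically. A minimal sketch; the `build_zoo_servers` helper is hypothetical (not part of the manifests) and uses the headless-service short names:

```shell
# Hypothetical helper: emit the ZOO_SERVERS quorum list for n nodes using
# the headless-service short names (option 1 above). For option 2, the host
# would instead be "zookeeper$i-0.zookeeper$i.kafka.svc.cluster.local".
build_zoo_servers() {
  n=$1; out=""
  for i in $(seq 1 "$n"); do
    out="$out server.$i=zookeeper$i:2888:3888;2181"
  done
  echo "${out# }"
}
build_zoo_servers 3
# prints: server.1=zookeeper1:2888:3888;2181 server.2=zookeeper2:2888:3888;2181 server.3=zookeeper3:2888:3888;2181
```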

Prerequisite: create the namespace kafka; everything built below lives in it.

1.node1


---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: None
    k8s.eip.work/workload: zookeeper1
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper1
  name: zookeeper1
  namespace: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: zookeeper1
  serviceName: zookeeper1
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: zookeeper1
    spec:
      containers:
        - env:
          - name: ZOO_SERVERS
            value: >-
              # Either form below works: one addresses the pod directly, the other the headless service (each Service here fronts a single pod)
              #server.1=zookeeper1-0.zookeeper1.kafka.svc.cluster.local:2888:3888;2181
              #server.2=zookeeper2-0.zookeeper2.kafka.svc.cluster.local:2888:3888;2181
              #server.3=zookeeper3-0.zookeeper3.kafka.svc.cluster.local:2888:3888;2181
              server.1=zookeeper1:2888:3888;2181
              server.2=zookeeper2:2888:3888;2181
              server.3=zookeeper3:2888:3888;2181
          - name: ZOO_MY_ID
            value: '1'
          image: zookeeper
          imagePullPolicy: IfNotPresent
          name: zookeeper
          ports:
            - containerPort: 2181
              name: client
              protocol: TCP
            - containerPort: 2888
              name: server
              protocol: TCP
            - containerPort: 3888
              name: leader-election
              protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: zookeeper1
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper1
  name: zookeeper1
  namespace: kafka
spec:
  clusterIP: None
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper1
  type: ClusterIP



2.node2


---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: None
    k8s.eip.work/workload: zookeeper2
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper2
  name: zookeeper2
  namespace: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: zookeeper2
  serviceName: zookeeper2
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: zookeeper2
    spec:
      containers:
        - env:
          - name: ZOO_SERVERS
            value: >-
              # Either form below works: one addresses the pod directly, the other the headless service (each Service here fronts a single pod)
              #server.1=zookeeper1-0.zookeeper1.kafka.svc.cluster.local:2888:3888;2181
              #server.2=zookeeper2-0.zookeeper2.kafka.svc.cluster.local:2888:3888;2181
              #server.3=zookeeper3-0.zookeeper3.kafka.svc.cluster.local:2888:3888;2181
              server.1=zookeeper1:2888:3888;2181
              server.2=zookeeper2:2888:3888;2181
              server.3=zookeeper3:2888:3888;2181
          - name: ZOO_MY_ID
            value: '2'
          image: zookeeper
          imagePullPolicy: IfNotPresent
          name: zookeeper
          ports:
            - containerPort: 2181
              name: client
              protocol: TCP
            - containerPort: 2888
              name: server
              protocol: TCP
            - containerPort: 3888
              name: leader-election
              protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: zookeeper2
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper2
  name: zookeeper2
  namespace: kafka
spec:
  clusterIP: None
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper2
  type: ClusterIP



3.node3

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: None
    k8s.eip.work/workload: zookeeper3
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper3
  name: zookeeper3
  namespace: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: zookeeper3
  serviceName: zookeeper3
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: zookeeper3
    spec:
      containers:
        - env:
          - name: ZOO_SERVERS
            value: >-
              # Either form below works: one addresses the pod directly, the other the headless service (each Service here fronts a single pod)
              #server.1=zookeeper1-0.zookeeper1.kafka.svc.cluster.local:2888:3888;2181
              #server.2=zookeeper2-0.zookeeper2.kafka.svc.cluster.local:2888:3888;2181
              #server.3=zookeeper3-0.zookeeper3.kafka.svc.cluster.local:2888:3888;2181
              server.1=zookeeper1:2888:3888;2181
              server.2=zookeeper2:2888:3888;2181
              server.3=zookeeper3:2888:3888;2181
          - name: ZOO_MY_ID
            value: '3'
          image: zookeeper
          imagePullPolicy: IfNotPresent
          name: zookeeper
          ports:
            - containerPort: 2181
              name: client
              protocol: TCP
            - containerPort: 2888
              name: server
              protocol: TCP
            - containerPort: 3888
              name: leader-election
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: zookeeper3
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper3
  name: zookeeper3
  namespace: kafka
spec:
  clusterIP: None
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper3
  type: ClusterIP

4. Verifying the deployment

Mode: follower or Mode: leader means the cluster is up:

root@zookeeper1-0:/apache-zookeeper-3.6.1-bin# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
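To check all three nodes at once, a kubectl loop like the one sketched in the comments below can be used (it assumes pod names follow the StatefulSet `<name>-0` pattern). A healthy 3-node ensemble reports exactly one leader; demonstrated here against hard-coded sample output, since no cluster is available in this sketch:

```shell
# Checking every node (requires cluster access):
#   for i in 1 2 3; do kubectl -n kafka exec zookeeper$i-0 -- zkServer.sh status; done
# A healthy 3-node ensemble shows exactly one leader. Simulated below on
# sample output:
status_output="Mode: follower
Mode: leader
Mode: follower"
leaders=$(printf '%s\n' "$status_output" | grep -c 'Mode: leader')
[ "$leaders" -eq 1 ] && echo "quorum healthy"
# prints: quorum healthy
```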

II. Kafka cluster setup

1. broker.id must be set explicitly; otherwise a new one is auto-generated on every start, which causes real trouble, e.g. the partition leader may no longer be found.
2. Note: using the name kafka triggers the error below. It can be worked around by setting the KAFKA_PORT environment variable explicitly, or by simply not using the name kafka.

org.apache.kafka.common.config.ConfigException: Invalid value tcp://10.0.35.234:9092 for configuration port: Not a number of type INT
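The root cause is Kubernetes service-link environment variables: a Service named kafka makes kubelet inject KAFKA_PORT=tcp://&lt;clusterIP&gt;:9092 into pods, and the wurstmeister image turns KAFKA_* variables into server.properties entries. A rough reproduction of that mapping (simplified; the image's actual start script differs):

```shell
# Kubernetes injects this for a Service named "kafka":
KAFKA_PORT="tcp://10.0.35.234:9092"
# The image maps KAFKA_FOO_BAR -> foo.bar=<value> in server.properties;
# here that yields the non-numeric "port" value from the error above.
key=$(printf '%s' "KAFKA_PORT" | sed 's/^KAFKA_//' | tr 'A-Z_' 'a-z.')
echo "$key=$KAFKA_PORT"
# prints: port=tcp://10.0.35.234:9092
```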

1.node1

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: NodePort
    k8s.eip.work/workload: kafka1
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka1
  name: kafka1
  namespace: kafka
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: kafka1
  serviceName: kafka1
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: kafka1
    spec:
      containers:
        - env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper1:2181,zookeeper2:2181,zookeeper3:2181'
            - name: KAFKA_LISTENERS
              value: 'PLAINTEXT://:9092'
            - name: KAFKA_ADVERTISED_LISTENERS
              value: 'PLAINTEXT://192.168.100.16:31367'
            - name: KAFKA_BROKER_ID
              value: '1'
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          name: kafka
          ports:
            - containerPort: 9092
              protocol: TCP
            - containerPort: 1099
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: kafka1
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka1
  name: kafka1
  namespace: kafka
spec:
  ports:
    - name: zhnz8q
      nodePort: 31367
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka1
  sessionAffinity: None
  type: NodePort
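Note how KAFKA_ADVERTISED_LISTENERS in the StatefulSet pairs a node IP (192.168.100.16 here) with the Service's nodePort: external clients bootstrap through the NodePort and must be handed back an address they can actually reach. A quick consistency check over the values above:

```shell
# Values copied from the kafka1 manifests above; if the advertised port
# ever drifts from the nodePort, external clients will fail right after
# the initial bootstrap.
ADVERTISED="PLAINTEXT://192.168.100.16:31367"
NODE_PORT=31367
[ "${ADVERTISED##*:}" = "$NODE_PORT" ] && echo "advertised port matches nodePort"
# prints: advertised port matches nodePort
```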




2.node2

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: NodePort
    k8s.eip.work/workload: kafka2
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka2
  name: kafka2
  namespace: kafka
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: kafka2
  serviceName: kafka2
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: kafka2
    spec:
      containers:
        - env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper1:2181,zookeeper2:2181,zookeeper3:2181'
            - name: KAFKA_LISTENERS
              value: 'PLAINTEXT://:9092'
            - name: KAFKA_ADVERTISED_LISTENERS
              value: 'PLAINTEXT://192.168.100.16:31368'
            - name: KAFKA_BROKER_ID
              value: '2'
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          name: kafka
          ports:
            - containerPort: 9092
              protocol: TCP
            - containerPort: 1099
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: kafka2
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka2
  name: kafka2
  namespace: kafka
spec:
  ports:
    - nodePort: 31368
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka2
  sessionAffinity: None
  type: NodePort

3.node3

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: NodePort
    k8s.eip.work/workload: kafka3
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka3
  name: kafka3
  namespace: kafka
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: kafka3
  serviceName: kafka3
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: kafka3
    spec:
      containers:
        - env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper1:2181,zookeeper2:2181,zookeeper3:2181'
            - name: KAFKA_LISTENERS
              value: 'PLAINTEXT://:9092'
            - name: KAFKA_ADVERTISED_LISTENERS
              value: 'PLAINTEXT://192.168.100.16:31369'
            - name: KAFKA_BROKER_ID
              value: '3'
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          name: kafka
          ports:
            - containerPort: 9092
              protocol: TCP
            - containerPort: 1099
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: kafka3
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka3
  name: kafka3
  namespace: kafka
spec:
  ports:
    - nodePort: 31369
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka3
  sessionAffinity: None
  type: NodePort

III. Troubleshooting

1. If Kafka is already connected to ZooKeeper and ZooKeeper is restarted, the ZooKeeper log keeps printing the messages below while the Kafka log keeps complaining that it cannot connect. Weren't multiple servers configured? Why doesn't that seem to help? (Most likely: with no persistent storage mounted, the restarted node comes back with an empty data directory and zxid 0x0, so it refuses sessions from clients that have already seen a newer zxid.)

2020-06-02 11:29:07,541 [myid:1] - INFO  [NIOWorkerThread-2:ZooKeeperServer@1375] - Refusing session request for client /10.100.15.190:52678 as it has seen zxid 0x10000002e our last zxid is 0x0 client must try another server
2020-06-02 11:29:09,925 [myid:1] - INFO  [NIOWorkerThread-1:ZooKeeperServer@1375] - Refusing session request for client /10.100.5.193:42016 as it has seen zxid 0x10000003f our last zxid is 0x0 client must try another server

2. The client reports the following error, but only for newly created topics. What is going on?

bash-4.4# kafka-console-producer.sh --broker-list kafka1:9092 --topic zipkin
>[2020-06-02 12:09:34,009] WARN [Producer clientId=console-producer] 1 partitions have leader brokers without a matching listener, including [zipkin-0] (org.apache.kafka.clients.NetworkClient)
>[2020-06-02 12:09:34,108] WARN [Producer clientId=console-producer] 1 partitions have leader brokers without a matching listener, including [zipkin-0] (org.apache.kafka.clients.NetworkClient)

Cause: Kafka's broker.id must be pinned. If it changes between restarts, the partition leader cannot be found and this error appears.
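A broker records its id in meta.properties under its log directory; without a persistent volume, a restarted pod may auto-generate a fresh id and orphan the partitions registered under the old one. A sketch of inspecting the stored id (sample file at a hypothetical path, since no broker is running here):

```shell
# Kafka persists the broker id in meta.properties inside log.dirs.
# Simulated with a sample file:
cat > /tmp/meta.properties <<'EOF'
version=0
broker.id=1
EOF
grep '^broker.id=' /tmp/meta.properties | cut -d= -f2
# prints: 1
```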
