Installing single-node and clustered Kafka on Kubernetes

I. Installing single-node Kafka (tested myself)

1. Create the ZooKeeper service

The contents of zookeeper-service.yaml:

#Service
apiVersion: v1
kind: Service
metadata:
    name: kafka-zookeeper-service
    namespace: paas-basic
    labels:
        name: zookeeper-service
spec:    
    selector:
        name: kafka-zookeeper-pod
    sessionAffinity: ClientIP
    type: NodePort
    ports:
    - name: "zookeeper"
      port: 2181
      targetPort: 2181

The contents of zookeeper-deploy.yaml:

#Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
    name: kafka-zookeeper-deploy
    namespace: paas-basic
    labels:
        name: zookeeper-deploy-label
spec:
    replicas: 1
    selector:
      matchLabels:
        name: kafka-zookeeper-pod
    template:
        metadata:
            labels:
                name: kafka-zookeeper-pod
        spec:
            terminationGracePeriodSeconds: 30  # give Kubernetes 30 seconds to shut the application down gracefully
            nodeSelector:
              kafka: "true"
            containers:
            - name: "kafka-zookeeper"
              image: wurstmeister/zookeeper
              imagePullPolicy: IfNotPresent
              ports:
              - containerPort: 2181
              volumeMounts:
              - name: zk-data
                readOnly: false
                mountPath: /opt/zookeeper-3.4.13/data
            volumes:
            - name: zk-data
              hostPath:
                path: /home/k8s-1.19.2/paas-basic/kafka/zookeeper_data
2. Create Kafka.

The contents of kafka-service.yaml:

#Service
apiVersion: v1
kind: Service
metadata:
    name: kafka-service
    namespace: paas-basic
    labels:
        name: kafka-service
spec:    
    selector:
        name: kafka-pod
    sessionAffinity: ClientIP
    type: NodePort
    ports:
    - name: "kafka"
      port: 9092
      targetPort: 9092
      nodePort: 30092

The contents of kafka-deploy.yaml:

#Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
    name: kafka-deploy
    namespace: paas-basic
    labels:
        name: kafka-deploy
spec:
    replicas: 1
    selector: 
      matchLabels:
        name: kafka-pod
    template:
        metadata:
            labels:
                name: kafka-pod
        spec:
            terminationGracePeriodSeconds: 30  # give Kubernetes 30 seconds to shut the application down gracefully
            nodeSelector:
              kafka: "true"
            hostname: kafka-hostname       # set the pod's hostname
            containers:
            - name: "kafka"
              image: wurstmeister/kafka:2.12-2.3.0
              imagePullPolicy: IfNotPresent
              ports:
              - containerPort: 9092
              volumeMounts:
              - name: kafka-volume
                mountPath: /kafka
              env:
              - name: KAFKA_ADVERTISED_PORT
                value: "30092"
              - name: KAFKA_MESSAGE_MAX_BYTES
                value: "1073741824"
              - name: KAFKA_REPLICA_FETCH_MAX_BYTES
                value: "1073741824"
              - name: KAFKA_BATCH_SIZE
                value: "4096"
              - name: KAFKA_ADVERTISED_HOST_NAME
                value: "192.168.180.37"
              - name: KAFKA_ZOOKEEPER_CONNECT
                value: kafka-zookeeper-service.paas-basic:2181
              - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
                value: "true"
              - name: KAFKA_LOG_RETENTION_HOURS
                value: "24"
              - name: KAFKA_LOG_CLEANUP_POLICY
                value: "delete"
            volumes:
            - name: kafka-volume
              hostPath:
                path: /home/k8s-1.19.2/paas-basic/kafka/volume
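The env block above tunes broker size limits: KAFKA_MESSAGE_MAX_BYTES and KAFKA_REPLICA_FETCH_MAX_BYTES map to the broker's message.max.bytes and replica.fetch.max.bytes, and the latter must be at least as large as the former, or followers cannot fetch the largest messages the leader accepts. A minimal sketch of that sanity check (the helper function is illustrative, not part of any manifest):

```python
# Sanity check for the broker env block above (illustrative only):
# replica.fetch.max.bytes must be >= message.max.bytes, otherwise a message
# accepted by the leader can be too large for followers to replicate.
env = {
    "KAFKA_ADVERTISED_PORT": "30092",
    "KAFKA_MESSAGE_MAX_BYTES": "1073741824",
    "KAFKA_REPLICA_FETCH_MAX_BYTES": "1073741824",
    "KAFKA_BATCH_SIZE": "4096",
}

def check_size_limits(env: dict) -> bool:
    msg_max = int(env["KAFKA_MESSAGE_MAX_BYTES"])
    fetch_max = int(env["KAFKA_REPLICA_FETCH_MAX_BYTES"])
    return fetch_max >= msg_max

print(check_size_limits(env))  # True: both limits are 1 GiB here
```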
3. Create Kafka Manager

manager-service.yaml:

#Service
apiVersion: v1
kind: Service
metadata:
    name: kafka-manager
    namespace: paas-basic
    labels:
        name: manager-service
spec:    
    selector:
        name: kafka-manager-pod
    sessionAffinity: ClientIP
    type: NodePort
    ports:
    - name: "manager"
      port: 9000
      targetPort: 9000
      nodePort: 30900

manager-deploy.yaml

#Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
    name: kafka-manager-deploy
    namespace: paas-basic
    labels:
        name: manager-deploy
spec:
    replicas: 1
    selector:
      matchLabels:
        name: kafka-manager-pod
    template:
        metadata:
            labels:
                name: kafka-manager-pod
        spec:
            terminationGracePeriodSeconds: 1  # graceful-shutdown wait; note this manifest uses 1 second, unlike the 30 seconds above
            nodeSelector:
              kafka: "true"
            containers:
            - name: "kafka-manager"
              image: sheepkiller/kafka-manager
              imagePullPolicy: IfNotPresent
              ports:
              - containerPort: 9000
              env:
              - name: ZK_HOSTS
                value: kafka-zookeeper-service.paas-basic:2181
              #- name: KAFKA_ZOOKEEPER_CONNECT
                #value: kafka-zookeeper-service.paas-basic:2181
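In every Service/Deployment pair above, the Service's spec.selector must match the pod template's labels exactly (e.g. name: kafka-manager-pod), or the Service ends up with no endpoints. A small sketch of that check, using plain dicts to mirror the manifests above rather than a real Kubernetes API call:

```python
# A Service selects pods purely by label; a typo on either side silently
# yields a Service with no endpoints. The dicts mirror the manifests above:
# (service selector, pod template labels) per Service.
pairs = {
    "kafka-zookeeper-service": ({"name": "kafka-zookeeper-pod"}, {"name": "kafka-zookeeper-pod"}),
    "kafka-service":           ({"name": "kafka-pod"},           {"name": "kafka-pod"}),
    "kafka-manager":           ({"name": "kafka-manager-pod"},   {"name": "kafka-manager-pod"}),
}

def selector_matches(selector: dict, pod_labels: dict) -> bool:
    # A Service matches a pod iff every selector key/value appears in the labels.
    return all(pod_labels.get(k) == v for k, v in selector.items())

for svc, (selector, labels) in pairs.items():
    assert selector_matches(selector, labels), f"{svc} selects no pods"
print("all selectors match")
```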

II. Creating a multi-node Kafka cluster (untested)

The following sets up a 3-node Kafka cluster. Three Deployments are used here to run Kafka and ZooKeeper; a more elegant approach would be a StatefulSet, and the official Kubernetes documentation has an example of building a ZooKeeper ensemble with one. With a StatefulSet, however, ZooKeeper's myid and Kafka's broker.id cannot be assigned up front, so the logic to derive them has to be baked in when the image is built, and the vast majority of images on Docker Hub lack it. Deployments are less elegant, but each node can be configured in advance and the result is simpler to run, so each approach has its strengths.
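The myid/broker.id objection can be worked around by deriving the ID from a StatefulSet pod's ordinal, since StatefulSet hostnames end in -0, -1, and so on. The usual trick in an image's entrypoint script looks roughly like this (a hypothetical helper, not taken from any of the images used here):

```python
# With a StatefulSet, pod hostnames carry an ordinal suffix (kafka-0, kafka-1, ...).
# An entrypoint script can derive broker.id / myid from that suffix instead of
# having it preassigned -- the logic most Docker Hub images do not include.
def ordinal_id(hostname: str, base: int = 0) -> int:
    """Extract the ordinal from a StatefulSet pod hostname like 'kafka-2'."""
    prefix, sep, ordinal = hostname.rpartition("-")
    if not sep or not ordinal.isdigit():
        raise ValueError(f"not a StatefulSet hostname: {hostname!r}")
    return base + int(ordinal)

print(ordinal_id("kafka-2"))              # 2 -> usable as broker.id
print(ordinal_id("zookeeper-0", base=1))  # 1 -> usable as myid (myid starts at 1)
```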

1) Build the ZooKeeper ensemble
First, create the ZooKeeper YAML files.

The contents of zookeeper-svc2.yaml:

apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zoo2
  labels:
    app: zookeeper-2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-2
---
apiVersion: v1
kind: Service
metadata:
  name: zoo3
  labels:
    app: zookeeper-3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-3

The contents of zookeeper-deployment2.yaml:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-1
      name: zookeeper-1
  template:
    metadata:
      labels:
        app: zookeeper-1
        name: zookeeper-1
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-2
      name: zookeeper-2
  template:
    metadata:
      labels:
        app: zookeeper-2
        name: zookeeper-2
    spec:
      containers:
      - name: zoo2
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "2"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-3
      name: zookeeper-3
  template:
    metadata:
      labels:
        app: zookeeper-3
        name: zookeeper-3
    spec:
      containers:
      - name: zoo3
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "3"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3

Run kubectl apply -f zookeeper-svc2.yaml and kubectl apply -f zookeeper-deployment2.yaml respectively.

This creates three Deployments and three Services in one-to-one correspondence, so all three instances can serve clients. After creation, inspect the three ZooKeeper pods with kubectl logs and make sure no errors occurred; if the logs of all three nodes contain a line similar to the one below, the ZooKeeper ensemble has been set up successfully.

2019-06-24 05:22:06,582 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Leader@371] - LEADING - LEADER ELECTION TOOK - 14641
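That log check can be automated by scanning the output of kubectl logs for the election markers; the leader logs a LEADING line like the sample above, and followers log an analogous FOLLOWING line. A small sketch (the helper is illustrative; the log format is ZooKeeper's own):

```python
import re

# Scan ZooKeeper pod logs (e.g. captured via `kubectl logs <pod>`) for the
# line marking a completed leader election, as in the sample above.
ELECTION = re.compile(r"\b(LEADING|FOLLOWING) - LEADER ELECTION TOOK - (\d+)")

def election_result(log_text: str):
    """Return (role, elapsed_ms) if an election completed, else None."""
    m = ELECTION.search(log_text)
    return (m.group(1), int(m.group(2))) if m else None

sample = ("2019-06-24 05:22:06,582 [myid:3] - INFO "
          "[QuorumPeer[myid=3]/0.0.0.0:2181:Leader@371] - "
          "LEADING - LEADER ELECTION TOOK - 14641")
print(election_result(sample))  # ('LEADING', 14641)
```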

2) Build the Kafka cluster
Likewise, create three Deployments and three Services. Write kafka-svc2.yaml and kafka-deployment2.yaml as follows:

The contents of kafka-svc2.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kafka-service-1
  labels:
    app: kafka-service-1
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-1
    targetPort: 9092
    nodePort: 30901
    protocol: TCP
  selector:
    app: kafka-service-1
---

apiVersion: v1
kind: Service
metadata:
  name: kafka-service-2
  labels:
    app: kafka-service-2
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-2
    targetPort: 9092
    nodePort: 30902
    protocol: TCP
  selector:
    app: kafka-service-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-3
  labels:
    app: kafka-service-3
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-3
    targetPort: 9092
    nodePort: 30903
    protocol: TCP
  selector:
    app: kafka-service-3

The contents of kafka-deployment2.yaml are as follows; note that the three placeholders such as [clusterIP of kafka-service-2] must be replaced with the actual clusterIPs.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-1
  template:
    metadata:
      labels:
        name: kafka-service-1
        app: kafka-service-1
    spec:
      containers:
      - name: kafka-1
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [clusterIP of kafka-service-1]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_CREATE_TOPICS
          value: mytopic:2:1
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-2
  template:
    metadata:
      labels:
        name: kafka-service-2
        app: kafka-service-2
    spec:
      containers:
      - name: kafka-2
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [clusterIP of kafka-service-2]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "2"
---

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-3
  template:
    metadata:
      labels:
        name: kafka-service-3
        app: kafka-service-3
    spec:
      containers:
      - name: kafka-3
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [clusterIP of kafka-service-3]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "3"

3) Testing the cluster
Testing is essentially the same as for the single-node setup, so it is not repeated here. The difference is that different nodes can now act as producers and consumers.
