Building a Kafka 2.13-2.6.0 and Zookeeper 3.6.2 Cluster on K8S


Adjusted from the original "Building a Kafka 2.13-2.6.0 and Zookeeper 3.6.2 Cluster on K8S" article so that the Kafka cluster can be accessed from outside the Kubernetes cluster.

 

Building a Kafka 2.13-2.6.0 and Zookeeper 3.6.2 Cluster

I. Service Versions

  • Kafka: v2.13-2.6.0
  • Zookeeper: v3.6.2
  • Kubernetes: v1.18.4

II. Building the Zookeeper Image

Zookeeper uses the official image from Docker Hub, which can be pulled directly with:

docker pull zookeeper:3.6.2 

The startup script in the official image does not fit our internal requirements, so the docker-entrypoint.sh script and the Dockerfile were modified.

1. Modify the docker-entrypoint.sh script

The modified docker-entrypoint.sh is shown below (the original script is available at https://github.com/31z4/zookeeper-docker/tree/2373492c6f8e74d3c1167726b19babe8ac7055dd/3.6.2):

#!/bin/bash
 
set -e
 
HOST=$(hostname -s)
DOMAIN=$(hostname -d)
CLIENT_PORT=2181
SERVER_PORT=2888
ELECTION_PORT=3888
 
function createConfig(){
    if [[ ! -f "$ZOO_CONF_DIR/${HOST}/zoo.cfg" ]]; then
        
        # Create the required directories from the variables passed in
        mkdir -p $ZOO_CONF_DIR/${HOST}
        mkdir -p $ZOO_DATA_DIR/${HOST}
        mkdir -p $ZOO_DATA_LOG_DIR/${HOST}
        
        # Write the required settings into zoo.cfg; these variables are defined in the Dockerfile and can be overridden with env entries in the yaml file
        CONFIG="$ZOO_CONF_DIR/${HOST}/zoo.cfg"
        {
            echo "dataDir=$ZOO_DATA_DIR/${HOST}"
            echo "dataLogDir=$ZOO_DATA_LOG_DIR/${HOST}"
 
            echo "tickTime=$ZOO_TICK_TIME"
            echo "initLimit=$ZOO_INIT_LIMIT"
            echo "syncLimit=$ZOO_SYNC_LIMIT"
 
            echo "autopurge.snapRetainCount=$ZOO_AUTOPURGE_SNAPRETAINCOUNT"
            echo "autopurge.purgeInterval=$ZOO_AUTOPURGE_PURGEINTERVAL"
            echo "maxClientCnxns=$ZOO_MAX_CLIENT_CNXNS"
            echo "standaloneEnabled=$ZOO_STANDALONE_ENABLED"
            echo "admin.enableServer=$ZOO_ADMINSERVER_ENABLED"
        } >> ${CONFIG}
        
        if [[ -n $ZOO_4LW_COMMANDS_WHITELIST ]]; then
            echo "4lw.commands.whitelist=$ZOO_4LW_COMMANDS_WHITELIST" >> ${CONFIG}
        fi
 
        # To add extra settings, define the ZOO_CFG_EXTRA variable in the yaml env section and put all extra entries in that single variable
        # Note: each entry must use a key that Zookeeper recognizes, because no format conversion is done below
        for cfg_extra_entry in $ZOO_CFG_EXTRA; do
            echo "$cfg_extra_entry" >> ${CONFIG}
        done
    fi
}
 
# The StatefulSet names Pods as "<service name>-<ordinal>"; extract the numeric ordinal and the service name from the hostname
function getHostNum(){
    if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
        NAME=${BASH_REMATCH[1]}
        ORD=${BASH_REMATCH[2]}
    else
        echo "Fialed to parse name and ordinal of Pod"
        exit 1
    fi
}
 
# Create the myid file for the Zookeeper ensemble; this keeps the generated myid unique and increasing
function createID(){
    ID_FILE="$ZOO_DATA_DIR/${HOST}/myid"
    MY_ID=$((ORD+1))
    echo $MY_ID > $ID_FILE
}
 
# Write every node's server entry into the config file so the ensemble takes effect. Note that the SERVERS variable must be passed into the container and must match the replica count,
# so when scaling out later, only the replica count and the SERVERS value need to change
function addServer(){
    for (( i=1; i<=$SERVERS; i++ ))
    do
        s="server.$i=$NAME-$((i-1)).$DOMAIN:$SERVER_PORT:$ELECTION_PORT;$CLIENT_PORT"
        [[ $(grep "$s" $ZOO_CONF_DIR/${HOST}/zoo.cfg) ]] || echo $s >> $ZOO_CONF_DIR/${HOST}/zoo.cfg
    done
}
 
# Change ownership of the working and data directories so the process can be started as the zookeeper user
function userPerm(){
    if [[ "$1" = 'zkServer.sh' && "$(id -u)" = '0' ]]; then
        chown -R zookeeper "$ZOO_DATA_DIR" "$ZOO_DATA_LOG_DIR" "$ZOO_LOG_DIR" "$ZOO_CONF_DIR"
        exec gosu zookeeper "$0" "$@"
    fi
}
 
# Start Zookeeper. Because the config file path was changed, the --config option must be used here.
# The default config directory is ZOO_CONF_DIR=/conf (defined in the Dockerfile), so --config can be dropped if the default path is kept
function startZK(){
    /apache-zookeeper-3.6.2-bin/bin/zkServer.sh --config "$ZOO_CONF_DIR/$(hostname -s)" start-foreground
}
 
createConfig
getHostNum
createID
addServer
userPerm
startZK
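
To illustrate what the script generates: assuming the SERVERS=3, ZOO_CONF_DIR=/opt/conf, ZOO_DATA_DIR=/opt/data and ZOO_DATA_LOG_DIR=/opt/data_log values used later in this article, pod zk-0 behind the zk-inner-service headless service would end up with a /opt/conf/zk-0/zoo.cfg roughly like the following (plus a myid file containing 1):

dataDir=/opt/data/zk-0
dataLogDir=/opt/data_log/zk-0
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
server.1=zk-0.zk-inner-service.ns-kafka.svc.cluster.local:2888:3888;2181
server.2=zk-1.zk-inner-service.ns-kafka.svc.cluster.local:2888:3888;2181
server.3=zk-2.zk-inner-service.ns-kafka.svc.cluster.local:2888:3888;2181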

2. Modify the Dockerfile

My changes to the Dockerfile are minimal: the original ENTRYPOINT is commented out and the CMD is changed to start via docker-entrypoint.sh:

FROM openjdk:11-jre-slim
 
ENV ZOO_CONF_DIR=/conf \
    ZOO_DATA_DIR=/data \
    ZOO_DATA_LOG_DIR=/datalog \
    ZOO_LOG_DIR=/logs \
    ZOO_TICK_TIME=2000 \
    ZOO_INIT_LIMIT=5 \
    ZOO_SYNC_LIMIT=2 \
    ZOO_AUTOPURGE_PURGEINTERVAL=0 \
    ZOO_AUTOPURGE_SNAPRETAINCOUNT=3 \
    ZOO_MAX_CLIENT_CNXNS=60 \
    ZOO_STANDALONE_ENABLED=true \
    ZOO_ADMINSERVER_ENABLED=true
 
# Add a user with an explicit UID/GID and create necessary directories
RUN set -eux; \
    groupadd -r zookeeper --gid=1000; \
    useradd -r -g zookeeper --uid=1000 zookeeper; \
    mkdir -p "$ZOO_DATA_LOG_DIR" "$ZOO_DATA_DIR" "$ZOO_CONF_DIR" "$ZOO_LOG_DIR"; \
    chown zookeeper:zookeeper "$ZOO_DATA_LOG_DIR" "$ZOO_DATA_DIR" "$ZOO_CONF_DIR" "$ZOO_LOG_DIR"
 
# Install required packages
RUN set -eux; \
    apt-get update; \
    DEBIAN_FRONTEND=noninteractive \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        dirmngr \
        gosu \
        gnupg \
        netcat \
        wget; \
    rm -rf /var/lib/apt/lists/*; \
# Verify that gosu binary works
    gosu nobody true
 
ARG GPG_KEY=BBE7232D7991050B54C8EA0ADC08637CA615D22C
ARG SHORT_DISTRO_NAME=zookeeper-3.6.2
ARG DISTRO_NAME=apache-zookeeper-3.6.2-bin
 
# Download Apache Zookeeper, verify its PGP signature, untar and clean up
RUN set -eux; \
    ddist() { \
        local f="$1"; shift; \
        local distFile="$1"; shift; \
        local success=; \
        local distUrl=; \
        for distUrl in \
            'https://www.apache.org/dyn/closer.cgi?action=download&filename=' \
            https://www-us.apache.org/dist/ \
            https://www.apache.org/dist/ \
            https://archive.apache.org/dist/ \
        ; do \
            if wget -q -O "$f" "$distUrl$distFile" && [ -s "$f" ]; then \
                success=1; \
                break; \
            fi; \
        done; \
        [ -n "$success" ]; \
    }; \
    ddist "$DISTRO_NAME.tar.gz" "zookeeper/$SHORT_DISTRO_NAME/$DISTRO_NAME.tar.gz"; \
    ddist "$DISTRO_NAME.tar.gz.asc" "zookeeper/$SHORT_DISTRO_NAME/$DISTRO_NAME.tar.gz.asc"; \
    export GNUPGHOME="$(mktemp -d)"; \
    gpg --keyserver ha.pool.sks-keyservers.net --recv-key "$GPG_KEY" || \
    gpg --keyserver pgp.mit.edu --recv-keys "$GPG_KEY" || \
    gpg --keyserver keyserver.pgp.com --recv-keys "$GPG_KEY"; \
    gpg --batch --verify "$DISTRO_NAME.tar.gz.asc" "$DISTRO_NAME.tar.gz"; \
    tar -zxf "$DISTRO_NAME.tar.gz"; \
    mv "$DISTRO_NAME/conf/"* "$ZOO_CONF_DIR"; \
    rm -rf "$GNUPGHOME" "$DISTRO_NAME.tar.gz" "$DISTRO_NAME.tar.gz.asc"; \
    chown -R zookeeper:zookeeper "/$DISTRO_NAME"
 
WORKDIR $DISTRO_NAME
VOLUME ["$ZOO_DATA_DIR", "$ZOO_DATA_LOG_DIR", "$ZOO_LOG_DIR"]
 
EXPOSE 2181 2888 3888 8080
 
ENV PATH=$PATH:/$DISTRO_NAME/bin \
    ZOOCFGDIR=$ZOO_CONF_DIR
 
COPY docker-entrypoint.sh /
 
# Comment out the original ENTRYPOINT
# ENTRYPOINT ["/docker-entrypoint.sh"]
 
# Comment out the original CMD and add the line below instead
# CMD ["zkServer.sh", "start-foreground"]
CMD ["/docker-entrypoint.sh"]

3. Build the image and push it to the private registry

In the directory containing the Dockerfile, build the image and tag it with:

docker build --tag 10.16.12.204/ops/zookeeper:custom-v3.6.2 -f Dockerfile .

Push it to the image registry:

docker push 10.16.12.204/ops/zookeeper:custom-v3.6.2 

 

III. Building the Kafka Image

The Kafka image is based on the wurstmeister image on Docker Hub; the original image can be pulled with:

docker pull wurstmeister/kafka:2.13-2.6.0 

This image uses the start-kafka.sh script to initialize the Kafka configuration and start the broker. Some of its behavior does not fit a K8S deployment, so the script was modified.

1. Modify the start-kafka.sh script

The original start-kafka.sh can be found at https://github.com/wurstmeister/kafka-docker. The modified version is:

#!/bin/bash -e
 
# Allow specific kafka versions to perform any unique bootstrap operations
OVERRIDE_FILE="/opt/overrides/${KAFKA_VERSION}.sh"
if [[ -x "$OVERRIDE_FILE" ]]; then
    echo "Executing override file $OVERRIDE_FILE"
    eval "$OVERRIDE_FILE"
fi
 
# Store original IFS config, so we can restore it at various stages
ORIG_IFS=$IFS
 
# Zookeeper connection address; exit with an error if this variable is not set
if [[ -z "$KAFKA_ZOOKEEPER_CONNECT" ]]; then
    echo "ERROR: missing mandatory config: KAFKA_ZOOKEEPER_CONNECT"
    exit 1
fi
 
# Kafka port; fall back to the default port if not specified
if [[ -z "$KAFKA_PORT" ]]; then
    export KAFKA_PORT=9092
fi
 
# Auto-create topics after Kafka starts; nothing is created if KAFKA_CREATE_TOPICS is not set
create-topics.sh &
unset KAFKA_CREATE_TOPICS
 
if [[ -z "$KAFKA_ADVERTISED_PORT" && \
  -z "$KAFKA_LISTENERS" && \
  -z "$KAFKA_ADVERTISED_LISTENERS" && \
  -S /var/run/docker.sock ]]; then
    KAFKA_ADVERTISED_PORT=$(docker port "$(hostname)" $KAFKA_PORT | sed -r 's/.*:(.*)/\1/g')
    export KAFKA_ADVERTISED_PORT
fi

# If KAFKA_BROKER_ID is not set directly, generate the broker id with the command in BROKER_ID_COMMAND; this keeps broker ids unique and increasing
if [[ -z "$KAFKA_BROKER_ID" ]]; then
    if [[ -n "$BROKER_ID_COMMAND" ]]; then
        KAFKA_BROKER_ID=$(eval "$BROKER_ID_COMMAND")
        export KAFKA_BROKER_ID
    else
        export KAFKA_BROKER_ID=-1
    fi
fi
 

# Extra logic on top of the original article: build KAFKA_ADVERTISED_LISTENERS so the brokers can be reached from outside the cluster
if [[ -n "$POD_IP" && \
    -n "$HOST_IP" && \
    -n "$CUSTOM_IN_PROTOCOL" && \
    -n "$CUSTOM_ADVERTISED_IN_PORT" && \
    -n "$CUSTOM_OUT_PROTOCOL" && \
    -n "$CUSTOM_ADVERTISED_OUT_PORT" ]]; then
    KAFKA_ADVERTISED_LISTENERS="$CUSTOM_IN_PROTOCOL://$POD_IP:$CUSTOM_ADVERTISED_IN_PORT,$CUSTOM_OUT_PROTOCOL://$HOST_IP:$CUSTOM_ADVERTISED_OUT_PORT"
    export KAFKA_ADVERTISED_LISTENERS
fi
 

# If no Kafka log directory is specified, use the default path, whose name includes the current hostname
if [[ -z "$KAFKA_LOG_DIRS" ]]; then
    export KAFKA_LOG_DIRS="/kafka/kafka-logs-$HOSTNAME"
fi
 
# If KAFKA_HEAP_OPTS is set, write it into the kafka-server-start.sh script
if [[ -n "$KAFKA_HEAP_OPTS" ]]; then
    sed -r -i 's/(export KAFKA_HEAP_OPTS)="(.*)"/\1="'"$KAFKA_HEAP_OPTS"'"/g' "$KAFKA_HOME/bin/kafka-server-start.sh"
    unset KAFKA_HEAP_OPTS
fi
 
# If the hostname should come from the output of a command run at startup, assign that command to HOSTNAME_COMMAND;
# it is executed with eval and the result is stored in HOSTNAME_VALUE
if [[ -n "$HOSTNAME_COMMAND" ]]; then
    HOSTNAME_VALUE=$(eval "$HOSTNAME_COMMAND")
 
    # Replace any occurrences of _{HOSTNAME_COMMAND} with the value
    IFS=$'\n'
    for VAR in $(env); do
        if [[ $VAR =~ ^KAFKA_ && "$VAR" =~ "_{HOSTNAME_COMMAND}" ]]; then
            eval "export ${VAR//_\{HOSTNAME_COMMAND\}/$HOSTNAME_VALUE}"
        fi
    done
    IFS=$ORIG_IFS
fi
 
# If the port should come from the output of a command run at startup, assign that command to PORT_COMMAND;
# it is executed with eval and the result is stored in PORT_VALUE
if [[ -n "$PORT_COMMAND" ]]; then
    PORT_VALUE=$(eval "$PORT_COMMAND")
 
    # Replace any occurrences of _{PORT_COMMAND} with the value
    IFS=$'\n'
    for VAR in $(env); do
        if [[ $VAR =~ ^KAFKA_ && "$VAR" =~ "_{PORT_COMMAND}" ]]; then
	    eval "export ${VAR//_\{PORT_COMMAND\}/$PORT_VALUE}"
        fi
    done
    IFS=$ORIG_IFS
fi
 
if [[ -n "$RACK_COMMAND" && -z "$KAFKA_BROKER_RACK" ]]; then
    KAFKA_BROKER_RACK=$(eval "$RACK_COMMAND")
    export KAFKA_BROKER_RACK
fi
 
# Check whether KAFKA_LISTENERS is set; it is usually set to PLAINTEXT://:9092
if [[ -z "$KAFKA_ADVERTISED_HOST_NAME$KAFKA_LISTENERS" ]]; then
    if [[ -n "$KAFKA_ADVERTISED_LISTENERS" ]]; then
        echo "ERROR: Missing environment variable KAFKA_LISTENERS. Must be specified when using KAFKA_ADVERTISED_LISTENERS"
        exit 1
    elif [[ -z "$HOSTNAME_VALUE" ]]; then
        echo "ERROR: No listener or advertised hostname configuration provided in environment."
        echo "       Please define KAFKA_LISTENERS / (deprecated) KAFKA_ADVERTISED_HOST_NAME"
        exit 1
    fi
 
    # Maintain existing behaviour
    # If HOSTNAME_COMMAND is provided, set that to the advertised.host.name value if listeners are not defined.
    export KAFKA_ADVERTISED_HOST_NAME="$HOSTNAME_VALUE"
fi
 
#Issue newline to config file in case there is not one already
echo "" >> "$KAFKA_HOME/config/server.properties"
 
(
    function updateConfig() {
        key=$1
        value=$2
        file=$3
 
        # Omit $value here, in case there is sensitive information
        echo "[Configuring] '$key' in '$file'"
 
        # If config exists in file, replace it. Otherwise, append to file.
        if grep -E -q "^#?$key=" "$file"; then
            sed -r -i "s@^#?$key=.*@$key=$value@g" "$file" #note that no config values may contain an '@' char
        else
            echo "$key=$value" >> "$file"
        fi
    }
 
    # KAFKA_VERSION + KAFKA_HOME + grep -rohe KAFKA[A-Z0-9_]* /opt/kafka/bin | sort | uniq | tr '\n' '|'
 
# Exclude these variables from the broker config injection; they already exist in the config file and should not be changed or added
EXCLUSIONS="|KAFKA_VERSION|KAFKA_HOME|KAFKA_DEBUG|KAFKA_GC_LOG_OPTS|KAFKA_HEAP_OPTS|KAFKA_JMX_OPTS|KAFKA_JVM_PERFORMANCE_OPTS|KAFKA_LOG|KAFKA_OPTS|"
 
 
    IFS=$'\n'
    for VAR in $(env)
    do
        env_var=$(echo "$VAR" | cut -d= -f1)
        if [[ "$EXCLUSIONS" = *"|$env_var|"* ]]; then
            echo "Excluding $env_var from broker config"
            continue
        fi
 
        if [[ $env_var =~ ^KAFKA_ ]]; then
            kafka_name=$(echo "$env_var" | cut -d_ -f2- | tr '[:upper:]' '[:lower:]' | tr _ .)
            updateConfig "$kafka_name" "${!env_var}" "$KAFKA_HOME/config/server.properties"
        fi
 
        if [[ $env_var =~ ^LOG4J_ ]]; then
            log4j_name=$(echo "$env_var" | tr '[:upper:]' '[:lower:]' | tr _ .)
            updateConfig "$log4j_name" "${!env_var}" "$KAFKA_HOME/config/log4j.properties"
        fi
    done
 
    # Main addition here: build the BOOTSTRAP_SERVERS list from SERVERS and write it into the consumer and producer config files
    PODNAME=$(hostname -s | awk -F'-' 'OFS="-"{$NF="";print}' |sed 's/-$//g')
    for ((i=0;i<$SERVERS;i++))
    do
        BOOTSTRAP_SERVERS+="$PODNAME-$i.$(hostname -d):${KAFKA_PORT},"
    done
    BOOTSTRAP_SERVERS=${BOOTSTRAP_SERVERS%?}
    echo ${BOOTSTRAP_SERVERS} > /opt/log.txt
    sed -i "s/bootstrap.servers.*$/bootstrap.servers=$BOOTSTRAP_SERVERS/g" $KAFKA_HOME/config/consumer.properties
    sed -i "s/bootstrap.servers.*$/bootstrap.servers=$BOOTSTRAP_SERVERS/g" $KAFKA_HOME/config/producer.properties
)
 
# Run any additional custom init script if one is defined
if [[ -n "$CUSTOM_INIT_SCRIPT" ]] ; then
  eval "$CUSTOM_INIT_SCRIPT"
fi
 
exec "$KAFKA_HOME/bin/kafka-server-start.sh" "$KAFKA_HOME/config/server.properties"

2. Modify the Dockerfile

No other changes were made to the Dockerfile; it simply adds the modified start-kafka.sh to the image and runs it with bash (some commands fail otherwise):

FROM wurstmeister/kafka:2.13-2.6.0
 
ADD start-kafka.sh /
 
CMD ["bash","start-kafka.sh"]

3. Build the image and push it to the private registry

Rebuild the image and tag it with:

docker build --tag 10.16.12.204/ops/kafka:custom-v2.13-2.6.0 -f Dockerfile . 

Push the image to the registry:

docker push 10.16.12.204/ops/kafka:custom-v2.13-2.6.0 

IV. Creating the Namespace

The entire Kafka and Zookeeper cluster must live in the same namespace, so create the ns-kafka namespace with the following yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-kafka
  labels:
    name: ns-kafka
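
Assuming the manifest is saved as ns-kafka.yaml (the filename is arbitrary), create and check the namespace with:

kubectl apply -f ns-kafka.yaml
kubectl get ns ns-kafka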

 

V. Creating the Secret

The kubelet must authenticate when pulling images from the registry, so create a Secret for the Harbor registry:

kubectl create secret docker-registry harbor-secret --namespace=ns-kafka --docker-server=http://10.16.12.204 --docker-username=admin --docker-password=Harbor12345 

VI. Creating the PV and PVC

For this cluster, the plan is for Kafka and Zookeeper to share a single PV. As the init scripts above show, the Kafka and Zookeeper data and log directories all live under a directory named after the Pod's own hostname, so the directories stay separate even on a shared PV. The yaml for the PV and PVC is as follows:

# Configuration from the original article
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-data-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 500Gi
  local:
    path: /opt/ops_ceph_data/kafka_data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kafka-cluster
          operator: In
          values:
          - "true"
  persistentVolumeReclaimPolicy: Retain
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kafka-data-pvc
  namespace: ns-kafka
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Gi

# Adjusted configuration
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kafka-storage-host
provisioner: kubernetes.io/host-path

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-data-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-storage-host
  hostPath:
    path: /data/kafka
    type: DirectoryOrCreate

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kafka-data-pvc
  namespace: ns-kafka
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: kafka-storage-host

One thing to note: the storage I am using is cephfs, mounted on every K8S node at /opt/ops_ceph_data, which is why the PV is created with the local storage type.
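
To confirm that the claim binds to the volume, a quick check looks like this (output omitted; the PVC should report a Bound status once matched):

kubectl get pv kafka-data-pv
kubectl get pvc kafka-data-pvc -n ns-kafka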

VII. Creating Labels

Since the PV above uses the local type, it can only be scheduled on nodes that carry the specified Label, so add a label to every node in the cluster:

for i in 1 2 3 4 5; do kubectl label nodes k8s-node${i} kafka-cluster=true; done 

VIII. Creating the Zookeeper Cluster

1. Create the Services

Create the Services used for communication between Zookeeper nodes and with clients; the yaml is:

---
apiVersion: v1
kind: Service
metadata:
  name: zk-inner-service
  namespace: ns-kafka
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  ports:
  - name: server
    port: 2888
  - name: leader-election
    port: 3888
---
apiVersion: v1
kind: Service
metadata:
  name: zk-client-service
  namespace: ns-kafka
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 31811

2. Create the StatefulSet

Zookeeper is a stateful service, so it is deployed with a StatefulSet; the yaml is:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: ns-kafka
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: "zk-inner-service"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: zk
        imagePullPolicy: Always
        image: 10.16.12.204/ops/zookeeper:custom-v3.6.2
        resources:
          requests:
            memory: "500Mi"
            cpu: "0.5"
        ports:
          - containerPort: 2181
            name: client
          - containerPort: 2888
            name: server
          - containerPort: 3888
            name: leader-election
        env:
          - name: SERVERS              # SERVERS must match the replica count
            value: "3"
          - name: ZOO_CONF_DIR         # Config file directory
            value: /opt/conf
          - name: ZOO_DATA_DIR         # Data directory
            value: /opt/data
          - name: ZOO_DATA_LOG_DIR     # Data log directory
            value: /opt/data_log
        volumeMounts:                  # Directories that need persistent storage
        - name: zookeeper-data
          mountPath: /opt/data
          subPath: zookeeper-cluster-data/data
        - name: zookeeper-data
          mountPath: /opt/data_log
          subPath: zookeeper-cluster-data/data_log
        - name: data-conf
          mountPath: /etc/localtime
      imagePullSecrets:
      - name: harbor-secret
      volumes:
      - name: zookeeper-data
        persistentVolumeClaim:
          claimName: kafka-data-pvc
      - name: data-conf
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai

3. Verify cluster status

After the cluster is up, check the current state of each Zookeeper node with the following command:

[@k8s-master1 /]# for i in  0 1 2; do kubectl exec -it zk-$i  -n ns-kafka -- zkServer.sh --config /opt/conf/zk-$i status; done
ZooKeeper JMX enabled by default
Using config: /opt/conf/zk-0/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
ZooKeeper JMX enabled by default
Using config: /opt/conf/zk-1/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
ZooKeeper JMX enabled by default
Using config: /opt/conf/zk-2/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

The cluster currently has one leader and two followers. Next, verify that data is synchronized across the nodes; first create a znode on zk-0:

[@k8s-master1 /]# kubectl exec -it zk-0 -n ns-kafka -- zkCli.sh
[zk: localhost:2181(CONNECTED) 0] create /testMessage Hello
Created /testMessage

Check this message on the other two nodes:

[@k8s-master1 /]# kubectl exec -it zk-1 -n ns-kafka -- zkCli.sh
[zk: localhost:2181(CONNECTED) 0] get /testMessage
Hello
 
[@k8s-master1 /]# kubectl exec -it zk-2 -n ns-kafka -- zkCli.sh
[zk: localhost:2181(CONNECTED) 0] get /testMessage
Hello

The message is visible on both nodes, which means the cluster is running correctly.

IX. Creating the Kafka Cluster

1. Create the Service

Create the Service used for Kafka communication; the yaml is:

# Service configuration from the original article
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  namespace: ns-kafka
  labels:
    app: kafka
spec:
  ports:
  - port: 9092
    name: server
  clusterIP: None
  selector:
    app: kafka

# Adjusted configuration
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  namespace: ns-kafka
spec:
  selector:
    app: kafka 
  type: LoadBalancer
  ports:
  - name: inside
    port: 9092
    targetPort: 9092
    nodePort: 9092
    protocol: TCP
  - name: outside
    port: 9094
    targetPort: 9094
    nodePort: 9094
    protocol: TCP
  externalIPs:
  - 192.168.xx.xx # IP of a K8S cluster master node
  externalTrafficPolicy: Cluster
  sessionAffinity: None
status:
  loadBalancer: {}
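
With the adjusted Service and the EXTERNAL listener advertised on the node IP (see the KAFKA_ADVERTISED_LISTENERS logic above), a client outside the cluster can bootstrap directly against that address. A minimal sketch, assuming the node is reachable at 192.168.1.21 (a placeholder for the externalIP above) and the Kafka CLI tools are installed on the client machine, consuming the Message topic created in the verification steps below:

kafka-console-consumer.sh --topic Message \
  --bootstrap-server 192.168.1.21:9094 \
  --from-beginning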

2. Create the StatefulSet

Kafka is a stateful service, so it is deployed with a StatefulSet; the yaml is:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: ns-kafka
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: "kafka-service"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: kafka
    spec:
      imagePullSecrets:
      - name: harbor-secret
      containers:
      - name: kafka
        imagePullPolicy: Always
        image: 10.16.12.204/ops/kafka:custom-v2.13-2.6.0
        resources:
          requests:
            memory: "500Mi"
            cpu: "0.5"
        env:
          - name: SERVERS                      # SERVERS must match the replica count
            value: "3"
          - name: KAFKA_LISTENERS              # note: redefined below by the INTERNAL/EXTERNAL listeners
            value: "PLAINTEXT://:9092"
          - name: KAFKA_ZOOKEEPER_CONNECT      # Zookeeper connection address
            value: "zk-inner-service.ns-kafka.svc.cluster.local:2181"
          - name: KAFKA_PORT
            value: "9092"
          - name: KAFKA_MESSAGE_MAX_BYTES
            value: "20000000"
          - name: BROKER_ID_COMMAND            # Command used inside the container to generate a broker id
            value: "hostname | awk -F'-' '{print $NF}'"

          # The following settings are added on top of the original article
          - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
            value: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
          - name: KAFKA_INTER_BROKER_LISTENER_NAME # Listener used for broker-to-broker traffic within the Kafka cluster
            value: INTERNAL  
          - name: KAFKA_LISTENERS
            value: "INTERNAL://:9092,EXTERNAL://:9094"

          # The next 6 entries are used to build KAFKA_ADVERTISED_LISTENERS,
          # which publishes the broker's listener info to Zookeeper.
          # Every protocol named in KAFKA_ADVERTISED_LISTENERS must also be defined in KAFKA_LISTENERS
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP  # Pod IP
          - name: HOST_IP
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP # IP of the node where the Pod runs
          - name: CUSTOM_IN_PROTOCOL
            value: "INTERNAL"   
          - name: CUSTOM_ADVERTISED_IN_PORT
            value: "9092"   
          - name: CUSTOM_OUT_PROTOCOL
            value: "EXTERNAL"
          - name: CUSTOM_ADVERTISED_OUT_PORT
            value: "9094"


        volumeMounts:
          - name: kafka-log                    # Only the Kafka log directory needs to be persisted
            mountPath: /kafka
            subPath: kafka-cluster-log
          - name: data-conf
            mountPath: /etc/localtime
      volumes:
      - name: kafka-log
        persistentVolumeClaim:
          claimName: kafka-data-pvc
      - name: data-conf
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai

3. Verify cluster status

3.1 Check the brokers in Zookeeper

[@k8s-master1 ~]# kubectl exec -it zk-0 -n ns-kafka -- zkCli.sh
Connecting to localhost:2181
 
[zk: localhost:2181(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
 
[zk: localhost:2181(CONNECTED) 1] ls /brokers
[ids, seqid, topics]
 
[zk: localhost:2181(CONNECTED) 2] ls /brokers/ids
[0, 1, 2]
 
[zk: localhost:2181(CONNECTED) 3] get /brokers/ids/0
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-0.kafka-service.ns-kafka.svc.cluster.local:9092"],"jmx_port":-1,"port":9092,"host":"kafka-0.kafka-service.ns-kafka.svc.cluster.local","version":4,"timestamp":"1604644074102"}
 
[zk: localhost:2181(CONNECTED) 4] get /brokers/ids/1
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-1.kafka-service.ns-kafka.svc.cluster.local:9092"],"jmx_port":-1,"port":9092,"host":"kafka-1.kafka-service.ns-kafka.svc.cluster.local","version":4,"timestamp":"1604644074079"}
 
[zk: localhost:2181(CONNECTED) 5] get /brokers/ids/2
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-2.kafka-service.ns-kafka.svc.cluster.local:9092"],"jmx_port":-1,"port":9092,"host":"kafka-2.kafka-service.ns-kafka.svc.cluster.local","version":4,"timestamp":"1604644074009"}

All 3 brokers are now registered in Zookeeper.

3.2 Create a Topic

On the kafka-0 node, create a topic named Message with 3 partitions and 3 replicas:

[@k8s-master1 ~]# kubectl exec -it kafka-0 -n ns-kafka -- /bin/bash
bash-4.4# kafka-topics.sh --create --topic Message --zookeeper zk-inner-service.ns-kafka.svc.cluster.local:2181 --partitions 3 --replication-factor 3
Created topic Message.

Check on the zk-1 node whether this Topic exists:

[@k8s-master1 ~]# kubectl exec -it zk-1 -n ns-kafka -- zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /brokers
[ids, seqid, topics]
[zk: localhost:2181(CONNECTED) 3] ls /brokers/topics
[Message]

The Topic is now present in Zookeeper.

3.3 Simulate a producer and a consumer

First, run a producer on kafka-1 and write messages to the Message topic:

[@k8s-master1 ~]# kubectl exec -it kafka-1 -n ns-kafka -- /bin/bash
bash-4.4# kafka-console-producer.sh --topic Message --broker-list kafka-0.kafka-service.ns-kafka.svc.cluster.local:9092,kafka-1.kafka-service.ns-kafka.svc.cluster.local:9092,kafka-2.kafka-service.ns-kafka.svc.cluster.local:9092
>This is a test message
>Welcome to Kafka

Then run a consumer on kafka-2 to consume these messages:

[@k8s-master1 ~]# kubectl exec -it kafka-2 -n ns-kafka -- /bin/bash
bash-4.4# kafka-console-consumer.sh --topic Message --bootstrap-server kafka-0.kafka-service.ns-kafka.svc.cluster.local:9092,kafka-1.kafka-service.ns-kafka.svc.cluster.local:9092,kafka-2.kafka-service.ns-kafka.svc.cluster.local:9092 --from-beginning
 
This is a test message
Welcome to Kafka

Messages are produced and consumed normally, which means the Kafka cluster is working correctly.

X. FAQ

1. How to specify Topics to create in the yaml file

Add the following env to the yaml file and the Topics will be created automatically when the Pod starts:

env:
  - name: KAFKA_CREATE_TOPICS
    value: "Topic1:1:3,Topic2:1:1:compact"   

This means Topic1 gets 1 partition and 3 replicas, while Topic2 gets 1 partition and 1 replica with its cleanup.policy set to compact.

Automatic topic creation requires the KAFKA_CREATE_TOPICS variable to be set; the create-topics.sh script (included in the image) then creates the topics based on its value.
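
For reference, the Topic1:1:3 entry corresponds roughly to running the following command by hand (the same tool used in the verification section above):

kafka-topics.sh --create --topic Topic1 --zookeeper zk-inner-service.ns-kafka.svc.cluster.local:2181 --partitions 1 --replication-factor 3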

2. Topic compaction settings do not take effect

See: https://github.com/wurstmeister/kafka-docker/wiki#topic-compaction-does-not-work

 

 

Notes:

  1. During zk deployment, node taints can prevent Pods from being scheduled. Check with kubectl describe node master | grep Taints, and remove the taint with kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-.
  2. It is better to use separate PVCs for zk and kafka.
  3. Every protocol used in KAFKA_ADVERTISED_LISTENERS must also exist in KAFKA_LISTENERS,

     and all listener names must be defined in KAFKA_LISTENER_SECURITY_PROTOCOL_MAP (see the sketch below).
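
A consistent combination of these settings, matching the StatefulSet above, looks like this (a sketch; <pod-ip> and <host-ip> stand for the values injected through the Downward API):

KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_LISTENERS=INTERNAL://:9092,EXTERNAL://:9094
KAFKA_ADVERTISED_LISTENERS=INTERNAL://<pod-ip>:9092,EXTERNAL://<host-ip>:9094
KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL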
