1. Overview
This section is adapted from: K8S部署Kafka集群 - 部署笔记
Kafka and ZooKeeper are two typical stateful clustered services. First, both need persistent storage to keep their state; second, every Kafka and ZooKeeper instance needs its own instance ID (broker.id for Kafka, myid for ZooKeeper) to identify it as a member of the cluster, and these IDs are used when the cluster nodes talk to each other.
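The Kafka manifest in section 2.4 derives this ID from the Pod's ordinal with a shell parameter expansion. A minimal illustration of what that expansion does (the variable assignment is only for demonstration):

# for a Pod named kafka-2, ${HOSTNAME##*-} strips everything up to the last '-'
HOSTNAME=kafka-2
echo "broker.id=${HOSTNAME##*-}"   # prints broker.id=2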
Deploying this kind of service means solving two big problems: keeping the state, and managing the cluster (i.e. managing multiple service instances). The StatefulSet in Kubernetes makes it much easier to deploy and manage stateful clustered services. In general, a stateful clustered service is deployed with three mechanisms:
- An Init Container performs the cluster initialization work.
- A Headless Service maintains a stable membership relationship within the cluster.
- PersistentVolume and PersistentVolumeClaim provide network storage that persists the data.
Therefore, stateful services such as Kafka and ZooKeeper cannot be deployed in a Kubernetes cluster with a Deployment; they must be deployed with a StatefulSet. "Stateful" simply means the service has data that must be persisted, such as logs, database data, or service state.
StatefulSet use cases:
- Stable persistent storage: a Pod can still reach the same persisted data after being rescheduled; implemented with PVCs.
- Stable network identity: a Pod keeps the same PodName and HostName after being rescheduled; implemented with a Headless Service (a Service without a Cluster IP).
- Ordered deployment and ordered scale-out: Pods are ordered, and deployment or scale-out proceeds strictly in the defined order (from 0 to N-1; all earlier Pods must be Running and Ready before the next Pod starts); implemented with init containers.
- Ordered scale-in and ordered deletion (from N-1 down to 0).
A StatefulSet consists of:
- A Headless Service that defines the network identity (DNS domain).
- volumeClaimTemplates that create the PersistentVolumeClaims (and, through them, the backing PersistentVolumes).
- The StatefulSet object itself, which defines the application.
The DNS name of each Pod in a StatefulSet has the form
statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local, where:
- statefulSetName is the name of the StatefulSet.
- 0..N-1 is the ordinal of the Pod, running from 0 to N-1.
- serviceName is the name of the Headless Service.
- namespace is the namespace the service runs in; the Headless Service and the StatefulSet must be in the same namespace.
- svc.cluster.local is the cluster root domain of the Kubernetes cluster.
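As a concrete illustration, take the ZooKeeper StatefulSet from section 2.2 (name zk, Headless Service zk-hs, 3 replicas). Assuming it is deployed in the prod-zmj-wms namespace that the Kafka manifest in section 2.4 points at, the three Pods resolve as zk-0.zk-hs.prod-zmj-wms.svc.cluster.local, zk-1.zk-hs.prod-zmj-wms.svc.cluster.local and zk-2.zk-hs.prod-zmj-wms.svc.cluster.local. The names can be checked from inside the cluster with a throwaway Pod (the busybox image and the namespace here are assumptions):

kubectl run dns-test -n prod-zmj-wms --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup zk-0.zk-hs.prod-zmj-wms.svc.cluster.local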
2. Cluster Deployment
This section is adapted from: k8s-7: kafka+zookeeper的单节点与集群的持久化
2.1 Adding the ZooKeeper PersistentVolumes
kind: PersistentVolume
apiVersion: v1
metadata:
name: k8s-pv-zk0
annotations:
volume.beta.kubernetes.io/storage-class: "wms-zook"
labels:
type: local
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.XX.XX
path: "/mnt/nas/zmj_pord/nfs/wms-zookeeper/pv0"
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: k8s-pv-zk1
annotations:
volume.beta.kubernetes.io/storage-class: "wms-zook"
labels:
type: local
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.XX.XX
path: "/mnt/nas/zmj_pord/nfs/wms-zookeeper/pv1"
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: k8s-pv-zk2
annotations:
volume.beta.kubernetes.io/storage-class: "wms-zook"
labels:
type: local
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.XX.XX
path: "/mnt/nas/zmj_pord/nfs/wms-zookeeper/pv2"
2.2 Deploying the ZooKeeper cluster
---
apiVersion: v1
kind: Service
metadata:
name: zk-hs
labels:
app: zk
spec:
ports:
- port: 2888
name: server
- port: 3888
name: leader-election
clusterIP: None
selector:
app: zk
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
labels:
app: zk
spec:
type: NodePort
ports:
- port: 2181
name: client
selector:
app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zk
spec:
selector:
matchLabels:
app: zk
serviceName: zk-hs
replicas: 3
updateStrategy:
type: RollingUpdate
podManagementPolicy: Parallel
template:
metadata:
labels:
app: zk
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zk
topologyKey: "kubernetes.io/hostname"
containers:
- name: kubernetes-zookeeper
imagePullPolicy: Always
image: "mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10"
resources:
requests:
memory: "1Gi"
cpu: "0.5"
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: server
- containerPort: 3888
name: leader-election
command:
- sh
- -c
- "start-zookeeper \
--servers=3 \
--data_dir=/var/lib/zookeeper/data \
--data_log_dir=/var/lib/zookeeper/data/log \
--conf_dir=/opt/zookeeper/conf \
--client_port=2181 \
--election_port=3888 \
--server_port=2888 \
--tick_time=2000 \
--init_limit=10 \
--sync_limit=5 \
--heap=512M \
--max_client_cnxns=60 \
--snap_retain_count=3 \
--purge_interval=12 \
--max_session_timeout=40000 \
--min_session_timeout=4000 \
--log_level=INFO"
readinessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 10
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 10
timeoutSeconds: 5
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper
securityContext:
runAsUser: 1000
fsGroup: 1000
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.beta.kubernetes.io/storage-class: "wms-zook"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
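Once the three zk Pods are Running and Ready, membership can be sanity-checked. A rough sketch, assuming the kubernetes-zookeeper image above keeps its data under /var/lib/zookeeper/data and ships zkCli.sh on the PATH (both hold for the image used in the upstream Kubernetes ZooKeeper tutorial, but worth verifying); add -n <namespace> if the StatefulSet was not deployed in the current namespace:

# each Pod should report a distinct myid derived from its ordinal (1, 2, 3)
for i in 0 1 2; do kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
# write a value on one member and read it back from another to confirm replication
kubectl exec zk-0 -- zkCli.sh create /hello world
kubectl exec zk-1 -- zkCli.sh get /hello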
2.3 Adding the Kafka PersistentVolumes
apiVersion: v1
kind: PersistentVolume
metadata:
name: k8s-pv-kafka0
namespace: tools
labels:
app: kafka
annotations:
volume.beta.kubernetes.io/storage-class: "wms-kafka"
spec:
capacity:
storage: 5G
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.XX.XX
path: "/mnt/nas/zmj_pord/nfs/wms-kafk/pv0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: k8s-pv-kafka1
namespace: tools
labels:
app: kafka
annotations:
volume.beta.kubernetes.io/storage-class: "wms-kafka"
spec:
capacity:
storage: 5G
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.XX.XX
path: "/mnt/nas/zmj_pord/nfs/wms-kafka/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: k8s-pv-kafka2
namespace: tools
labels:
app: kafka
annotations:
volume.beta.kubernetes.io/storage-class: "wms-kafka"
spec:
capacity:
storage: 5G
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.XX.XX
path: "/mnt/nas/zmj_pord/nfs/wms-kafka/pv2"
2.4 Deploying the Kafka cluster
---
apiVersion: v1
kind: Service
metadata:
name: kafka-hs
labels:
app: kafka
spec:
ports:
- port: 9092
name: server
clusterIP: None
selector:
app: kafka
---
apiVersion: v1
kind: Service
metadata:
name: kafka-cs
labels:
app: kafka
spec:
selector:
app: kafka
type: NodePort
ports:
- name: client
port: 9092
# nodePort: 19092
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kafka-pdb
spec:
selector:
matchLabels:
app: kafka
minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kafka
spec:
serviceName: kafka-hs
replicas: 3
selector:
matchLabels:
app: kafka
template:
metadata:
labels:
app: kafka
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- kafka
topologyKey: "kubernetes.io/hostname"
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zk
topologyKey: "kubernetes.io/hostname"
terminationGracePeriodSeconds: 300
containers:
- name: kafka
# imagePullPolicy: IfNotPresent
image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8skafka:v1
#image: wurstmeister/kafka
command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
resources:
requests:
memory: "200M"
cpu: 500m
ports:
- containerPort: 9092
name: server
command:
- sh
- -c
- "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
--override listeners=PLAINTEXT://:9092 \
--override zookeeper.connect=zk-0.zk-hs.prod-zmj-wms.svc.cluster.local:2181,zk-1.zk-hs.prod-zmj-wms.svc.cluster.local:2181,zk-2.zk-hs.prod-zmj-wms.svc.cluster.local:2181 \
--override log.dir=/var/lib/kafka \
--override auto.create.topics.enable=true \
--override auto.leader.rebalance.enable=true \
--override background.threads=10 \
--override compression.type=producer \
--override delete.topic.enable=true \
--override leader.imbalance.check.interval.seconds=300 \
--override leader.imbalance.per.broker.percentage=10 \
--override log.flush.interval.messages=9223372036854775807 \
--override log.flush.offset.checkpoint.interval.ms=60000 \
--override log.flush.scheduler.interval.ms=9223372036854775807 \
--override log.retention.bytes=-1 \
--override log.retention.hours=168 \
--override log.roll.hours=168 \
--override log.roll.jitter.hours=0 \
--override log.segment.bytes=1073741824 \
--override log.segment.delete.delay.ms=60000 \
--override message.max.bytes=1000012 \
--override min.insync.replicas=1 \
--override num.io.threads=8 \
--override num.network.threads=3 \
--override num.recovery.threads.per.data.dir=1 \
--override num.replica.fetchers=1 \
--override offset.metadata.max.bytes=4096 \
--override offsets.commit.required.acks=-1 \
--override offsets.commit.timeout.ms=5000 \
--override offsets.load.buffer.size=5242880 \
--override offsets.retention.check.interval.ms=600000 \
--override offsets.retention.minutes=1440 \
--override offsets.topic.compression.codec=0 \
--override offsets.topic.num.partitions=50 \
--override offsets.topic.replication.factor=3 \
--override offsets.topic.segment.bytes=104857600 \
--override queued.max.requests=500 \
--override quota.consumer.default=9223372036854775807 \
--override quota.producer.default=9223372036854775807 \
--override replica.fetch.min.bytes=1 \
--override replica.fetch.wait.max.ms=500 \
--override replica.high.watermark.checkpoint.interval.ms=5000 \
--override replica.lag.time.max.ms=10000 \
--override replica.socket.receive.buffer.bytes=65536 \
--override replica.socket.timeout.ms=30000 \
--override request.timeout.ms=30000 \
--override socket.receive.buffer.bytes=102400 \
--override socket.request.max.bytes=104857600 \
--override socket.send.buffer.bytes=102400 \
--override unclean.leader.election.enable=true \
--override zookeeper.session.timeout.ms=6000 \
--override zookeeper.set.acl=false \
--override broker.id.generation.enable=true \
--override connections.max.idle.ms=600000 \
--override controlled.shutdown.enable=true \
--override controlled.shutdown.max.retries=3 \
--override controlled.shutdown.retry.backoff.ms=5000 \
--override controller.socket.timeout.ms=30000 \
--override default.replication.factor=1 \
--override fetch.purgatory.purge.interval.requests=1000 \
--override group.max.session.timeout.ms=300000 \
--override group.min.session.timeout.ms=6000 \
--override inter.broker.protocol.version=0.10.2-IV0 \
--override log.cleaner.backoff.ms=15000 \
--override log.cleaner.dedupe.buffer.size=134217728 \
--override log.cleaner.delete.retention.ms=86400000 \
--override log.cleaner.enable=true \
--override log.cleaner.io.buffer.load.factor=0.9 \
--override log.cleaner.io.buffer.size=524288 \
--override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
--override log.cleaner.min.cleanable.ratio=0.5 \
--override log.cleaner.min.compaction.lag.ms=0 \
--override log.cleaner.threads=1 \
--override log.cleanup.policy=delete \
--override log.index.interval.bytes=4096 \
--override log.index.size.max.bytes=10485760 \
--override log.message.timestamp.difference.max.ms=9223372036854775807 \
--override log.message.timestamp.type=CreateTime \
--override log.preallocate=false \
--override log.retention.check.interval.ms=300000 \
--override max.connections.per.ip=2147483647 \
--override num.partitions=1 \
--override producer.purgatory.purge.interval.requests=1000 \
--override replica.fetch.backoff.ms=1000 \
--override replica.fetch.max.bytes=1048576 \
--override replica.fetch.response.max.bytes=10485760 \
--override reserved.broker.max.id=1000 "
env:
- name: KAFKA_HEAP_OPTS
value: "-Xmx300M -Xms200M"
- name: KAFKA_OPTS
value: "-Dlogging.level=INFO"
volumeMounts:
- name: datadir
mountPath: /var/lib/kafka
readinessProbe:
tcpSocket:
port: 9092
timeoutSeconds: 5
initialDelaySeconds: 20
# exec:
# command:
# - sh
# - -c
# - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=0.0.0.0:9092"
securityContext:
runAsUser: 1000
fsGroup: 1000
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.beta.kubernetes.io/storage-class: "wms-kafka"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5G
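Once the kafka Pods are Ready, the cluster can be smoke-tested with the standard console tools. A minimal sketch, assuming kafka-topics.sh, kafka-console-producer.sh and kafka-console-consumer.sh are on the PATH inside the image (they are in a stock Kafka distribution) and that the StatefulSet runs in the prod-zmj-wms namespace referenced by zookeeper.connect above:

# create a replicated test topic (the --zookeeper flag matches this Kafka 0.10.x era image)
kubectl exec -n prod-zmj-wms kafka-0 -- kafka-topics.sh --create --topic smoke-test \
  --partitions 1 --replication-factor 3 \
  --zookeeper zk-0.zk-hs.prod-zmj-wms.svc.cluster.local:2181
# produce one message on one broker ...
kubectl exec -n prod-zmj-wms kafka-0 -- sh -c \
  'echo hello | kafka-console-producer.sh --broker-list localhost:9092 --topic smoke-test'
# ... and consume it from another
kubectl exec -n prod-zmj-wms kafka-1 -- kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 --topic smoke-test --from-beginning --max-messages 1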
3. Data Mounting: PersistentVolume (PV)
This section is adapted from: Kubernetes 进阶 - 存储
A few concepts first:
- Pod: roughly a host or virtual machine; it can hold multiple containers.
- Container: a single application, such as filebeat or kafka.
- Volume: a file directory that the containers in a Pod can access; it holds the data those containers need to share or access, and it is mounted onto a directory or file of the application (i.e. at the mount point).
- Mount point: the concrete directory or file inside the application.
- PersistentVolume (PV): one kind of volume, representing a piece of persistent storage.
- PersistentVolumeClaim (PVC): a user's request for storage, which gets bound to a PersistentVolume.
- StorageClass: describes and manages a class of PersistentVolumes (a group of volumes backed by the same kind of storage); with a provisioner it can also create PersistentVolumes on demand.
The above is purely my own understanding; corrections are welcome if anything is wrong or imprecise.
See Kubernetes 进阶 - 存储 for the detailed explanation.
The relationship diagram is as follows (from Kubernetes 进阶 - 存储):
The volume declares the type and path of the external data, and that external data is mounted at the mount point through the volume (keeping the external data and the data inside the container in sync).
The binding flow between a PersistentVolume and a PersistentVolumeClaim is as shown above.
There are two ways to bind them:
- Create a PersistentVolume, create a PersistentVolumeClaim, and bind them to each other (this static approach is what sections 2.1 through 2.3 above do).
- Create only the PersistentVolumeClaim and specify a StorageClass; the PersistentVolume is then created automatically (dynamic provisioning), as sketched below.
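A minimal sketch of the dynamic approach, assuming the cluster already has a StorageClass with a working provisioner (the StorageClass name nfs-dynamic and the claim name demo-claim are placeholders, not taken from the manifests above):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-claim                  # hypothetical claim name
spec:
  storageClassName: nfs-dynamic     # placeholder; must name an existing StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
# once applied, the provisioner creates a matching PV automatically and binds it to this claim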