Deploying the Zookeeper Cluster
The Zookeeper cluster is deployed following the official Kubernetes tutorial "Running ZooKeeper". A few adjustments were made to the official configuration; the final manifests are shown below.
apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  labels:
    app: zk-headless
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  - port: 2181
    name: client
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-config
data:
  ensemble: "zk-0;zk-1;zk-2"
  jvm.heap: "2G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "1"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-budget
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8szk
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8szk:v1
        resources:
          requests:
            memory: "4Gi"
            cpu: "1"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_ENSEMBLE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: ensemble
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: sync
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
      storageClassName: nfs-storage
The end of the manifest specifies which StorageClass is used to create the PVs; the official example does not set one. For convenience, NFS is used as the storage backend here. For how to use NFS as a Kubernetes backend with dynamic provisioning, see the article on implementing an NFS-backed dynamic StorageClass for Kubernetes.
The rest of the configuration needs essentially no changes; just apply the manifests. Once everything is up, clients inside the Kubernetes cluster can reach Zookeeper at the following three addresses.
zk-0.zk-headless.default.svc:2181
zk-1.zk-headless.default.svc:2181
zk-2.zk-headless.default.svc:2181
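Assuming the manifests above are saved as zookeeper.yaml (the filename is illustrative), deployment and a basic health check could look like this; the check assumes nc is available inside the k8szk image:

```shell
# Apply the manifests and wait for all three pods to come up.
kubectl apply -f zookeeper.yaml
kubectl rollout status statefulset/zk

# Each pod should answer "imok" to ZooKeeper's "ruok" four-letter command.
for i in 0 1 2; do
  kubectl exec zk-$i -- sh -c 'echo ruok | nc localhost 2181'
done
```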
Deploying the Kafka Cluster
The Kafka cluster deployment on Kubernetes follows the cluster-deployment part of the blog post "Kafka on Kubernetes".
The configuration, adjusted to our needs, is as follows. First create a Service; as with Zookeeper, a headless service is used.
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  ports:
  - name: broker
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
  clusterIP: None
Since Zookeeper above was deployed in the default namespace, the namespace specified in the original manifest has been removed.
Next, create the main manifest for the Kafka cluster. The modified configuration is as follows.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 3
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-headless
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - command:
        - sh
        - -exc
        - |
          unset KAFKA_PORT && \
          export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
          export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092 && \
          exec /etc/confluent/docker/run
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: KAFKA_HEAP_OPTS
          value: -Xmx1G -Xms1G
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zk-0.zk-headless.default.svc:2181,zk-1.zk-headless.default.svc:2181,zk-2.zk-headless.default.svc:2181
        - name: KAFKA_LOG_DIRS
          value: /opt/kafka/data/logs
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "3"
        - name: KAFKA_JMX_PORT
          value: "9999"
        #image: confluentinc/cp-kafka:4.1.2-2
        image: confluentinc/cp-kafka:3.3.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - sh
            - -ec
            - /usr/bin/jps | /bin/grep -q SupportedKafka
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: kafka-broker
        ports:
        - containerPort: 9092
          name: kafka
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: kafka
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/kafka/data
          name: datadir
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
  updateStrategy:
    type: OnDelete
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: nfs-storage
The specified namespace was removed.
The StorageClass name was changed; NFS is again used as the storage backend.
The Kafka image was changed from the default cp-kafka:4.1.2-2 to cp-kafka:3.3.0, which corresponds to Kafka 0.11.0. Change this according to your own needs.
The default JMX port was changed from 5555 to 9999, because Kafka-Manager connects on port 9999 by default. The port is set through the KAFKA_JMX_PORT environment variable; adjust it as needed.
The Zookeeper connection address is specified through the KAFKA_ZOOKEEPER_CONNECT environment variable, set to the following value.
zk-0.zk-headless.default.svc:2181,zk-1.zk-headless.default.svc:2181,zk-2.zk-headless.default.svc:2181
Note: any Kafka server configuration option can be overridden this way; the environment variable format is KAFKA_ followed by the option name, uppercased with dots replaced by underscores, for example:
KAFKA_ZOOKEEPER_CONNECT
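The broker-id derivation in the startup command above, and the option-name-to-environment-variable convention, can be sketched in plain shell (the hostname value is illustrative):

```shell
# StatefulSet pods are named <statefulset>-<ordinal>; ${HOSTNAME##*-}
# strips everything up to the last '-', leaving the ordinal as broker id.
HOSTNAME=kafka-2   # illustrative value; set by Kubernetes in a real pod
echo "KAFKA_BROKER_ID=${HOSTNAME##*-}"            # prints KAFKA_BROKER_ID=2

# A server.properties key maps to an env var by uppercasing it and
# replacing dots with underscores, then prefixing KAFKA_.
prop="zookeeper.connect"
echo "KAFKA_$(echo "$prop" | tr 'a-z.' 'A-Z_')"   # prints KAFKA_ZOOKEEPER_CONNECT
```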
Testing the Kafka Cluster
The Kafka cluster can be tested by connecting a Kafka client from inside the cluster; see the "Kafka Test Pod" section of the blog referenced for the Kafka deployment.
First create a test Pod with the following manifest.
apiVersion: v1
kind: Pod
metadata:
  name: kafka-test-client
spec:
  containers:
  - command:
    - sh
    - -c
    - exec tail -f /dev/null
    image: confluentinc/cp-kafka:4.1.2-2
    imagePullPolicy: IfNotPresent
    name: kafka
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
Once the pod is running, start testing. First, check which topics currently exist in the Kafka cluster.
kubectl exec -ti kafka-test-client -- /usr/bin/kafka-topics --zookeeper zk-0.zk-headless.default.svc:2181 --list
With the Kafka version used in this document, there are no topics by default. Next, create one.
kubectl exec -ti kafka-test-client -- /usr/bin/kafka-topics --zookeeper zk-0.zk-headless.default.svc:2181 --topic test-01 --create --partitions 1 --replication-factor 3
Listing the topics again will now show test-01. You can also inspect the backend storage: under each Kafka instance's data directory there is a test-01 directory.
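For an end-to-end check, a message can be produced to and consumed from test-01 through the test pod; this sketch assumes the console tools shipped in the cp-kafka image and the headless service name defined above:

```shell
# Produce one message to test-01 via the broker headless service.
kubectl exec -ti kafka-test-client -- sh -c \
  'echo hello | /usr/bin/kafka-console-producer --broker-list kafka-headless.default.svc:9092 --topic test-01'

# Consume it back from the beginning (Ctrl-C to stop).
kubectl exec -ti kafka-test-client -- /usr/bin/kafka-console-consumer \
  --bootstrap-server kafka-headless.default.svc:9092 --topic test-01 --from-beginning
```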
Deploying Kafka-Manager
Kafka-Manager is a powerful Kafka cluster management tool; here it is likewise deployed as a container.
The official Docker image is available at:
https://hub.docker.com/r/kafkamanager/kafka-manager
Define the Kubernetes manifest as follows.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app-name: kafka-manager
  template:
    metadata:
      name: kafka-manager
      labels:
        app-name: kafka-manager
    spec:
      containers:
      - name: kafka
        image: kafkamanager/kafka-manager
        env:
        - name: ZK_HOSTS
          value: "zk-0.zk-headless.default.svc:2181"
        - name: KAFKA_MANAGER_AUTH_ENABLED
          value: "true"
        - name: KAFKA_MANAGER_PASSWORD
          value: "gogenius123"
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-manager
  namespace: default
spec:
  selector:
    app-name: kafka-manager
  type: ClusterIP
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
Mainly, only a few environment variables need to be changed.
ZK_HOSTS: the client connection address of any node in the Zookeeper cluster.
KAFKA_MANAGER_AUTH_ENABLED: enables authentication; authentication is disabled by default.
KAFKA_MANAGER_PASSWORD: sets the password; if unset, the default is password.
More available variables are listed at the link below.
https://github.com/yahoo/kafka-manager/blob/master/conf/application.conf
Once that is done, simply configure an Ingress to expose the service externally.
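A minimal Ingress sketch is shown below, using the same API era as the Deployment above; the host name is a placeholder, so adapt it to your ingress controller and DNS:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kafka-manager
  namespace: default
spec:
  rules:
  - host: kafka-manager.example.com   # placeholder host
    http:
      paths:
      - backend:
          serviceName: kafka-manager
          servicePort: 9000
```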