I. Preparation
1. Pull the image
docker pull mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10
2. The ZooKeeper cluster needs storage, so we first prepare PersistentVolumes (PVs). The YAML below creates three PVs, one for each of the PersistentVolumeClaims (PVCs) that the three ZooKeeper pods will create and bind.
3. persistent-volume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"   # no need to create this directory yourself; it is created automatically
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk2
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk3
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
4. Create the PVs with the following command:
kubectl create -f persistent-volume.yaml
5. Check the result. Right after creation the PVs are Available; they will bind once the StatefulSet's PVCs are created in step 7:
[root@k8s-node1 zookeeper]# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
k8s-pv-zk1   1Gi        RWO            Recycle          Available           anything                10s
k8s-pv-zk2   1Gi        RWO            Recycle          Available           anything                10s
k8s-pv-zk3   1Gi        RWO            Recycle          Available           anything                10s
6. Create the ZooKeeper cluster
6.1. zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  type: NodePort
  ports:
    - port: 2181
      targetPort: 2181
      name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: kubernetes-zookeeper
          imagePullPolicy: Always
          image: "mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10"
          resources:
            requests:
              memory: "1Gi"
              cpu: "0.5"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.beta.kubernetes.io/storage-class: "anything"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
7. Create it with the following command:
kubectl create -f zookeeper.yaml
Note: after applying the manifest you may find that none of the ZooKeeper pods start. The logs show a permission error on /var/lib/zookeeper: the hostPath directory is owned by root, while the container runs as uid 1000 (see runAsUser in the securityContext). On each node hosting one of the PVs, change the directory's owner to an ordinary user whose uid/gid match the securityContext:
useradd zookeeper
chown -R zookeeper:zookeeper /var/lib/zookeeper/
(On a fresh node, useradd typically assigns uid/gid 1000; if it does not, chown -R 1000:1000 /var/lib/zookeeper/ achieves the same result.)
8. Check the pod status:
kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE
zk-0   1/1     Running   0          37m   10.2.11.40   192.168.29.182
zk-1   1/1     Running   0          37m   10.2.1.31    192.168.29.176
zk-2   1/1     Running   0          37m   10.2.11.43   192.168.29.183
9. Check the PVs again; the persistent volume claims are now bound:
[root@k8s-node1 zookeeper]# kubectl get pv -o wide
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
k8s-pv-zk1   1Gi        RWO            Recycle          Bound    default/datadir-zk-0   anything                50m
k8s-pv-zk2   1Gi        RWO            Recycle          Bound    default/datadir-zk-2   anything                50m
k8s-pv-zk3   1Gi        RWO            Recycle          Bound    default/datadir-zk-1   anything                50m
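Notice that the claim-to-volume pairing is not ordinal (datadir-zk-1 landed on k8s-pv-zk3). That is expected: the control plane binds each claim to any Available PV that satisfies it. A simplified first-fit sketch (not the real binder, and with matching reduced to capacity, access mode, and storage class) illustrates why any permutation is valid:

```python
# Simplified sketch of PV/PVC binding: a claim binds to any unbound PV that
# satisfies its request. Names mirror the tutorial's PVs and PVCs.

def find_match(pvc, pvs):
    """Return the name of the first unbound PV that satisfies the claim."""
    for pv in pvs:
        if (pv["bound_by"] is None
                and pv["capacity_gi"] >= pvc["request_gi"]
                and pvc["access_mode"] in pv["access_modes"]
                and pv["storage_class"] == pvc["storage_class"]):
            return pv["name"]
    return None

pvs = [
    {"name": f"k8s-pv-zk{i}", "capacity_gi": 1,
     "access_modes": ["ReadWriteOnce"],
     "storage_class": "anything", "bound_by": None}
    for i in (1, 2, 3)
]

# All three PVs are interchangeable, so which PVC lands on which PV is
# arbitrary in the real cluster; first-fit is just one possible outcome.
for ordinal in range(3):
    pvc = {"request_gi": 1, "access_mode": "ReadWriteOnce",
           "storage_class": "anything"}
    matched = find_match(pvc, pvs)
    for pv in pvs:
        if pv["name"] == matched:
            pv["bound_by"] = f"datadir-zk-{ordinal}"
    print(f"datadir-zk-{ordinal} -> {matched}")
```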
10. Check the PVC status:
[root@k8s-node1 zookeeper]# kubectl get pvc -o wide
NAME           STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-zk-0   Bound    k8s-pv-zk1   1Gi        RWO            anything       47m
datadir-zk-1   Bound    k8s-pv-zk3   1Gi        RWO            anything       47m
datadir-zk-2   Bound    k8s-pv-zk2   1Gi        RWO            anything       47m
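The PVC names are not chosen by us: the StatefulSet controller derives them from the volumeClaimTemplate name ("datadir"), the StatefulSet name ("zk"), and the pod ordinal:

```python
# How a StatefulSet names the PVCs it creates from a volumeClaimTemplate:
# <template-name>-<statefulset-name>-<ordinal>
def pvc_name(template, statefulset, ordinal):
    return f"{template}-{statefulset}-{ordinal}"

names = [pvc_name("datadir", "zk", i) for i in range(3)]
print(names)  # ['datadir-zk-0', 'datadir-zk-1', 'datadir-zk-2']
```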
11. Verify the cluster is healthy by checking each server's role:
for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
The output shows one leader and two followers, confirming that the ZooKeeper cluster was created successfully in Kubernetes.
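The one-leader, two-follower split follows from ZooKeeper's quorum rule: a strict majority of servers must agree to elect a leader and commit writes. With 3 servers the quorum is 2, so exactly one server may fail, which is also why the PodDisruptionBudget in the manifest sets maxUnavailable: 1. A quick sketch of the arithmetic:

```python
# ZooKeeper quorum arithmetic: a majority of the ensemble must be up.
def quorum(servers):
    return servers // 2 + 1

def tolerated_failures(servers):
    return servers - quorum(servers)

assert quorum(3) == 2
assert tolerated_failures(3) == 1
# Adding a 4th server does NOT improve fault tolerance (quorum becomes 3),
# which is why ensembles use odd sizes:
assert tolerated_failures(4) == 1
assert tolerated_failures(5) == 2
```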
12. Check the hostnames:
[root@k8s-node1 zookeeper]# for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
zk-0
zk-1
zk-2
13. Check each server's myid:
[root@k8s-node1 zookeeper]# for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3
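The myid values line up with the pod ordinals because the image's startup script (presumably) derives myid from the StatefulSet hostname suffix, offset by 1 since myid must be at least 1. A sketch of that derivation, matching the output above:

```python
# Derive a ZooKeeper myid from a StatefulSet pod hostname such as "zk-2":
# take the ordinal suffix and add 1 (myid must be >= 1).
def myid_from_hostname(hostname):
    ordinal = int(hostname.rsplit("-", 1)[1])
    return ordinal + 1

for host in ("zk-0", "zk-1", "zk-2"):
    print(host, "->", myid_from_hostname(host))  # zk-0 -> 1, zk-1 -> 2, zk-2 -> 3
```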
14. Check the fully qualified domain names:
[root@k8s-node1 zookeeper]# for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
zk-0.zk-hs.default.svc.cluster.local.
zk-1.zk-hs.default.svc.cluster.local.
zk-2.zk-hs.default.svc.cluster.local.
15. Check the generated ZooKeeper configuration:
[root@k8s-node1 ~]# kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.cluster.local.:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local.:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local.:2888:3888
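Every line of this zoo.cfg maps directly onto a start-zookeeper flag from the StatefulSet, and the server.N lines are built from the headless-service DNS names seen in step 14. The sketch below is an illustration, not the image's actual script; it uses the standard autopurge.purgeInterval key (the captured output's "purgeInteval" is a spelling quirk of the image) and omits the trailing dot on the domain names:

```python
# Illustrative generator for a zoo.cfg like the one above, driven by the
# start-zookeeper flags in the StatefulSet manifest. Assumption: the real
# script works similarly; this is a sketch, not its source.
def render_zoo_cfg(flags, servers, domain="zk-hs.default.svc.cluster.local"):
    lines = [
        f"clientPort={flags['client_port']}",
        f"dataDir={flags['data_dir']}",
        f"dataLogDir={flags['data_log_dir']}",
        f"tickTime={flags['tick_time']}",
        f"initLimit={flags['init_limit']}",
        f"syncLimit={flags['sync_limit']}",
        f"maxClientCnxns={flags['max_client_cnxns']}",
        f"minSessionTimeout={flags['min_session_timeout']}",
        f"maxSessionTimeout={flags['max_session_timeout']}",
        f"autopurge.snapRetainCount={flags['snap_retain_count']}",
        f"autopurge.purgeInterval={flags['purge_interval']}",
    ]
    # One server.N line per ensemble member, addressed via the headless service:
    for ordinal in range(servers):
        lines.append(
            f"server.{ordinal + 1}=zk-{ordinal}.{domain}:"
            f"{flags['server_port']}:{flags['election_port']}"
        )
    return "\n".join(lines)

flags = {
    "client_port": 2181, "data_dir": "/var/lib/zookeeper/data",
    "data_log_dir": "/var/lib/zookeeper/data/log", "tick_time": 2000,
    "init_limit": 10, "sync_limit": 5, "max_client_cnxns": 60,
    "min_session_timeout": 4000, "max_session_timeout": 40000,
    "snap_retain_count": 3, "purge_interval": 12,
    "server_port": 2888, "election_port": 3888,
}
print(render_zoo_cfg(flags, servers=3))
```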
16. Test data consistency across the cluster. Write a znode on zk-1:
[root@k8s-node1 ~]# kubectl exec -ti zk-1 -- bash
zookeeper@zk-1:/$ cd /opt/zookeeper
zookeeper@zk-1:/opt/zookeeper$ cd bin/
zookeeper@zk-1:/opt/zookeeper/bin$ zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 1] create /hello world
Created /hello
[zk: localhost:2181(CONNECTED) 2] get /hello
world
cZxid = 0x200000079
ctime = Fri Nov 22 01:24:40 UTC 2019
mZxid = 0x200000079
mtime = Fri Nov 22 01:24:40 UTC 2019
pZxid = 0x200000079
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
17. Check that the data replicated to the other nodes:
17.1. For example, read it back on zk-0:
[root@k8s-node1 ~]# kubectl exec -ti zk-0 -- bash
zookeeper@zk-0:/$ cd /opt/zookeeper/bin/
zookeeper@zk-0:/opt/zookeeper/bin$ zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, consumers, hello, config]
[zk: localhost:2181(CONNECTED) 1] ls /hello
[]
[zk: localhost:2181(CONNECTED) 2] get /hello
world
cZxid = 0x200000079
ctime = Fri Nov 22 01:24:40 UTC 2019
mZxid = 0x200000079
mtime = Fri Nov 22 01:24:40 UTC 2019
pZxid = 0x200000079
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0