Preface
There are plenty of tutorials on this topic, but they are scattered and inconsistent, so I put together my own write-up. I also ran into a few problems along the way and recorded them here.
Creating the PVs
Normally PVs are created across the cluster and backed by NFS shares. This experiment runs on a single machine, though, so I keep things simple with `hostPath` volumes.
- Create the data directories:

```shell
localhost:pv sean$ pwd
/Users/sean/Software/MiniK8s/zookeeper/pv
localhost:pv sean$ ls
zk1	zk2	zk3
```

(Note: the directory names on disk must match the `hostPath` paths in the PV definitions, which use `zk01`/`zk02`/`zk03`.)
- Prepare the `yml` configuration file for the PVs:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk01
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/sean/Software/MiniK8s/zookeeper/pv/zk01
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk02
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/sean/Software/MiniK8s/zookeeper/pv/zk02
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk03
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/sean/Software/MiniK8s/zookeeper/pv/zk03
  persistentVolumeReclaimPolicy: Recycle
```
- Note that this configuration file carries a `namespace`. Strictly speaking, PersistentVolumes are cluster-scoped, so the `namespace` field on a PV is ignored; it is the PVCs and Pods created later that actually live in `tools`, and you need `-n tools` on those queries to find what you are looking for.
- Also, `pv` here stands for `PersistentVolume`, i.e. Kubernetes's concept of a physical disk.
- Install:

```shell
localhost:zookeeper sean$ kubectl create -f zk-pv.yaml
persistentvolume/k8s-pv-zk01 created
persistentvolume/k8s-pv-zk02 created
persistentvolume/k8s-pv-zk03 created
```

- Check:

```shell
localhost:zookeeper sean$ kubectl get pv -o wide
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
k8s-pv-zk01   5Gi        RWO            Recycle          Available           anything                9s    Filesystem
k8s-pv-zk02   5Gi        RWO            Recycle          Available           anything                8s    Filesystem
k8s-pv-zk03   5Gi        RWO            Recycle          Available           anything                8s    Filesystem
```
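For reference, each of these PVs is later claimed by a PVC that Kubernetes generates automatically from the StatefulSet's `volumeClaimTemplates` below. A sketch of what one generated claim looks like (illustrative only; the name follows the `<template name>-<pod name>` convention, you never apply this by hand):

```yaml
# Illustrative only: PVCs like this are generated by the StatefulSet,
# one per pod (datadir-zk-0, datadir-zk-1, datadir-zk-2).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-zk-0          # <volumeClaimTemplate name>-<pod name>
  namespace: tools
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # a 1Gi request can bind to any Available 5Gi PV above
```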
Installing the cluster instances and networking
- Create the namespace:

```shell
localhost:zookeeper sean$ kubectl get ns
NAME                   STATUS   AGE
default                Active   67m
kube-node-lease        Active   67m
kube-public            Active   67m
kube-system            Active   67m
kubernetes-dashboard   Active   65m
localhost:zookeeper sean$ kubectl create ns tools
namespace/tools created
```

This step was missing from the tutorial I followed, but without it things are guaranteed to fail, so I record it here.
- Write the `yml` manifest:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: tools
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  ports:
    - name: server
      port: 2888
    - name: leader-election
      port: 3888
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: tools
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
    - name: client
      port: 2181
      targetPort: 2181
      nodePort: 31811
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk  # has to match .spec.template.metadata.labels
  serviceName: "zk-hs"
  replicas: 3  # by default is 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk  # has to match .spec.selector.matchLabels
    spec:
      containers:
        - name: zk
          imagePullPolicy: Always
          image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
          resources:
            requests:
              memory: "200Mi"
              cpu: "0.1"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.beta.kubernetes.io/storage-class: "anything"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
```
Note that this `yml` splits into three parts:
- The first builds the in-cluster network for the ensemble, `2888:2888` (server) and `3888:3888` (leader election).
- The second builds the external access path, `2181:31811`. (In the end I still could not reach it from outside the cluster; something to dig into later.)
- The third builds the three actual `pods`.
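As a side note on the first part: the headless Service `zk-hs` is what gives each StatefulSet pod a stable DNS name of the form `<pod>.<service>.<namespace>.svc.cluster.local` (assuming the default `cluster.local` cluster domain), which is how the three servers find each other on 2888/3888. A quick sketch of the peer addresses involved:

```shell
# Print the stable in-cluster peer addresses the ZooKeeper ensemble uses.
# Assumes the default cluster domain "cluster.local".
for i in 0 1 2; do
  echo "zk-$i.zk-hs.tools.svc.cluster.local:2888:3888"
done
```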
- Apply the manifest:

```shell
localhost:zookeeper sean$ kubectl apply -f zk.yaml
service/zk-hs unchanged
service/zk-cs created
poddisruptionbudget.policy/zk-pdb unchanged
statefulset.apps/zk configured
```
Hiccup 1: `The Service "zk-cs" is invalid: spec.ports[0].nodePort: Invalid value: 21811: provided port is not in the valid range. The range of valid ports is 30000-32767`. The port was originally `21811`, which is outside the allowed NodePort range, so I changed it to `31811`.
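The allowed range comes from the API server's `--service-node-port-range` flag, which defaults to 30000-32767 (a cluster may be configured differently). A trivial sketch of the check that was failing:

```shell
# Check whether a port falls inside the default NodePort range 30000-32767.
in_nodeport_range() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

in_nodeport_range 21811 && echo "21811 ok" || echo "21811 rejected"  # rejected
in_nodeport_range 31811 && echo "31811 ok" || echo "31811 rejected"  # ok
```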
Hiccup 2: when resources are tight, the last node ends up Pending.

```shell
localhost:zookeeper sean$ kubectl get pods -n tools
NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   0          3m54s
zk-1   1/1     Running   0          3m54s
zk-2   0/1     Pending   0          3m54s
```
- Verify:

```shell
# list the pods
kubectl get pod -l app=zk -o wide -n tools
localhost:zookeeper sean$ kubectl get pods -n tools
NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   0          59s
zk-1   1/1     Running   0          2m59s
zk-2   1/1     Running   0          3m59s
# check the services
localhost:zookeeper sean$ kubectl get svc -n tools
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
zk-cs   NodePort    10.10x.10x.19x   <none>        2181:31811/TCP      47m
zk-hs   ClusterIP   None             <none>        2888/TCP,3888/TCP   48m
```
```shell
localhost:zookeeper sean$ for i in 0 1 2; do kubectl exec zk-$i -n tools zkServer.sh status; done
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: leader
```
```shell
# inspect pod details
kubectl describe pods
```

Since I never managed to reach the cluster through `31811`, I fell back to forwarding a single pod. (On minikube with the Docker driver, NodePort services are generally not reachable from the host directly, which likely explains this.) The relevant command:

```shell
kubectl port-forward -n tools zk-0 2181:2181 &
```
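If you want all three pods reachable from the host at once, one pattern (a sketch, using the pod names and namespace above) is to forward each pod to its own local port. The loop below just prints the commands so you can inspect them before running:

```shell
# Emit one port-forward command per ZooKeeper pod, each on its own local port
# (2181, 2182, 2183), so every ensemble member can be reached from the host.
NS=tools
for i in 0 1 2; do
  echo "kubectl port-forward -n $NS zk-$i $((2181 + i)):2181 &"
done
```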
- zk commands:

```shell
# connect
./zkCli.sh -timeout 500000 -server 10.111.255.148:2181
```

```
[zk: 127.0.0.1:2181(CONNECTED) 0] ls
[zk: 127.0.0.1:2181(CONNECTED) 1] create /my node
Created /my
[zk: 127.0.0.1:2181(CONNECTED) 6] ls /
[zookeeper, my]
```

The direct attempts against the cluster, by contrast, timed out and failed.
- Timeout:

```
[zk: 10.111.255.148:31811(CONNECTING) 0] ls
[zk: 10.111.255.148:31811(CONNECTING) 1] 2021-03-28 19:13:24,864 [myid:] - WARN [main-SendThread(10.111.255.148:31811):ClientCnxn$SendThread@1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Operation timed out
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
```
- Failure:

```
[zk: 192.168.49.2:2181(CONNECTING) 0] ls
[zk: 192.168.49.2:2181(CONNECTING) 1] ls /
2021-03-28 19:42:11,581 [myid:] - WARN [main-SendThread(192.168.49.2:2181):ClientCnxn$SendThread@1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Operation timed out
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
	at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1500)
	at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:720)
	at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:588)
	at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:360)
	at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:323)
	at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:282)
```
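When a connect hangs like this, it helps to separate plain TCP reachability from ZooKeeper-level problems before digging into the server. A quick probe (a sketch; the IP and port are taken from the failed attempt above, and `nc` flag behavior varies slightly between BSD and GNU netcat):

```shell
# Probe raw TCP reachability first; if this fails, the problem is the
# network path (NodePort/minikube routing), not ZooKeeper itself.
nc -z -w 2 10.111.255.148 31811 && echo reachable || echo unreachable
```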
Others
- `kubectl expose` to expose the service (did not succeed):

```shell
localhost:~ sean$ kubectl expose service zk-0 --port=2181 --target-port=2181 --external-ip=127.0.0.1 --name use-zk
Error from server (NotFound): services "zk-0" not found
localhost:~ sean$ kubectl expose service zk-cs --port=2181 --target-port=2181 --external-ip=127.0.0.1 --name use-zk
Error from server (NotFound): services "zk-cs" not found
localhost:~ sean$ kubectl expose service -n tools zk-cs --port=2181 --target-port=2181 --external-ip=127.0.0.1 --name use-zk
The Service "use-zk" is invalid: spec.externalIPs[0]: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
localhost:~ sean$ kubectl expose service -n tools zk-cs --port=2181 --target-port=2181 --external-ip=10.108.102.198 --name use-zk
service/use-zk exposed
localhost:~ sean$ kubectl get services -n tools
NAME     TYPE        CLUSTER-IP       EXTERNAL-IP      PORT(S)             AGE
use-zk   ClusterIP   10.107.97.21     10.108.102.198   2181/TCP            51s
zk-cs    NodePort    10.108.102.198   <none>           2181:31811/TCP      75m
zk-hs    ClusterIP   None             <none>           2888/TCP,3888/TCP   76m
```
- Network-related:

```shell
localhost:~ sean$ kubectl get svc -n tools
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
zk-cs   NodePort    10.1xx.25x.1x8   <none>        2181:31811/TCP      31m
zk-hs   ClusterIP   None             <none>        2888/TCP,3888/TCP   120m
localhost:~ sean$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h8m
localhost:~ sean$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
minikube   Ready    control-plane,master   3h10m   v1.20.2   192.168.49.2   <none>        Ubuntu 20.04.1 LTS   4.19.76-linuxkit   docker://20.10.3
```
Reference
[1]. Deploying a ZooKeeper cluster on k8s
[2]. Deploying stateful Kafka and ZooKeeper cluster services on Kubernetes (Part 1)
[3]. Running a Kafka cluster on Minikube
[4]. How k8s exposes services to the outside
[5]. Exposing an application with a Service in a clustered k8s deployment