Kubernetes storage resources
PersistentVolume (PV): a persistent volume, an abstraction over a piece of storage that lets storage be managed as a resource of the cluster.
PersistentVolumeClaim (PVC): a persistent volume claim, in which the user declares the storage capacity needed without having to care how the backend storage is implemented.
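As a minimal illustration of the two objects (a sketch only; the names, hostPath, and size are arbitrary and not part of this deployment), a statically provisioned PV and a PVC that binds to it look like:

```yaml
# Hypothetical static PV/PVC pair for illustration (dynamic provisioning,
# used later in this article, creates the PV automatically instead).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The claim only states capacity and access mode; the control plane matches it to a PV with compatible attributes.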
Environment: Kubernetes cluster v1.22, 1 master and 2 nodes, each with an extra unmounted, unused disk /dev/sdb.
When the heketi binary package is used, the GlusterFS cluster is deployed by default as a DaemonSet whose pods run on nodes carrying a specific label.
Goal of this article: use GlusterFS to provide persistent storage for Kubernetes.
1. Environment preparation
Unless otherwise noted, run all commands on the k8s-master node.
1) Install the GlusterFS client on all nodes
yum install glusterfs glusterfs-fuse -y
2) Load the required kernel modules on all nodes
Load them manually:
modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
Load them automatically at boot:
cat >/etc/sysconfig/modules/glusterfs.modules <<'EOF'
#!/bin/bash
# Note the quoted heredoc delimiter: it keeps ${kernel_module} and $? from
# being expanded when the file is written, so they expand only at boot time.
for kernel_module in dm_snapshot dm_mirror dm_thin_pool; do
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done
EOF
chmod +x /etc/sysconfig/modules/glusterfs.modules
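Before installing the file, you can sanity-check the script's syntax without executing it. A sketch; the /tmp path is arbitrary for illustration:

```shell
# Write the same module-loading script to a temp location and parse it with
# bash -n (syntax check only, nothing is executed or modprobe'd).
cat > /tmp/glusterfs.modules <<'EOF'
#!/bin/bash
for kernel_module in dm_snapshot dm_mirror dm_thin_pool; do
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1 && /sbin/modprobe ${kernel_module}
done
EOF
bash -n /tmp/glusterfs.modules && echo "syntax OK"
```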
2. Download heketi, the GlusterFS cluster management tool
wget https://github.com/heketi/heketi/releases/download/v10.0.0/heketi-client-v10.0.0.linux.amd64.tar.gz
Note: keep this client at the same major version as the heketi deployed in the Kubernetes cluster; with mismatched versions, a known pitfall is that the cluster is created successfully but volumes cannot be created afterwards.
Versions used in this article: Heketi v10.0.0-78-gc59b35b6 (using go: go1.16.6) and heketi-cli v10.4.0-release-10.
Directory layout after extraction; unless otherwise noted, the contents need no changes.
tar xf heketi-client-v10.0.0.linux.amd64.tar.gz -C /opt/
tree /opt/heketi-client/
/opt/heketi-client/
├── bin
│   └── heketi-cli
└── share
    └── heketi
        ├── kubernetes
        │   ├── glusterfs-daemonset.json
        │   ├── heketi-bootstrap.json
        │   ├── heketi-deployment.json
        │   ├── heketi.json
        │   ├── heketi-service-account.json
        │   ├── README.md
        │   └── topology-sample.json
        ├── openshift
        │   └── templates
        │       ├── deploy-heketi-template.json
        │       ├── glusterfs-template.json
        │       ├── heketi-template.json
        │       └── README.md
        └── topology-sample.json
3. Deploy GlusterFS from the kubernetes directory
Label the nodes:
kubectl label node k8s-master storagenode=glusterfs
kubectl label node k8s-node1 storagenode=glusterfs
kubectl label node k8s-node2 storagenode=glusterfs
The files in the package are JSON; below is an equivalent YAML that can be used directly.
kubectl apply -f glusterfs-daemonset.yaml
#glusterfs-daemonset.yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: deployment
  annotations:
    description: GlusterFS Daemon Set
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs-node: daemonset
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: daemonset
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
        - image: 'gluster/gluster-centos:latest'
          imagePullPolicy: IfNotPresent
          name: glusterfs
          volumeMounts:
            - name: glusterfs-heketi
              mountPath: /var/lib/heketi
            - name: glusterfs-run
              mountPath: /run
            - name: glusterfs-lvm
              mountPath: /run/lvm
            - name: glusterfs-etc
              mountPath: /etc/glusterfs
            - name: glusterfs-logs
              mountPath: /var/log/glusterfs
            - name: glusterfs-config
              mountPath: /var/lib/glusterd
            - name: glusterfs-dev
              mountPath: /dev
            - name: glusterfs-cgroup
              mountPath: /sys/fs/cgroup
          securityContext:
            capabilities: {}
            privileged: true
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
                - /bin/bash
                - '-c'
                - systemctl status glusterd.service
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
                - /bin/bash
                - '-c'
                - systemctl status glusterd.service
      volumes:
        - name: glusterfs-heketi
          hostPath:
            path: /var/lib/heketi
        - name: glusterfs-run
        - name: glusterfs-lvm
          hostPath:
            path: /run/lvm
        - name: glusterfs-etc
          hostPath:
            path: /etc/glusterfs
        - name: glusterfs-logs
          hostPath:
            path: /var/log/glusterfs
        - name: glusterfs-config
          hostPath:
            path: /var/lib/glusterd
        - name: glusterfs-dev
          hostPath:
            path: /dev
        - name: glusterfs-cgroup
          hostPath:
            path: /sys/fs/cgroup
Check with kubectl get pods that the DaemonSet pods deployed successfully.
4. Deploy heketi
- Create the ServiceAccount:
kubectl apply -f heketi-service-account.json
- Create a ClusterRoleBinding that binds the edit ClusterRole to heketi-service-account in the default namespace:
kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
- Create a secret whose content is the heketi.json file; in heketi.json, change the admin account's key under the jwt section to admin:
kubectl create secret generic heketi-config-secret --from-file=./heketi.json
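The key change in heketi.json can be scripted. This sketch uses a stripped-down sample of the jwt section (the real heketi.json has more fields, and "My Secret" as the shipped default key is an assumption here):

```shell
# Minimal stand-in for heketi.json's jwt section; set the admin key to
# "admin" in place before creating the secret from the edited file.
cat > /tmp/heketi.json <<'EOF'
{
  "jwt": {
    "admin": {
      "key": "My Secret"
    }
  }
}
EOF
sed -i 's/"key": "My Secret"/"key": "admin"/' /tmp/heketi.json
grep '"key"' /tmp/heketi.json
```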
- Deploy the heketi service:
kubectl create -f heketi-bootstrap.yaml
#heketi-bootstrap.yaml
kind: List
apiVersion: v1
items:
  - kind: Service
    apiVersion: v1
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-service
        deploy-heketi: support
      annotations:
        description: Exposes Heketi Service
    spec:
      selector:
        name: deploy-heketi
      ports:
        - name: deploy-heketi
          port: 8080
          targetPort: 8080
  - kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-deployment
        deploy-heketi: deployment
      annotations:
        description: Defines how to deploy Heketi
    spec:
      replicas: 1
      selector:
        matchLabels:
          glusterfs: heketi-pod
          deploy-heketi: pod
      template:
        metadata:
          name: deploy-heketi
          labels:
            name: deploy-heketi
            glusterfs: heketi-pod
            deploy-heketi: pod
        spec:
          serviceAccountName: heketi-service-account
          containers:
            - image: 'heketi/heketi:dev'
              imagePullPolicy: Always
              name: deploy-heketi
              env:
                - name: HEKETI_EXECUTOR
                  value: kubernetes
                - name: HEKETI_DB_PATH
                  value: /var/lib/heketi/heketi.db
                - name: HEKETI_FSTAB
                  value: /var/lib/heketi/fstab
                - name: HEKETI_SNAPSHOT_LIMIT
                  value: '14'
                - name: HEKETI_KUBE_GLUSTER_DAEMONSET
                  value: 'y'
              ports:
                - containerPort: 8080
              volumeMounts:
                - name: db
                  mountPath: /var/lib/heketi
                - name: config
                  mountPath: /etc/heketi
              readinessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 3
                httpGet:
                  path: /hello
                  port: 8080
              livenessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 30
                httpGet:
                  path: /hello
                  port: 8080
          volumes:
            - name: db
            - name: config
              secret:
                secretName: heketi-config-secret
5. Create the GlusterFS cluster with heketi
- Check the pod and svc status:
kubectl get pod,svc
NAME                                          READY   STATUS    RESTARTS       AGE
pod/deploy-heketi-974d59f8c-9nvw9             1/1     Running   0              30m
pod/glusterfs-mqc4r                           1/1     Running   0              51m
pod/glusterfs-tm8wm                           1/1     Running   0              51m
pod/glusterfs-vt9k4                           1/1     Running   0              51m
pod/nfs-client-provisioner-595c744db6-nb85s   1/1     Running   5 (3h9m ago)   106d

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/deploy-heketi   ClusterIP   10.0.0.221   <none>        8080/TCP   30m
service/kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP    107d
- Create the GlusterFS cluster; the request address is heketi's service:
cp /opt/heketi-client/bin/heketi-cli /usr/local/bin/
heketi-cli -s http://10.0.0.221:8080 --user admin --secret 'admin' topology load --json=topology-sample.json
Creating cluster ... ID: 6f0047b37e207cff6f2620295e595db4
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node k8s-master ... ID: 70eee69253ee1f843a16494278452279
Adding device /dev/sdb ... OK
Creating node k8s-node1 ... ID: 3914c1bcece46b38f176ca0df32329c1
Adding device /dev/sdb ... OK
Creating node k8s-node2 ... ID: 7f354f6dba16fb8832d2a6fa465a2da8
Adding device /dev/sdb ... OK
#topology-sample.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-master"
              ],
              "storage": [
                "10.98.4.1"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node1"
              ],
              "storage": [
                "10.98.4.2"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node2"
              ],
              "storage": [
                "10.98.4.3"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}
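Before running topology load, it can help to confirm the file is valid JSON and that every node lists at least one device, since heketi rejects malformed topologies. A sketch that validates a one-node sample written to /tmp (point the script at the real topology-sample.json in practice; python3 is assumed to be available):

```shell
# Write a one-node topology sample (same structure as topology-sample.json
# above) and check that it parses and each node has >= 1 device.
cat > /tmp/topology.json <<'EOF'
{"clusters":[{"nodes":[{"node":{"hostnames":{"manage":["k8s-master"],"storage":["10.98.4.1"]},"zone":1},"devices":[{"name":"/dev/sdb","destroydata":false}]}]}]}
EOF
python3 - <<'EOF'
import json
topo = json.load(open('/tmp/topology.json'))
nodes = [n for c in topo['clusters'] for n in c['nodes']]
assert all(n['devices'] for n in nodes), 'a node has no devices'
print('topology OK:', len(nodes), 'node(s)')
EOF
```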
Common heketi commands:
heketi-cli -s http://10.0.0.221:8080 --user admin --secret 'admin' topology info   # show the cluster topology
heketi-cli -s http://10.0.0.221:8080 --user admin --secret 'admin' cluster list   # list clusters
heketi-cli -s http://10.0.0.221:8080 --user admin --secret 'admin' cluster info [id from cluster list]   # show details of one cluster
heketi-cli -s http://10.0.0.221:8080 --user admin --secret 'admin' volume list   # list volumes
6. Test Kubernetes dynamic storage
- Create the StorageClass:
kubectl apply -f glusterfs-storage.yaml
#glusterfs-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
#reclaimPolicy: Delete   # the default
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  resturl: "http://10.0.0.221:8080"   # heketi's service IP, see kubectl get svc
  clusterid: "6f0047b37e207cff6f2620295e595db4"   # cluster id, see heketi-cli -s http://10.0.0.221:8080 --user admin --secret 'admin' cluster list
  restauthenabled: "true"   # enable authentication
  restuser: "admin"
  restuserkey: "admin"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"   # replicated volume with 3 replicas
- Create a pod that uses the GlusterFS StorageClass:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          volumeMounts:
            - name: wwwroot
              mountPath: /usr/share/nginx/html
          ports:
            - containerPort: 80
          livenessProbe:   # HTTP liveness check; on failure the pod is restarted by default
            httpGet:
              port: 80
              path: /
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 20
      volumes:
        - name: wwwroot
          persistentVolumeClaim:
            claimName: glusterfs-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pvc
spec:
  storageClassName: "glusterfs"   # dynamic storage: the PV is created automatically
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: NodePort
- Check the created resources:
kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
persistentvolume/pvc-a65d57a9-ecce-43fb-a2fb-5a076a94a0f0   1Gi        RWO            Retain           Bound    default/glusterfs-pvc   glusterfs               48m

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/glusterfs-pvc   Bound    pvc-a65d57a9-ecce-43fb-a2fb-5a076a94a0f0   1Gi        RWO            glusterfs      48m
Run df -h on any node to see how the volume is mounted.
cd /var/lib/kubelet/pods/f8143156-8231-48b9-a8bd-23d5bbf6c65d/volumes/kubernetes.io~glusterfs/pvc-1f9962ab-6896-46da-87f2-811084dad40b
echo "glusterfs test success" > index.html
Access the nginx service to verify.