Full table of contents for the Kubernetes实录 series: Kubernetes实录-目录
I. The GlusterFS + Heketi environment
GlusterFS + Heketi can be deployed in containers and integrated into Kubernetes itself; that approach suits small environments or hyper-converged (compute + storage) architectures, and it requires the Kubernetes nodes to have spare disks or other raw block devices (e.g. FC LUNs).
In this note, GlusterFS + Heketi is deployed standalone. For the deployment details, see: GlusterFS操作记录(5) GlusterFS+Heketi配置(独立部署)
Hostname | IP address | Role | Notes |
---|---|---|---|
gluster-server-1 | 10.99.7.11 | glusterfs, heketi | Heketi service node; authentication enabled: account=admin, key=admin_secret |
gluster-server-2 | 10.99.7.12 | glusterfs | |
gluster-server-3 | 10.99.7.13 | glusterfs | |
Heketi service address: 10.99.7.11:8080
In production, it is better to expose Heketi behind a domain name rather than a raw IP.
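Before wiring this address into Kubernetes, it is worth confirming that the Heketi REST service is actually reachable. Heketi exposes a simple `/hello` health endpoint; a quick check against the address above (specific to this environment) might look like:

```shell
# Probe the Heketi REST endpoint; a running service answers with a short greeting
curl http://10.99.7.11:8080/hello
```

If this does not respond, fix connectivity or the heketi service before creating the StorageClass.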
II. Using GlusterFS + Heketi to provide persistent volumes (PVs) in Kubernetes (dynamic mode)
Create a StorageClass in Kubernetes to enable Dynamic Provisioning. The StorageClass talks to Heketi, which creates GlusterFS volumes on demand.
The StorageClass is created by the Kubernetes cluster administrator and is reusable: multiple PVCs can share a single StorageClass.
1. Configure the StorageClass
Operator: cluster administrator
Scope: a StorageClass is cluster-scoped, i.e. visible and usable from every namespace
# cat glusterfs-storageclass.yaml
```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/glusterfs
metadata:
  name: heketi-secret
  namespace: kube-system
data:
  # base64-encoded; the plaintext key is admin_secret
  key: YWRtaW5fc2VjcmV0
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class-1
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
# reclaimPolicy defaults to Delete (so it may be omitted): deleting the PVC
# automatically deletes the PV, and Heketi cleans up the backing volume too
reclaimPolicy: Delete
parameters:
  resturl: "http://10.99.7.11:8080"
  clusterid: "77b24830f331f2e12ca064d7daab3e43"
  volumetype: "replicate:3"
  gidMin: "40000"
  gidMax: "50000"
  # The following are needed only when Heketi authentication is enabled
  restauthenabled: "true"
  restuser: "admin"
  #restuserkey: "admin_secret"
  # Officially recommended: keep the key in a Secret instead of restuserkey
  secretNamespace: "kube-system"
  secretName: "heketi-secret"
```
kubectl apply -f glusterfs-storageclass.yaml
secret/heketi-secret created
storageclass.storage.k8s.io/glusterfs-storage-class-1 created
kubectl get sc
NAME PROVISIONER AGE
glusterfs-storage-class-1 kubernetes.io/glusterfs 57s
kubectl get secret -n kube-system
NAME TYPE DATA AGE
...
heketi-secret kubernetes.io/glusterfs 1 57s
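The `key` field in `heketi-secret` is simply the base64 encoding of the Heketi admin key (`admin_secret` in this environment). If the key ever changes, the encoded value can be regenerated with:

```shell
# base64-encode the Heketi admin key for the Secret's data.key field;
# -n stops echo from appending a newline, which would corrupt the encoding
echo -n "admin_secret" | base64
# → YWRtaW5fc2VjcmV0
```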
2. Create a PVC and deploy an application that uses the GlusterFS-backed storage
2.1 PVC configuration
Operator: application deployer
# cat demo_mode3_pvc.yaml
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-vol-pvc02
  namespace: default
spec:
  storageClassName: glusterfs-storage-class-1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```
Note: this PVC uses apiVersion v1, and the StorageClass is selected via spec.storageClassName.
With older apiVersions, the StorageClass may instead have to be named via the volume.beta.kubernetes.io/storage-class annotation under metadata.annotations.
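For reference, that legacy annotation form would look roughly like this (a sketch, reusing the same claim name and StorageClass as above):

```yaml
# Legacy style: StorageClass requested via annotation instead of spec.storageClassName
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-vol-pvc02
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage-class-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```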
kubectl apply -f demo_mode3_pvc.yaml
persistentvolumeclaim/glusterfs-vol-pvc02 created
# A PVC is not a cluster-wide resource; it is bound to a namespace, here default
kubectl get pvc -n default
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
...
glusterfs-vol-pvc02 Bound pvc-30848f17-1f86-11e9-8a0e-1418776411a1 10Gi RWX glusterfs-storage-class-1 10s
# PVs are cluster-scoped resources; they do not belong to any namespace
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
...
pvc-30848f17-1f86-11e9-8a0e-1418776411a1 10Gi RWX Delete Bound default/glusterfs-vol-pvc02 glusterfs-storage-class-1 4s
# Inspect on the heketi-cli server (authentication is enabled; the credentials are wrapped in a shell alias, see the deployment notes)
heketi-cli volume list
Id:d1b3fcfeb86fe7eaffdccd4eb26f2bd8 Cluster:77b24830f331f2e12ca064d7daab3e43 Name:vol_d1b3fcfeb86fe7eaffdccd4eb26f2bd8
heketi-cli volume info d1b3fcfeb86fe7eaffdccd4eb26f2bd8
Name: vol_d1b3fcfeb86fe7eaffdccd4eb26f2bd8
Size: 10
Volume Id: d1b3fcfeb86fe7eaffdccd4eb26f2bd8
Cluster Id: 77b24830f331f2e12ca064d7daab3e43
Mount: 10.99.7.12:vol_d1b3fcfeb86fe7eaffdccd4eb26f2bd8
Mount Options: backup-volfile-servers=10.99.7.11,10.99.7.13
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 3
Snapshot Factor: 1.00
At this point, the GlusterFS volume is ready for use.
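Since this is a plain GlusterFS volume, it can also be mounted directly from any host with the GlusterFS client installed, using the Mount and Mount Options values reported above. A sketch for this environment (assumes the glusterfs-fuse package is present and the commands run as root):

```shell
mkdir -p /mnt/gluster-test
# Mount target and fallback servers taken from the heketi-cli volume info output above
mount -t glusterfs \
  -o backup-volfile-servers=10.99.7.11,10.99.7.13 \
  10.99.7.12:vol_d1b3fcfeb86fe7eaffdccd4eb26f2bd8 /mnt/gluster-test
df -h /mnt/gluster-test
umount /mnt/gluster-test
```

This is handy for inspecting or backing up volume contents outside of Kubernetes.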
2.2 Deploy an application that consumes the GlusterFS volume through the PVC
# cat demo_mode3_nginx.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-demo-mode3-nginx
  labels:
    name: demo-mode3-nginx
spec:
  type: NodePort
  ports:
    - name: nginx
      port: 80
      nodePort: 31280
  selector:
    name: demo-mode3-nginx
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: demo-mode3-nginx
  labels:
    name: demo-mode3-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      name: demo-mode3-nginx
  template:
    metadata:
      labels:
        name: demo-mode3-nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: demo-mode3-nginx-vol
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: demo-mode3-nginx-vol
          persistentVolumeClaim:
            claimName: glusterfs-vol-pvc02
```
kubectl apply -f demo_mode3_nginx.yaml
service/svc-demo-mode3-nginx created
deployment.apps/demo-mode3-nginx created
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
svc-demo-mode3-nginx NodePort 10.98.156.236 <none> 80:31280/TCP 4m50s
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
...
demo-mode3-nginx 2/2 2 2 5m49s
kubectl get rs
NAME DESIRED CURRENT READY AGE
...
demo-mode3-nginx-56c4fcdb4c 2 2 2 6m5s
kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-mode3-nginx-56c4fcdb4c-95v8r 1/1 Running 0 6m25s
demo-mode3-nginx-56c4fcdb4c-txbk4 1/1 Running 0 6m25s
2.3 Verification
# 1. Put an index.html file on the shared storage volume
kubectl exec -it demo-mode3-nginx-56c4fcdb4c-95v8r /bin/bash
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/# df -h
Filesystem Size Used Avail Use% Mounted on
...
10.99.7.11:vol_d1b3fcfeb86fe7eaffdccd4eb26f2bd8 10G 136M 9.9G 2% /usr/share/nginx/html
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/# cd /usr/share/nginx/html
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/usr/share/nginx/html#
cat <<EOF > index.html
<html>
<title>demo3</title>
<body>Hello, world</body>
</html>
EOF
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/usr/share/nginx/html# ls -l
total 1
-rw-r--r-- 1 root 40000 62 Jan 24 03:25 index.html
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/usr/share/nginx/html# exit
# Verify that the nginx service serves that index.html
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
svc-demo-mode3-nginx NodePort 10.98.156.236 <none> 80:31280/TCP 13m
curl 10.99.12.201:31280 # nodeIP:NodePort
curl 10.98.156.236 # svc clusterIP
curl 192.168.3.10   # or 192.168.4.10 — the nginx pod IPs
<html>
<title>demo3</title>
<body>Hello, world</body>
</html>
All access paths check out: the GlusterFS volume is serving traffic normally.
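Finally, because the StorageClass uses reclaimPolicy: Delete, tearing the demo down is straightforward: deleting the PVC removes the bound PV, and Heketi deletes the backing GlusterFS volume automatically. A sketch, assuming the object names used above:

```shell
kubectl delete -f demo_mode3_nginx.yaml       # remove the Service and Deployment first
kubectl delete pvc glusterfs-vol-pvc02 -n default
kubectl get pv                                # the bound PV is gone shortly after
# On the Heketi server, the volume no longer appears:
heketi-cli volume list
```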