Mounting an existing GlusterFS volume in Kubernetes
Test environment
- Kubernetes cluster, v1.15
- GlusterFS cluster, 3 nodes, CentOS 7.3
Creating the GlusterFS cluster and a volume
- Install the GlusterFS components
```
# Run on all three nodes
yum search gluster
# yum -y install centos-release-gluster${the version you want}.noarch
yum -y install centos-release-gluster7.noarch
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
systemctl start glusterd.service
systemctl status glusterd.service
```
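To keep glusterd running across reboots and confirm the installation succeeded, you can additionally run (a small sketch; not part of the original steps):

```
systemctl enable glusterd.service   # start glusterd automatically at boot
glusterfs --version                 # confirm the packages were installed
```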
- Create the cluster
```
# Run on node1; use "peer probe" (not "peer status") to add the other nodes
gluster peer probe ${node2 ip}
gluster peer probe ${node3 ip}
# Verify that both peers are in the "Peer in Cluster (Connected)" state
gluster peer status
```
- Create the volume
```
gluster volume create test replica 3 node1:/data/test node2:/data/test node3:/data/test force
gluster volume start test
```
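Before wiring the volume into Kubernetes, it is worth confirming it is healthy and mountable from any client with glusterfs-fuse installed. A sketch (the mount point /mnt/test is an arbitrary choice, not from the original steps):

```
gluster volume info test                   # Status should be "Started", with 3 bricks listed
mkdir -p /mnt/test
mount -t glusterfs node1:/test /mnt/test   # any of the three nodes works as the server
df -h /mnt/test
```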
Mounting the GlusterFS volume in a Kubernetes pod
- Mounting via hostPath
- Requirements
  - GlusterFS must be mounted at the same path on every Kubernetes node
- Pod YAML
```
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs
spec:
  containers:
  - name: glusterfs
    image: nginx
    volumeMounts:
    - mountPath: "/mnt/glusterfs"
      name: glusterfsvol
  volumes:
  - name: glusterfsvol
    hostPath:
      path: /data/glusterfs
      type: Directory
```
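For the hostPath approach, the kubelet only sees a local directory, so each Kubernetes node has to mount the GlusterFS volume at /data/glusterfs itself. A sketch of the per-node setup (node1 stands for any reachable GlusterFS server):

```
mkdir -p /data/glusterfs
mount -t glusterfs node1:/test /data/glusterfs
# To persist the mount across reboots, add a line like this to /etc/fstab:
# node1:/test /data/glusterfs glusterfs defaults,_netdev 0 0
```

The `_netdev` option tells the OS to wait for the network before attempting the mount at boot.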
- Mounting via Endpoints
- Requirements
  - Every hostname in the GlusterFS cluster must be added to /etc/hosts on every Kubernetes node
- Create Endpoints for GlusterFS
```
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: ${node1 ip}
  ports:
  - port: 1   # any value works; the port is not actually used
- addresses:
  - ip: ${node2 ip}
  ports:
  - port: 1   # any value works; the port is not actually used
- addresses:
  - ip: ${node3 ip}
  ports:
  - port: 1   # any value works; the port is not actually used
```
- Create a Service for the Endpoints
```
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
```
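Because the Service has no selector, Kubernetes associates it with the manually created Endpoints purely by the shared name glusterfs-cluster. After applying both objects you can check the association (the manifest filenames here are hypothetical):

```
kubectl apply -f glusterfs-endpoints.yaml -f glusterfs-service.yaml
kubectl get endpoints glusterfs-cluster   # should list the three GlusterFS node IPs
```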
- Mount the GlusterFS volume into the pod through the Service
```
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs
spec:
  containers:
  - name: glusterfs
    image: nginx
    volumeMounts:
    - mountPath: "/mnt/glusterfs"
      name: glusterfsvol
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster
      path: test
      readOnly: false
```
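With this in-tree glusterfs volume type, the kubelet on the scheduled node performs the mount itself, so glusterfs-fuse must be installed on every Kubernetes node. Once the pod is running, a quick check from inside it (a sketch; the pod manifest filename is hypothetical):

```
kubectl apply -f glusterfs-pod.yaml
kubectl exec glusterfs -- df -h /mnt/glusterfs   # filesystem type should be fuse.glusterfs
kubectl exec glusterfs -- sh -c 'echo hello > /mnt/glusterfs/hello.txt'
```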