Environment preparation:
Prepare two machines for installing and testing GlusterFS:
10.8.2.32 k8s-worker01
10.8.2.33 k8s-worker02
Disable SELinux (e.g. setenforce 0 now, and set SELINUX=disabled in /etc/selinux/config for reboots)
Install the GlusterFS service
yum install centos-release-gluster -y
yum install glusterfs-server glusterfs glusterfs-fuse -y
systemctl start glusterd
systemctl enable glusterd
systemctl status glusterd
Configure hosts on every node. This matters: if it is wrong, building the trusted pool below will fail.
10.8.2.32 k8s-worker01
10.8.2.33 k8s-worker02
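The two entries above can be appended idempotently with a short sketch. HOSTS_FILE defaults to a scratch file here so the snippet is safe to try; point it at /etc/hosts on a real node:

```shell
#!/bin/sh
# Append each cluster entry to the hosts file only if the hostname is missing.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"   # set HOSTS_FILE=/etc/hosts on a real node
touch "$HOSTS_FILE"

add_host() {
    # $1 = IP address, $2 = hostname
    grep -qw "$2" "$HOSTS_FILE" || printf '%s %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

add_host 10.8.2.32 k8s-worker01
add_host 10.8.2.33 k8s-worker02
```

Running it twice adds nothing the second time, so it is safe to rerun on every node.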
Build the trusted storage pool (any one node will do)
# Any one node is fine; here it is run on k8s-worker01 to add k8s-worker02
gluster peer probe k8s-worker02
Check peer status
gluster peer status
If there is a problem, remove the peer:
gluster peer detach k8s-worker02
Create a GlusterFS logical volume (Volume)
# Create the /data/gfsdata directory on both nodes:
mkdir -p /data/gfsdata
# Run on any one node to create the volume
gluster volume create rep-vol replica 2 k8s-worker01:/data/gfsdata k8s-worker02:/data/gfsdata force
# This creates a volume named rep-vol of type Replicated, with 2 bricks: k8s-worker01:/data/gfsdata and k8s-worker02:/data/gfsdata.
# If a brick directory sits on the system partition the command fails, because Gluster disallows that by default; the force flag overrides it.
# Note: a plain replica-2 volume is prone to split-brain; Gluster recommends replica 3 or an arbiter brick for production.
gluster volume start rep-vol # start the rep-vol volume
gluster volume info # check volume status
gluster volume stop rep-vol # stop the volume (a volume must be stopped before 'gluster volume delete rep-vol' can remove it)
gluster peer probe 10.8.2.34 # add a new node to the pool
gluster volume add-brick rep-vol 10.8.2.34:/data/glusterfs # add its brick to grow the volume (a replicated volume needs bricks added in multiples of the replica count, or the replica count raised, e.g. 'replica 3')
Client mount
yum -y install glusterfs glusterfs-fuse # install the client
mkdir /app # create the mount point
mount -t glusterfs k8s-worker01:/rep-vol /app # mount the GlusterFS volume rep-vol created above onto the local directory /app
mount -t fuse.glusterfs # list glusterfs mounts to verify
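To make the mount survive reboots, an /etc/fstab entry can be added (a sketch; _netdev delays the mount until networking is up, and backup-volfile-servers lets the client fall back to k8s-worker02 when fetching the volume file):

```
# /etc/fstab
k8s-worker01:/rep-vol  /app  glusterfs  defaults,_netdev,backup-volfile-servers=k8s-worker02  0 0
```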
------------------------------------------
Using persistent volumes in Kubernetes
Continuing from above: GFS currently has two nodes, 10.8.2.32 and 10.8.2.33, and we created a two-replica volume: rep-vol
Generate the GFS Endpoints and Service for Kubernetes
ep-svc.yaml
cat << EOF > ep-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: gfs-cluster-svc
spec:
  ports:
  - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gfs-cluster-svc
subsets:
- addresses:
  - ip: 10.8.2.32
  ports:
  - port: 1
- addresses:
  - ip: 10.8.2.33
  ports:
  - port: 1
EOF
kubectl apply -f ep-svc.yaml
Create a PV and PVC for Kubernetes
The pod declares a PVC, the PVC binds to a PV, and the PV draws its volume from GFS.
Declare a PV:
pv.yaml
cat << EOF > pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gfs-pv
  labels:
    name: gfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: gfs-cluster-svc
    path: rep-vol
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
EOF
kubectl apply -f pv.yaml
Declare a PVC and bind it to the PV above via matchLabels
pvc.yaml
cat << EOF > pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      name: gfs-pv
EOF
kubectl apply -f pvc.yaml
The PVC above requests the full 50Gi, using up the whole PV.
Note that a PV binds to exactly one PVC: even a smaller claim (say 10Gi) would occupy this entire 50Gi PV, so handing out smaller pieces means creating several smaller PVs instead.
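To split capacity across claims, one PV per claim is needed, each backed by its own GlusterFS volume. A sketch, assuming a hypothetical extra volume vol-a was created the same way as rep-vol:

```yaml
# Hypothetical: vol-a is a separate GlusterFS volume, created like rep-vol above.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gfs-pv-a
  labels:
    name: gfs-pv-a
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: gfs-cluster-svc
    path: vol-a
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
```

A matching 10Gi PVC then selects it via matchLabels (name: gfs-pv-a), exactly as gfs-pvc did above.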
Declare an Nginx Deployment that uses this PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data-www
          mountPath: /data/www
      volumes:
      - name: data-www
        persistentVolumeClaim:
          claimName: gfs-pvc
That completes the Kubernetes storage setup. GFS is very stable in production. (One caveat: the in-tree glusterfs volume plugin used here was removed in Kubernetes 1.26, so this approach applies to older clusters.)