Deploying the local static provisioner on Kubernetes

Introduction to the local static provisioner

The Kubernetes local volume static provisioner manages the PersistentVolume lifecycle for pre-allocated disks: it creates and deletes a PV for each local disk on the host, and cleans up the disk when the PV is released. It does not support dynamic provisioning.

The local static provisioner runs a provisioner pod on every node as a DaemonSet. The pod watches a configured host directory for mounted filesystems and automatically creates a local PV for each mount it discovers. The basic architecture is shown below:
[figure: local volume provisioner architecture]
Notes:

  • Local volumes carry scheduling information: pods that use such a volume are pinned to a specific node, which preserves data locality.
  • On startup, the provisioner mounts the host's local-disk dev directory and the user-defined hostPath directories into the container at the same paths.
  • Local volumes are created automatically by the provisioner, and each PV carries the node it belongs to; every pod consuming such a PV through a PVC is scheduled onto the node the PV specifies.
  • The provisioner polls the target directory in real time and dynamically creates PVs from the directory listing.
  • The StorageClass used when creating PVs supports reclaimPolicy and volumeBindingMode configuration.
  • After a local PV is deleted, it is automatically re-created as long as the backing mount is still discovered.
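
A hedged sketch of what a generated local PV looks like (name, capacity, and paths here are hypothetical; run kubectl get pv -o yaml to see the real objects):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # real names are hash-suffixed; this one is made up
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast-disks
  local:
    path: /mnt/fast-disks/vol1      # the discovered mount point
  nodeAffinity:                     # this is what pins consumers to the node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node01
```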

Two volumeModes are supported

  1. Filesystem volumeMode (default) PVs - mount them under discovery directories.
  2. Block volumeMode PVs - create a symbolic link under discovery directory to the block device on the node.
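
For the Block case, the discovery directory holds symlinks to the devices rather than mounts. A minimal sketch of the convention, using a temp directory and a dummy link target instead of a real device:

```shell
# Sketch of the Block volumeMode layout: the provisioner discovers
# symlinks in the discovery directory that point at block devices.
# A temp dir and /dev/null stand in for /mnt/fast-disks and a real device.
DISCOVERY=$(mktemp -d)
ln -s /dev/null "$DISCOVERY/blockdev1"   # on a real node: ln -s /dev/sdc ...
readlink "$DISCOVERY/blockdev1"          # prints /dev/null
```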

Project homepage:
https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner

Mounting a disk

Before deploying the provisioner, add a disk to each node that will serve local volumes (if no spare disk is available, see the mount --bind approach later). Here a 20 GB disk is added to node01:

[root@node01 ~]# lsblk | grep  sd
sda               8:0    0   70G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   69G  0 part 
sdb               8:16   0   20G  0 disk

# Format the disk
mkfs.xfs /dev/sdb

Since each mount point corresponds to exactly one PV, mount --bind can be used to create multiple PVs from a single disk.

First mount /dev/sdb at /mnt/$DISK_UUID, then create several subdirectories under it and bind-mount each of them into the directory the provisioner actually watches.

# Create the mount point
DISK_UUID=$(blkid -s UUID -o value /dev/sdb)
mkdir -p /mnt/$DISK_UUID

# Make the mount persistent (fstab needs the UUID= prefix)
echo "UUID=$DISK_UUID /mnt/$DISK_UUID xfs defaults 0 2" | sudo tee -a /etc/fstab
mount -a

# Verify the mount
[root@node01 ~]# df -h |grep sdb
/dev/sdb                  20G   33M   20G   1% /mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318

Create multiple directories and bind-mount them under /mnt/fast-disks:

for i in $(seq 1 5); do
  mkdir -p /mnt/${DISK_UUID}/vol${i} 
  mkdir -p /mnt/fast-disks/${DISK_UUID}_vol${i}
  mount --bind /mnt/${DISK_UUID}/vol${i} /mnt/fast-disks/${DISK_UUID}_vol${i}
done

Add the bind mounts to /etc/fstab so they persist across reboots:

for i in $(seq 1 5); do
  echo "/mnt/${DISK_UUID}/vol${i} /mnt/fast-disks/${DISK_UUID}_vol${i} none bind 0 0" | sudo tee -a /etc/fstab
done

# The appended entries look like this
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol1 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol1 none bind 0 0
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol2 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol2 none bind 0 0
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol3 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol3 none bind 0 0
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol4 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol4 none bind 0 0
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol5 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol5 none bind 0 0

To unmount these directories, run:

DISK_UUID=$(blkid -s UUID -o value /dev/sdb)
for i in $(seq 1 5); do
  umount /mnt/fast-disks/${DISK_UUID}_vol${i}
done
umount /mnt/$DISK_UUID
# Remove only the directories we created, never /mnt itself
rm -rf /mnt/fast-disks /mnt/$DISK_UUID

Then delete the corresponding entries from /etc/fstab:

UUID=da98d147-e5ae-4952-bc7d-4ef920dd2318 /mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318 xfs defaults 0 2
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol1 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol1 none bind 0 0
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol2 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol2 none bind 0 0
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol3 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol3 none bind 0 0
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol4 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol4 none bind 0 0
/mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318/vol5 /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol5 none bind 0 0
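
Instead of editing /etc/fstab by hand, the stale entries can be stripped with sed. This sketch runs against a temp copy so it is safe to try; on a real node you would point it at /etc/fstab:

```shell
# Illustrative: build a temp fstab containing entries like the ones above
# plus one unrelated line, then delete every line mentioning this disk's UUID.
DISK_UUID="da98d147-e5ae-4952-bc7d-4ef920dd2318"
FSTAB=$(mktemp)
cat > "$FSTAB" <<EOF
UUID=$DISK_UUID /mnt/$DISK_UUID xfs defaults 0 2
/mnt/$DISK_UUID/vol1 /mnt/fast-disks/${DISK_UUID}_vol1 none bind 0 0
/dev/sda2 / xfs defaults 0 0
EOF

sed -i "/$DISK_UUID/d" "$FSTAB"   # only the unrelated line survives
cat "$FSTAB"                      # prints: /dev/sda2 / xfs defaults 0 0
```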

Reference: https://www.jianshu.com/p/303d5e804145

Deploying the local static provisioner

Deployment reference:

https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/getting-started.md

Download an official release:

wget https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/archive/v2.3.4.tar.gz
tar -zxvf v2.3.4.tar.gz
cd sig-storage-local-static-provisioner-2.3.4/

Create the StorageClass

kubectl create -f deployment/kubernetes/example/default_example_storageclass.yaml
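
For reference, the example StorageClass in the release is roughly the following (a sketch; check deployment/kubernetes/example/default_example_storageclass.yaml in your copy for the exact content):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner   # static provisioning only
volumeBindingMode: WaitForFirstConsumer     # delay binding until a pod is scheduled
reclaimPolicy: Delete
```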

Check the StorageClass:

[root@master01 ~]# kubectl get sc
NAME                   PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
fast-disks             kubernetes.io/no-provisioner           Delete          WaitForFirstConsumer   false                  16h

Install local-volume-provisioner

Edit the Helm values.yaml and set fsType to xfs. The default hostDir is /mnt/fast-disks; every filesystem to be discovered must be mounted under that directory:

$ vim helm/provisioner/values.yaml
classes:
- name: fast-disks # Defines the name of the storage class.
  hostDir: /mnt/fast-disks
  volumeMode: Filesystem
  fsType: xfs
  
# Render the manifests from the chart
helm template ./helm/provisioner > deployment/kubernetes/provisioner_generated.yaml

Deploy the provisioner

kubectl create -f deployment/kubernetes/provisioner_generated.yaml

Check the provisioner pods:

[root@master01 ~]# kubectl get pods -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP                NODE     NOMINATED NODE   READINESS GATES
local-volume-provisioner-78b4v           1/1     Running   0          12s   100.95.185.238    node02   <none>           <none>
local-volume-provisioner-vt6br           1/1     Running   0          15h   100.117.144.178   node01   <none>           <none>

Check the automatically created local PVs:

[root@master01 ~]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-305fea01   19Gi       RWO            Delete           Available           fast-disks              13s
local-pv-7811f3c6   19Gi       RWO            Delete           Available           fast-disks              13s
local-pv-e6ed5fc2   19Gi       RWO            Delete           Available           fast-disks              13s
local-pv-ec2bef97   19Gi       RWO            Delete           Available           fast-disks              13s
local-pv-ed0e38bc   19Gi       RWO            Delete           Available           fast-disks              13s

Check which node each PV is pinned to:

[root@master01 ~]# kubectl describe pv | grep hostname
    Term 0:        kubernetes.io/hostname in [node01]
    Term 0:        kubernetes.io/hostname in [node01]
    Term 0:        kubernetes.io/hostname in [node01]
    Term 0:        kubernetes.io/hostname in [node01]
    Term 0:        kubernetes.io/hostname in [node01]

Create pods whose PVCs claim the PVs

$ cat local_volume.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test
spec:
  serviceName: "local-service"
  replicas: 3
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
      - name: test-container
        image: busybox
        command:
        - "/bin/sh"
        args:
        - "-c"
        - "sleep 100000"
        volumeMounts:
        - name: local-vol
          mountPath: /tmp
  volumeClaimTemplates:
  - metadata:
      name: local-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "fast-disks"
      resources:
        requests:
          storage: 2Gi

Check the PV bindings:

[root@master01 ~]# kubectl get pvc
NAME                     STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
local-vol-local-test-0   Bound    local-pv-e6ed5fc2   19Gi       RWO            fast-disks     3m47s
local-vol-local-test-1   Bound    local-pv-ec2bef97   19Gi       RWO            fast-disks     3m28s
local-vol-local-test-2   Bound    local-pv-ed0e38bc   19Gi       RWO            fast-disks     3m9s

[root@master01 ~]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
local-pv-305fea01   19Gi       RWO            Delete           Available                                    fast-disks              5m57s
local-pv-7811f3c6   19Gi       RWO            Delete           Available                                    fast-disks              5m57s
local-pv-e6ed5fc2   19Gi       RWO            Delete           Bound       default/local-vol-local-test-0   fast-disks              5m57s
local-pv-ec2bef97   19Gi       RWO            Delete           Bound       default/local-vol-local-test-1   fast-disks              5m57s
local-pv-ed0e38bc   19Gi       RWO            Delete           Bound       default/local-vol-local-test-2   fast-disks              5m57s

[root@master01 ~]# kubectl get pods 
NAME                             READY   STATUS    RESTARTS   AGE
local-test-0                     1/1     Running   0          3m50s
local-test-1                     1/1     Running   0          3m31s
local-test-2                     1/1     Running   0          3m12s
local-volume-provisioner-7fs5q   1/1     Running   0          5m51s
local-volume-provisioner-9w9vq   1/1     Running   0          6m

Verify by writing data

kubectl exec -it local-test-0 -- touch /tmp/hello

Confirm on the corresponding node that the data reached the host directory:

# Find the node and path of the PV bound to the claim (PV name from kubectl get pvc)
kubectl describe pv local-pv-e6ed5fc2

# Confirm the data was written locally
[root@node01 ~]# ls /mnt/fast-disks/da98d147-e5ae-4952-bc7d-4ef920dd2318_vol1
hello

Mounting with mount --bind only

Another use of mount --bind, for nodes without a spare disk:

for i in $(seq 1 5); do
  mkdir -p /mnt/fast-disks-bind/vol${i}
  mkdir -p /mnt/fast-disks/vol${i}
  mount --bind /mnt/fast-disks-bind/vol${i} /mnt/fast-disks/vol${i}
done

Make the bind mounts persistent in /etc/fstab:
for i in $(seq 1 5); do
  echo "/mnt/fast-disks-bind/vol${i} /mnt/fast-disks/vol${i} none bind 0 0" | sudo tee -a /etc/fstab
done

Once configured, PVs are created automatically:

[root@master01 mnt]# kubectl get pv 
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
local-pv-305fea01   19Gi       RWO            Delete           Available                                    fast-disks              33m
local-pv-35448a03   60Gi       RWO            Delete           Available                                    fast-disks              18m
local-pv-38865def   60Gi       RWO            Delete           Available                                    fast-disks              18m
local-pv-7811f3c6   19Gi       RWO            Delete           Available                                    fast-disks              33m
local-pv-a73cb79e   60Gi       RWO            Delete           Available                                    fast-disks              18m
local-pv-c62715d4   60Gi       RWO            Delete           Available                                    fast-disks              18m
local-pv-e6ed5fc2   19Gi       RWO            Delete           Bound       default/local-vol-local-test-0   fast-disks              33m
local-pv-ebf46e59   60Gi       RWO            Delete           Available                                    fast-disks              18m
local-pv-ec2bef97   19Gi       RWO            Delete           Bound       default/local-vol-local-test-1   fast-disks              33m
local-pv-ed0e38bc   19Gi       RWO            Delete           Bound       default/local-vol-local-test-2   fast-disks              33m

Mounting with LVM

Multiple LVM logical volumes can also be created and mounted. Add a 20 GB disk to node02:

# Create the physical volume
pvcreate /dev/sdb

# Create the volume group
vgcreate vg01 /dev/sdb

# Create three 2 GB logical volumes and format them
for i in $(seq 1 3); do
  lvcreate -L 2G -n lv${i} vg01
  mkfs.xfs /dev/vg01/lv${i}
done

Make the mounts persistent in /etc/fstab:
for i in $(seq 1 3); do
  mkdir -p /mnt/fast-disks/lv_vol${i}
  echo "/dev/vg01/lv${i} /mnt/fast-disks/lv_vol${i} xfs defaults 0 0" | sudo tee -a /etc/fstab
done
mount -a

# Verify the mounts
[root@node02 ~]# df -h | grep lv_vol
/dev/mapper/vg01-lv1     2.0G   33M  2.0G   2% /mnt/fast-disks/lv_vol1
/dev/mapper/vg01-lv2     2.0G   33M  2.0G   2% /mnt/fast-disks/lv_vol2
/dev/mapper/vg01-lv3     2.0G   33M  2.0G   2% /mnt/fast-disks/lv_vol3

Three PVs are then created automatically on node02:

[root@master01 ~]# kubectl describe pv | grep lv_vol
    Path:  /mnt/fast-disks/lv_vol3
    Path:  /mnt/fast-disks/lv_vol1
    Path:  /mnt/fast-disks/lv_vol2

Mounting with tmpfs

Besides disks, an in-memory filesystem can be used for much higher I/O performance, at the cost of capacity and persistence; this suits only special workloads.

for vol in tmp_vol{1..3}; do
  mkdir -p /mnt/fast-disks/$vol
  mount -t tmpfs $vol /mnt/fast-disks/$vol
done

[root@node01 ~]# df -h  |grep mnt
/dev/sdb                  20G   33M   20G   1% /mnt/da98d147-e5ae-4952-bc7d-4ef920dd2318
tmp_vol1                 1.9G     0  1.9G   0% /mnt/fast-disks/tmp_vol1
tmp_vol2                 1.9G     0  1.9G   0% /mnt/fast-disks/tmp_vol2
tmp_vol3                 1.9G     0  1.9G   0% /mnt/fast-disks/tmp_vol3

tmpfs-backed PVs are then created automatically:

[root@master01 ~]# kubectl describe pv | grep tmp_vol
    Path:  /mnt/fast-disks/tmp_vol1
    Path:  /mnt/fast-disks/tmp_vol3
    Path:  /mnt/fast-disks/tmp_vol2

By default tmpfs is limited to half of physical memory; to specify the mount size explicitly, use:

mount -t tmpfs -o size=1G,nr_inodes=10k,mode=700 tmpfs /mnt/fast-disks/tmpfs
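
Note that tmpfs mounts (and their contents) do not survive a reboot. If the mount point itself should be re-created at boot, an /etc/fstab entry along these lines works (the size here is illustrative):

```
tmpfs /mnt/fast-disks/tmp_vol1 tmpfs size=1G,nr_inodes=10k 0 0
```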