Environment notes
What the search results (Baidu) say:
A PVC backed by RBD does not support RWX (ReadWriteMany); only RWO (ReadWriteOnce) and ROX (ReadOnlyMany) are supported.
Kubernetes cannot mount the same Ceph RBD image on multiple nodes at once, but it can mount CephFS across nodes.
What the Kubernetes documentation says:
Create a CephFS filesystem on the Ceph server
ceph osd pool create fs_kube_data 32 32
ceph osd pool create fs_kube_metadata 32 32
ceph fs new cephfs fs_kube_metadata fs_kube_data
Retrieve the key (base64-encoded, for the Kubernetes Secret later on)
[root@cc110 ~]# ceph auth print-key client.admin | base64
QVFCcndLdGZtMCtaT2hBQWFZMVpZdlJZVEhXbE5TNS82SmlVY0E9PQ==
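As a sanity check, the value above is a base64 encoding of the CephX key string itself (which already looks like base64). Decoding it once with Python's standard library recovers the raw key that the kernel mount expects:

```python
import base64

# The value produced by `ceph auth print-key client.admin | base64` above.
encoded = "QVFCcndLdGZtMCtaT2hBQWFZMVpZdlJZVEhXbE5TNS82SmlVY0E9PQ=="

# Decoding once recovers the raw CephX key (itself base64-looking text).
raw_key = base64.b64decode(encoded).decode()
print(raw_key)  # AQBrwKtfm0+ZOhAAaY1ZYvRYTHWlNS5/6JiUcA==
```

The base64-encoded form is what goes into the Secret's `data.key` field; the decoded form is what the mount command wants.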
Test-mounting CephFS from a client
1. Kernel-client mount. Note that the kernel client's secret= option takes the raw CephX key as printed by `ceph auth print-key client.admin` (already a base64-looking string), not the re-encoded value generated above for the Kubernetes Secret:
[root@cc111 ~]# mount -t ceph 192.168.8.100:/ /mnt/ -o name=admin,secret=AQBrwKtfm0+ZOhAAaY1ZYvRYTHWlNS5/6JiUcA==
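To keep the key out of shell history, the kernel client also accepts a `secretfile=` option pointing at a file that contains only the raw key (the output of `ceph auth print-key client.admin`). A hedged sketch of a persistent mount via /etc/fstab — the path /etc/ceph/admin.secret is an assumption, not part of this walkthrough:

```
# /etc/fstab — hypothetical entry; /etc/ceph/admin.secret holds just the raw key
192.168.8.100:/  /mnt  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0
```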
2. User-space mount:
CephFS also supports a user-space (FUSE) mount via the ceph-fuse command.
Copy the auth keyring from the Ceph node to the client host:
scp -r /etc/ceph/ceph.client.admin.keyring root@node-0xx:/etc/ceph/
Install the required package
yum -y install ceph-fuse
3. Mount it
[root@cc111 ~]# ceph-fuse -m 192.168.8.100:6789 /mnt/123/
ceph-fuse[7556]: starting ceph client
2020-11-14 00:06:36.944 7fd0406b8f80 -1 init, newargv = 0x55e7f682e430 newargc=9
ceph-fuse[7556]: starting fuse
[root@cc111 ~]# df -h | grep mnt
ceph-fuse 2.6G 0 2.6G 0% /mnt/123
4. Verify read/write access (mainly relevant when SELinux is enabled)
[root@cc111 123]# dd if=/dev/zero of=test2 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.09182 s, 96.0 MB/s
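The reported rate checks out arithmetically: dd wrote 100 MiB (104857600 bytes) but reports throughput in decimal megabytes, so 104857600 bytes over 1.09182 s comes out to about 96.0 MB/s. A quick check:

```python
bytes_copied = 100 * 1024 * 1024   # 100 MiB written by dd = 104857600 bytes
elapsed = 1.09182                  # seconds, from the dd output above

rate_mb_s = bytes_copied / elapsed / 1e6  # dd reports decimal MB/s
print(f"{rate_mb_s:.1f} MB/s")  # 96.0 MB/s
```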
Tip:
The community's external-storage project (an incubator project, since retired in favor of CSI drivers) provides a CephFS provisioner. "External storage" is really just a controller: it watches the API server for StorageClass/PVC changes, and when it sees a CephFS claim it handles it — creating a PV according to the claim, submitting the PV creation to the API server, and binding the PV to the PVC once created.
Back on the Kubernetes cluster, fetch the provisioner manifests from:
https://github.com/kubernetes-retired/external-storage/tree/master/ceph/cephfs/deploy
NAMESPACE=cephfs
sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ceph-fs/*.yaml
# Note: move deployment.yaml out of the ceph-fs directory before running the substitution above, then move it back into ceph-fs afterwards
sed -r -i "N;s/(name: PROVISIONER_SECRET_NAMESPACE.*\n[[:space:]]*)value:.*/\1value: $NAMESPACE/" ceph-fs/deployment.yaml
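The two sed invocations can be tried on throwaway files first. The demo below (against hypothetical files in /tmp, not the real deploy directory) shows the effect of each substitution — the first rewrites every `namespace:` field, the second uses `N` to join the env var's name/value lines so the value can be edited:

```shell
NAMESPACE=cephfs

# First sed: rewrite every `namespace:` field in the manifests.
cat > /tmp/demo-rbac.yaml <<'EOF'
metadata:
  name: cephfs-provisioner
  namespace: default
EOF
sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" /tmp/demo-rbac.yaml

# Second sed: N appends the next line to the pattern space, so the
# regex can match across the name/value pair and replace the value.
cat > /tmp/demo-deploy.yaml <<'EOF'
            - name: PROVISIONER_SECRET_NAMESPACE
              value: default
EOF
sed -r -i "N;s/(name: PROVISIONER_SECRET_NAMESPACE.*\n[[:space:]]*)value:.*/\1value: $NAMESPACE/" /tmp/demo-deploy.yaml

grep "namespace:" /tmp/demo-rbac.yaml   # namespace: cephfs
grep "value:" /tmp/demo-deploy.yaml     # value: cephfs
```

(GNU sed is assumed, as on the CentOS hosts in this walkthrough; BSD sed handles `-i` differently.)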
[root@cc110 ceph-fs]# kubectl create ns cephfs
namespace/cephfs created
[root@cc110 ceph-fs]# kubectl apply -f .
clusterrole.rbac.authorization.k8s.io/cephfs-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/cephfs-provisioner unchanged
deployment.apps/cephfs-provisioner created
role.rbac.authorization.k8s.io/cephfs-provisioner created
rolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
serviceaccount/cephfs-provisioner created
[root@cc110 ceph-fs]# kubectl get pods -n cephfs
NAME READY STATUS RESTARTS AGE
cephfs-provisioner-67dd56fb57-9fj85 0/1 ContainerCreating 0 74s
1. Create the Secret
[root@cc110 deploy]# ceph auth get-key client.admin | base64
QVFCcndLdGZtMCtaT2hBQWFZMVpZdlJZVEhXbE5TNS82SmlVY0E9PQ==
[root@cc110 deploy]# cat fs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: cephfs
data:
  key: QVFCcndLdGZtMCtaT2hBQWFZMVpZdlJZVEhXbE5TNS82SmlVY0E9PQ==   # <<<---- paste the base64 key here
  # key comes from: ceph auth get-key client.admin | base64
# type: "kubernetes.io/rbd"
2. Create the StorageClass and PVC
[root@cc110 deploy]# cat sc.yaml && cat pvc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs-sc
provisioner: ceph.com/cephfs
parameters:
  monitors: 192.168.8.100:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: "cephfs"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim
  namespace: cephfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "cephfs-sc"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
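The `volume.beta.kubernetes.io/storage-class` annotation is the legacy beta form; on current Kubernetes the same request is normally written with the stable `spec.storageClassName` field. A hedged equivalent sketch:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim
  namespace: cephfs
spec:
  storageClassName: cephfs-sc   # replaces the beta annotation above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

Either form should work with this provisioner, but `storageClassName` is the stable API field.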
Create Pods to test
[root@cc110 deploy]# kubectl apply -f nginx-aline.yaml
deployment.apps/nginx-deploy created
[root@cc110 deploy]# cat nginx-aline.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: cephfs
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: claim
To test, exec into any one of the containers and run:
echo 11111111111111111 > /usr/share/nginx/html/index.html
[root@cc110 deploy]# kubectl get pod -n cephfs -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cephfs-provisioner-67dd56fb57-p4w4t 1/1 Running 1 9h 10.244.1.27 cc111.com <none> <none>
nginx-deploy-69cc6dd8cc-77d6q 1/1 Running 0 10m 10.244.1.35 cc111.com <none> <none>
nginx-deploy-69cc6dd8cc-czfzc 1/1 Running 0 9m36s 10.244.1.38 cc111.com <none> <none>
nginx-deploy-69cc6dd8cc-f8kd8 1/1 Running 0 10m 10.244.1.36 cc111.com <none> <none>
nginx-deploy-69cc6dd8cc-hftb2 1/1 Running 0 10m 10.244.1.37 cc111.com <none> <none>
nginx-deploy-69cc6dd8cc-jw2hx 1/1 Running 0 9m36s 10.244.1.39 cc111.com <none> <none>
[root@cc110 deploy]# for i in {35..39};do curl 10.244.1.$i; done
11111111111111111111
11111111111111111111
11111111111111111111
11111111111111111111
11111111111111111111
Test 2: Pods spread across two nodes
[root@cc110 deploy]# kubectl get pods -n cephfs -o wide | grep nginx
nginx-deploy-69cc6dd8cc-6zllp 1/1 Running 0 2m48s 10.244.1.42 cc111.com <none> <none>
nginx-deploy-69cc6dd8cc-76pkt 1/1 Running 0 2m48s 10.244.2.10 node112.com <none> <none>
nginx-deploy-69cc6dd8cc-8n89s 1/1 Running 0 2m48s 10.244.1.41 cc111.com <none> <none>
nginx-deploy-69cc6dd8cc-92vr4 1/1 Running 0 2m48s 10.244.2.11 node112.com <none> <none>
nginx-deploy-69cc6dd8cc-vc4n8 1/1 Running 0 2m48s 10.244.1.43 cc111.com <none> <none>
[root@cc110 deploy]# for i in {10..11};do curl 10.244.2.$i; done
111111111111111
111111111111111
[root@cc110 deploy]# for i in {41..43};do curl 10.244.1.$i; done
111111111111111
111111111111111
111111111111111