Environment preparation:
host | IP |
---|---|
master01 | 192.168.200.150 |
node01 | 192.168.200.151 |
node02 | 192.168.200.152 |
1. Quickly install a Kubernetes cluster with kubeadm
Omitted here; a rough sketch follows below.
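A minimal, hedged sketch of this step, assuming a single control-plane node (master01) and the default API server port 6443; the pod CIDR, token, and CA hash are placeholders for your environment:
# On master01: initialize the control plane
kubeadm init --pod-network-cidr=10.244.0.0/16
# Install a CNI network add-on of your choice (e.g. flannel or calico)
# On node01 and node02: join using the command printed by kubeadm init
kubeadm join 192.168.200.150:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>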
2. Install and configure GlusterFS:
1. On each physical host, install the Gluster repository and related components with yum:
yum -y install centos-release-gluster
yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
2. Start the Gluster service and enable it at boot:
systemctl start glusterd.service && systemctl enable glusterd.service
3. On master01, add the two nodes to the trusted storage pool:
gluster peer probe node01
gluster peer probe node02
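Probing peers by hostname assumes master01/node01/node02 resolve on every host; if DNS is not configured, entries like the following (built from the IP table above) can be added to /etc/hosts on all three machines:
192.168.200.150 master01
192.168.200.151 node01
192.168.200.152 node02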
4. Check the cluster peers from any node:
[root@master01 ~]# gluster peer status
Number of Peers: 2
Hostname: node01
Uuid: 8478ff05-62b1-496a-9b7c-83544e627a31
State: Peer in Cluster (Connected)
Hostname: node02
Uuid: 8378a8c9-0c66-4a25-8afb-37a50b541020
State: Peer in Cluster (Connected)
5. Create the data storage directory on every node (the same path is used as the brick path in the next step):
mkdir -p /root/glusterfs/data
6. Create the GlusterFS volume (run on master01); a striped volume is used as the example:
gluster volume create k8s-volume stripe 3 master01:/root/glusterfs/data node01:/root/glusterfs/data node02:/root/glusterfs/data force
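Note that striped volumes are deprecated and have been removed in recent GlusterFS releases; if the stripe option is unavailable in your version, a replicated volume over the same bricks is a common alternative (sketch):
gluster volume create k8s-volume replica 3 master01:/root/glusterfs/data node01:/root/glusterfs/data node02:/root/glusterfs/data force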
7. Start k8s-volume (run on master01):
gluster volume start k8s-volume
8. Check the volume status:
[root@master01 ~]# gluster volume info
Volume Name: k8s-volume
Type: Stripe
Volume ID: 0477dc7c-fd5b-45a1-a752-d9d64654fc1f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master01:/root/glusterfs/data
Brick2: node01:/root/glusterfs/data
Brick3: node02:/root/glusterfs/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
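Optionally, the per-brick processes and ports can also be checked with:
gluster volume status k8s-volume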
3. Configure the Endpoints:
curl -O https://raw.githubusercontent.com/kubernetes/examples/master/volumes/glusterfs/glusterfs-endpoints.json
Edit glusterfs-endpoints.json and change the IPs to the actual cluster IPs; the port values are arbitrary, as long as they do not conflict.
[root@master01 ~]# cat glusterfs-endpoints.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "192.168.200.150"
        }
      ],
      "ports": [
        {
          "port": 6000
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "192.168.200.151"
        }
      ],
      "ports": [
        {
          "port": 6001
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "192.168.200.152"
        }
      ],
      "ports": [
        {
          "port": 6002
        }
      ]
    }
  ]
}
Create and check the Endpoints:
[root@master01 ~]# kubectl create -f glusterfs-endpoints.json
[root@master01 ~]# kubectl get ep
NAME ENDPOINTS AGE
glusterfs-cluster 192.168.200.150:6000,192.168.200.151:6001,192.168.200.152:6002 1h
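The same upstream examples directory also ships glusterfs-service.json, a selector-less Service whose name must match the Endpoints name so that the Endpoints object persists; a minimal sketch, assuming that file layout:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}
kubectl create -f glusterfs-service.json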
4. Configure the PersistentVolume:
Create a glusterfs-pv.yaml file, specifying the storage capacity and access mode:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false
Then run:
kubectl create -f glusterfs-pv.yaml
Check the PV:
[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 20Gi RWX Retain Bound default/pvc001 1h
5. Configure the PersistentVolumeClaim:
Create a glusterfs-pvc.yaml file, specifying the requested storage size, for example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
Then run:
kubectl create -f glusterfs-pvc.yaml
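Before mounting the claim, you can confirm that it bound to the PV created above (pvc001 should report STATUS Bound against pv001):
kubectl get pvc pvc001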
6. Mount the PVC in a Deployment:
As an example, create an nginx Deployment and mount the PVC at /usr/share/nginx/html inside the container.
The nginx_deployment.yaml file is as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-dm
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: storage001
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: storage001
          persistentVolumeClaim:
            claimName: pvc001
Then run:
kubectl create -f nginx_deployment.yaml
Next, verify that the mount succeeded.
First check the Deployment's pod:
[root@master01 ~]# kubectl get pods | grep nginx-dm
nginx-dm-59d68cf8b6-whkhk 1/1 Running 0 56m
Check the mount:
[root@master01 ~]# kubectl exec -it nginx-dm-59d68cf8b6-whkhk -- df -h | grep k8s-volume
192.168.200.150:k8s-volume 51G 25G 27G 48% /usr/share/nginx/html
Create a file:
[root@master01 ~]# kubectl exec -it nginx-dm-59d68cf8b6-whkhk -- touch /usr/share/nginx/html/123.txt
Check the file's attributes:
[root@master01 ~]# kubectl exec -it nginx-dm-59d68cf8b6-whkhk -- ls -lt /usr/share/nginx/html/123.txt
-rw-r--r-- 1 root root 0 Nov 25 02:47 /usr/share/nginx/html/123.txt
At this point, the 123.txt file can be seen on all three physical hosts.
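For example, it should appear under the brick directory on any of the hosts:
ls -l /root/glusterfs/data/123.txt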