Using GlusterFS in Kubernetes (K8S)

Environment

Three machines, all running CentOS 7.4:

hanyu-210 10.20.0.210
hanyu-211 10.20.0.211
hanyu-212 10.20.0.212

Prerequisites:

A K8S cluster is already set up (1 master and 2 worker nodes).

1. Set up the GlusterFS cluster (unless otherwise noted, run on all three nodes)

Install the GlusterFS packages:

[root@hanyu-210 k8s_glusterfs]# yum install centos-release-gluster

[root@hanyu-210 k8s_glusterfs]# yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

Configure the GlusterFS cluster: start and enable glusterd on every node.

[root@hanyu-210 k8s_glusterfs]# systemctl start glusterd.service

[root@hanyu-210 k8s_glusterfs]# systemctl enable glusterd.service
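Optionally, confirm on each node that glusterd is actually active before continuing (a quick sanity check, not required by the steps below):

[root@hanyu-210 k8s_glusterfs]# systemctl status glusterd.service
[root@hanyu-210 k8s_glusterfs]# glusterfs --version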

On the hanyu-210 node only, probe the peers:

[root@hanyu-210 k8s_glusterfs]# gluster peer probe hanyu-210

[root@hanyu-210 k8s_glusterfs]# gluster peer probe hanyu-211

[root@hanyu-210 k8s_glusterfs]# gluster peer probe hanyu-212
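Probing the local node (hanyu-210) is effectively a no-op; gluster simply reports that probing localhost is not needed. As an optional check, verify that the other two peers have joined:

[root@hanyu-210 k8s_glusterfs]# gluster peer status

hanyu-211 and hanyu-212 should both be listed with State: Peer in Cluster (Connected).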

Create the brick data directory (on all three nodes):

[root@hanyu-210 k8s_glusterfs]# mkdir -p /opt/gfs_data

Create a replicated volume (on hanyu-210 only):

[root@hanyu-210 k8s_glusterfs]# gluster volume create k8s-volume replica 3 hanyu-210:/opt/gfs_data hanyu-211:/opt/gfs_data hanyu-212:/opt/gfs_data force

Start the volume:

[root@hanyu-210 k8s_glusterfs]# gluster volume start k8s-volume

Check the volume status:

[root@hanyu-210 k8s_glusterfs]# gluster volume status

Status of volume: k8s-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick hanyu-210:/opt/gfs_data               49152     0          Y       29445
Brick hanyu-212:/opt/gfs_data               49152     0          Y       32098
Self-heal Daemon on localhost               N/A       N/A        Y       29466
Self-heal Daemon on hanyu-212               N/A       N/A        Y       32119

Task Status of Volume k8s-volume
------------------------------------------------------------------------------
There are no active volume tasks

[root@hanyu-210 k8s_glusterfs]# gluster volume info

Volume Name: k8s-volume
Type: Replicate
Volume ID: 7d7ecba3-7bc9-4e09-89ed-493b3a6a2454
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: hanyu-210:/opt/gfs_data
Brick2: hanyu-211:/opt/gfs_data
Brick3: hanyu-212:/opt/gfs_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Verify that the GlusterFS cluster works

Pick any one of the hosts and run:

yum install -y glusterfs glusterfs-fuse
mkdir -p /root/test
mount -t glusterfs hanyu-210:k8s-volume /root/test
df -h
umount /root/test
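While the volume is still mounted (i.e. before the umount above), an optional write test confirms that data is replicated to every brick; the file name here is arbitrary:

echo "hello gluster" > /root/test/hello.txt
ls -l /opt/gfs_data/hello.txt   # the file should appear in this brick directory on all three nodes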

2. Use GlusterFS in K8S (run all of the following on the K8S master node)

Create the Endpoints for GlusterFS: kubectl apply -f glusterfs-cluster.yaml

[root@hanyu-210 k8s_glusterfs]# cat glusterfs-cluster.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
  namespace: default
subsets:
- addresses:
  - ip: 10.20.0.210
  - ip: 10.20.0.211
  - ip: 10.20.0.212
  ports:
  - port: 49152
    protocol: TCP
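Optionally, a Service with the same name can be created alongside these Endpoints so that they persist and can be referenced by name from pod volumes; this mirrors the pattern in the upstream Kubernetes GlusterFS example and is only a sketch, not required for the steps below:

apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
  namespace: default
spec:
  ports:
  - port: 49152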

Create an application that uses GlusterFS directly as a volume: kubectl apply -f nginx_deployment_test.yaml

[root@hanyu-210 k8s_glusterfs]# cat nginx_deployment_test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-test
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: storage001
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: storage001
        glusterfs:
          endpoints: glusterfs-cluster
          path: k8s-volume
          readOnly: false
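Once the Deployment is applied, the mount can be checked from inside one of the pods (replace <pod-name> with an actual name from the first command); the filesystem column should show one of the GlusterFS nodes serving k8s-volume:

kubectl get pods -l name=nginx -o wide
kubectl exec <pod-name> -- df -h /usr/share/nginx/html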


Create a PV backed by GlusterFS: kubectl apply -f glusterfs-pv.yaml

[root@hanyu-210 k8s_glusterfs]# cat glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: k8s-volume
    readOnly: false
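After applying the manifest, the PV should be listed with STATUS Available until a claim binds to it:

kubectl get pv glusterfs-pv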

Create a PVC that claims it: kubectl apply -f glusterfs-pvc.yaml

[root@hanyu-210 k8s_glusterfs]# cat glusterfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
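Because the claim's ReadWriteMany access mode matches the PV and its 2Gi request fits in the 10Gi capacity, it should bind to glusterfs-pv shortly after creation; both objects should then report STATUS Bound:

kubectl get pvc glusterfs-pvc
kubectl get pv glusterfs-pv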

Create an application that uses the PVC: kubectl apply -f nginx_deployment.yaml

[root@hanyu-210 k8s_glusterfs]# cat nginx_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: storage001
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: storage001
        persistentVolumeClaim:
          claimName: glusterfs-pvc
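As a rough end-to-end check (the test page and <pod-ip> placeholder below are illustrative), a page written into the GlusterFS volume should be served by every replica, since all pods mount the same k8s-volume:

# On any GlusterFS node, write a test page into the shared volume
mount -t glusterfs hanyu-210:k8s-volume /root/test
echo "hello from glusterfs" > /root/test/index.html
umount /root/test

# Back on the K8S master, fetch the page from a pod IP taken from the -o wide output
kubectl get pods -l name=nginx -o wide
curl http://<pod-ip>/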
