GlusterFS (Distributed Storage) Deployment

GlusterFS (distributed storage) is a mainstream storage choice in enterprise deployments.
Note: recent versions are not very friendly to CentOS 6 and clients may fail to mount; CentOS 7 is recommended for deploying GlusterFS.
GlusterFS deployment:
Reference: http://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/

Preparation:

Add all nodes to /etc/hosts on every machine:
192.168.1.21 server1
192.168.1.111 server2
Then set each node's hostname to its corresponding server name.
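These two steps can be scripted in one pass; a minimal sketch, assuming the two-node list above (HOSTS_FILE points at a scratch file here so the sketch is safe to dry-run; use /etc/hosts on a real node):

```shell
#!/bin/sh
# Append every GlusterFS node to the hosts file in one pass.
# HOSTS_FILE is a scratch file for a safe dry-run; use /etc/hosts on a real node.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.21 server1
192.168.1.111 server2
EOF
# On each node, set its own hostname to the matching name, e.g. on CentOS 7:
#   hostnamectl set-hostname server1
grep -c 'server[12]$' "$HOSTS_FILE"   # prints 2
```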

The GlusterFS data disk should be separate from the system disk, so attach and mount a dedicated disk if you don't already have one; skip this step if you do.
Format the disk:

fdisk /dev/xvdb          # interactive: n, p, 1, accept defaults, then w to write
mkfs.ext4 /dev/xvdb1
mkdir -p /glusterfs/data
echo '/dev/xvdb1 /glusterfs/data ext4 defaults 1 2' >> /etc/fstab
mount -a && mount
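Because fdisk is interactive, the same preparation can be scripted non-interactively with parted; a sketch, assuming the same /dev/xvdb disk (the destructive commands are guarded so the script is a no-op on machines without that disk):

```shell
#!/bin/sh
# Non-interactive version of the partition/format/mount steps above.
DEV=/dev/xvdb
MOUNTPOINT=/glusterfs/data
FSTAB_LINE="${DEV}1 $MOUNTPOINT ext4 defaults 1 2"
if [ -b "$DEV" ]; then
    # One primary partition spanning the whole disk, then format it.
    parted -s "$DEV" mklabel msdos mkpart primary ext4 1MiB 100%
    mkfs.ext4 "${DEV}1"
    mkdir -p "$MOUNTPOINT"
    echo "$FSTAB_LINE" >> /etc/fstab
    mount -a
fi
echo "$FSTAB_LINE"
```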

Installation:

On CentOS 7, add the GlusterFS repo:

vim /etc/yum.repos.d/gluster.repo

[glusterfs]
name=glusterfs
#baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.7/CentOS/epel-6.5/x86_64/
baseurl=http://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.12/
enabled=1
gpgcheck=0

Install with yum:

yum install glusterfs-server

On CentOS 6, download the 3.12 RPMs directly:

wget https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/glusterfs-server-3.12.14-1.el6.x86_64.rpm
wget https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/glusterfs-api-3.12.14-1.el6.x86_64.rpm
wget https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/glusterfs-cli-3.12.14-1.el6.x86_64.rpm
wget https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/glusterfs-client-xlators-3.12.14-1.el6.x86_64.rpm
wget https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/glusterfs-fuse-3.12.14-1.el6.x86_64.rpm
wget https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/glusterfs-geo-replication-3.12.14-1.el6.x86_64.rpm
wget https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/glusterfs-libs-3.12.14-1.el6.x86_64.rpm
wget https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/glusterfs-3.12.14-1.el6.x86_64.rpm
wget https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/userspace-rcu-0.7.16-2.el6.x86_64.rpm

Install the downloaded RPMs (forced, skipping dependency checks):

rpm -ivh *.rpm --force --nodeps

After installation, start the service:

service glusterd start

Add peer nodes:

On server1:

gluster peer probe server2

On server2:

gluster peer probe server1

Check the peer status:

gluster peer status
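On larger clusters, probing every other node from one server is enough, since peers share the pool membership (probing server1 back from server2, as above, just records server1 by name instead of IP). A sketch, with the gluster calls guarded so it is harmless where the CLI is absent:

```shell
#!/bin/sh
# Probe all peers from one node, then show the pool status.
PEERS="server2"                # extend for larger clusters, e.g. "server2 server3"
if command -v gluster >/dev/null 2>&1; then
    for peer in $PEERS; do
        gluster peer probe "$peer"
    done
    gluster peer status
fi
echo $PEERS | wc -w            # number of peers probed
```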

Remove a node (note: for any removal, whether detaching a peer or deleting a volume, all nodes must be online or the operation fails):

gluster peer detach server2

Create a volume (create the brick directory on both servers; this guide uses /home/test-volume, but a path under the /glusterfs/data mount prepared earlier works the same way):

mkdir /home/test-volume
gluster volume create gv1 replica 2 server1:/home/test-volume server2:/home/test-volume

Start the volume:

gluster volume start gv1

If creation or start fails, clear the stale GlusterFS metadata in /home/test-volume and retry:

cd /home/test-volume
rm -rf .glusterfs
setfattr -x trusted.glusterfs.volume-id ./
setfattr -x trusted.gfid ./
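The cleanup can be wrapped in a small helper so it is easy to repeat per brick; a sketch (the extended-attribute removal needs root and the attr tools, so it is guarded; demonstrated here on a scratch directory rather than a live brick):

```shell
#!/bin/sh
# Remove stale GlusterFS metadata from a brick directory so
# `gluster volume create` can reuse it.
reset_brick() {
    brick=$1
    rm -rf "$brick/.glusterfs"
    if command -v setfattr >/dev/null 2>&1; then
        setfattr -x trusted.glusterfs.volume-id "$brick" 2>/dev/null
        setfattr -x trusted.gfid "$brick" 2>/dev/null
    fi
}
# Demonstrated on a scratch directory; use /home/test-volume on a real node.
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.glusterfs"
reset_brick "$BRICK"
ls -A "$BRICK" | wc -l     # prints 0: metadata gone
```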

Check the volume status:

gluster volume info

Usage:

First, install the GlusterFS client on every machine that will use the volume.
Add the hosts entries:

192.168.1.21 server1
192.168.1.111 server2

Install the client:

yum install glusterfs-fuse

Mount the GlusterFS volume.
After the steps above, we can mount the volume wherever we like.
Mount it on every machine that needs access:

mount -t glusterfs server1:/gv1 /mnt/

After mounting, enter the /mnt/ directory, create a file, and check whether it appears on all the machines.
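To confirm replication, a marker file written through one mount should appear on all the others; a sketch, assuming the volume is mounted at /mnt as above (the write is guarded so the script only touches a real glusterfs mount):

```shell
#!/bin/sh
# Write a marker file through the mount and read it back;
# run this on one machine, then check the file appears on the others.
MNT=/mnt
MARKER="hello-from-$(hostname)"
if grep -qs " $MNT fuse.glusterfs " /proc/mounts; then
    echo "$MARKER" > "$MNT/replication-test.txt"
    cat "$MNT/replication-test.txt"
fi
echo "$MARKER" | grep -c '^hello-from-'   # prints 1
```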

Kubernetes usage (reference):
https://github.com/kubernetes/kubernetes/tree/8fd414537b5143ab039cb910590237cabf4af783/examples/volumes/glusterfs

Define Endpoints pointing at the GlusterFS nodes (the port value is a dummy; Kubernetes requires a legal port number here, but GlusterFS does not use it):

[root@k8s-g1 gluster-yaml]# cat glusterfs-endpoints.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "192.168.1.21"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "192.168.1.111"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}

Create the glusterfs Service:
[root@k8s-g1 gluster-yaml]# cat glusterfs-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}

Create a test deployment (note: the extensions/v1beta1 API below matches the older cluster used in this guide; on Kubernetes 1.16+, Deployments use apps/v1 and require a spec.selector):

[root@k8s-g1 gluster-yaml]# cat nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: glusterfsvol
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: glusterfsvol
        glusterfs:
          endpoints: glusterfs-cluster
          path: gv1
          readOnly: false

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort

Create the resources:

kubectl create -f glusterfs-endpoints.json
kubectl create -f glusterfs-service.json
kubectl create -f nginx-deployment.yaml

Verify the endpoints and services:

[root@k8s-g1 gluster-yaml]# kubectl get ep
NAME                ENDPOINTS                         AGE
glusterfs-cluster   192.168.1.111:1,192.168.1.21:1    26m
nginx-service       172.17.102.4:80,172.17.102.5:80   10m

[root@k8s-g1 gluster-yaml]# kubectl get svc
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
glusterfs-cluster   ClusterIP   10.10.10.117   <none>        1/TCP          25m
nginx-service       NodePort    10.10.10.109   <none>        80:33725/TCP   10m

Now we can curl 10.10.10.109 and check whether the content we put into test-volume is served.
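The check can be scripted; a sketch, assuming the ClusterIP 10.10.10.109 from the output above (the curl is guarded and time-limited so the script degrades gracefully when run off-cluster):

```shell
#!/bin/sh
# Fetch the nginx index through the Service ClusterIP; it should return
# whatever was written into the gv1 volume (served from /usr/share/nginx/html).
SVC_IP=10.10.10.109    # from `kubectl get svc` above
URL="http://$SVC_IP/"
echo "checking $URL"
if command -v curl >/dev/null 2>&1; then
    curl -s --max-time 5 "$URL" || echo "service not reachable from here"
fi
```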
