# Deploying GlusterFS Persistent Storage

## Deploying GlusterFS

Modify /etc/hosts:

[root@k8s-master01 amd64]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.100.129 k8s-master01
192.168.100.138 k8s-master02
192.168.100.139 k8s-master03
192.168.100.140 k8s-node01  gfs1
192.168.100.141 k8s-node02  gfs2
192.168.100.142 k8s-node03  gfs3

Install the yum repository (run on every machine):

[root@k8s-master01~]#yum -y install centos-release-gluster

Install GlusterFS (run on every machine):

[root@k8s-master01~]#yum -y install glusterfs glusterfs-fuse glusterfs-server

Installation is complete.

Start and enable GlusterFS (run on every machine):

[root@k8s-master01~]#systemctl start glusterd.service
[root@k8s-master01~]#systemctl enable glusterd.service
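Before forming the cluster it is worth confirming glusterd is actually active everywhere. A minimal sketch, assuming you can SSH to the node hostnames added to /etc/hosts above (alternatively, just run `systemctl is-active glusterd` on each node):

```bash
# Check that glusterd is running on each GlusterFS node.
for h in gfs1 gfs2 gfs3; do
  echo -n "$h: "
  ssh "$h" systemctl is-active glusterd.service
done
```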

Form the cluster (run on gfs1):

gluster peer probe gfs2  
gluster peer probe gfs3 

Verify (run on gfs1):

[root@node1 ~]# gluster peer status 
Number of Peers: 2

Hostname: gfs2
Uuid: c242e322-7ba5-4715-be02-1030e03e7972
State: Peer in Cluster (Connected)

Hostname: gfs3
Uuid: c7439332-f4b2-4c98-8217-16d63dcfe111
State: Peer in Cluster (Connected)

Seeing the other two nodes' information means the GlusterFS cluster has been formed successfully.
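You can also list the whole pool as an extra check; unlike `gluster peer status`, this includes the local node, so all three nodes should appear:

```bash
# Run on any cluster node; gfs1, gfs2 and gfs3 should all be listed as Connected.
gluster pool list
```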

## Using GlusterFS with Kubernetes

There are two ways to use it: manual and automatic. With the manual approach you have to create a GlusterFS volume yourself every time you need storage (GlusterFS stores its data on volumes). The automatic approach uses Kubernetes' Dynamic Provisioning feature so that Kubernetes creates GlusterFS volumes on its own, but it requires deploying the Heketi software first, and the machines running GlusterFS must also have a raw disk.

The automatic approach is recommended for production.
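For reference, the manual approach (not used in the rest of this article) looks roughly like this: create a GlusterFS volume by hand, then expose it to Kubernetes through an Endpoints object and a statically defined PV using the in-tree glusterfs volume plugin. A minimal sketch; the volume name gv0 and the brick path /data/brick1/gv0 are assumptions for illustration:

```bash
# 1) On a GlusterFS node: create and start a replica-3 volume by hand.
#    The brick directory must exist on every node; append "force" if the
#    bricks sit on the root filesystem.
gluster volume create gv0 replica 3 \
  gfs1:/data/brick1/gv0 gfs2:/data/brick1/gv0 gfs3:/data/brick1/gv0
gluster volume start gv0

# 2) On Kubernetes: point an Endpoints object at the GlusterFS servers and
#    define a static PV that mounts the volume via the in-tree glusterfs plugin.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.100.140
  - ip: 192.168.100.141
  - ip: 192.168.100.142
  ports:
  - port: 1          # required by the API schema, not used for the mount itself
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv-manual
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: gv0
    readOnly: false
EOF
```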

Automatic approach

The automatic approach requires deploying Heketi first. Heketi manages GlusterFS and exposes a RESTful API for Kubernetes to call. Heketi needs raw disks; here we assume each of the three GlusterFS nodes has a raw disk /dev/sdb attached.
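It is worth confirming the disk really is raw before handing it to Heketi, which refuses devices that already carry a partition table, filesystem, or LVM signature. A quick check, assuming the disk is /dev/sdb as above (wipefs -a is destructive, only run it on a disk whose data can be discarded):

```bash
# Run on each GlusterFS node.
lsblk /dev/sdb            # should show no partitions and no mountpoints
# If stale signatures are present and the disk can be wiped:
# wipefs -a /dev/sdb
```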

Deploy Heketi

Deploy it on:

master1

Install the yum repository:

yum install centos-release-gluster

Install Heketi:

yum install heketi heketi-client -y

Configure a passwordless SSH key so Heketi can log in to the GlusterFS nodes as root:

[root@heketi ~]# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
Generating public/private rsa key pair.
Your identification has been saved in /etc/heketi/heketi_key.
Your public key has been saved in /etc/heketi/heketi_key.pub.
The key fingerprint is:
SHA256:rTUHdBSM3C4yrr0DDsrJO3QVcoNzFKOgwI3r3Q8D/Yo root@heketi
The key's randomart image is:
+---[RSA 2048]----+
|o +  o+. ..=+.   |
|.+ o+.=. .o.o    |
|. . o= o  ..     |
| . . .. o....    |
|. . o...So+..    |
| ...o+...o o     |
| + +.o=+.        |
|  *E .o.o        |
|  .o    .o       |
+----[SHA256]-----+

[root@heketi ~]# touch /etc/heketi/gluster.json
[root@heketi ~]# chown -R heketi:heketi /etc/heketi/
[root@heketi ~]#  ll /etc/heketi/
total 12
-rw-r--r-- 1 heketi heketi    0 Jan 10 02:42 gluster.json
-rw-r--r-- 1 heketi heketi 1927 Apr 18  2019 heketi.json
-rw------- 1 heketi heketi 1675 Jan 10 02:42 heketi_key
-rw-r--r-- 1 heketi heketi  393 Jan 10 02:42 heketi_key.pub

Distribute the key:

[root@heketi etc]# ssh-copy-id -i /etc/heketi/heketi_key.pub k8s-node01
[root@heketi etc]# ssh-copy-id -i /etc/heketi/heketi_key.pub k8s-node02
[root@heketi etc]# ssh-copy-id -i /etc/heketi/heketi_key.pub k8s-node03
[root@heketi etc]# ssh-copy-id -i /etc/heketi/heketi_key.pub k8s-master02
[root@heketi etc]# ssh-copy-id -i /etc/heketi/heketi_key.pub k8s-master03
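A quick way to confirm the key actually works before pointing Heketi at it; a small check, assuming the node names above:

```bash
# Should print the hostname and GlusterFS version of each node without asking for a password.
for h in k8s-node01 k8s-node02 k8s-node03; do
  ssh -i /etc/heketi/heketi_key root@"$h" "hostname; gluster --version | head -1"
done
```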

Modify the configuration file

Edit /etc/heketi/heketi.json (unchanged parts omitted):

Run this on the Heketi node. The comments must be removed from the configuration file, otherwise Heketi will not start!

[root@heketi ~]# vi /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  // change this port to avoid conflicts
  "port": "8083",
  // defaults to false, so no authentication is required; leave unchanged
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "My Secret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    // mock: volumes created this way (test/development) cannot be mounted
    // kubernetes: used when GlusterFS itself runs in containers created by Kubernetes
    // ssh: used here
    "executor": "ssh",
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key", // 这里修改密钥
      "user": "root",
      "port": "22", //按自己ssh端口修改
      "fstab": "/etc/fstab" //创建的volume挂载位置
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db", // 存储位置

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "debug"
  }
}

The main changes here: the port is set to 8083 (to avoid conflicts), the executor is set to ssh, and the sshexec settings are adjusted accordingly.
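Since Heketi will not start if any // comments are left in the file, it helps to validate the result as strict JSON before starting the service. A small check, assuming jq is installed (python -m json.tool works as well):

```bash
# Exits non-zero and reports the offending position if the file is not valid JSON
# (for example, if a // comment was left behind).
jq . /etc/heketi/heketi.json > /dev/null && echo "heketi.json is valid JSON"
```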

Heketi needs passwordless SSH access to the GlusterFS nodes; this was already set up above when deploying the cluster, so it is not repeated here. Refer to the cluster deployment steps if needed.

Start:

systemctl enable heketi
systemctl start heketi

View the logs:

journalctl -u heketi

(Heketi data directory: /var/lib/heketi)

Verify:

curl http://192.168.XX.A:8083/hello

[root@master1 manifests]# curl http://127.0.0.1:8083/hello
Hello from Heketi

Or:

heketi-cli --server http://192.168.XX.A:8083 cluster list

[root@master1 manifests]# heketi-cli --server http://127.0.0.1:8083 cluster list
Clusters:

Configure the nodes:

Create topology.json:

{
"clusters": [
    {
        "nodes": [
            {
                "node": {
                    "hostnames": {
                        "manage": [
                            "gfs1"
                        ],
                        "storage": [
                            "192.168.100.140"
                        ]
                    },
                    "zone": 1
                },
                "devices": [
                    "/dev/sdb"
                ]
            },
            {
                "node": {
                    "hostnames": {
                        "manage": [
                            "gfs2"
                        ],
                        "storage": [
                            "192.168.100.141"
                        ]
                    },
                    "zone": 2
                },
                "devices": [
                    "/dev/sdb"
                ]
            },
            {
                "node": {
                    "hostnames": {
                        "manage": [
                            "gfs3"
                        ],
                        "storage": [
                            "192.168.100.142"
                        ]
                    },
                    "zone": 1
                },
                "devices": [
                    "/dev/sdb"
                ]
            }
        ]
    }
]
} 

Load the topology:

export HEKETI_CLI_SERVER=http://192.168.XX.A:8083
heketi-cli topology load --json=topology.json
[root@master1 heketi]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret '123456' topology load --json=topology.json
Creating cluster ... ID: 07de04e5d34d3af05e825830b6d3cb89
	Allowing file volumes on cluster.
	Allowing block volumes on cluster.
	Creating node gfs1 ... ID: 0a7b0cff7920604a2d350c7fe2902b51
		Adding device /dev/sdb ... OK
	Creating node gfs2 ... ID: 2e8d6b1220b5e3ea092727a8e6e03e55
		Adding device /dev/sdb ... OK
	Creating node gfs3 ... ID: 3ecbc8adca9a7bfdae84753209667f5f
		Adding device /dev/sdb ... OK

View the topology:

heketi-cli topology info
[root@master1 heketi]# heketi-cli topology info

Cluster Id: 07de04e5d34d3af05e825830b6d3cb89

    File:  true
    Block: true

    Volumes:


    Nodes:

	Node Id: 0a7b0cff7920604a2d350c7fe2902b51
	State: online
	Cluster Id: 07de04e5d34d3af05e825830b6d3cb89
	Zone: 1
	Management Hostnames: gfs1
	Storage Hostnames: 192.168.60.135
	Devices:
		Id:912af1b21f3ccd7201ea27e668763675   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):0       Free (GiB):500     
			Bricks:

	Node Id: 2e8d6b1220b5e3ea092727a8e6e03e55
	State: online
	Cluster Id: 07de04e5d34d3af05e825830b6d3cb89
	Zone: 2
	Management Hostnames: gfs2
	Storage Hostnames: 192.168.60.136
	Devices:
		Id:8021d6ade3f7d3f06102ffe1180a254c   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):0       Free (GiB):500     
			Bricks:

	Node Id: 3ecbc8adca9a7bfdae84753209667f5f
	State: online
	Cluster Id: 07de04e5d34d3af05e825830b6d3cb89
	Zone: 1
	Management Hostnames: gfs3
	Storage Hostnames: 192.168.60.137
	Devices:
		Id:17c62a855c7fd2d1ba139815d961424c   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):0       Free (GiB):500

Try creating a 2 GB volume:

heketi-cli volume create --size=2

List volumes:

heketi-cli volume list

Delete a volume:

heketi-cli volume delete <Id>
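To double-check that a Heketi-created volume is usable outside Kubernetes, you can mount it with the GlusterFS FUSE client. A sketch, assuming the default vol_&lt;id&gt; naming shown by `heketi-cli volume list` and using gfs1 as the mount server:

```bash
# Grab the first volume's name from "heketi-cli volume list"
# (each line looks like: Id:<id>  Cluster:<id>  Name:vol_<id>).
VOL=$(heketi-cli volume list | head -1 | grep -o 'vol_[0-9a-f]*')

mkdir -p /mnt/gfs-test
# Mount it over FUSE from any GlusterFS node and check the size.
mount -t glusterfs gfs1:/"$VOL" /mnt/gfs-test
df -h /mnt/gfs-test
umount /mnt/gfs-test
```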

Create a StorageClass in Kubernetes

Kubernetes uses the Dynamic Provisioning feature through a StorageClass. The StorageClass connects to Heketi and can create GlusterFS volumes on demand. The StorageClass still has to be created by a system administrator, but it does not need to be created every time: only a few are ever needed, and different PVCs can share the same StorageClass.

Using GlusterFS from Kubernetes:

 vim glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1     
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs # the storage provisioner; change it to match the storage backend
parameters:
  resturl: "http://192.168.60.132:8083" # URL of the Heketi API service
  restauthenabled: "true" # optional, defaults to "false"; must be "true" when the Heketi service has authentication enabled
  restuser: "admin" # optional; username to use when authentication is enabled
  restuserkey: "adminkey" # optional; key/password to use when authentication is enabled
  volumetype: "replicate:2" # optional; sets the volume type and its parameters. If omitted, the provisioner decides. "replicate:3" is a 3-replica replicate volume, "disperse:4:2" is a disperse volume with 4 data and 2 redundancy bricks, "none" is a distribute volume

Create it:

kubectl apply -f glusterfs-storageclass.yaml

Check the StorageClass:

[root@master1 heketi]#  kubectl get storageclass
NAME        PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
glusterfs   kubernetes.io/glusterfs   Delete          Immediate           false                  25s
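Note that restuserkey puts the admin key in plain text inside the StorageClass. The glusterfs provisioner also accepts secretNamespace/secretName parameters referencing a Secret of type kubernetes.io/glusterfs instead; a sketch of that variant, reusing the "adminkey" value from above:

```bash
# Create the Secret that holds the Heketi admin key.
kubectl create secret generic heketi-secret \
  --type="kubernetes.io/glusterfs" \
  --from-literal=key='adminkey' \
  --namespace=default

# In glusterfs-storageclass.yaml, replace the restuserkey parameter with:
#   secretNamespace: "default"
#   secretName: "heketi-secret"
```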

Test creating a PVC

Create the PVC manifest, glusterfs-pvc.yaml:
vim glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-test
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Create it:

[root@master1 heketi]# kubectl create -f glusterfs-pvc.yaml 
persistentvolumeclaim/glusterfs-test created

Check the PVC. A status of Bound means it was created successfully:

[root@master1 heketi]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
glusterfs-test   Bound    pvc-93a5101c-a8e3-440d-8f6e-f5ffa32698bf   1Gi        RWX            glusterfs      24s

Check the PV. It was created dynamically:

[root@master1 heketi]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pvc-93a5101c-a8e3-440d-8f6e-f5ffa32698bf   1Gi        RWX            Delete           Bound    default/glusterfs-test   glusterfs               46s
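The PVC can now be mounted like any other volume. A minimal test Pod (the image and paths here are just for illustration) that writes a file onto the GlusterFS volume:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-test-pod
  namespace: default
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello-glusterfs > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: glusterfs-test
EOF

# Once the Pod is Running, verify the file landed on the volume.
kubectl exec glusterfs-test-pod -- cat /data/hello.txt
```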

Check GlusterFS usage:

[root@master1 heketi]# heketi-cli topology info

Cluster Id: 07de04e5d34d3af05e825830b6d3cb89

    File:  true
    Block: true

    Volumes:

	Name: vol_3f8b03ac1b6e26f97638af6700426e16
	Size: 1
	Id: 3f8b03ac1b6e26f97638af6700426e16
	Cluster Id: 07de04e5d34d3af05e825830b6d3cb89
	Mount: 192.168.60.135:vol_3f8b03ac1b6e26f97638af6700426e16
	Mount Options: backup-volfile-servers=192.168.60.136,192.168.60.137
	Durability Type: replicate
	Replica: 2
	Snapshot: Enabled
	Snapshot Factor: 1.00

		Bricks:
			Id: 291a8ea6406c19e47ee59ea9ff448948
			Path: /var/lib/heketi/mounts/vg_912af1b21f3ccd7201ea27e668763675/brick_291a8ea6406c19e47ee59ea9ff448948/brick
			Size (GiB): 1
			Node: 0a7b0cff7920604a2d350c7fe2902b51
			Device: 912af1b21f3ccd7201ea27e668763675

			Id: 814d7115ece0dbcff68508b4ddd53915
			Path: /var/lib/heketi/mounts/vg_8021d6ade3f7d3f06102ffe1180a254c/brick_814d7115ece0dbcff68508b4ddd53915/brick
			Size (GiB): 1
			Node: 2e8d6b1220b5e3ea092727a8e6e03e55
			Device: 8021d6ade3f7d3f06102ffe1180a254c



    Nodes:

	Node Id: 0a7b0cff7920604a2d350c7fe2902b51
	State: online
	Cluster Id: 07de04e5d34d3af05e825830b6d3cb89
	Zone: 1
	Management Hostnames: gfs1
	Storage Hostnames: 192.168.60.135
	Devices:
		Id:912af1b21f3ccd7201ea27e668763675   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):1       Free (GiB):498     
			Bricks:
				Id:291a8ea6406c19e47ee59ea9ff448948   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_912af1b21f3ccd7201ea27e668763675/brick_291a8ea6406c19e47ee59ea9ff448948/brick

	Node Id: 2e8d6b1220b5e3ea092727a8e6e03e55
	State: online
	Cluster Id: 07de04e5d34d3af05e825830b6d3cb89
	Zone: 2
	Management Hostnames: gfs2
	Storage Hostnames: 192.168.60.136
	Devices:
		Id:8021d6ade3f7d3f06102ffe1180a254c   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):1       Free (GiB):498     
			Bricks:
				Id:814d7115ece0dbcff68508b4ddd53915   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_8021d6ade3f7d3f06102ffe1180a254c/brick_814d7115ece0dbcff68508b4ddd53915/brick

	Node Id: 3ecbc8adca9a7bfdae84753209667f5f
	State: online
	Cluster Id: 07de04e5d34d3af05e825830b6d3cb89
	Zone: 1
	Management Hostnames: gfs3
	Storage Hostnames: 192.168.60.137
	Devices:
		Id:17c62a855c7fd2d1ba139815d961424c   Name:/dev/sdb            State:online    Size (GiB):500     Used (GiB):0       Free (GiB):500
			Bricks:
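You can also cross-check the dynamically created volume directly on one of the GlusterFS nodes; a quick look, run on gfs1 (the volume name matches the vol_&lt;id&gt; shown in the topology output above):

```bash
# List the volumes GlusterFS itself knows about and inspect the one Heketi just created.
gluster volume list
gluster volume info vol_3f8b03ac1b6e26f97638af6700426e16   # name taken from the topology output above
```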

## Problems

Some Heketi volumes clearly exist but cannot be deleted.
Fix: delete the mounts/ directory under the Heketi data directory /var/lib/heketi/, empty the database file with "> heketi.db", and redo the setup.

"Can't initialize physical volume '/dev/sdb1' of volume group 'vg1' without --ff"
This happens because the previous VG and PV were not removed.
Use vgremove and then pvremove to delete the volume group and the physical volume in turn.
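A sketch of that cleanup, assuming the stale objects are the vg1 / /dev/sdb1 from the error message (check vgs/pvs first, unmount any leftover brick mounts under /var/lib/heketi/mounts, and make sure nothing on them is still needed):

```bash
# Run on the affected GlusterFS node. Destructive: removes the old LVM metadata.
vgs && pvs                 # confirm the stale VG/PV names first
lvremove -y vg1            # remove any logical volumes left in the group
vgremove -y vg1            # remove the volume group
pvremove -y /dev/sdb1      # remove the physical volume
# Optionally clear any remaining signatures so Heketi accepts the disk again:
# wipefs -a /dev/sdb
```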

