OpenStack environment:
Controller 10.101.0.21
ComputeA 10.101.0.30
ComputeB 10.101.0.31
ComputeC 10.101.0.32
ComputeD 10.101.0.33
Goal:
Use the four compute nodes as a combined server-and-client service cluster: carve out a directory on each node as a shared directory, replicated with replica 2, so that the system as a whole keeps running when one compute node fails.
Steps:
Create a new LV on each compute node and format it as ext4:
lvcreate -L 100g -n glusterfs_lv openstackvg
mkfs.ext4 /dev/openstackvg/glusterfs_lv
mkdir -p /home/glusterfs
mount /dev/openstackvg/glusterfs_lv /home/glusterfs
Each compute node's /home/glusterfs now serves as a brick for the shared volume.
Next, start the gluster service on every node. Stop the firewall first (or open the required Gluster ports):
service iptables stop
service glusterd start
Then, on ComputeA, probe the other nodes to add them to the trusted storage pool:
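To keep this brick mount across reboots, an /etc/fstab entry can be added on each node (a sketch; device path and mount point match the commands above):

```
/dev/openstackvg/glusterfs_lv  /home/glusterfs  ext4  defaults  0 0
```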
gluster peer probe ComputeB
gluster peer probe ComputeC
gluster peer probe ComputeD
[root@Compute-34-40-B5-E0-10-B2 ~]# gluster peer status
Number of Peers: 3

Hostname: Compute-5c-f3-fc-96-b9-34
Uuid: 13a8ae46-1b8a-4f55-b792-e75a9abed8c6
State: Peer in Cluster (Connected)

Hostname: Compute-5c-f3-fc-5e-4e-32
Uuid: 284e43e9-e9e4-42cc-aac2-be01424aecd6
State: Peer in Cluster (Connected)

Hostname: 10.101.0.30
Uuid: 9fa8d87c-841f-4e55-87c1-42c7b2be57af
State: Peer in Cluster (Connected)
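Before creating the volume it is worth checking that every probed node actually reached the Connected state. A small sketch that counts connected peers in `gluster peer status` output (the sample output is embedded so the script is self-contained; on a real node, use `status=$(gluster peer status)` instead):

```shell
#!/bin/sh
# Count "Peer in Cluster (Connected)" lines and compare against the
# number of peers we expect (3 here, since ComputeA probed 3 nodes).
status='Number of Peers: 3

Hostname: Compute-5c-f3-fc-96-b9-34
State: Peer in Cluster (Connected)

Hostname: Compute-5c-f3-fc-5e-4e-32
State: Peer in Cluster (Connected)

Hostname: 10.101.0.30
State: Peer in Cluster (Connected)'

expected=3
connected=$(printf '%s\n' "$status" | grep -c 'Peer in Cluster (Connected)')
if [ "$connected" -eq "$expected" ]; then
    echo "all $expected peers connected"
else
    echo "only $connected of $expected peers connected"
fi
```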
Next, create the GlusterFS volume. (Note the CLI binary is gluster, not glusterfs; since the volume info below shows a Distributed-Replicate 2 x 2 layout, only replica 2 is needed, and the start command requires the volume name.)
gluster volume create gluster_vol replica 2 ComputeA:/home/glusterfs ComputeB:/home/glusterfs ComputeC:/home/glusterfs ComputeD:/home/glusterfs
gluster volume start gluster_vol
[root@Compute-34-40-B5-E0-10-B2 ~]# gluster volume status
Status of volume: gluster_vol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick Compute-34-40-b5-e0-10-b2:/opt/lbs/glusterfs_lv 49152 Y 2137
Brick Compute-5c-f3-fc-5e-4e-32:/opt/lbs/glusterfs_lv 49152 Y 2180
Brick Compute-5c-f3-fc-96-b9-34:/opt/lbs/glusterfs_lv 49153 Y 27050
Brick 10.101.0.30:/opt/lbs/glusterfs_lv 49152 Y 2038
NFS Server on localhost 2049 Y 2153
Self-heal Daemon on localhost N/A Y 2148
NFS Server on Compute-5c-f3-fc-5e-4e-32 2049 Y 2187
Self-heal Daemon on Compute-5c-f3-fc-5e-4e-32 N/A Y 2188
NFS Server on Compute-5c-f3-fc-96-b9-34 2049 Y 5096
Self-heal Daemon on Compute-5c-f3-fc-96-b9-34 N/A Y 5103
NFS Server on 10.101.0.30 2049 Y 2042
Self-heal Daemon on 10.101.0.30 N/A Y 2046
There are no active volume tasks
[root@Compute-34-40-B5-E0-10-B2 ~]# gluster volume info
Volume Name: gluster_vol
Type: Distributed-Replicate
Volume ID: b4717c0a-50ad-41a0-a444-612054cfd1b3
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: Compute-34-40-b5-e0-10-b2:/opt/lbs/glusterfs_lv
Brick2: Compute-5c-f3-fc-5e-4e-32:/opt/lbs/glusterfs_lv
Brick3: Compute-5c-f3-fc-96-b9-34:/opt/lbs/glusterfs_lv
Brick4: 10.101.0.30:/opt/lbs/glusterfs_lv
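The "2 x 2 = 4" layout means bricks pair up in the order they were listed in the create command: Brick1/Brick2 form one replica set, Brick3/Brick4 the other, and files are distributed across the two sets. So the cluster survives losing one node, but not both nodes of the same pair. A sketch of the grouping (node names only, for illustration):

```shell
#!/bin/sh
# With "replica 2", consecutive bricks in the create command form replica
# pairs; walk the brick list two at a time to show the resulting sets.
bricks="ComputeA ComputeB ComputeC ComputeD"
set -- $bricks
pair=1
while [ $# -ge 2 ]; do
    echo "replica pair $pair: $1 <-> $2"
    pair=$((pair + 1))
    shift 2
done
```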
Finally, on every node, mount the GlusterFS volume onto the Nova instances directory and give nova ownership:
mount -t glusterfs <local-node-IP>:/gluster_vol /var/lib/nova/instances
chown -R nova:nova /var/lib/nova/instances
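To remount automatically at boot, an /etc/fstab entry can be added on each node (a sketch: `_netdev` defers the mount until networking is up, and `localhost` is an assumption that works here because every node runs the gluster server itself):

```
localhost:/gluster_vol  /var/lib/nova/instances  glusterfs  defaults,_netdev  0 0
```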