Most GlusterFS material you find online covers either the replicate mode (RAID1-like mirroring) or the unify mode on its own; examples that combine the two are rare.
My idea is this: group 2-3 machines with replicate into a small cluster that has built-in redundancy, then chain several such clusters together with unify to get one large-capacity file system.
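For the four servers used later in this article, the layout works out roughly as follows (this is only a summary of the client configuration built below):

afr1   = replicate(server1:brick,    server4:brick)
afr2   = replicate(server2:brick,    server3:brick)
afr-ns = replicate(server1:brick-ns, server2:brick-ns)   (the namespace)
unify  = round-robin over afr1 and afr2, using afr-ns as the namespace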
Server side
Ubuntu 9.10 already ships a GlusterFS package, so installation only takes:
aptitude install glusterfs-server
Check the version:
glusterfs --version
You can see that Ubuntu 9.10 ships version 2.0.2:
root@server1:~# glusterfs --version
glusterfs 2.0.2 built on Jun 29 2009 23:49:59
Repository revision: 07019da2e16534d527215a91904298ede09bb798
Copyright (c) 2006-2009 Z RESEARCH Inc. <http://www.zresearch.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
root@server1:~#
Next, we need to create a few directories (these are needed on every server, see the note after the commands):
mkdir /data/
mkdir /data/export
mkdir /data/export-ns
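These directories have to exist on every server in the cluster. If the servers can be reached over ssh as root, a short loop saves some typing (just a convenience sketch; the host names server1 to server4 are assumptions, substitute your own):

for h in server1 server2 server3 server4; do
  ssh root@$h mkdir -p /data/export /data/export-ns
done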
Now we need to edit the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol (back up the original /etc/glusterfs/glusterfsd.vol first) so that the client (192.168.0.102 = client1.example.com) can connect to the exported directory (/data/export):
cp /etc/glusterfs/glusterfsd.vol /etc/glusterfs/glusterfsd.vol_orig
cat /dev/null > /etc/glusterfs/glusterfsd.vol
vi /etc/glusterfs/glusterfsd.vol
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume posix-ns
  type storage/posix
  option directory /data/export-ns
end-volume

volume locks-ns
  type features/locks
  subvolumes posix-ns
end-volume

volume brick-ns
  type performance/io-threads
  option thread-count 8
  subvolumes locks-ns
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.brick-ns.allow *
  subvolumes brick brick-ns
end-volume
Note that wildcards are allowed in the IP addresses (e.g. 192.168.*) and that you can list several addresses separated by commas (e.g. 192.168.0.102,192.168.0.103).
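For example, to allow only the two clients mentioned above instead of everyone, the two auth lines in the configuration above would become (the addresses are purely illustrative):

option auth.addr.brick.allow 192.168.0.102,192.168.0.103
option auth.addr.brick-ns.allow 192.168.0.102,192.168.0.103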
Finally, we start the GlusterFS server:
/etc/init.d/glusterfs-server start
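To confirm the daemon really came up, you can look for the glusterfsd process and its TCP listener (an optional check; this 2.0 release should listen on port 6996 by default):

ps aux | grep glusterfsd
netstat -tlnp | grep glusterfsd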
Client side
Install the GlusterFS client with the following command:
aptitude install glusterfs-client glusterfs-server
Then we create a directory for the mount point:
mkdir /mnt/glusterfs
Next we create the file /etc/glusterfs/glusterfs.vol (after backing up the original /etc/glusterfs/glusterfs.vol):
cp /etc/glusterfs/glusterfs.vol /etc/glusterfs/glusterfs.vol_orig
cat /dev/null > /etc/glusterfs/glusterfs.vol
vi /etc/glusterfs/glusterfs.vol
### Add client feature and attach to remote subvolume of server1
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1      # IP address of the remote brick
  option remote-subvolume brick       # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server2
volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2      # IP address of the remote brick
  option remote-subvolume brick       # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server3
volume brick3
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.3      # IP address of the remote brick
  option remote-subvolume brick       # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server4
volume brick4
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.4      # IP address of the remote brick
  option remote-subvolume brick       # name of the remote volume
end-volume

### Add client feature and attach to remote namespace subvolume of server1
volume brick1-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1      # IP address of the remote brick
  option remote-subvolume brick-ns    # name of the remote namespace volume
end-volume

### Add client feature and attach to remote namespace subvolume of server2
volume brick2-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2      # IP address of the remote brick
  option remote-subvolume brick-ns    # name of the remote namespace volume
end-volume

volume afr1
  type cluster/afr
  subvolumes brick1 brick4
end-volume

volume afr2
  type cluster/afr
  subvolumes brick2 brick3
end-volume

volume afr-ns
  type cluster/afr
  subvolumes brick1-ns brick2-ns
end-volume

volume unify
  type cluster/unify
  option scheduler rr      # round robin
  option namespace afr-ns
  subvolumes afr1 afr2
end-volume
Make sure the host names or IP addresses given in the option remote-host lines are correct.
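Before mounting, it can also help to check that the client can actually reach every server (a quick sanity check, assuming the default GlusterFS 2.0 listen port 6996):

for ip in 192.168.0.1 192.168.0.2 192.168.0.3 192.168.0.4; do
  nc -z -w 2 $ip 6996 && echo "$ip reachable" || echo "$ip NOT reachable"
done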
Now we can mount the GlusterFS file system on /mnt/glusterfs:
glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs
or
mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs
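If the mount does not show up, you can run the client with a more verbose log to see what went wrong (an optional troubleshooting step; if these options differ in your build, check glusterfs --help):

glusterfs -f /etc/glusterfs/glusterfs.vol --log-file=/var/log/glusterfs/client.log --log-level=DEBUG /mnt/glusterfs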
Afterwards you can check the mounted file systems:
root@client1:~# mount
/dev/sda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
/etc/glusterfs/glusterfs.vol on /mnt/glusterfs type fuse.glusterfs (rw,max_read=131072,allow_other,default_permissions)
root@client1:~#
root@client1:/mnt/glusterfs/test# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 145G 6.1G 131G 5% /
udev 470M 240K 470M 1% /dev
none 470M 1.5M 469M 1% /dev/shm
none 470M 116K 470M 1% /var/run
none 470M 0 470M 0% /var/lock
none 470M 0 470M 0% /lib/init/rw
/etc/glusterfs/glusterfs.vol
144G 8.3G 128G 7% /mnt/glusterfs
Now look at the servers:
root@server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 145G 2.3G 135G 2% /
udev 470M 228K 470M 1% /dev
none 470M 396K 470M 1% /dev/shm
none 470M 92K 470M 1% /var/run
none 470M 0 470M 0% /var/lock
none 470M 0 470M 0% /lib/init/rw
root@server1:~#
root@server2:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 71G 2.3G 65G 4% /
udev 470M 232K 470M 1% /dev
none 470M 164K 470M 1% /dev/shm
none 470M 92K 470M 1% /var/run
none 470M 0 470M 0% /var/lock
none 470M 0 470M 0% /lib/init/rw
root@server2:~#
root@server3:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 73G 5.7G 64G 9% /
udev 486M 228K 486M 1% /dev
none 486M 1.2M 485M 1% /dev/shm
none 486M 100K 486M 1% /var/run
none 486M 0 486M 0% /var/lock
none 486M 0 486M 0% /lib/init/rw
As you can see, a replicate set is limited to the size of its smallest volume, while unify adds the sizes of its subvolumes together.
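Put differently, for the layout built above the usable space works out roughly like this (a rough estimate that ignores file system overhead):

size(afr1)  ≈ min(brick on server1, brick on server4)
size(afr2)  ≈ min(brick on server2, brick on server3)
size(unify) ≈ size(afr1) + size(afr2)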
To avoid mounting the GlusterFS share by hand every time, you can modify /etc/fstab so the client mounts it automatically at boot.
Open /etc/fstab and append at the end:
vi /etc/fstab
[...]
/etc/glusterfs/glusterfs.vol /mnt/glusterfs glusterfs defaults 0 0
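You can also try out the new entry without rebooting (an optional shortcut): unmount the share and let mount re-read /etc/fstab:

umount /mnt/glusterfs
mount -a
df -h /mnt/glusterfs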
To check that the entry also works across a reboot, restart the client:
reboot
After the reboot, run:
df -h
and
mount
Testing
First, let's create some files:
touch /mnt/glusterfs/{1,2,3,4,5,6,7,8,9,10}
Then, on each server, run
ls /data/export
and see what each server holds.
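If the servers allow root logins over ssh, you can compare all of them from the client in one go (a small convenience sketch; the IP addresses are the ones from the client configuration above):

for ip in 192.168.0.1 192.168.0.2 192.168.0.3 192.168.0.4; do
  echo "== $ip =="
  ssh root@$ip ls /data/export
done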
Next, try creating some directories under the mount point:
mkdir /mnt/glusterfs/test1 /mnt/glusterfs/test2 /mnt/glusterfs/test3
and look at the /data/export directory on each server again.
You will see that files are placed on the unify subvolumes in round-robin fashion, while directories are created on every server.
Starting from the client configuration shown above you can change things however you like and build more complex setups with more machines.