I. Create the RHEL 7.3 virtual machines
server1  mfs master        172.25.58.1
server2  mfs chunkserver1  172.25.58.2
server3  mfs chunkserver2  172.25.58.3
client   (physical host)   172.25.58.250
II. Prepare a local package repository
MooseFS download page: https://moosefs.com/download/
Download the packages in advance, then share them from the physical host over HTTP:
[root@foundation58 mfs]# ls
moosefs-cgi-3.0.103-1.rhsystemd.x86_64.rpm moosefs-client-3.0.103-1.rhsystemd.x86_64.rpm
moosefs-cgiserv-3.0.103-1.rhsystemd.x86_64.rpm moosefs-master-3.0.103-1.rhsystemd.x86_64.rpm
moosefs-chunkserver-3.0.103-1.rhsystemd.x86_64.rpm moosefs-metalogger-3.0.103-1.rhsystemd.x86_64.rpm
moosefs-cli-3.0.103-1.rhsystemd.x86_64.rpm
[root@foundation58 mfs]# createrepo .
Spawning worker 0 with 2 pkgs
Spawning worker 1 with 2 pkgs
Spawning worker 2 with 2 pkgs
Spawning worker 3 with 1 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
[root@foundation58 mfs]# cd /var/www/html/
[root@foundation58 html]# ls
2019 4.0 mfs new rhel6.5 rhel7.0 rhel7.3
[root@foundation58 html]# chmod -R 777 mfs/
Configure the yum repository on each of the virtual machines:
[root@server1 yum.repos.d]# cat mfs.repo
[mfs]
name=mfs
baseurl=http://172.25.58.250/mfs
gpgcheck=0
III. Configure the master node
1. Install the packages on server1:
yum install moosefs-master moosefs-cgi moosefs-cgiserv moosefs-cli
2. Add name resolution on the master
[root@server1 mfs]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.58.1 server1 mfsmaster
172.25.58.2 server2
172.25.58.3 server3
3. Start the master service and check the listening ports
[root@server1 mfs]# pwd
/etc/mfs
[root@server1 mfs]# ls
mfsexports.cfg mfsexports.cfg.sample mfsmaster.cfg mfsmaster.cfg.sample mfstopology.cfg mfstopology.cfg.sample
[root@server1 mfs]# systemctl start moosefs-master
[root@server1 mfs]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9419 0.0.0.0:* LISTEN 11549/mfsmaster
tcp 0 0 0.0.0.0:9420 0.0.0.0:* LISTEN 11549/mfsmaster
tcp 0 0 0.0.0.0:9421 0.0.0.0:* LISTEN 11549/mfsmaster
Port description:
- 9419: port on which the master accepts metalogger connections (default 9419); metaloggers periodically sync the metadata changelog from the master
- 9420: port on which the master accepts chunkserver connections (default 9420)
- 9421: port on which the master accepts client connections (default 9421)
4. Start the CGI monitoring service and check its port (9425)
[root@server1 mfs]# systemctl start moosefs-cgiserv
[root@server1 mfs]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9419 0.0.0.0:* LISTEN 11549/mfsmaster
tcp 0 0 0.0.0.0:9420 0.0.0.0:* LISTEN 11549/mfsmaster
tcp 0 0 0.0.0.0:9421 0.0.0.0:* LISTEN 11549/mfsmaster
tcp 0 0 0.0.0.0:9425 0.0.0.0:* LISTEN 11584/python
5. Check in a browser: http://172.25.58.1:9425/mfs.cgi
The monitoring page should appear.
IV. Configure the two chunkserver nodes (server2 and server3)
Chunkservers are the nodes that actually store the data.
1. Install the chunkserver package on server2 and server3; the installation creates an mfs user on each node.
[root@server2 ~]# yum install moosefs-chunkserver -y
[root@server3 ~]# yum install moosefs-chunkserver -y
2. Add name resolution
[root@server3 3.0.103]# vim /etc/hosts
172.25.58.1 server1 mfsmaster
[root@server2 3.0.103]# vim /etc/hosts
172.25.58.1 server1 mfsmaster
3. Edit the chunkserver storage configuration
[root@server2 ~]# vim /etc/mfs/mfshdd.cfg
Append on the last line: /mnt/chunk1    # the storage location
[root@server3 ~]# vim /etc/mfs/mfshdd.cfg
Append on the last line: /mnt/chunk2
4. Create each storage directory and change its owner and group to mfs, so the chunkserver can read and write in it
[root@server2 ~]# mkdir /mnt/chunk1
[root@server2 ~]# chown mfs.mfs /mnt/chunk1/
[root@server3 ~]# mkdir /mnt/chunk2
[root@server3 ~]# chown mfs.mfs /mnt/chunk2/
5. Start the service
[root@server2 mfs]# systemctl start moosefs-chunkserver
[root@server3 mfs]# systemctl start moosefs-chunkserver
Both storage nodes, server2 and server3, are now being monitored, and their details can be seen on the web page.
After the service starts, each storage directory contains 256 subdirectories; the chunk data is stored inside them.
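Those 256 subdirectories are named with two-digit uppercase hexadecimal numbers, 00 through FF. As a sketch, the same layout can be reproduced locally (the /tmp/chunk_demo path is just for illustration, not anything MooseFS uses):

```shell
# Recreate the 00..FF naming scheme of a chunkserver storage directory.
# /tmp/chunk_demo is a hypothetical demo path, not used by MooseFS.
demo=/tmp/chunk_demo
mkdir -p "$demo"
for i in $(seq 0 255); do
    mkdir -p "$demo/$(printf '%02X' "$i")"
done
ls "$demo" | wc -l    # prints 256
```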
V. Set up the client on the physical host
1. Install the client package
[root@foundation58 mfs]# ls
moosefs-cgi-3.0.103-1.rhsystemd.x86_64.rpm
moosefs-cgiserv-3.0.103-1.rhsystemd.x86_64.rpm
moosefs-chunkserver-3.0.103-1.rhsystemd.x86_64.rpm
moosefs-cli-3.0.103-1.rhsystemd.x86_64.rpm
moosefs-client-3.0.103-1.rhsystemd.x86_64.rpm
moosefs-master-3.0.103-1.rhsystemd.x86_64.rpm
moosefs-metalogger-3.0.103-1.rhsystemd.x86_64.rpm
[root@foundation58 mfs]# rpm -ivh moosefs-client-3.0.103-1.rhsystemd.x86_64.rpm
2. Add name resolution
[root@foundation58 ~]# vim /etc/hosts
172.25.58.1 server1 mfsmaster
3. Create the mount directory on the physical host and edit the mount configuration
[root@foundation58 ~]# mkdir /mnt/mfs
[root@foundation58 ~]# vim /etc/mfs/mfsmount.cfg
Append on the last line:
/mnt/mfs
[root@foundation58 ~]# mfsmount    # mount the MooseFS file system on the client
mfsmaster accepted connection with parameters: read-write,restricted_ip,admin ; root mapped to root:root
[root@foundation58 ~]# df
Filesystem      1K-blocks    Used Available Use% Mounted on
/dev/loop2        3762278 3762278         0 100% /var/www/html/rhel7.3
mfsmaster:9421   35622912 2702912  32920000   8% /mnt/mfs
The file system is now mounted.
4. Create two directories under the mount point and check how many replicas each one keeps
[root@foundation58 ~]# cd /mnt/mfs/
[root@foundation58 mfs]# ls
[root@foundation58 mfs]# mkdir dir1 dir2
[root@foundation58 mfs]# ls
dir1 dir2
[root@foundation58 mfs]# mfsgetgoal dir1
dir1: 2
[root@foundation58 mfs]# mfsgetgoal dir2
dir2: 2
5. Change the replication goal of dir1 to 1 (for comparison in the following test)
[root@foundation58 mfs]# mfssetgoal -r 1 dir1/
dir1/:
inodes with goal changed: 1
inodes with goal not changed: 0
inodes with permission denied: 0
[root@foundation58 mfs]# mfsgetgoal dir1/
dir1/: 1
6. Test:
- Copy a file into /mnt/mfs/dir1 and inspect it; the file is stored entirely on server3.
[root@foundation58 mfs]# cp /etc/passwd dir1
[root@foundation58 mfs]# cp /etc/fstab dir2
[root@foundation58 mfs]# cd dir1
[root@foundation58 dir1]# ls
passwd
[root@foundation58 dir1]# mfsfileinfo passwd
## only one copy is stored, because the goal of dir1 was just changed to 1
passwd:
chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
copy 1: 172.25.58.3:9422 (status:VALID)
[root@foundation58 dir1]# cd ..
[root@foundation58 mfs]# cd dir2
[root@foundation58 dir2]# mfsfileinfo fstab
fstab:
chunk 0: 0000000000000002_00000001 / (id:2 ver:1) ## two copies, the default goal
copy 1: 172.25.58.2:9422 (status:VALID)
copy 2: 172.25.58.3:9422 (status:VALID)
- Stop the chunkserver that stores passwd (server3). The data that also has a copy on server2 is still available, but the data stored only on server3 is gone.
[root@server3 3.0.103]# systemctl stop moosefs-chunkserver
[root@foundation58 dir2]# cd ..
[root@foundation58 mfs]# cd dir1/
[root@foundation58 dir1]# mfsfileinfo passwd
passwd:
chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
no valid copies !!!
## passwd's data can no longer be read; trying to open the file now will hang, because its only copy was on the stopped chunkserver
[root@foundation58 dir2]# mfsfileinfo fstab ## fstab is unaffected, because it has two copies
fstab:
chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
copy 1: 172.25.58.2:9422 (status:VALID) ## only one valid copy remains
[root@server3 3.0.103]# systemctl start moosefs-chunkserver
After the chunkserver on server3 is started again, the copies recover:
[root@foundation58 dir1]# mfsfileinfo *
passwd:
chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
copy 1: 172.25.58.3:9422 (status:VALID)
[root@foundation58 dir1]# cd ..
[root@foundation58 mfs]# cd dir2
[root@foundation58 dir2]# mfsfileinfo *
fstab:
chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
copy 1: 172.25.58.2:9422 (status:VALID)
copy 2: 172.25.58.3:9422 (status:VALID)
Test again: create a 100 MB file in the first directory.
The file is split into two parts stored on the two chunkserver nodes, and each chunk has a single copy.
Reason: MooseFS stores file data in chunks of at most 64 MB; when a file is larger than 64 MB, it is split into multiple chunks.
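The chunk count can be sanity-checked with plain shell arithmetic (64 MiB chunk size):

```shell
# How many 64 MiB chunks does a 100 MiB file occupy? (ceiling division)
chunk_size=$((64 * 1024 * 1024))
file_size=$((100 * 1024 * 1024))
chunks=$(( (file_size + chunk_size - 1) / chunk_size ))
echo "$chunks"    # prints 2
```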
[root@foundation58 dir2]# cd ..
[root@foundation58 mfs]# cd dir1
[root@foundation58 dir1]# ls
passwd
[root@foundation58 dir1]# dd if=/dev/zero of=file1 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.536621 s, 195 MB/s
[root@foundation58 dir1]# mfscheckfile file1
file1:
chunks with 1 copy: 2
[root@foundation58 dir1]# mfsfileinfo file1
file1:
chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
copy 1: 172.25.58.3:9422 (status:VALID)
chunk 1: 0000000000000004_00000001 / (id:4 ver:1)
copy 1: 172.25.58.2:9422 (status:VALID)
Likewise, create a 100 MB file in the second directory.
This file is also split across the two chunkserver nodes, and each chunk is stored with two copies.
[root@foundation58 dir1]# cd ../dir2
[root@foundation58 dir2]# ls
fstab
[root@foundation58 dir2]# dd if=/dev/zero of=file2 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 4.80126 s, 21.8 MB/s
[root@foundation58 dir2]# mfscheckfile file2
file2:
chunks with 2 copies: 2
[root@foundation58 dir2]# mfsfileinfo file2
file2:
chunk 0: 0000000000000005_00000001 / (id:5 ver:1)
copy 1: 172.25.58.2:9422 (status:VALID)
copy 2: 172.25.58.3:9422 (status:VALID)
chunk 1: 0000000000000006_00000001 / (id:6 ver:1)
copy 1: 172.25.58.2:9422 (status:VALID)
copy 2: 172.25.58.3:9422 (status:VALID)
[root@foundation58 dir2]# ls
file2 fstab
[root@foundation58 dir2]# ls ../dir1
file1 passwd
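The numbers reported by mfscheckfile for both files follow from: total chunk copies = number of chunks × goal. A quick arithmetic check of the two tests above:

```shell
# Total chunk copies kept in the cluster = number of chunks x goal.
chunks=2        # a 100 MiB file splits into two 64 MiB chunks
goal_dir1=1     # goal of dir1 (set earlier with mfssetgoal -r 1)
goal_dir2=2     # default goal of dir2
echo "file1: $((chunks * goal_dir1)) chunk copies"    # 2
echo "file2: $((chunks * goal_dir2)) chunk copies"    # 4
```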
Now stop the chunkserver on server3 again:
[root@server3 ~]# systemctl stop moosefs-chunkserver
The data in the second directory still has a complete copy on server2,
while the chunks of the first directory's file that were stored on server3 are gone:
[root@foundation58 dir2]# mfsfileinfo file2
file2:
chunk 0: 0000000000000005_00000001 / (id:5 ver:1)
copy 1: 172.25.58.2:9422 (status:VALID)
chunk 1: 0000000000000006_00000001 / (id:6 ver:1)
copy 1: 172.25.58.2:9422 (status:VALID)
[root@foundation58 dir2]# cd ../dir1
[root@foundation58 dir1]# mfsfileinfo *
file1:
chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
no valid copies !!!
chunk 1: 0000000000000004_00000001 / (id:4 ver:1)
no valid copies !!!
Start the chunkserver on server3 again and verify once more:
[root@foundation58 dir1]# mfsfileinfo file1
file1:
chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
copy 1: 172.25.58.3:9422 (status:VALID)
chunk 1: 0000000000000004_00000001 / (id:4 ver:1)
copy 1: 172.25.58.3:9422 (status:VALID)
[root@foundation58 dir1]# cd ../dir2
[root@foundation58 dir2]# mfsfileinfo file2
file2:
chunk 0: 0000000000000005_00000001 / (id:5 ver:1)
copy 1: 172.25.58.2:9422 (status:VALID)
copy 2: 172.25.58.3:9422 (status:VALID)
chunk 1: 0000000000000006_00000001 / (id:6 ver:1)
copy 1: 172.25.58.2:9422 (status:VALID)
copy 2: 172.25.58.3:9422 (status:VALID)
Summary
When the client (the physical host) stores data, it first contacts the MooseFS master (server1, port 9421).
Behind the master sit the chunkservers that actually store the data (server2 and server3), which connect to the master on port 9420.
The master uses its internal balancing algorithm to distribute the client's data sensibly across the back-end nodes:
large files are split into many chunks, which are spread over the chunkservers.
This improves storage efficiency and gives the client fast read rates,
because each server's I/O capacity is limited, and storing everything on a single node would hurt performance.
Distributed storage also lets you set the number of replicas, so the failure of one chunkserver does not lose all of the client's data.
The key to this distributed storage is the master's allocation and lookup;
to the client, the file still appears to be stored as one single large file.