MooseFS (MFS) Distributed High-Availability Cluster Setup

一、Environment Preparation
1. At least six virtual machines (4 cores, 8 GB RAM each), with network connectivity between them
2. OS version: CentOS Linux release 7.3
3. Enable vm.overcommit_memory

echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl vm.overcommit_memory=1

4. Disable transparent huge pages (the relevant commands follow the # prompt)

#grep Huge /proc/meminfo
AnonHugePages:      8192 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

#grubby --default-kernel
/boot/vmlinuz-3.10.0-514.el7.x86_64

#grubby --info /boot/vmlinuz-3.10.0-514.el7.x86_64
index=0
kernel=/boot/vmlinuz-3.10.0-514.el7.x86_64
args="ro crashkernel=auto rd.lvm.lv=vg00/root rd.lvm.lv=vg00/swap rhgb quiet net.ifnames=0 biosdevname=0 "
root=/dev/mapper/vg00-root
initrd=/boot/initramfs-3.10.0-514.el7.x86_64.img
title=CentOS Linux (3.10.0-514.el7.x86_64) 7 (Core)

#grubby --args="transparent_hugepage=never" --update-kernel /boot/vmlinuz-3.10.0-514.el7.x86_64

# grubby --info /boot/vmlinuz-3.10.0-514.el7.x86_64
index=0
kernel=/boot/vmlinuz-3.10.0-514.el7.x86_64
args="ro crashkernel=auto rd.lvm.lv=vg00/root rd.lvm.lv=vg00/swap rhgb quiet net.ifnames=0 biosdevname=0 transparent_hugepage=never"
root=/dev/mapper/vg00-root
initrd=/boot/initramfs-3.10.0-514.el7.x86_64.img
title=CentOS Linux (3.10.0-514.el7.x86_64) 7 (Core)

Reboot for the change to take effect, then verify:
# grep Huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
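Before rebooting, the currently active THP mode can also be inspected directly via sysfs (the bracketed value is the active one); this is just a sanity check, not a required step:

```
# cat /sys/kernel/mm/transparent_hugepage/enabled
```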

5. Configure the MooseFS repository


Add the yum GPG key
# curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS

Download the repository configuration file
# curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
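A quick way to confirm the repository is visible before installing anything (optional):

```
# yum repolist | grep -i moosefs
```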

二、Installation (one master, one metalogger, three chunk servers, one client)
1. For the Master Server:
yum install moosefs-master moosefs-cli moosefs-cgi moosefs-cgiserv -y

2. For Metaloggers:
yum install moosefs-metalogger -y

3. For Chunkservers:
yum install moosefs-chunkserver -y

4. For Clients:
yum install moosefs-client -y

5. Create directories and set ownership

mkdir -p /mfs/mfs_meta   (shared path name for mfsmaster and mfsmetalogger: the master stores mfsmaster metadata here, the backup stores mfsmetalogger data here)
mkdir -p /mfs/mfschunkserver_meta
chown -R mfs.mfs /mfs
chown -R mfs.mfs /data

# Note on the steps above: create /mfs/mfs_meta on the master and metalogger nodes, then set ownership; on the chunkserver nodes create the chunkserver directory, then set ownership.
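The per-role directory setup above can be collected into one small helper. This is a hypothetical sketch (the function name and the optional prefix argument are ours, not part of MooseFS); it assumes the mfs user/group was already created by the RPM install:

```shell
# Hypothetical helper consolidating step 5; run as root on each node.
# An optional path prefix allows a non-root dry run.
setup_mfs_dirs() {
  local role="$1" prefix="${2:-}"
  local dirs
  case "$role" in
    master|metalogger) dirs="$prefix/mfs/mfs_meta" ;;
    chunkserver)       dirs="$prefix/mfs/mfschunkserver_meta $prefix/data" ;;
    *) echo "usage: setup_mfs_dirs master|metalogger|chunkserver [prefix]" >&2; return 1 ;;
  esac
  mkdir -p $dirs
  # The mfs user is created by the MooseFS RPMs; skip chown if it is absent.
  if id mfs >/dev/null 2>&1; then chown -R mfs:mfs $dirs; fi
}

# e.g. on a chunkserver node (as root):
# setup_mfs_dirs chunkserver
```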
6. Install keepalived (on all keepalived nodes)

yum install keepalived -y 

7. Configure keepalived

vim /etc/keepalived/keepalived.conf
global_defs {
	notification_email {
		qifeng.wu@haiziwang.com
	}
	notification_email_from qifeng.wu@haiziwang.com
	smtp_server 127.0.0.1
	smtp_connect_timeout 30
	router_id mfs-mysql
}
vrrp_sync_group mfs-mysql {
	group {
		mfs-mysql
	}
}
vrrp_script chk_mfs {
	script "/etc/keepalived/failover.sh mfs_check"
	interval 2
}
vrrp_instance mfs-mysql {
	state BACKUP
	nopreempt
	interface eth0
	unicast_peer {
		xxx.xxx.xxx.xxx   # the peer node's IP
	}
	virtual_router_id 251
	priority 200
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass mfs
	}
	track_script {
		chk_mfs
	}
	virtual_ipaddress {
		xxx.xxx.xxx.xxx   # the VIP
	}
	notify_master "/etc/keepalived/failover.sh notify_master"
	notify_backup "/etc/keepalived/failover.sh notify_backup"
	notify_fault "/etc/keepalived/failover.sh notify_fault"
	notify_stop "/etc/keepalived/failover.sh notify_stop"
}
# The backup node's priority must be lower than 200, since the master here is set to 200; just make sure the backup's priority does not exceed the master's.
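On the backup node, keepalived.conf is identical except for the peer address and the priority; the lines that differ look like this (IPs are placeholders, as above):

```
vrrp_instance mfs-mysql {
	state BACKUP
	nopreempt
	unicast_peer {
		xxx.xxx.xxx.xxx   # the master node's IP
	}
	priority 150          # any value below the master's 200
	...                   # remaining settings identical to the master
}
```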

8. Create the /etc/keepalived/failover.sh script

#!/bin/bash
#*********************************#
# example failover.sh command
#
#
#*********************************#
str_date="date +%Y-%m-%d-%H:%M:%S"
LOGFILE="/etc/keepalived/mfs_state.log"
CHECKLOG="/etc/keepalived/mfs_check.log"
KEEPSTAT="/etc/keepalived/keepalived.status"
mfs_check(){
	keep_stat=`cat $KEEPSTAT 2>/dev/null`
	# state 0 (backup) or no state file yet: there is no mfsmaster to check here
	if [ "${keep_stat:-0}" -eq 0 ]
	then
		exit 0
	fi
	mfs_master=`ps -ef | grep -i mfsmaster | grep -v "grep" | wc -l`
	if [ ${mfs_master} -eq 0 ]
	then
		pkill keepalived
		echo "`$str_date` Kill keepalived" >> $CHECKLOG
	fi
}
notify_master(){
	echo "`$str_date` [notify_master]" >> $LOGFILE
	mfs_master=`ps -ef | grep -i mfsmaster | grep -v grep | wc -l`
	if [ ${mfs_master} -eq 0 ]
	then
		mfsmetalogger stop
		mfsmaster -a
		echo "`$str_date` mfsmaster is starting !" >> $LOGFILE
	fi
	echo 1 > $KEEPSTAT
}
notify_backup(){
	echo 0 > $KEEPSTAT
	echo "`$str_date` [notify_backup]" >> $LOGFILE
}
notify_fault(){
	echo "`$str_date` [notify_fault]" >> $LOGFILE
}
notify_stop(){
	echo "`$str_date` [notify_stop]" >> $LOGFILE
}
case "$1" in
	mfs_check)
		mfs_check
		exit 0
		;;
	notify_master)
		notify_master && exit 0
		;;
	notify_backup)
		notify_backup && exit 0
		;;
	notify_fault)
		notify_fault && exit 0
		;;
	notify_stop)
		notify_stop && exit 0
		;;
esac


Then make the script executable:

chmod +x /etc/keepalived/failover.sh
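failover.sh keeps the last VRRP state in /etc/keepalived/keepalived.status; seeding it once before the first start gives mfs_check a defined value to read. A small helper sketch (init_keep_state is our name, not a MooseFS or keepalived tool):

```shell
# Seed the state file failover.sh reads; 0 = backup, 1 = master.
init_keep_state() {
  local state_file="${1:-/etc/keepalived/keepalived.status}"
  echo 0 > "$state_file"
}

# On each keepalived node, as root:
# init_keep_state
```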

三、Configure and Start the Services
1. Start mfsmaster (all master nodes)
(1) Edit /etc/mfs/mfsmaster.cfg:
# path where mfsmaster stores its metadata
DATA_PATH = /mfs/mfs_meta

(2) Configure /etc/mfs/mfsexports.cfg

/ rw,alldirs,maproot=0,mingoal=2,mintrashtime=600s,maxtrashtime=1d
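For reference, mfsexports.cfg lines normally take the form client-spec, exported directory, options, where the client spec can be `*` (any host) or a subnet. A hedged example limiting this export to one subnet (the addresses are illustrative):

```
192.168.10.0/24  /  rw,alldirs,maproot=0,mingoal=2,mintrashtime=600s,maxtrashtime=1d
```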

(3) Start mfsmaster
On first start, the empty metadata file has to be copied into place:

# cp -a /var/lib/mfs/metadata.mfs.empty /mfs/mfs_meta/metadata.mfs

Start it (on the master node only; do not start it on the backup):

#mfsmaster start

open files limit has been set to: 16384
working directory: /mfs/mfs_meta
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
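With the master up, its state can be inspected from the same node; mfscli and the CGI monitor were installed alongside the master above (a quick sanity check):

```
# General master information (version, memory, total/free space):
mfscli -SIN
# Optionally start the web monitor:
mfscgiserv start        # then browse to http://<master-ip>:9425
```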

2. Start keepalived

systemctl start keepalived

3. Start mfschunkserver (all chunkserver nodes)
(1) Configuration file /etc/mfs/mfschunkserver.cfg:
DATA_PATH = /mfs/mfschunkserver_meta
MASTER_HOST = xxx.xxx.xxx.xxx

(2) Configuration file /etc/mfs/mfshdd.cfg:
/data -5GiB
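Each mfshdd.cfg line names a storage path for chunks; a size with a leading minus means "use this path but always leave that much space free", while a plain size caps usage. For example (the second path is illustrative):

```
/data -5GiB          # use /data, but keep 5 GiB reserved
#/mnt/disk2 1TiB     # use at most 1 TiB on this disk
```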

(3) Start mfschunkserver

# mfschunkserver start
open files limit has been set to: 16384
working directory: /mfs/mfschunkserver_meta
lockfile created and locked
setting glibc malloc arena max to 4
setting glibc malloc arena test to 4
initializing mfschunkserver modules ...
hdd space manager: path to scan: /data/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly

4. Start mfsmetalogger (all metalogger nodes)
(1) Configuration file /etc/mfs/mfsmetalogger.cfg:
DATA_PATH = /mfs/mfs_meta
MASTER_HOST = xxx.xxx.xxx.xxx

(2) Start mfsmetalogger (on the backup node only; do not start it on the master):

# mfsmetalogger start

open files limit has been set to: 4096
working directory: /mfs/mfs_meta
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
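After a minute or two the metalogger should have downloaded copies of the master's metadata into its DATA_PATH; the file names below are what MooseFS 3 typically writes (check on the metalogger node):

```
# ls -l /mfs/mfs_meta/
# Expect changelog_ml.*.mfs files and metadata_ml.mfs.back
```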

The installation is now complete.
四、Test the Installation
1. Install the client
yum install moosefs-client -y

2. Mount the filesystem
mkdir /MFS_data
mfsmount /MFS_data -H xxx.xxx.xxx.xxx
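A quick check that the mount succeeded:

```
# df -h /MFS_data        # the filesystem column should show the master address/port
# mfsgetgoal /MFS_data   # shows the replication goal of the mount point
```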

3. Speed test
dd if=/dev/zero of=/MFS_data/test.dbf bs=8k count=200000 conv=fdatasync
4. Write data or upload files from the client, then check whether usage of the /data directory on the chunk servers increases; if it does, the cluster is working.
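A matching read test, plus the chunkserver-side check from step 4 (run df on each chunkserver):

```
# Read the file back to gauge read throughput:
dd if=/MFS_data/test.dbf of=/dev/null bs=8k
# On each chunkserver:
df -h /data
```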
