Building a MongoDB 4.0 Sharded Cluster on Linux Servers
Planned layout:
Hostname    Host IP        mongos       config server   shard
ygathree1   172.31.57.83   port 20000   port 21000      ports 22001,22002,22003,22004
ygathree2   172.31.57.84   port 20000   port 21000      ports 22001,22002,22003,22004
ygathree3   172.31.57.85   port 20000   port 21000      ports 22001,22002,22003,22004
ygathree4   172.31.57.86   port 20000   port 21000      ports 22001,22002,22003,22004
Building the cluster
1) Create directories
Create the directories and log files on each of ygathree1/ygathree2/ygathree3/ygathree4:
mkdir -p /YGA_Project/software/mongodb/mongos/{log,conf}
mkdir -p /YGA_Project/software/mongodb/mongoconf/{data,log,conf}
mkdir -p /YGA_Project/software/mongodb/shard1/{data,log,conf}
mkdir -p /YGA_Project/software/mongodb/shard2/{data,log,conf}
mkdir -p /YGA_Project/software/mongodb/shard3/{data,log,conf}
mkdir -p /YGA_Project/software/mongodb/shard4/{data,log,conf}
touch /YGA_Project/software/mongodb/mongos/log/mongos.log
touch /YGA_Project/software/mongodb/mongoconf/log/mongoconf.log
touch /YGA_Project/software/mongodb/shard1/log/shard1.log
touch /YGA_Project/software/mongodb/shard2/log/shard2.log
touch /YGA_Project/software/mongodb/shard3/log/shard3.log
touch /YGA_Project/software/mongodb/shard4/log/shard4.log
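The mkdir/touch commands above can be condensed into one loop. A minimal sketch, written under /tmp so it runs anywhere; on the real hosts, set BASE=/YGA_Project/software/mongodb:

```shell
# Create the per-component directory layout and empty log files.
BASE=/tmp/mongodb_layout          # real hosts: /YGA_Project/software/mongodb
mkdir -p "$BASE/mongos"/{log,conf}
touch "$BASE/mongos/log/mongos.log"
for comp in mongoconf shard1 shard2 shard3 shard4; do
  mkdir -p "$BASE/$comp"/{data,log,conf}
  touch "$BASE/$comp/log/$comp.log"
done
```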
2) Configure the config server replica set
On all four servers, create the config server configuration file mongoconf.conf, then start the service.
dbpath=/YGA_Project/software/mongodb/mongoconf/data
logpath=/YGA_Project/software/mongodb/mongoconf/log/mongoconf.log
logappend=true
bind_ip=0.0.0.0
port=21000
journal=true
fork=true
syncdelay=60
oplogSize=1000
configsvr=true
# config server replica set name
replSet=confs
Start the config server:
cd /YGA_Project/software/mongodb/bin
./mongod -f /YGA_Project/software/mongodb/mongoconf/conf/mongoconf.conf
Log in to one server and initialize the config server replica set:
./mongo 172.31.57.83:21000
use admin
rs.initiate({_id:"confs",members:[{_id:0,host:"172.31.57.83:21000"},{_id:1,host:"172.31.57.84:21000"},{_id:2,host:"172.31.57.85:21000"},{_id:3,host:"172.31.57.86:21000"}]})
Check the replica set configuration:
rs.status()
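Once initiated, the member states can also be checked non-interactively. A minimal sketch that builds the --eval string first; run the commented line on a live config server node and expect one PRIMARY with the rest SECONDARY:

```shell
# JavaScript snippet printing each replica set member's host and state.
EVAL='rs.status().members.forEach(function (m) { print(m.name + " " + m.stateStr); })'
# On a live node, run it against the config server port:
# ./mongo --quiet 172.31.57.83:21000 --eval "$EVAL"
```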
3) Configure the shard replica sets
Configure the shards on all four servers. In shard1's conf directory, create the file shard1.conf:
shard1.conf:
dbpath=/YGA_Project/software/mongodb/shard1/data
logpath=/YGA_Project/software/mongodb/shard1/log/shard1.log
bind_ip=0.0.0.0
port=22001
logappend=true
#nohttpinterface=true
fork=true
oplogSize=4096
journal=true
#engine=wiredTiger
#cacheSizeGB=38G
shardsvr=true
replSet=shard1
cd /YGA_Project/software/mongodb/shard1/conf/
rz -y   # upload shard1.conf here (rz is from the lrzsz package)
Start the shard1 service on all four servers:
cd /YGA_Project/software/mongodb/bin
./mongod -f /YGA_Project/software/mongodb/shard1/conf/shard1.conf
The service should now be running and shard1's port 22001 listening. Next, initialize the shard1 replica set.
Log in to MongoDB on ygathree1 (172.31.57.83):
./mongo 172.31.57.83:22001
use admin
rs.initiate({_id:"shard1",members:[{_id:0,host:"172.31.57.83:22001"},{_id:1,host:"172.31.57.84:22001"},{_id:2,host:"172.31.57.85:22001"},{_id:3,host:"172.31.57.86:22001",arbiterOnly:true}]})
Check the replica set status:
rs.status();
Repeat the same steps for shard2, shard3, and shard4. Initialize the shard2 replica set on ygathree2, the shard3 replica set on ygathree3, and the shard4 replica set on ygathree4.
shard2.conf:
dbpath=/YGA_Project/software/mongodb/shard2/data
logpath=/YGA_Project/software/mongodb/shard2/log/shard2.log
bind_ip=0.0.0.0
port=22002
logappend=true
#nohttpinterface = true
fork=true
oplogSize=4096
journal=true
#engine=wiredTiger
#cacheSizeGB=38G
shardsvr=true
replSet=shard2
cd /YGA_Project/software/mongodb/shard2/conf/
rz -y   # upload shard2.conf
Start the shard2 service on all four servers:
cd /YGA_Project/software/mongodb/bin
./mongod -f /YGA_Project/software/mongodb/shard2/conf/shard2.conf
Initialize the shard2 replica set on ygathree2:
Log in to MongoDB at 172.31.57.84:
./mongo 172.31.57.84:22002
use admin
rs.initiate({_id:"shard2",members:[{_id:0,host:"172.31.57.83:22002"},{_id:1,host:"172.31.57.84:22002"},{_id:2,host:"172.31.57.85:22002",arbiterOnly:true},{_id:3,host:"172.31.57.86:22002"}]})
Check the replica set status:
rs.status();
shard3.conf:
dbpath=/YGA_Project/software/mongodb/shard3/data
logpath=/YGA_Project/software/mongodb/shard3/log/shard3.log
bind_ip=0.0.0.0
port=22003
logappend=true
#nohttpinterface = true
fork=true
oplogSize=4096
journal=true
#engine=wiredTiger
#cacheSizeGB=38G
shardsvr=true
replSet=shard3
cd /YGA_Project/software/mongodb/shard3/conf/
rz -y   # upload shard3.conf
Start the shard3 service on all four servers:
cd /YGA_Project/software/mongodb/bin
./mongod -f /YGA_Project/software/mongodb/shard3/conf/shard3.conf
Initialize the shard3 replica set on ygathree3:
Log in to MongoDB at 172.31.57.85:
./mongo 172.31.57.85:22003
use admin
rs.initiate({_id:"shard3",members:[{_id:0,host:"172.31.57.83:22003"},{_id:1,host:"172.31.57.84:22003",arbiterOnly:true},{_id:2,host:"172.31.57.85:22003"},{_id:3,host:"172.31.57.86:22003"}]})
Check the replica set status:
rs.status();
shard4.conf:
dbpath=/YGA_Project/software/mongodb/shard4/data
logpath=/YGA_Project/software/mongodb/shard4/log/shard4.log
bind_ip=0.0.0.0
port=22004
logappend=true
#nohttpinterface = true
fork=true
oplogSize=4096
journal=true
#engine=wiredTiger
#cacheSizeGB=38G
shardsvr=true
replSet=shard4
cd /YGA_Project/software/mongodb/shard4/conf/
rz -y   # upload shard4.conf
Start the shard4 service on all four servers:
cd /YGA_Project/software/mongodb/bin
./mongod -f /YGA_Project/software/mongodb/shard4/conf/shard4.conf
Initialize the shard4 replica set on ygathree4:
Log in to MongoDB at 172.31.57.86:
./mongo 172.31.57.86:22004
use admin
rs.initiate({_id:"shard4",members:[{_id:0,host:"172.31.57.83:22004",arbiterOnly:true},{_id:1,host:"172.31.57.84:22004"},{_id:2,host:"172.31.57.85:22004"},{_id:3,host:"172.31.57.86:22004"}]})
Check the replica set status:
rs.status();
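The four shard config files are identical except for the shard number in the paths, the port, and the replSet name, so they can be generated from a template. A hypothetical sketch, writing under /tmp so it runs anywhere; on the real hosts, set BASE=/YGA_Project/software/mongodb:

```shell
BASE=/tmp/mongodb_conf_demo       # real hosts: /YGA_Project/software/mongodb
for i in 1 2 3 4; do
  mkdir -p "$BASE/shard$i/conf"
  # Each shard differs only in its directories, its port (2200$i),
  # and its replica set name (shard$i).
  cat > "$BASE/shard$i/conf/shard$i.conf" <<EOF
dbpath=$BASE/shard$i/data
logpath=$BASE/shard$i/log/shard$i.log
bind_ip=0.0.0.0
port=2200$i
logappend=true
fork=true
oplogSize=4096
journal=true
shardsvr=true
replSet=shard$i
EOF
done
```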
4) Configure the mongos routers
With the config servers and shard servers running on all four machines, configure mongos on each of them. mongos loads its metadata into memory and has no data directory of its own; its configdb option points at the config server replica set.
mongos.conf:
logpath=/YGA_Project/software/mongodb/mongos/log/mongos.log
logappend=true
bind_ip=0.0.0.0
port=20000
maxConns=10000
configdb=confs/172.31.57.83:21000,172.31.57.84:21000,172.31.57.85:21000,172.31.57.86:21000
fork=true
cd /YGA_Project/software/mongodb/mongos/conf/
rz -y   # upload mongos.conf
Start mongos on all four servers:
cd /YGA_Project/software/mongodb/bin
./mongos -f /YGA_Project/software/mongodb/mongos/conf/mongos.conf
Log in to one mongos and add the four shards:
./mongo 172.31.57.83:20000
use admin
db.runCommand({addshard:"shard1/172.31.57.83:22001,172.31.57.84:22001,172.31.57.85:22001,172.31.57.86:22001"})
db.runCommand({addshard:"shard2/172.31.57.83:22002,172.31.57.84:22002,172.31.57.85:22002,172.31.57.86:22002"})
db.runCommand({addshard:"shard3/172.31.57.83:22003,172.31.57.84:22003,172.31.57.85:22003,172.31.57.86:22003"})
db.runCommand({addshard:"shard4/172.31.57.83:22004,172.31.57.84:22004,172.31.57.85:22004,172.31.57.86:22004"})
Check the sharded cluster:
sh.status();
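Note that adding shards alone does not shard any data: sharding must still be enabled per database and per collection. A sketch using a hypothetical database testdb and collection testdb.logs (these names are placeholders, not part of the original setup):

```shell
# Write the mongo shell commands to a script file; run it against any mongos.
cat > /tmp/enable_sharding.js <<'EOF'
// Hypothetical example database/collection; replace with your own names.
sh.enableSharding("testdb");
// A hashed shard key on _id spreads writes evenly across the four shards.
sh.shardCollection("testdb.logs", { _id: "hashed" });
sh.status();
EOF
# On a live cluster:
# ./mongo 172.31.57.83:20000 /tmp/enable_sharding.js
```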
Note: the startup order matters. First start the config server on every node, then the shards on every node, and finally mongos on every node, as follows:
cd /YGA_Project/software/mongodb/bin
./mongod -f /YGA_Project/software/mongodb/mongoconf/conf/mongoconf.conf
./mongod -f /YGA_Project/software/mongodb/shard1/conf/shard1.conf
./mongod -f /YGA_Project/software/mongodb/shard2/conf/shard2.conf
./mongod -f /YGA_Project/software/mongodb/shard3/conf/shard3.conf
./mongod -f /YGA_Project/software/mongodb/shard4/conf/shard4.conf
./mongos -f /YGA_Project/software/mongodb/mongos/conf/mongos.conf
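The ordering above can be wrapped in a small per-node helper. A sketch that only prints the commands in the required order (config server, then shards, mongos last); drop the echo quoting to actually execute them:

```shell
BIN=/YGA_Project/software/mongodb/bin
CONF_ROOT=/YGA_Project/software/mongodb

# Print the start commands in the required order:
# config server first, then the four shards, mongos last.
print_start_cmds() {
  for svc in mongoconf shard1 shard2 shard3 shard4; do
    echo "$BIN/mongod -f $CONF_ROOT/$svc/conf/$svc.conf"
  done
  echo "$BIN/mongos -f $CONF_ROOT/mongos/conf/mongos.conf"
}
print_start_cmds
```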