Building a MongoDB Cluster on CentOS 7
Reference articles: https://blog.csdn.net/richie696/article/details/114660811
https://www.cnblogs.com/littleatp/p/8563273.html
I work with MongoDB regularly, so this post summarizes the cluster deployment process.
Goal
Deploy a cluster (sharding + replica sets)
Environment
MongoDB version: 4.2.14
Virtual machines:
192.168.1.1
192.168.1.2
192.168.1.3
OS: CentOS 7
Layout:
mongos: 192.168.1.1:26001, 192.168.1.2:26001, 192.168.1.3:26001
config: 192.168.1.1:26002, 192.168.1.2:26002, 192.168.1.3:26002
shard1: 192.168.1.1:27017, 192.168.1.2:27017, 192.168.1.3:27017
shard2: 192.168.1.1:27018, 192.168.1.2:27018, 192.168.1.3:27018
shard3: 192.168.1.1:27019, 192.168.1.2:27019, 192.168.1.3:27019
Open the firewall ports:
firewall-cmd --add-port=26001/tcp --permanent --zone=public
firewall-cmd --add-port=26002/tcp --permanent --zone=public
firewall-cmd --add-port=27017/tcp --permanent --zone=public
firewall-cmd --add-port=27018/tcp --permanent --zone=public
firewall-cmd --add-port=27019/tcp --permanent --zone=public
firewall-cmd --reload
firewall-cmd --list-all
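The five firewall-cmd calls above follow one pattern, so a loop can generate them. As a sketch, the loop below only prints the commands (a dry run) so they can be reviewed before running them as root:

```shell
# Dry-run sketch: print the firewall-cmd invocations for every cluster port.
# Remove the leading "echo" (or pipe the output to sh) to actually apply them.
PORTS="26001 26002 27017 27018 27019"
for port in $PORTS; do
    echo "firewall-cmd --add-port=${port}/tcp --permanent --zone=public"
done
echo "firewall-cmd --reload"
```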
Installation and Configuration
1. Download and install the packages (version 4.2.14)
Official download page: https://www.mongodb.com/download-center
Run:
rpm -ih --force --nodeps *.rpm
Installation details omitted…
2. Create the configuration directory
mkdir -p /opt/local/mongo-cluster/conf
chmod -R 755 /opt/local/mongo-cluster/conf
3. Configuration files: mongod_node_config.conf, mongo_node_data.conf, mongos.conf
mongod_node_config.conf
## Config server settings
storage:
  # Database file path
  dbPath: /opt/local/mongo-cluster/nodes/config/node1/data
  engine: wiredTiger
  directoryPerDB: true
  journal:
    enabled: true
systemLog:
  destination: file
  logAppend: true
  # Log file path
  path: /opt/local/mongo-cluster/nodes/config/node1/db.log
operationProfiling:
  slowOpThresholdMs: 10000
replication:
  oplogSizeMB: 10240
  # Replica set name; do not reuse the same name on machines that are not
  # part of the same replica set
  replSetName: configReplSet
processManagement:
  fork: true
net:
  # Listening port
  port: 26002
  # Bind address
  bindIp: 0.0.0.0
security:
  # Whether authentication is enforced (enabled/disabled). Do not enable it
  # before the root account has been created, or the node becomes inaccessible.
  authorization: enabled
  # Path to the key file shared by all members of the sharded cluster
  keyFile: /opt/local/mongo-cluster/keyfile/mongo.key
sharding:
  # Role of this instance in the sharded cluster
  # (configsvr: config server, shardsvr: shard server)
  clusterRole: configsvr
Data node settings: mongo_node_data.conf
storage:
  # Database file path (supplied on the command line via --dbpath)
  #dbPath: /opt/local/mongo-cluster/nodes/data/node1/data
  engine: wiredTiger
  directoryPerDB: true
  journal:
    enabled: true
systemLog:
  destination: file
  logAppend: true
  # Log file path (supplied on the command line via --logpath)
  #path: /opt/local/mongo-cluster/nodes/data/node1/db.log
operationProfiling:
  slowOpThresholdMs: 10000
replication:
  oplogSizeMB: 10240
processManagement:
  fork: true
net:
  # Listening port (supplied on the command line via --port)
  #port: 26002
  # Bind address
  bindIp: 0.0.0.0
security:
  # Path to the key file shared by all members of the sharded cluster
  keyFile: /opt/local/mongo-cluster/keyfile/mongo.key
  # Whether authentication is enforced (enabled/disabled). Do not enable it
  # before the root account has been created, or the node becomes inaccessible.
  authorization: enabled
sharding:
  # Role of this instance in the sharded cluster
  # (configsvr: config server, shardsvr: shard server)
  clusterRole: shardsvr
mongos.conf
systemLog:
  destination: file
  logAppend: true
processManagement:
  fork: true
net:
  bindIp: 0.0.0.0
security:
  clusterAuthMode: keyFile
  keyFile: /opt/local/mongo-cluster/keyfile/mongo.key
  javascriptEnabled: true
4. Create the keyfile
cd /opt/local/mongo-cluster
mkdir keyfile
openssl rand -base64 756 > mongo.key
chmod 600 mongo.key
mv mongo.key keyfile
## All three hosts must use the same mongo.key
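The same steps can be bundled into one small script that also sanity-checks the result. The sketch below uses a scratch directory so it is safe to try anywhere; on the real hosts KEY_DIR would be /opt/local/mongo-cluster/keyfile, and the commented scp loop (illustrative, with assumed host names) is how the one shared key would reach the other two machines:

```shell
# Sketch: generate the shared keyfile in a scratch directory and verify it.
KEY_DIR=$(mktemp -d)                       # stands in for /opt/local/mongo-cluster/keyfile
openssl rand -base64 756 > "$KEY_DIR/mongo.key"
chmod 600 "$KEY_DIR/mongo.key"             # mongod refuses keyfiles with open permissions
# The identical file must exist on all three hosts, e.g. (illustrative):
# for h in 192.168.1.2 192.168.1.3; do
#     scp "$KEY_DIR/mongo.key" root@$h:/opt/local/mongo-cluster/keyfile/
# done
wc -c < "$KEY_DIR/mongo.key"               # roughly 1 KB of base64 text
```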
5. Create the node directories
WORK_DIR=/opt/local/mongo-cluster
##192.168.1.1
mkdir -p $WORK_DIR/nodes/config/node1/data
mkdir -p $WORK_DIR/nodes/mongos/node1
mkdir -p $WORK_DIR/nodes/shard1/node1/data
mkdir -p $WORK_DIR/nodes/shard2/node1/data
mkdir -p $WORK_DIR/nodes/shard3/node1/data
##192.168.1.2
mkdir -p $WORK_DIR/nodes/config/node2/data
mkdir -p $WORK_DIR/nodes/mongos/node2
mkdir -p $WORK_DIR/nodes/shard1/node2/data
mkdir -p $WORK_DIR/nodes/shard2/node2/data
mkdir -p $WORK_DIR/nodes/shard3/node2/data
##192.168.1.3
mkdir -p $WORK_DIR/nodes/config/node3/data
mkdir -p $WORK_DIR/nodes/mongos/node3
mkdir -p $WORK_DIR/nodes/shard1/node3/data
mkdir -p $WORK_DIR/nodes/shard2/node3/data
mkdir -p $WORK_DIR/nodes/shard3/node3/data
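The fifteen mkdir calls above follow one pattern (each host only needs its own nodeN directories). As a sketch, the nested loop below builds the full tree under a scratch WORK_DIR for illustration:

```shell
# Sketch: build the whole node directory tree with a loop. On a real host,
# set WORK_DIR=/opt/local/mongo-cluster and keep only this host's nodeN.
WORK_DIR=$(mktemp -d)
for node in node1 node2 node3; do
    mkdir -p "$WORK_DIR/nodes/config/$node/data"
    mkdir -p "$WORK_DIR/nodes/mongos/$node"
    for shard in shard1 shard2 shard3; do
        mkdir -p "$WORK_DIR/nodes/$shard/$node/data"
    done
done
find "$WORK_DIR/nodes" -mindepth 1 -type d | wc -l
```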
Building the Cluster
Write the startup scripts.
1. Config server replica set
During installation, take this part of the configuration:
security:
  authorization: enabled
  keyFile: /opt/local/mongo-cluster/keyfile/mongo.key
and comment it out for now (authentication must stay off until the root
account has been created, otherwise the node becomes inaccessible):
#security:
#  authorization: enabled
#  keyFile: /opt/local/mongo-cluster/keyfile/mongo.key
Start the config nodes
WORK_DIR=/opt/local/mongo-cluster
KEYFILE=$WORK_DIR/keyfile/mongo.key
CONFFILE=$WORK_DIR/conf/mongod_node_config.conf
MONGOD=mongod
##192.168.1.1
$MONGOD --port 26002 --dbpath $WORK_DIR/nodes/config/node1/data --pidfilepath $WORK_DIR/nodes/config/node1/db.pid --logpath $WORK_DIR/nodes/config/node1/db.log --config $CONFFILE
##192.168.1.2
$MONGOD --port 26002 --dbpath $WORK_DIR/nodes/config/node2/data --pidfilepath $WORK_DIR/nodes/config/node2/db.pid --logpath $WORK_DIR/nodes/config/node2/db.log --config $CONFFILE
##192.168.1.3
$MONGOD --port 26002 --dbpath $WORK_DIR/nodes/config/node3/data --pidfilepath $WORK_DIR/nodes/config/node3/db.pid --logpath $WORK_DIR/nodes/config/node3/db.log --config $CONFFILE
Connect to one of the config processes and initialize the replica set:
mongo --port 26002 --host 192.168.1.1
> cfg = {
    _id: "configReplSet",
    configsvr: true,
    members: [
      {_id: 0, host: '192.168.1.1:26002'},
      {_id: 1, host: '192.168.1.2:26002'},
      {_id: 2, host: '192.168.1.3:26002'}
    ]};
> rs.initiate(cfg);
## Add the admin account on the PRIMARY node
use admin
db.createUser({
user:'admin',pwd:'******',
roles:[
{role:'clusterAdmin',db:'admin'},
{role:'userAdminAnyDatabase',db:'admin'},
{role:'dbAdminAnyDatabase',db:'admin'},
{role:'readWriteAnyDatabase',db:'admin'}
]})
### After the steps above, move on to creating the shards; leave the
### authentication settings untouched for now.
## After the final restart, authentication can be tested with:
#db.auth("admin","******")
2. Create the shards
Before starting, make sure every security setting in the configuration is commented out.
WORK_DIR=/opt/local/mongo-cluster
KEYFILE=$WORK_DIR/keyfile/mongo.key
CONFFILE=$WORK_DIR/conf/mongo_node_data.conf
MONGOD=mongod
echo "start shard replica sets"
##192.168.1.1
$MONGOD --port 27017 --shardsvr --replSet shard1 --dbpath $WORK_DIR/nodes/shard1/node1/data --pidfilepath $WORK_DIR/nodes/shard1/node1/db.pid --logpath $WORK_DIR/nodes/shard1/node1/db.log --config $CONFFILE
$MONGOD --port 27018 --shardsvr --replSet shard2 --dbpath $WORK_DIR/nodes/shard2/node1/data --pidfilepath $WORK_DIR/nodes/shard2/node1/db.pid --logpath $WORK_DIR/nodes/shard2/node1/db.log --config $CONFFILE
$MONGOD --port 27019 --shardsvr --replSet shard3 --dbpath $WORK_DIR/nodes/shard3/node1/data --pidfilepath $WORK_DIR/nodes/shard3/node1/db.pid --logpath $WORK_DIR/nodes/shard3/node1/db.log --config $CONFFILE
##192.168.1.2
$MONGOD --port 27017 --shardsvr --replSet shard1 --dbpath $WORK_DIR/nodes/shard1/node2/data --pidfilepath $WORK_DIR/nodes/shard1/node2/db.pid --logpath $WORK_DIR/nodes/shard1/node2/db.log --config $CONFFILE
$MONGOD --port 27018 --shardsvr --replSet shard2 --dbpath $WORK_DIR/nodes/shard2/node2/data --pidfilepath $WORK_DIR/nodes/shard2/node2/db.pid --logpath $WORK_DIR/nodes/shard2/node2/db.log --config $CONFFILE
$MONGOD --port 27019 --shardsvr --replSet shard3 --dbpath $WORK_DIR/nodes/shard3/node2/data --pidfilepath $WORK_DIR/nodes/shard3/node2/db.pid --logpath $WORK_DIR/nodes/shard3/node2/db.log --config $CONFFILE
##192.168.1.3
$MONGOD --port 27017 --shardsvr --replSet shard1 --dbpath $WORK_DIR/nodes/shard1/node3/data --pidfilepath $WORK_DIR/nodes/shard1/node3/db.pid --logpath $WORK_DIR/nodes/shard1/node3/db.log --config $CONFFILE
$MONGOD --port 27018 --shardsvr --replSet shard2 --dbpath $WORK_DIR/nodes/shard2/node3/data --pidfilepath $WORK_DIR/nodes/shard2/node3/db.pid --logpath $WORK_DIR/nodes/shard2/node3/db.log --config $CONFFILE
$MONGOD --port 27019 --shardsvr --replSet shard3 --dbpath $WORK_DIR/nodes/shard3/node3/data --pidfilepath $WORK_DIR/nodes/shard3/node3/db.pid --logpath $WORK_DIR/nodes/shard3/node3/db.log --config $CONFFILE
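The nine mongod invocations above follow one pattern per host: shardN listens on port 27016+N, and only the nodeN directory changes between hosts. The sketch below echoes the commands for one host as a dry run (drop the echo to actually start the processes):

```shell
# Sketch: generate this host's three shard-server start commands in a loop.
WORK_DIR=/opt/local/mongo-cluster
CONFFILE=$WORK_DIR/conf/mongo_node_data.conf
NODE=node1   # set to node2 / node3 on the other hosts
port=27017
for shard in shard1 shard2 shard3; do
    dir=$WORK_DIR/nodes/$shard/$NODE
    echo "mongod --port $port --shardsvr --replSet $shard" \
         "--dbpath $dir/data --pidfilepath $dir/db.pid" \
         "--logpath $dir/db.log --config $CONFFILE"
    port=$((port + 1))
done
```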
After startup, connect to one of the shard processes and initialize each replica set.
shard1:
mongo --port 27017 --host 192.168.1.1
> cfg = {
    _id: "shard1",
    members: [
      {_id: 0, host: '192.168.1.1:27017', priority: 3},
      {_id: 1, host: '192.168.1.2:27017', priority: 1},
      {_id: 2, host: '192.168.1.3:27017', priority: 2}
    ]};
rs.initiate(cfg);
shard2:
mongo --port 27018 --host 192.168.1.2
> cfg = {
    _id: "shard2",
    members: [
      {_id: 0, host: '192.168.1.2:27018', priority: 3},
      {_id: 1, host: '192.168.1.1:27018', priority: 1},
      {_id: 2, host: '192.168.1.3:27018', priority: 2}
    ]};
rs.initiate(cfg);
shard3:
mongo --port 27019 --host 192.168.1.3
> cfg = {
    _id: "shard3",
    members: [
      {_id: 0, host: '192.168.1.1:27019', priority: 1},
      {_id: 1, host: '192.168.1.2:27019', priority: 2},
      {_id: 2, host: '192.168.1.3:27019', priority: 3}
    ]};
rs.initiate(cfg);
## Optionally add a local admin user on the shard (can be skipped)
use admin
db.createUser({
user:'admin',pwd:'******',
roles:[
{role:'clusterAdmin',db:'admin'},
{role:'userAdminAnyDatabase',db:'admin'},
{role:'dbAdminAnyDatabase',db:'admin'},
{role:'readWriteAnyDatabase',db:'admin'}
]})
3. Start the mongos routers
Before starting, make sure every security setting in the configuration is commented out.
Run the following script to start the mongos processes:
WORK_DIR=/opt/local/mongo-cluster
KEYFILE=$WORK_DIR/keyfile/mongo.key
CONFFILE=$WORK_DIR/conf/mongos.conf
MONGOS=mongos
##192.168.1.1
echo "start mongos instances"
$MONGOS --port 26001 --configdb configReplSet/192.168.1.1:26002,192.168.1.2:26002,192.168.1.3:26002 --pidfilepath $WORK_DIR/nodes/mongos/node1/db.pid --logpath $WORK_DIR/nodes/mongos/node1/db.log --config $CONFFILE
##192.168.1.2
$MONGOS --port 26001 --configdb configReplSet/192.168.1.1:26002,192.168.1.2:26002,192.168.1.3:26002 --pidfilepath $WORK_DIR/nodes/mongos/node2/db.pid --logpath $WORK_DIR/nodes/mongos/node2/db.log --config $CONFFILE
##192.168.1.3
$MONGOS --port 26001 --configdb configReplSet/192.168.1.1:26002,192.168.1.2:26002,192.168.1.3:26002 --pidfilepath $WORK_DIR/nodes/mongos/node3/db.pid --logpath $WORK_DIR/nodes/mongos/node3/db.log --config $CONFFILE
Connect to one of the mongos instances and add the shards:
mongo --port 26001 --host 192.168.1.1
mongos> sh.addShard("shard1/192.168.1.1:27017,192.168.1.2:27017,192.168.1.3:27017")
mongos> sh.addShard("shard2/192.168.1.1:27018,192.168.1.2:27018,192.168.1.3:27018")
mongos> sh.addShard("shard3/192.168.1.1:27019,192.168.1.2:27019,192.168.1.3:27019")
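The three sh.addShard() calls share one port-per-shard pattern (shardN listens on 27016+N), so they can be generated rather than typed. The sketch below only prints them for review; they would then be pasted into (or piped through) a mongos shell:

```shell
# Sketch: print the sh.addShard() call for each shard replica set.
for n in 1 2 3; do
    port=$((27016 + n))
    echo "sh.addShard(\"shard$n/192.168.1.1:$port,192.168.1.2:$port,192.168.1.3:$port\")"
done
```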
At this point the distributed cluster is up, but further operations require users to be added first.
4. Initialize users
Connect to one of the mongos instances and add an administrator user:
use admin
db.createUser({
user:'manager',pwd:'******',
roles:[
{role:'clusterAdmin',db:'admin'},
{role:'userAdminAnyDatabase',db:'admin'},
{role:'dbAdminAnyDatabase',db:'admin'},
{role:'readWriteAnyDatabase',db:'admin'}
]})
The manager user now has cluster administration rights and read/write access to every database.
Check the cluster status:
mongos> sh.status()
Cluster users
All access to a sharded cluster goes through the mongos entry point, while authentication data lives in the config replica set: the system.users collection on the config servers stores the cluster's users and their role assignments. The mongos and shard instances authenticate to each other internally through the keyfile mechanism, so local users can additionally be created on the shard instances for convenient administration. Within a replica set, users and permissions only need to be created on the PRIMARY node; the data is replicated to the SECONDARY nodes automatically.
Working with data
In this example, create the appuser user and enable sharding for the appdb database.
use appdb
db.createUser({user:'appuser',pwd:'******',roles:[{role:'dbOwner',db:'appdb'}]})
sh.enableSharding("appdb")
Create the book collection and initialize sharding on it.
use appdb
db.createCollection("book")
db.book.createIndex({createTime: 1})
sh.shardCollection("appdb.book", {bookId:"hashed"}, false, { numInitialChunks: 4} )
Now write 100,000 records into the book collection (1,000 batches of 100 documents) and observe the chunk distribution:
use appdb
var cnt = 0;
for (var i = 0; i < 1000; i++) {
    var dl = [];
    for (var j = 0; j < 100; j++) {
        dl.push({
            "bookId": "BBK-" + i + "-" + j,
            "type": "Revision",
            "version": "IricSoneVB0001",
            "title": "Jackson's Life",
            "subCount": 10,
            "location": "China CN Shenzhen Futian District",
            "author": {
                "name": 50,
                "email": "RichardFoo@yahoo.com",
                "gender": "female"
            },
            "createTime": new Date()
        });
    }
    cnt += dl.length;
    db.book.insertMany(dl);
    print("insert ", cnt);
}
Running db.book.getShardDistribution() produces output like:
Shard shard1 at shard1/192.168.1.1:27017,192.168.1.2:27017,192.168.1.3:27017
data : 13.51MiB docs : 50294 chunks : 2
estimated data per chunk : 6.75MiB
estimated docs per chunk : 25147
Shard shard2 at shard2/192.168.1.1:27018,192.168.1.2:27018,192.168.1.3:27018
data : 6.62MiB docs : 24641 chunks : 1
estimated data per chunk : 6.62MiB
estimated docs per chunk : 24641
Shard shard3 at shard3/192.168.1.1:27019,192.168.1.2:27019,192.168.1.3:27019
data : 6.73MiB docs : 25065 chunks : 1
estimated data per chunk : 6.73MiB
estimated docs per chunk : 25065
Totals
data : 26.87MiB docs : 100000 chunks : 4
Shard shard1 contains 50.29% data, 50.29% docs in cluster, avg obj size on shard : 281B
Shard shard2 contains 24.64% data, 24.64% docs in cluster, avg obj size on shard : 281B
Shard shard3 contains 25.06% data, 25.06% docs in cluster, avg obj size on shard : 281B
Enable authentication
With all of the steps above done, stop every mongo process:
pkill mongod
pkill mongos
Uncomment the security settings in the configuration files to turn authentication on.
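Uncommenting can also be scripted. The sketch below demonstrates it on a scratch copy, assuming the security block was commented out exactly as shown earlier (each line prefixed with a single '#'); on the real hosts the sed would point at the actual config files instead:

```shell
# Sketch: strip the leading '#' from the commented-out security lines.
CONF=$(mktemp)
printf '%s\n' \
    '#security:' \
    '#  authorization: enabled' \
    '#  keyFile: /opt/local/mongo-cluster/keyfile/mongo.key' > "$CONF"
sed -i 's/^#//' "$CONF"    # GNU sed; use sed -i '' on BSD/macOS
cat "$CONF"
```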
Then start all processes again in the order config -> shard -> mongos:
- Start the config servers
- Start the shard replica sets
- Start the mongos routers
This completes the cluster deployment.
Summary
- A MongoDB cluster consists of mongos routers, a config replica set, and multiple shard replica sets.
- During installation, initialize the config replica set and the shard replica sets first, then add the shards through mongos. Keep the security settings commented out during setup; once the admin users and passwords have been created, re-enable them in the configuration files and restart the services.
- The config replica set stores the cluster's users and role permissions; for easier administration, local users can also be added on the shard replica sets.