MongoDB Notes -- Distributed Sharding

Environment

  • Architecture overview
    [architecture diagram]
  1. The mongos router exposes the access interface to applications
  2. The config servers hold the sharding strategy and cluster metadata
  3. The shard replica sets store the actual data
  • Virtual machine plan
IP            Host         Role                        Port
13.13.2.2/16  mongos01     Mongos Router (active)      27000
13.13.3.3/16  configsvr01  Config Server primary       27017
13.13.4.4/16  shard1st01   First shard primary         27016
13.13.5.5/16  shard2nd01   Second shard primary        27015
13.13.8.8/16  mix01        Mongos Router (standby)     27000
                           Config Server secondary-1   27017
                           First shard secondary-1     27016
                           Second shard secondary-1    27015
13.13.9.9/16  mix02        Config Server secondary-2   27017
                           First shard secondary-2     27016
                           Second shard secondary-2    27015
Found out later:
	1. SELinux allows six MongoDB ports by default:
[root@mongos01 ~]# semanage port -l | grep mongod_port_t
mongod_port_t                  tcp      27017-27019, 28017-28019
[root@mongos01 ~]#
	2. the default Config Server port is 27019;
	3. so next time the port plan above can be adjusted to avoid the setroubleshoot alerts.
  1. The mongos router, the config server and the primaries (active nodes) of the two shard sets each get a dedicated virtual machine
  2. The secondaries (standby nodes) share virtual machines
  • Configure /etc/hosts
[root@configsrv01 ~]# tail -6 /etc/hosts
13.13.2.2 mongos01
13.13.3.3 configsvr01
13.13.4.4 shard1st01
13.13.5.5 shard2nd01
13.13.8.8 mix01
13.13.9.9 mix02
[root@configsrv01 ~]# 
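
A quick sanity check (not in the original notes) that name resolution and connectivity match the plan above, runnable from any of the six hosts:

# confirm the /etc/hosts entries resolve as expected
getent hosts mongos01 configsvr01 shard1st01 shard2nd01 mix01 mix02
# confirm the peers are reachable
ping -c 1 mix01
ping -c 1 mix02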

ConfigSvr Replica Set Setup

The ConfigSvr replica set uses the default port 27017 throughout (it was found later that the official Config Server port is actually 27019).

  1. Modify the config file (configsvr01, mix01, mix02)
[root@configsvr01 ~]# cp /etc/mongod.conf{,.bak}
[root@configsvr01 ~]# vi /etc/mongod.conf
[root@configsvr01 ~]# tail /etc/mongod.conf
# Edit sections below
net:
  port: 27017
  bindIp: 127.0.0.1,configsvr01

replication:
  replSetName: configset

sharding:
  clusterRole: configsvr
[root@configsvr01 ~]# 
[root@mix01 ~]# tail /etc/mongod.conf
....
  bindIp: 127.0.0.1,mix01
....
[root@mix01 ~]# 
[root@mix02 ~]# tail /etc/mongod.conf
....
  bindIp: 127.0.0.1,mix02
....
[root@mix02 ~]# 
  2. Configure the firewall (configsvr01, mix01, mix02)
[root@configsvr01 ~]# firewall-cmd --permanent --add-service=mongodb
success
[root@configsvr01 ~]# firewall-cmd --reload
success
[root@configsvr01 ~]# 
  3. Initialize the configset replica set (configsvr01)

Since the standard port is used, there is no need to specify it explicitly when connecting.

[root@configsvr01 ~]# systemctl start mongod.service 
[root@configsvr01 ~]# mongo
> rs.initiate({
... _id: "configset",
... configsvr: true,
... members: [
... 	{ _id : 0, host : "configsvr01" },
... 	{ _id : 1, host : "mix01" },
... 	{ _id : 2, host : "mix02" }] })
{
	"ok" : 1,
	"$gleStats" : {
		"lastOpTime" : Timestamp(1601776109, 1),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"lastCommittedOpTime" : Timestamp(0, 0),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1601776109, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1601776109, 1)
}
configset:SECONDARY> 
configset:PRIMARY> 
[root@mix01 ~]# mongo
configset:SECONDARY> 
[root@mix02 ~]# mongo
configset:SECONDARY> 
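
As an extra sanity check (not part of the original steps), the member states can be confirmed from any configset node with the standard replica-set helpers:

configset:PRIMARY> rs.isMaster().primary
configset:PRIMARY> rs.status().members.forEach(function (m) { print(m.name, m.stateStr) })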

ShardSet1 Replica Set Setup

The ShardSet1 replica set uses port 27016 throughout.

  1. Initialize the instances (shard1st01, mix01, mix02)

Single-machine multi-instance setup script reference: https://blog.csdn.net/weixin_42480750/article/details/108914266 (a rough sketch of what such a script does is included at the end of this section).

[root@shard1st01 ~]# head -12 setup_another_mongodb_instance.sh 
#!/bin/bash

# Setup Another Mongodb Instance
#  -- all you need to do is defining two vars below
#
# suffix : distinguish from standard mongodb instance
#          e.g standard mongodb instance conf-file : /etc/mongod.conf
#              this new created instance conf-file : /etc/mongod-new.conf : which will add suffix 'new'
# mongodb-port : distinguish from the standard port 27017
#
suffix=sh1
mongodb_port=27016
[root@shard1st01 ~]# sh setup_another_mongodb_instance.sh 
Setup work seems done!
Now you can either check the setup log file "/tmp/setup_multiple_mongodb_instance_2020-10-04.log" to see if had something wrong.
Or just start the service directly with command : "systemctl start mongod-sh1.service" 
Then log into mongodb with command : "mongo --port 27016" 
[root@shard1st01 ~]# 
  2. Modify the config file (shard1st01, mix01, mix02)
[root@shard1st01 ~]# cp /etc/mongod-sh1.conf{,.bak}
[root@shard1st01 ~]# vi /etc/mongod-sh1.conf
[root@shard1st01 ~]# cat /etc/mongod-sh1.conf
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb-sh1/mongod.log

storage:
  dbPath: /var/lib/mongo-sh1
  journal:
    enabled: true

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb-sh1/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

# sections below need to change
net:
  port: 27016
  bindIp: 127.0.0.1,shard1st01

sharding:
    clusterRole: shardsvr

replication:
    replSetName: sh1
[root@shard1st01 ~]# 
[root@mix01 ~]# cat /etc/mongod-sh1.conf
....
  bindIp: 127.0.0.1,mix01
....
[root@mix01 ~]# 
[root@mix02 ~]# cat /etc/mongod-sh1.conf
....
  bindIp: 127.0.0.1,mix02
....
[root@mix02 ~]# 
  3. Configure the firewall (shard1st01, mix01, mix02)
[root@shard1st01 ~]# firewall-cmd --permanent --add-port=27016/tcp
success
[root@shard1st01 ~]# firewall-cmd --reload
success
[root@shard1st01 ~]# 
  4. Initialize the sh1 replica set (shard1st01)
[root@shard1st01 ~]# systemctl start mongod-sh1.service 
[root@shard1st01 ~]# mongo --port 27016
> rs.initiate({
...     _id : "sh1",
...     members: [
...       { _id : 0, host : "shard1st01:27016" },
...       { _id : 1, host : "mix01:27016" },
...       { _id : 2, host : "mix02:27016" }] })
{
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1601778520, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1601778520, 1)
}
sh1:SECONDARY>  
sh1:PRIMARY> 
[root@mix01 ~]# mongo --port 27016
sh1:SECONDARY> 
[root@mix02 ~]# mongo --port 27016
sh1:SECONDARY> 
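
The referenced setup script is not reproduced in these notes. The sketch below is only a guess at what such a per-instance helper does (clone the stock config, data/log directories and systemd unit under a new suffix and port); every path and sed pattern in it is an assumption, not the actual script:

#!/bin/bash
# Rough sketch only -- NOT the referenced script; paths and patterns are assumptions.
suffix=sh1
mongodb_port=27016

conf=/etc/mongod-${suffix}.conf
dbpath=/var/lib/mongo-${suffix}
logdir=/var/log/mongodb-${suffix}
rundir=/var/run/mongodb-${suffix}

# 1. clone the stock config file and repoint the port and paths
cp /etc/mongod.conf "$conf"
sed -i -e "s#port: 27017#port: ${mongodb_port}#" \
       -e "s#/var/lib/mongo#${dbpath}#" \
       -e "s#/var/log/mongodb#${logdir}#" \
       -e "s#/var/run/mongodb#${rundir}#" "$conf"

# 2. create data and log directories owned by the mongod user
mkdir -p "$dbpath" "$logdir"
chown -R mongod:mongod "$dbpath" "$logdir"

# 3. clone the systemd unit so the new instance can be managed on its own
sed -e "s#/etc/mongod.conf#${conf}#" \
    -e "s#/var/run/mongodb#${rundir}#g" \
    /usr/lib/systemd/system/mongod.service > /etc/systemd/system/mongod-${suffix}.service
systemctl daemon-reload

echo "Start with: systemctl start mongod-${suffix}.service"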

ShardSet2 Replica Set Setup

The ShardSet2 replica set uses port 27015 throughout.

  1. Initialize the instances (shard2nd01, mix01, mix02)
[root@shard2nd01 ~]# head -12 setup_another_mongodb_instance.sh 
#!/bin/bash

# Setup Another Mongodb Instance
#  -- all you need to do is defining two vars below
#
# suffix : distinguish from standard mongodb instance
#          e.g standard mongodb instance conf-file : /etc/mongod.conf
#              this new created instance conf-file : /etc/mongod-new.conf : which will add suffix 'new'
# mongodb-port : distinguish from the standard port 27017
#
suffix=sh2
mongodb_port=27015
[root@shard2nd01 ~]# sh setup_another_mongodb_instance.sh 
Setup work seems done!
Now you can either check the setup log file "/tmp/setup_multiple_mongodb_instance_2020-10-04.log" to see if had something wrong.
Or just start the service directly with command : "systemctl start mongod-sh2.service" 
Then log into mongodb with command : "mongo --port 27015" 
[root@shard2nd01 ~]# 
  2. Modify the config file (shard2nd01, mix01, mix02)
[root@shard2nd01 ~]# vi /etc/mongod-sh2.conf 
[root@shard2nd01 ~]# cat /etc/mongod-sh2.conf 
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb-sh2/mongod.log

storage:
  dbPath: /var/lib/mongo-sh2
  journal:
    enabled: true

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb-sh2/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

# sections below need to change
net:
  port: 27015
  bindIp: 127.0.0.1,shard2nd01

sharding:
    clusterRole: shardsvr

replication:
    replSetName: sh2
[root@shard2nd01 ~]# 
[root@mix01 ~]# cat /etc/mongod-sh2.conf 
....
  bindIp: 127.0.0.1,mix01
....
[root@mix01 ~]# 
[root@mix02 ~]# cat /etc/mongod-sh2.conf 
....
  bindIp: 127.0.0.1,mix02
....
[root@mix02 ~]# 
  3. Configure the firewall (shard2nd01, mix01, mix02)
[root@shard2nd01 ~]# firewall-cmd --permanent --add-port=27015/tcp
success
[root@shard2nd01 ~]# firewall-cmd --reload
success
[root@shard2nd01 ~]#
  4. Initialize the sh2 replica set (shard2nd01)
[root@shard2nd01 ~]# systemctl start mongod-sh2.service 
[root@shard2nd01 ~]# mongo --port 27015
> rs.initiate({
...     _id : "sh2",
...     members: [
...       { _id : 0, host : "shard2nd01:27015" },
...       { _id : 1, host : "mix01:27015" },
...       { _id : 2, host : "mix02:27015" }] })
{
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1601779334, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1601779334, 1)
}
sh2:SECONDARY> 
sh2:PRIMARY> 
[root@mix01 ~]# mongo --port 27015
> 
sh2:SECONDARY>
[root@mix02 ~]# mongo --port 27015
> 
sh2:SECONDARY>
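
At this point mix01 and mix02 are each running three mongod instances; a quick way (not in the original notes) to confirm that all of them are listening on the planned ports:

ss -tlnp | grep mongod    # expect listeners on 27015, 27016 and 27017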

Mongos Router Setup

The mongos routers use port 27000 throughout.

  1. Initialize the instances (mongos01, mix01)

The shell script template is reused here for the instance setup, but some extra changes are needed afterwards.

[root@mongos01 ~]# head -12 setup_another_mongodb_instance.sh 
#!/bin/bash

# Setup Another Mongodb Instance
#  -- all you need to do is defining two vars below
#
# suffix : distinguish from standard mongodb instance
#          e.g standard mongodb instance conf-file : /etc/mongod.conf
#              this new created instance conf-file : /etc/mongod-new.conf : which will add suffix 'new'
# mongodb-port : distinguish from the standard port 27017
#
suffix=router
mongodb_port=27000
[root@mongos01 ~]# sh setup_another_mongodb_instance.sh 
Setup work seems done!
Now you can either check the setup log file "/tmp/setup_multiple_mongodb_instance_2020-10-04.log" to see if had something wrong.
Or just start the service directly with command : "systemctl start mongod-router.service" 
Then log into mongodb with command : "mongo --port 27000" 
[root@mongos01 ~]# 
  2. Modify the mongos systemd service file (mongos01, mix01)

The mongos router is started with the mongos binary, not mongod.

[root@mongos01 ~]# sed 's#/usr/bin/mongod#/usr/bin/mongos#' /etc/systemd/system/mongod-router.service 
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target

[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod-router.conf"
ExecStart=/usr/bin/mongos $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb-router
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb-router
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb-router
PermissionsStartOnly=true
PIDFile=/var/run/mongodb-router/mongod.pid
Type=forking
LimitFSIZE=infinity
LimitCPU=infinity
LimitAS=infinity
LimitNOFILE=64000
LimitNPROC=64000
LimitMEMLOCK=infinity
TasksMax=infinity
TasksAccounting=false

[Install]
WantedBy=multi-user.target
[root@mongos01 ~]# sed -i 's#/usr/bin/mongod#/usr/bin/mongos#' /etc/systemd/system/mongod-router.service 
[root@mongos01 ~]#
  3. Modify the config file (mongos01, mix01)
[root@mongos01 ~]# cp /etc/mongod-router.conf{,.bak}
[root@mongos01 ~]# vi /etc/mongod-router.conf
[root@mongos01 ~]# cat /etc/mongod-router.conf
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb-router/mongod.log

# storage section removed

processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb-router/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

# sections below need to change
net:
  port: 27000
  bindIp: 127.0.0.1,mongos01

# default config server port is 27019
sharding:
  configDB: configset/configsvr01:27017,mix01:27017,mix02:27017
[root@mongos01 ~]# 
[root@mix01 ~]# cat /etc/mongod-router.conf
....
  bindIp: 127.0.0.1,mix01
....
[root@mix01 ~]# 
  4. Open the firewall (mongos01, mix01)
[root@mongos01 ~]# firewall-cmd --permanent --add-port=27000/tcp
success
[root@mongos01 ~]# firewall-cmd --reload
success
[root@mongos01 ~]# 
  5. Start the service (mongos01, mix01)
[root@mongos01 ~]# systemctl start mongod-router.service 
[root@mongos01 ~]# systemctl status mongod-router.service 
....
   Active: failed (Result: signal) since Sun 2020-10-04 12:09:06 CST; 6s ago
....
[root@mongos01 ~]# journalctl -xe
Oct 04 12:09:10 mongos01 setroubleshoot[4212]: SELinux is preventing ftdc from create access on the directory mongod.diagnostic.data. For complete SELinux messages run: sealert -l 5c0dff1f-4191-413d-9223-2bc2433>
Oct 04 12:09:10 mongos01 platform-python[4212]: SELinux is preventing ftdc from create access on the directory mongod.diagnostic.data.
                                                
                                                *****  Plugin catchall (100. confidence) suggests   **************************
                                                
                                                If you believe that ftdc should be allowed create access on the mongod.diagnostic.data directory by default.
                                                Then you should report this as a bug.
                                                You can generate a local policy module to allow this access.
                                                Do
                                                allow this access for now by executing:
                                                # ausearch -c 'ftdc' --raw | audit2allow -M my-ftdc
                                                # semodule -X 300 -i my-ftdc.pp
                                                
[root@mongos01 ~]# ausearch -c 'ftdc' --raw | audit2allow -M my-ftdc
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i my-ftdc.pp

[root@mongos01 ~]# semodule -i my-ftdc.pp
[root@mongos01 ~]# systemctl restart mongod-router.service 
[root@mongos01 ~]# 
  6. Add shard sh1 (mongos01)

The mongos routers share the same config servers, so the shards only need to be added on one of them; high availability with the other router is set up later.

[root@mongos01 ~]# mongo --port 27000
mongos> sh.addShard( "sh1/shard1st01:27016,mix01:27016,mix02:27016")
{
	"ok" : 0,
	"errmsg" : "Could not find host matching read preference { mode: \"primary\" } for set sh1",
	"code" : 133,
	"codeName" : "FailedToSatisfyReadPreference",
	"operationTime" : Timestamp(1601786011, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1601786011, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
[root@mongos01 ~]# journalctl -xe                           
Oct 04 14:39:08 mongos01 setroubleshoot[4879]: SELinux is preventing Replica.xecutor from name_connect access on the tcp_socket port 27016. For complete SELinux messages run: sealert -l a0a055aa-f049-445b-9414-7>
Oct 04 14:39:08 mongos01 platform-python[4879]: SELinux is preventing Replica.xecutor from name_connect access on the tcp_socket port 27016.
                                                
                                                *****  Plugin connect_ports (92.2 confidence) suggests   *********************
                                                
                                                If you want to allow Replica.xecutor to connect to network port 27016
                                                Then you need to modify the port type.
                                                Do
                                                # semanage port -a -t PORT_TYPE -p tcp 27016
                                                    where PORT_TYPE is one of the following: dns_port_t, dnssec_port_t, kerberos_port_t, mongod_port_t, ocsp_port_t.                                 
[root@mongos01 ~]# semanage port -a -t mongod_port_t -p tcp 27016
[root@mongos01 ~]# 
[root@configsrv01 ~]# journalctl -xe
Oct 04 14:35:20 configsrv01 setroubleshoot[3099]: SELinux is preventing Replica.xecutor from name_connect access on the tcp_socket port 27016. For complete SELinux messages run: sealert -l 28e0faa5-f5c1-49bf-a7c>
Oct 04 14:35:20 configsrv01 platform-python[3099]: SELinux is preventing Replica.xecutor from name_connect access on the tcp_socket port 27016.
                                                   
                                                   *****  Plugin connect_ports (92.2 confidence) suggests   *********************
                                                   
                                                   If you want to allow Replica.xecutor to connect to network port 27016
                                                   Then you need to modify the port type.
                                                   Do
                                                   # semanage port -a -t PORT_TYPE -p tcp 27016
                                                       where PORT_TYPE is one of the following: dns_port_t, dnssec_port_t, kerberos_port_t, mongod_port_t, ocsp_port_t.
[root@configsrv01 ~]# semanage port -a -t mongod_port_t -p tcp 27016
[root@configsrv01 ~]# 

Quite a few port denials were reported here. SELinux actually allows 27017-27019 and 28017-28019 by default, so next time it would be simpler to stick to those ports.

[root@mongos01 ~]# semanage port -l | grep mongod_port_t
mongod_port_t                  tcp      27016, 27000, 27017-27019, 28017-28019
[root@mongos01 ~]# 
mongos> sh.addShard( "sh1/shard1st01:27016,mix01:27016,mix02:27016")
{
	"shardAdded" : "sh1",
	"ok" : 1,
	"operationTime" : Timestamp(1601793390, 5),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1601793390, 5),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> 
  7. Add shard sh2 (mongos01)
[root@configsrv01 ~]# semanage port -a -t mongod_port_t -p tcp 27015
[root@configsrv01 ~]# 
[root@mongos01 ~]# semanage port -a -t mongod_port_t -p tcp 27015
[root@mongos01 ~]# mongo --port 27000
mongos> sh.addShard( "sh2/shard2nd01:27015,mix01:27015,mix02:27015")
{
	"shardAdded" : "sh2",
	"ok" : 1,
	"operationTime" : Timestamp(1601794253, 5),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1601794253, 5),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos>
  8. Set up SELinux port access

After the replica sets were added as shards, setroubleshoot kept reporting denials, so the custom ports had to be opened to each other on every host; next time, use the standard ports.

[root@shard1st01 ~]# semanage port -l | grep mongod_port_t
mongod_port_t                  tcp      27016, 27017-27019, 28017-28019
[root@shard1st01 ~]# semanage port -a -t mongod_port_t -p tcp 27015
[root@shard1st01 ~]# 
[root@shard2nd01 ~]# semanage port -l | grep mongod_port_t
mongod_port_t                  tcp      27015, 27017-27019, 28017-28019
[root@shard2nd01 ~]# semanage port -a -t mongod_port_t -p tcp 27016
[root@shard2nd01 ~]# 
  9. Verify the other mongos router (mix01)
[root@mix01 ~]# mongo --port 27000
mongos> sh.status()
--- Sharding Status --- 
  shards:
        {  "_id" : "sh1",  "host" : "sh1/mix01:27016,mix02:27016,shard1st01:27016",  "state" : 1 }
        {  "_id" : "sh2",  "host" : "sh2/mix01:27015,mix02:27015,shard2nd01:27015",  "state" : 1 }
mongos> 
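
With both shards registered, a short smoke test can be run through mongos. The namespace testdb.people below is made up for illustration; the helpers themselves (sh.enableSharding, sh.shardCollection, getShardDistribution) are standard mongo shell commands:

mongos> sh.enableSharding("testdb")
mongos> sh.shardCollection("testdb.people", { _id: "hashed" })
mongos> for (var i = 0; i < 10000; i++) { db.getSiblingDB("testdb").people.insert({ n: i }) }
mongos> db.getSiblingDB("testdb").people.getShardDistribution()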

Keepalived Setup

  1. Install and configure the software
[root@mongos01 ~]# dnf install keepalived
[root@mongos01 ~]# cp /etc/keepalived/keepalived.conf{,.bak}
[root@mongos01 ~]# vi /etc/keepalived/keepalived.conf
[root@mongos01 ~]# cat /etc/keepalived/keepalived.conf
vrrp_script chk_mongos_router {
  script "/usr/bin/killall -0 /usr/bin/mongos"
  interval 2
  fall 2
}

vrrp_instance VI_1 {
  interface ens160
  state MASTER
  virtual_router_id 51
  priority 101
  advert_int 1
  nopreempt
  virtual_ipaddress {
    13.13.13.13/16
  }
  track_script {
    chk_mongos_router
  }
}
[root@mongos01 ~]# 
[root@mix01 ~]# cat /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
  interface ens160
  state BACKUP
  virtual_router_id 51
  priority 100 
  advert_int 1
  nopreempt
  virtual_ipaddress {
    13.13.13.13/16
  }
}
[root@mix01 ~]# 
  2. Open the firewall
[root@mongos01 ~]# firewall-cmd --zone=public --permanent \
> --add-rich-rule='rule protocol value="vrrp" accept'
success
[root@mongos01 ~]# firewall-cmd --reload
success
[root@mongos01 ~]# 
  3. Resolve SELinux blocking the killall command from checking the mongod_t-labelled mongos process
Oct 04 16:33:42 mongos01 setroubleshoot[2099]: SELinux is preventing killall from using the signull access on a process. For complete SELinux messages run: sealert -l e4335bbb-ea78-462e-86b3-9da1ca377fa8
Oct 04 16:33:42 mongos01 platform-python[2099]: SELinux is preventing killall from using the signull access on a process.
                                                
                                                *****  Plugin catchall (100. confidence) suggests   **************************
                                                
                                                If you believe that killall should be allowed signull access on processes labeled mongod_t by default.
                                                Then you should report this as a bug.
                                                You can generate a local policy module to allow this access.
                                                Do
                                                allow this access for now by executing:
                                                # ausearch -c 'killall' --raw | audit2allow -M my-killall
                                                # semodule -X 300 -i my-killall.pp
                                                
[root@mongos01 ~]# ausearch -c 'killall' --raw | audit2allow -M my-killall
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i my-killall.pp

[root@mongos01 ~]# semodule -i my-killall.pp
[root@mongos01 ~]# 
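
The original notes do not show starting keepalived or testing a failover; assuming the configuration above, a minimal verification might look like this:

# start keepalived on both routers (mongos01 and mix01)
systemctl enable --now keepalived.service
# the VIP should land on mongos01, which has the higher priority
ip -4 addr show dev ens160 | grep 13.13.13.13
# clients connect to the cluster through the VIP
mongo --host 13.13.13.13 --port 27000
# stopping mongos on mongos01 makes chk_mongos_router fail,
# and the VIP should move over to mix01
systemctl stop mongod-router.service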

The Replica Member Priority Problem

  1. After restarting MongoDB on the first shard, it turned out that:

No priorities were specified when the replica sets were initiated, so all members have the same priority by default; as a result the original primary does not automatically become primary again after it recovers.

[root@shard1st01 ~]# mongo --port 27016
sh1:SECONDARY> 
[root@mix01 ~]# mongo --port 27016
sh1:PRIMARY> 
  2. Raise the priority of shard1st01 in the first shard so that it becomes primary whenever it is available:
sh1:PRIMARY> cfg = rs.conf()
{
	"_id" : "sh1",
	"members" : [
		{
			"_id" : 0,
			"host" : "shard1st01:27016",
			"priority" : 1,
			"votes" : 1
		},
		{
			"_id" : 1,
			"host" : "mix01:27016",
			"priority" : 1,
			"votes" : 1
		},
		{
			"_id" : 2,
			"host" : "mix02:27016",
			"priority" : 1,
			"votes" : 1
		}
	]
}
sh1:PRIMARY> cfg.members[0].priority = 10
10
sh1:PRIMARY> rs.reconfig(cfg)
{
	"ok" : 1
}
sh1:PRIMARY> 
sh1:SECONDARY> 
[root@shard1st01 ~]# mongo --port 27016
sh1:SECONDARY> 
sh1:PRIMARY> 
  3. Configure the priorities of the other primaries in the same way
[root@configsrv01 ~]# mongo
configset:PRIMARY> cfg = rs.conf()
configset:PRIMARY> cfg.members[0].priority = 10
configset:PRIMARY> rs.reconfig(cfg)
configset:PRIMARY> 
[root@mix02 ~]# mongo --port 27015
sh2:PRIMARY> cfg = rs.conf()
sh2:PRIMARY> cfg.members[0].priority = 10
sh2:PRIMARY> rs.reconfig(cfg)
sh2:PRIMARY> 
sh2:SECONDARY> 
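
In hindsight, the reconfig step could be avoided by declaring the priorities when each replica set is first initiated, for example (shown here for sh1, not actually run during this setup):

> rs.initiate({
...     _id : "sh1",
...     members: [
...       { _id : 0, host : "shard1st01:27016", priority : 10 },
...       { _id : 1, host : "mix01:27016", priority : 1 },
...       { _id : 2, host : "mix02:27016", priority : 1 }] })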