1. The original replica set (a single node):
# mongo 127.0.0.1:22001
use admin
config = { _id: "shard1", members: [
    { _id: 0, host: "10.101.1.140:22001" }
] }
rs.initiate(config);
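The config document passed to rs.initiate() is just a JSON-like document with a set name and a member list. A minimal sketch that builds one programmatically (build_rs_config is a hypothetical helper, not part of MongoDB):

```python
def build_rs_config(set_name, hosts):
    """Build a replica-set config document like the one passed to rs.initiate().

    Each host gets a sequential _id, matching the shard1 example above.
    """
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

config = build_rs_config("shard1", ["10.101.1.140:22001"])
print(config)
```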
Log in to the primary node:
# mongo 10.101.1.140:22001
shard1:PRIMARY> rs.conf()
{
"_id" : "shard1",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "10.101.1.140:22001"
}
]
}
shard1:PRIMARY> rs.add("10.101.1.140:22011");
{ "ok" : 1 }
shard1:PRIMARY> rs.conf();
{
"_id" : "shard1",
"version" : 2,
"members" : [
{
"_id" : 0,
"host" : "10.101.1.140:22001"
},
{
"_id" : 1,
"host" : "10.101.1.140:22011"
}
]
}
shard1:PRIMARY> rs.status();
{
"set" : "shard1",
"date" : ISODate("2015-08-14T07:53:13Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "10.101.1.140:22001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 856639,
"optime" : Timestamp(1439538734, 1),
"optimeDate" : ISODate("2015-08-14T07:52:14Z"),
"electionTime" : Timestamp(1438736839, 2),
"electionDate" : ISODate("2015-08-05T01:07:19Z"),
"self" : true
},
{
"_id" : 1,
"name" : "10.101.1.140:22011",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 59,
"optime" : Timestamp(1439538734, 1),
"optimeDate" : ISODate("2015-08-14T07:52:14Z"),
"lastHeartbeat" : ISODate("2015-08-14T07:53:12Z"),
"lastHeartbeatRecv" : ISODate("2015-08-14T07:53:11Z"),
"pingMs" : 0,
"syncingTo" : "10.101.1.147:22001"
}
],
"ok" : 1
}
While the new member shows a STARTUP2> prompt it is initializing and performing its initial sync; once the sync completes, the prompt changes to SECONDARY> and the member serves as a backup node.
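The numeric `state` fields in rs.status() map to the `stateStr` names. A small sketch that summarizes member health from a trimmed rs.status()-style document (STATE_NAMES lists a common subset of MongoDB's state codes; `summarize` is a hypothetical helper):

```python
# Common replica-set member state codes (a subset of MongoDB's full table).
STATE_NAMES = {1: "PRIMARY", 2: "SECONDARY", 5: "STARTUP2", 7: "ARBITER"}

# Trimmed stand-in for the rs.status() members shown above.
members = [
    {"name": "10.101.1.140:22001", "health": 1, "state": 1},
    {"name": "10.101.1.140:22011", "health": 1, "state": 2},
]

def summarize(members):
    """One line per member: name, state name, and whether it is healthy."""
    return [f"{m['name']} {STATE_NAMES.get(m['state'], 'OTHER')} "
            f"{'up' if m['health'] == 1 else 'down'}" for m in members]

for line in summarize(members):
    print(line)
```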
Check the replication status:
shard1:PRIMARY> db.printSlaveReplicationInfo()
source: 10.101.1.140:22011
syncedTo: Fri Aug 14 2015 15:52:14 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
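The "secs behind the primary" figure is derived from the members' optimes: the first field of each Timestamp(...) in rs.status() carries Unix epoch seconds. A quick sketch of the arithmetic, using the optime values from the output above:

```python
from datetime import datetime, timezone

# The optime Timestamps above carry Unix epoch seconds in their first field.
primary_optime = 1439538734    # from the PRIMARY's optime in rs.status()
secondary_optime = 1439538734  # from the SECONDARY's optime

behind = primary_optime - secondary_optime
synced_to = datetime.fromtimestamp(secondary_optime, tz=timezone.utc)
print(f"{behind} secs behind the primary, syncedTo {synced_to:%Y-%m-%d %H:%M:%S} UTC")
```

Both optimes match, so the secondary is 0 seconds behind, as printSlaveReplicationInfo() reported.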
Log in to the secondary node:
# mongo 10.101.1.140:22011
shard1:SECONDARY> show tables;
2015-08-14T16:00:00.650+0800 error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
At this point the secondary rejects both reads and writes; run rs.slaveOk() to enable read-only access on this connection.
shard1:SECONDARY> rs.slaveOk();
shard1:SECONDARY> show tables;
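The error above comes from a server-side check: a read is refused unless the node is the primary or slaveOk has been set. A toy sketch that mirrors that decision (can_read is a hypothetical function illustrating the behavior, not the actual server code):

```python
def can_read(is_master, slave_ok):
    """Mirror the check behind 'not master and slaveOk=false' (code 13435)."""
    if is_master or slave_ok:
        return True
    raise RuntimeError("not master and slaveOk=false")

assert can_read(is_master=True, slave_ok=False)   # reads on the primary always work
assert can_read(is_master=False, slave_ok=True)   # rs.slaveOk() enables secondary reads
try:
    can_read(is_master=False, slave_ok=False)     # the failing case shown above
except RuntimeError as e:
    print(e)
```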
Check whether the current node is the primary:
rs.isMaster()
Primary/secondary failover:
This replica set has only two nodes, a primary and a secondary. If the primary is stopped manually, the secondary does not (and will not) automatically become the primary.
Note that for the secondary to be promoted automatically when the primary goes down, the shard's replica set needs an arbiter.
Start another mongod instance and add it to the replica set as an arbiter with rs.addArb("host:port"); then stop the primary, and the secondary is automatically elected primary.
(Per the official docs: if you have an even number of voting members, deploy an arbiter so that the number of voting members is odd.)
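The official guidance follows from majority math: a primary can only be elected while a strict majority of voting members is reachable. A quick sketch of why the two-node set above could not fail over on its own (can_elect_primary is a hypothetical helper):

```python
def can_elect_primary(voting_members, reachable):
    """An election succeeds only if the reachable members form a strict majority."""
    return reachable > voting_members // 2

# Two data nodes: lose one, and the survivor (1 of 2) is not a majority.
assert not can_elect_primary(voting_members=2, reachable=1)
# Add an arbiter (3 voters): after the primary dies, 2 of 3 is still a majority,
# so the secondary can be elected primary automatically.
assert can_elect_primary(voting_members=3, reachable=2)
print("a 2-node set needs an arbiter to survive a primary failure")
```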
Remove a member from the replica set:
rs.remove("host:port")
Stop/start the Balancer:
db.printShardingStatus()
sh.isBalancerRunning()
sh.stopBalancer();
sh.startBalancer();
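The balancer migrates chunks from the most-loaded shard to the least-loaded one when the chunk counts drift apart. A toy sketch of that decision (the threshold and helper name here are illustrative, not MongoDB's actual migration schedule):

```python
def pick_migration(chunks_per_shard, threshold=2):
    """Return (from_shard, to_shard) if the chunk-count spread exceeds threshold."""
    most = max(chunks_per_shard, key=chunks_per_shard.get)
    least = min(chunks_per_shard, key=chunks_per_shard.get)
    if chunks_per_shard[most] - chunks_per_shard[least] >= threshold:
        return (most, least)
    return None  # balanced enough; no migration needed

print(pick_migration({"shard1": 10, "shard2": 6}))  # → ('shard1', 'shard2')
print(pick_migration({"shard1": 5, "shard2": 5}))   # → None
```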