I used to assume that passing the sharding options when starting mongod was enough to shard data automatically — too amateurish. Two more steps are required:
1. Enable sharding for the database
2. Pick a shard key for a collection in that database and shard it
The commands are:
# Enable sharding for a database (run against the admin database on a mongos)
db.runCommand({"enablesharding" : "test"})
# Shard a collection, specifying its shard key
db.runCommand({"shardcollection" : "test.users", "key" : {"_id" : 1, "uid" : 1}})
# Check sharding status from the mongos shell
sh.status()
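The same two steps can also be done with the mongos shell helper functions, which wrap the `runCommand` calls above (shown as a fragment — this must be run from a mongos shell connected to a live cluster):

```javascript
// mongos shell helpers; equivalent to the runCommand forms above.
sh.enableSharding("test")                             // step 1: enable sharding for the db
sh.shardCollection("test.users", { _id: 1, uid: 1 })  // step 2: shard the collection on its key
```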
Here is example output from a live cluster:
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "version" : 4,
      "minCompatibleVersion" : 4,
      "currentVersion" : 5,
      "clusterId" : ObjectId("578cbd23665b83cc995bedd1")
  }
  shards:
      { "_id" : "shard1", "host" : "shard1/172.17.10.227:27021,172.17.10.228:27021,172.17.10.229:27021", "maxSize" : NumberLong(20480) }
      { "_id" : "shard2", "host" : "shard2/172.17.10.227:27022,172.17.10.228:27022,172.17.10.229:27022", "maxSize" : NumberLong(20480) }
  most recently active mongoses:
      "2.6.6" : 1
  balancer:
      Currently enabled: yes
      Currently running: yes
      Balancer lock taken at Sun Oct 09 2016 16:33:40 GMT+0800 (CST) by 172-17-10-227:30000:1474867375:1804289383:Balancer:846930886
      Collections with active migrations:
          reducer-monitor.db_eth0_bps_in_metric started at Sun Oct 09 2016 16:57:19 GMT+0800 (CST)
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : false, "primary" : "shard2" }
      { "_id" : "db", "partitioned" : false, "primary" : "shard1" }
      { "_id" : "reducer-monitor", "partitioned" : true, "primary" : "shard1" }
          reducer-monitor.db_conntrack_rate_metric
              shard key: { "host" : 1, "BusinessMinute" : 1 }
              unique: false
              balancing: true
              chunks:
                  shard1  12
                  shard2  8
              too many chunks to print, use verbose if you want to force print
          reducer-monitor.db_cpu_metric
              shard key: { "host" : 1 }
              unique: false
              balancing: true
              chunks:
                  shard1  194
                  shard2  2
              too many chunks to print, use verbose if you want to force print
          reducer-monitor.db_eth0_bps_in_metric
              shard key: { "host" : 1, "BusinessMinute" : 1 }
              unique: false
              balancing: true
              chunks:
                  shard1  193
              too many chunks to print, use verbose if you want to force print
          reducer-monitor.db_eth0_bps_out_metric
              shard key: { "host" : 1, "BusinessMinute" : 1 }
              unique: false
              balancing: true
              chunks:
                  shard1  192
              too many chunks to print, use verbose if you want to force print
      { "_id" : "amin", "partitioned" : false, "primary" : "shard2" }
      { "_id" : "uhelm", "partitioned" : false, "primary" : "shard2" }
      { "_id" : "uadb", "partitioned" : false, "primary" : "shard2" }
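The chunk counts in that output are what mongos uses for query routing: each chunk owns a contiguous range of shard-key values and lives on exactly one shard. A minimal sketch of that lookup in plain JavaScript (the chunk boundaries and `routeToShard` function are made up for illustration — this is not MongoDB's actual routing code):

```javascript
// Hypothetical chunk table for a { host: 1 } shard key: each chunk
// covers the half-open range [min, max) and is owned by one shard.
const chunks = [
  { min: "",         max: "host-100", shard: "shard1" },
  { min: "host-100", max: "host-200", shard: "shard1" },
  { min: "host-200", max: "\uffff",   shard: "shard2" },
];

// Route a shard-key value to the shard owning its chunk.
function routeToShard(hostKey) {
  const chunk = chunks.find(c => c.min <= hostKey && hostKey < c.max);
  return chunk ? chunk.shard : null;
}

console.log(routeToShard("host-150")); // falls in [host-100, host-200) -> shard1
console.log(routeToShard("host-250")); // falls in [host-200, \uffff)   -> shard2
```

The balancer's job is simply to keep the number of chunks in this table roughly even across shards by migrating chunks, which is why the lopsided 194-vs-2 split above triggers migrations.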
The cluster above is actively balancing.
# Check whether the balancer is enabled
sh.getBalancerState()
# Stop the balancer (this stops chunk migrations, not sharding itself)
sh.stopBalancer()
# Start the balancer
sh.startBalancer()
true
`true` means the balancer is enabled. You can also restrict the balancer to a configured time window, so that chunk migrations only run during off-peak hours.
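One way to set such a window is to upsert an `activeWindow` into the balancer document of the config database's `settings` collection. A sketch, run from a mongos shell — the 02:00–06:00 window here is just an example, not a recommendation:

```javascript
// mongos shell; times are HH:MM on a 24-hour clock.
use config
db.settings.update(
    { _id : "balancer" },
    { $set : { activeWindow : { start : "02:00", stop : "06:00" } } },
    { upsert : true }
)
```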
Next, I want to reclaim the disk space fragmented by deleting MongoDB data. The idea: get rid of the existing "holes" by regenerating a compact, hole-free copy of the data. Concretely: warm up a secondary; promote the warmed secondary to primary; wipe all data on the former primary; let it resync from scratch; once the resync finishes, warm this node up; then promote it back to primary.
The detailed procedure:
1. Check that every node in the cluster is running: ps -ef | grep mongod
2. Log in to the primary to be processed: /mongodb/bin/mongo --port 88888
3. Step it down with rs.stepDown(), and confirm via rs.status() that it is now a secondary
4. After the switchover succeeds, stop the node. Verify the step-down through the web status page — ideally log in and confirm writes are still landing, or watch mongostat — then kill the corresponding mongod process: kill <pid>
5. Delete the data: go into the shard's data directory and remove the files, e.g. rm -fr /mongodb/shard11/*
6. Restart the node, e.g. /mongodb/bin/mongod --config /mongodb/shard11.conf, and follow its log to watch the initial sync
7. After the resync completes, run rs.stepDown() on the current primary to hand the primary role back to this node