Database Series: MongoDB Sharded Cluster Environment

MongoDB supports three cluster architectures: master-slave replication, replica sets, and sharding. This article walks through deploying and configuring a sharded cluster and then verifies it with tests.

Database Series: MongoDB Master-Slave Replication Cluster Environment
Database Series: MongoDB Replica Set Cluster Environment
Database Series: MongoDB Sharded Cluster Environment


4. Sharding
4.1 Sharded Cluster Planning

The shard servers use replica sets so that every data node has a backup copy, automatic failover, and automatic recovery:

  • Config servers: a replica set spread across the three servers to keep the cluster metadata intact
  • Router processes: three mongos processes to spread the load and improve client access performance
  • Three shard processes: shard11, shard12, and shard13 form one replica set (two data-bearing members plus an arbiter) and provide shard1
  • Three shard processes: shard21, shard22, and shard23 form one replica set (two data-bearing members plus an arbiter) and provide shard2

First, settle on the number of each component: 3 mongos routers; 1 config server replica set spanning 3 servers; and 2 shards, each of them a replica set with one data-bearing primary, one data-bearing secondary, and one arbiter (2*3=6 mongod processes). That makes 12 instances in total. They can run on dedicated machines or share machines; since test resources are limited here, only 3 machines are available, and several instances share a machine as long as their ports differ. (Note: all arbiters sit on one machine, so the other two machines carry the entire read/write load while the arbiter host 192.168.112.103 stays largely idle. In a real production deployment the layout should be rebalanced so that every machine acts as a primary, a secondary, and an arbiter for different replica sets.) The physical deployment is shown below:

(Figure: physical deployment of the cluster)

The server environment is summarized in the table below:

Hostname        IP               Service processes                      Ports
tango-centos01  192.168.112.101  mongos / shard11 / shard21 / config1   10041 / 10011 / 10021 / 10031
tango-centos02  192.168.112.102  mongos / shard12 / shard22 / config2   10042 / 10012 / 10022 / 10032
tango-centos03  192.168.112.103  mongos / shard13 / shard23 / config3   10043 / 10013 / 10023 / 10033
4.2 Server Configuration
4.2.1 Configure the Shards and Replica Sets

1) Configure shard11.conf

dbpath = /usr/local/mongodb/data/shard11
logpath = /usr/local/mongodb/logs/shard11.log
pidfilepath = /usr/local/mongodb/pid/shard11.pid
directoryperdb = true
logappend = true
replSet = shard-rs-01
port = 10011
bind_ip = 192.168.112.101
fork = true
shardsvr = true
journal = true

2) Configure shard12.conf

dbpath = /usr/local/mongodb/data/shard12
logpath = /usr/local/mongodb/logs/shard12.log
pidfilepath = /usr/local/mongodb/pid/shard12.pid
directoryperdb = true
logappend = true
replSet = shard-rs-01
port = 10012
bind_ip = 192.168.112.102
fork = true
shardsvr = true
journal = true

3) Configure shard13.conf

dbpath = /usr/local/mongodb/data/shard13
logpath = /usr/local/mongodb/logs/shard13.log
pidfilepath = /usr/local/mongodb/pid/shard13.pid
directoryperdb = true
logappend = true
replSet = shard-rs-01
port = 10013
bind_ip = 192.168.112.103
fork = true
shardsvr = true
journal = true

4) Configure shard21.conf

dbpath = /usr/local/mongodb/data/shard21
logpath = /usr/local/mongodb/logs/shard21.log
pidfilepath = /usr/local/mongodb/pid/shard21.pid
directoryperdb = true
logappend = true
replSet = shard-rs-02
port = 10021
bind_ip = 192.168.112.101
fork = true
shardsvr = true
journal = true

5) Configure shard22.conf

dbpath = /usr/local/mongodb/data/shard22
logpath = /usr/local/mongodb/logs/shard22.log
pidfilepath = /usr/local/mongodb/pid/shard22.pid
directoryperdb = true
logappend = true
replSet = shard-rs-02
port = 10022
bind_ip = 192.168.112.102
fork = true
shardsvr = true
journal = true

6) Configure shard23.conf

dbpath = /usr/local/mongodb/data/shard23
logpath = /usr/local/mongodb/logs/shard23.log
pidfilepath = /usr/local/mongodb/pid/shard23.pid
directoryperdb = true
logappend = true
replSet = shard-rs-02
bind_ip = 192.168.112.103
port = 10023
fork = true
shardsvr = true
journal = true

7) Configure config1.conf

dbpath = /usr/local/mongodb/data/config1
logpath = /usr/local/mongodb/logs/config1.log
pidfilepath = /usr/local/mongodb/pid/config1.pid
directoryperdb = true
logappend = true
replSet = config-server-rs
port = 10031
bind_ip = 192.168.112.101
fork = true
configsvr = true
journal = true

8) Configure config2.conf

dbpath = /usr/local/mongodb/data/config2
logpath = /usr/local/mongodb/logs/config2.log
pidfilepath = /usr/local/mongodb/pid/config2.pid
directoryperdb = true
logappend = true
replSet = config-server-rs
port = 10032
bind_ip = 192.168.112.102
fork = true
configsvr = true
journal = true

9) Configure config3.conf

dbpath = /usr/local/mongodb/data/config3
logpath = /usr/local/mongodb/logs/config3.log
pidfilepath = /usr/local/mongodb/pid/config3.pid
directoryperdb = true
logappend = true
replSet = config-server-rs
port = 10033
bind_ip = 192.168.112.103
fork = true
configsvr = true
journal = true

10) Configure route1.conf

configdb = config-server-rs/192.168.112.101:10031,192.168.112.102:10032,192.168.112.103:10033
logpath = /usr/local/mongodb/logs/route1.log
pidfilepath = /usr/local/mongodb/pid/route1.pid
logappend = true
port = 10041
bind_ip = 192.168.112.101
fork = true

11) Configure route2.conf

configdb = config-server-rs/192.168.112.101:10031,192.168.112.102:10032,192.168.112.103:10033
logpath = /usr/local/mongodb/logs/route2.log
pidfilepath = /usr/local/mongodb/pid/route2.pid
logappend = true
port = 10042
bind_ip = 192.168.112.102
fork = true

12) Configure route3.conf

configdb = config-server-rs/192.168.112.101:10031,192.168.112.102:10032,192.168.112.103:10033
logpath = /usr/local/mongodb/logs/route3.log
pidfilepath = /usr/local/mongodb/pid/route3.pid
logappend = true
port = 10043
bind_ip = 192.168.112.103
fork = true
4.2.2 Create the Data Directories
  • Node 1
[root@tango-centos01 data]# mkdir shard11
[root@tango-centos01 data]# mkdir shard21
[root@tango-centos01 data]# mkdir config1
[root@tango-centos01 mongodb]# mkdir pid
  • Node 2
[root@tango-centos02 data]# mkdir shard12
[root@tango-centos02 data]# mkdir shard22
[root@tango-centos02 data]# mkdir config2
[root@tango-centos02 mongodb]# mkdir pid
  • Node 3
[root@tango-centos03 data]# mkdir shard13
[root@tango-centos03 data]# mkdir shard23
[root@tango-centos03 data]# mkdir config3
[root@tango-centos03 mongodb]# mkdir pid

4.2.3 Parameter Descriptions

  • dbpath: directory where the data files are stored
  • logpath: path of the log file
  • pidfilepath: PID file, which makes it easy to stop the mongod process
  • directoryperdb: store each database in its own sub-directory named after the database
  • logappend: append to the log file instead of overwriting it
  • replSet: name of the replica set
  • bind_ip: IP address the mongod binds to
  • port: port used by the process, 27017 by default
  • oplogSize: maximum size of the oplog in MB; defaults to 5% of free disk space
  • fork: run the process in the background
  • noprealloc: do not preallocate storage
  • journal: enable write-ahead journaling; in a test environment nojournal can be used instead to turn journaling off, which speeds up startup and avoids allocating large journal files
  • smallfiles: use smaller default file sizes; add it when disk space is tight
  • shardsvr: run this mongod as a shard server
  • configsvr: run this mongod as a config server
  • configdb: tells the mongos router which config server replica set to connect to
4.3 Start the Cluster

1) Start the MongoDB instances with the following commands

[root@tango-centos01]# ./bin/mongod -f ./config/shard11.conf
[root@tango-centos02]# ./bin/mongod -f ./config/shard12.conf
[root@tango-centos03]# ./bin/mongod -f ./config/shard13.conf
[root@tango-centos01]# ./bin/mongod -f ./config/shard21.conf
[root@tango-centos02]# ./bin/mongod -f ./config/shard22.conf
[root@tango-centos03]# ./bin/mongod -f ./config/shard23.conf
[root@tango-centos01]# ./bin/mongod -f ./config/config1.conf
[root@tango-centos02]# ./bin/mongod -f ./config/config2.conf
[root@tango-centos03]# ./bin/mongod -f ./config/config3.conf

2) Check the service processes and listening ports

  • Node 1
[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root       1225      1  1 11:14 ?        00:00:02 ./bin/mongod -f ./config/shard11.conf
root       1257      1  2 11:14 ?        00:00:01 ./bin/mongod -f ./config/shard21.conf
root       1288      1  5 11:15 ?        00:00:01 ./bin/mongod -f ./config/config1.conf
[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -lntp|grep mongo
tcp        0      0 192.168.112.101:10031   0.0.0.0:*  LISTEN      1288/./bin/mongod   
tcp        0      0 192.168.112.101:10011   0.0.0.0:*  LISTEN      1225/./bin/mongod   
tcp        0      0 192.168.112.101:10021   0.0.0.0:*  LISTEN      1257/./bin/mongod
  • Node 2
[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root       1101      1  1 11:14 ?        00:00:02 ./bin/mongod -f ./config/shard12.conf
root       1132      1  1 11:15 ?        00:00:02 ./bin/mongod -f ./config/shard22.conf
root       1163      1  1 11:15 ?        00:00:02 ./bin/mongod -f ./config/config2.conf
[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -lntp|grep mongo
tcp        0      0 192.168.112.102:10032   0.0.0.0:*  LISTEN      1157/./bin/mongod   
tcp        0      0 192.168.112.102:10012   0.0.0.0:*  LISTEN      1090/./bin/mongod   
tcp        0      0 192.168.112.102:10022   0.0.0.0:*  LISTEN      1125/./bin/mongod
  • Node 3
[root@tango-centos03 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root       1093      1  1 11:14 ?        00:00:02 ./bin/mongod -f ./config/shard13.conf
root       1124      1  1 11:15 ?        00:00:02 ./bin/mongod -f ./config/shard23.conf
root       1155      1  1 11:15 ?        00:00:02 ./bin/mongod -f ./config/config3.conf
[root@tango-centos03 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -lntp|grep mongo
tcp        0      0 192.168.112.103:10033   0.0.0.0:*  LISTEN      1156/./bin/mongod   
tcp        0      0 192.168.112.103:10013   0.0.0.0:*  LISTEN      1089/./bin/mongod   
tcp        0      0 192.168.112.103:10023   0.0.0.0:*  LISTEN      1121/./bin/mongod
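
Beyond ps and netstat, each instance's effective configuration can be double-checked from the mongo shell with the getCmdLineOpts command. A quick check against shard11, for example (connection address as configured above):

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ./bin/mongo 192.168.112.101:10011
> db.adminCommand({ getCmdLineOpts: 1 })    // prints the options parsed from shard11.conf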
4.4 Configure the Sharded Replica Set Environment

When building a sharded cluster with current MongoDB releases, the config servers must themselves be configured as a replica set.

4.4.1 Configure the Config Server Replica Set

1) Connect to a config server with the mongo shell

[root@tango-centos01 ]# ./bin/mongo 192.168.112.101:10031

2) Initialize the replica set

>cfgsvr={_id:"config-server-rs",configsvr:true,members:[{_id:0,host:"192.168.112.101:10031"},{_id:1,host:"192.168.112.102:10032"},{_id:2,host:"192.168.112.103:10033"}]}
{
        "_id" : "config-server-rs",
        "configsvr" : true,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.112.101:10031"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.112.102:10032"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.112.103:10033"
                }
        ]
}
> rs.initiate(cfgsvr)
{
        "ok" : 1,
        "operationTime" : Timestamp(1527055459, 1),
        "$gleStats" : {
                "lastOpTime" : Timestamp(1527055459, 1),
                "electionId" : ObjectId("000000000000000000000000")
        },
        "$clusterTime" : {
                "clusterTime" : Timestamp(1527055459, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
config-server-rs:SECONDARY>
config-server-rs:PRIMARY>
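
The prompt switching from SECONDARY to PRIMARY shows that the election has completed. To confirm the state of the config server replica set explicitly, something like the following works from the same shell:

config-server-rs:PRIMARY> rs.conf()      // shows "configsvr" : true and the three members
config-server-rs:PRIMARY> rs.status()    // health and state (PRIMARY/SECONDARY) of each config server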
4.4.2 Configure the Shard Replica Sets

1) Connect to shard replica set 1 with the mongo shell

[root@tango-centos01 ]# ./bin/mongo 192.168.112.101:10011

2) Initialize replica set 1 (shard-rs-01)

>cfg1={_id:"shard-rs-01",members:[{_id:0,host:"192.168.112.101:10011"},{_id:1,host:"192.168.112.102:10012"},{_id:2,host:"192.168.112.103:10013",arbiterOnly:true}]}
> rs.initiate(cfg1)
shard-rs-01:PRIMARY> rs.status()
{
        "set" : "shard-rs-01",
        "date" : ISODate("2018-05-23T06:29:40.794Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1527056972, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1527056972, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1527056972, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1527056972, 1),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.112.101:10011",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 2298,
                        "optime" : {
                                "ts" : Timestamp(1527056972, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-05-23T06:29:32Z"),
                        "electionTime" : Timestamp(1527056330, 1),
                        "electionDate" : ISODate("2018-05-23T06:18:50Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.112.102:10012",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 660,
                        "optime" : {
                                "ts" : Timestamp(1527056972, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1527056972, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-05-23T06:29:32Z"),
                        "optimeDurableDate" : ISODate("2018-05-23T06:29:32Z"),
                        "lastHeartbeat" : ISODate("2018-05-23T06:29:39.625Z"),
                        "lastHeartbeatRecv" : ISODate("2018-05-23T06:29:39.632Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "192.168.112.101:10011",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "192.168.112.103:10013",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 5,
                        "lastHeartbeat" : ISODate("2018-05-23T06:29:39.624Z"),
                        "lastHeartbeatRecv" : ISODate("2018-05-23T06:29:40.502Z"),
                        "pingMs" : NumberLong(0),
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}

3) Connect to shard replica set 2 with the mongo shell

[root@tango-centos01 ]# ./bin/mongo 192.168.112.101:10021

4) Initialize replica set 2 (shard-rs-02)

>cfg2={_id:"shard-rs-02",members:[{_id:0,host:"192.168.112.101:10021"},{_id:1,host:"192.168.112.102:10022"},{_id:2,host:"192.168.112.103:10023",arbiterOnly:true}]}
{
        "_id" : "shard-rs-02",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.112.101:10021"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.112.102:10022"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.112.103:10023",
                        "arbiterOnly" : true
                }
        ]
}
> rs.initiate(cfg2)
{ "ok" : 1 }
shard-rs-02:PRIMARY> rs.status()
{
        "set" : "shard-rs-02",
        "date" : ISODate("2018-05-23T06:39:46.683Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1527057586, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1527057586, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1527057586, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1527057586, 1),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.112.101:10021",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 2862,
                        "optime" : {
                                "ts" : Timestamp(1527057586, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-05-23T06:39:46Z"),
                        "electionTime" : Timestamp(1527057425, 1),
                        "electionDate" : ISODate("2018-05-23T06:37:05Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.112.102:10022",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 172,
                        "optime" : {
                                "ts" : Timestamp(1527057576, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1527057576, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-05-23T06:39:36Z"),
                        "optimeDurableDate" : ISODate("2018-05-23T06:39:36Z"),
                        "lastHeartbeat" : ISODate("2018-05-23T06:39:45.728Z"),
                        "lastHeartbeatRecv" : ISODate("2018-05-23T06:39:46.125Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "192.168.112.101:10021",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "192.168.112.103:10023",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 172,
                        "lastHeartbeat" : ISODate("2018-05-23T06:39:45.727Z"),
                        "lastHeartbeatRecv" : ISODate("2018-05-23T06:39:41.742Z"),
                        "pingMs" : NumberLong(0),
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
4.4.3 Configure Sharding

The config servers, routers, and shard servers are now all running, but an application that connects to the mongos router cannot use sharding yet: the sharding configuration still has to be applied (shards registered, sharding enabled) before it takes effect.

1) Start the mongos routers

[root@tango-centos01]# ./bin/mongos -f ./config/route1.conf
[root@tango-centos02]# ./bin/mongos -f ./config/route2.conf
[root@tango-centos03]# ./bin/mongos -f ./config/route3.conf

2) Connect to mongos

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ./bin/mongo 192.168.112.101:10041
MongoDB shell version v3.6.3
connecting to: mongodb://192.168.112.101:10041/test
MongoDB server version: 3.6.3
mongos>

3) Switch to the admin database

mongos> use admin
switched to db admin

4) Register shard replica set 1 with the router

mongos> sh.addShard("shard-rs-01/192.168.112.101:10011,192.168.112.102:10012")
{
        "shardAdded" : "shard-rs-01",
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1527059259, 5),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1527059259, 5)
}

5) Register shard replica set 2 with the router

mongos> sh.addShard("shard-rs-02/192.168.112.101:10021,192.168.112.102:10022")
{
        "shardAdded" : "shard-rs-02",
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1527059432, 5),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1527059432, 5)
}
mongos>

6) Check the shard server configuration

mongos> db.runCommand({listshards:1})
{
        "shards" : [
                {
                        "_id" : "shard-rs-01",
                        "host" : "shard-rs-01/192.168.112.101:10011,192.168.112.102:10012",
                        "state" : 1
                },
                {
                        "_id" : "shard-rs-02",
                        "host" : "shard-rs-02/192.168.112.101:10021,192.168.112.102:10022",
                        "state" : 1
                }
        ],
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1527059589, 4),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1527059589, 4)
4.4.4 Configure the Sharded Collection and Shard Key

MongoDB supports two sharding strategies: hashed sharding and ranged sharding.

  • Hashed Sharding
    The shard key value is hashed and documents are distributed according to the hash, so even documents whose shard key values are close together are unlikely to land in the same chunk.

(Figure: hashed sharding distribution)

  • Ranged Sharding
    The shard key range is divided into contiguous intervals and each chunk covers one interval, so documents with nearby shard key values are likely to sit in the same chunk or shard. A minimal sketch contrasting the two strategies follows the figure below.

(Figure: ranged sharding distribution)
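
The strategy is picked when a collection is sharded: passing "hashed" as the shard key value selects hashed sharding, while 1 (or -1) selects ranged sharding. A minimal sketch using two hypothetical collections, coll_hash and coll_range, in a database that already has sharding enabled:

mongos> sh.shardCollection("testDB.coll_hash",  { id: "hashed" })   // hashed sharding on id
mongos> sh.shardCollection("testDB.coll_range", { id: 1 })          // ranged sharding on id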

The config service, router service, shards, and replica sets are now all chained together, but the goal is for inserted data to be partitioned automatically. To make that happen, connect to a mongos and enable sharding for the chosen database and collection.

1) Enable sharding for the testDB database

mongos> sh.enableSharding("testDB")
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1527060052, 7),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1527060052, 7)
}
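
Whether sharding is actually enabled for a database can be confirmed from the cluster metadata; the partitioned field for testDB should now be true:

mongos> db.getSiblingDB("config").databases.find({ _id: "testDB" })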

2) Specify the collection to shard and its shard key

mongos> sh.shardCollection("testDB.test_tb1",{id:1})
{
        "collectionsharded" : "testDB.test_tb1",
        "collectionUUID" : UUID("57a23194-2172-4068-ad6f-689f3344590d"),
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1527060322, 15),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1527060322, 15)
}

This tells MongoDB that the test_tb1 collection in testDB should be sharded, distributing documents across shard1 and shard2 by id; sharding is opt-in because not every database or collection in MongoDB needs it. sh.shardCollection() takes the fully qualified collection name and a document containing the shard key, and the database must already have sharding enabled. The choice of shard key has a large impact on sharding efficiency. If the collection already contains data, an index on the shard key must be created with db.collection.createIndex() before calling shardCollection; if the collection is empty, the index is created automatically.
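
For a collection that already holds data, the shard-key index therefore has to exist before the collection is sharded. A minimal sketch of that order of operations, using the same testDB.test_tb1 names as above:

mongos> use testDB
mongos> db.test_tb1.createIndex({ id: 1 })                  // shard-key index, required if the collection is not empty
mongos> sh.shardCollection("testDB.test_tb1", { id: 1 })    // then shard on the indexed key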

4.5 Testing the Sharded Cluster
4.5.1 Verify the Sharding Configuration

Insert records into the test_tb1 collection and check how they are distributed across the shards.
1) The default chunk size is 64 MB; for this test, lower it to 4 MB (the setting lives in the settings collection of the config database)

mongos> use config
switched to db config
mongos> db.settings.save({_id:"chunksize",value:4})
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })

2) Insert data into test_tb1

mongos> use testDB
switched to db testDB
mongos> for (var i=1;i<100000;i++) db.test_tb1.save({id:i,"test2":"testval2"})
WriteResult({ "nInserted" : 1 })

3) Check the sharding of test_tb1; the data is already spread across the 2 shards

mongos> db.test_tb1.stats()
{
        "sharded" : true,
        "capped" : false,
        "ns" : "testDB.test_tb1",
        "count" : 200001,
        "size" : 10800054,
        "storageSize" : 3493888,
        "totalIndexSize" : 4390912,
        "indexSizes" : {
                "_id_" : 1884160,
                "id_1" : 2506752
        },
        "avgObjSize" : 54,
        "nindexes" : 2,
        "nchunks" : 5,
        "shards" : {
                "shard-rs-01" : {
                        "ns" : "testDB.test_tb1",
                        "size" : 1740366,
                        "count" : 32229,
                        "avgObjSize" : 54,
                        "storageSize" : 581632,
                        "capped" : false,
                        "wiredTiger" : {
                                "metadata" : {
                                        "formatVersion" : 1
                                },
                "shard-rs-02" : {
                        "ns" : "testDB.test_tb1",
                        "size" : 9059688,
                        "count" : 167772,
                        "avgObjSize" : 54,
                        "storageSize" : 2912256,
                        "capped" : false,
                        "wiredTiger" : {
                                "metadata" : {
                                        "formatVersion" : 1
                                },
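
The stats() output above is abbreviated. For a compact per-shard summary (document counts, data size, and the estimated split between shards), the shell helper getShardDistribution() can also be used on the collection:

mongos> db.test_tb1.getShardDistribution()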

4) Check the chunk metadata (the chunks collection lives in the config database)

mongos> db.chunks.find()
{ "_id" : "config.system.sessions-_id_MinKey", "ns" : "config.system.sessions", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard-rs-01", "lastmod" : Timestamp(1, 0), "lastmodEpoch" : ObjectId("5b05133fb00ebe2ebc79980e") }
{ "_id" : "testDB.test_tb1-id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : 2 }, "shard" : "shard-rs-01" }
{ "_id" : "testDB.test_tb1-id_2.0", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 2 }, "max" : { "id" : 77674 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_77674.0", "lastmod" : Timestamp(2, 2), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 77674 }, "max" : { "id" : 116510 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_116510.0", "lastmod" : Timestamp(2, 3), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 116510 }, "max" : { "id" : 167772 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_167772.0", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 167772 }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "shard-rs-01" }
4.5.2 Verify Cluster High Availability

1) Kill the shard21 process

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root       1257      1  5 13:52 ?        00:13:16 ./bin/mongod -f ./config/shard21.conf
root       1288      1  0 13:52 ?        00:01:45 ./bin/mongod -f ./config/config1.conf
root       1789      1  3 14:53 ?        00:05:34 ./bin/mongos -f ./config/route1.conf
root       2094   1195  4 15:41 pts/0    00:05:38 ./bin/mongo 192.168.112.101:10041
root       2339      1  1 16:14 ?        00:01:18 ./bin/mongod -f ./config/shard11.conf
[root@tango-centos01 ~]# kill -9 1257
[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root       1288      1  0 13:52 ?        00:01:45 ./bin/mongod -f ./config/config1.conf
root       1789      1  3 14:53 ?        00:05:34 ./bin/mongos -f ./config/route1.conf
root       2094   1195  4 15:41 pts/0    00:05:38 ./bin/mongo 192.168.112.101:10041
root       2339      1  1 16:14 ?        00:01:18 ./bin/mongod -f ./config/shard11.conf
root       2866   2310  0 17:39 pts/1    00:00:00 grep --color=auto mongo

2) Check the sharding status

(Figure: sharding status output)
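
The killed shard21 process was the primary of shard-rs-02, so the surviving data-bearing member should be elected primary with the arbiter's vote. One way to verify, assuming shard22 took over (connection address as configured earlier):

[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# ./bin/mongo 192.168.112.102:10022
> rs.isMaster().ismaster    // expected to return true once shard22 has been elected primary
> rs.status()               // member states for shard-rs-02; the killed member should show "health" : 0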

3) Test inserting data while shard21 is down

mongos> for (var i=1;i<100000;i++) db.test_tb1.save({id:i,"test2":"testval2"})
WriteResult({ "nInserted" : 1 })

4) Check the sharding status again

mongos> db.test_tb1.stats()
{
        "sharded" : true,
        "capped" : false,
        "ns" : "testDB.test_tb1",
        "count" : 299999,
        "size" : 16199946,
        "storageSize" : 7774208,
        "totalIndexSize" : 9248768,
        "indexSizes" : {
                "_id_" : 4734976,
                "id_1" : 4513792
        },
        "avgObjSize" : 54,
        "nindexes" : 2,
        "nchunks" : 7,
        "shards" : {
                "shard-rs-01" : {
                        "ns" : "testDB.test_tb1",
                        "size" : 1740420,
                        "count" : 32230,
                        "avgObjSize" : 54,
                        "storageSize" : 581632,
                        "capped" : false,
                        "wiredTiger" : {
                                "metadata" : {
                                        "formatVersion" : 1
                                },
                "shard-rs-02" : {
                        "ns" : "testDB.test_tb1",
                        "size" : 14459526,
                        "count" : 267769,
                        "avgObjSize" : 54,
                        "storageSize" : 7192576,
                        "capped" : false,
                        "wiredTiger" : {
                                "metadata" : {
                                        "formatVersion" : 1
                                },

5) Check the chunk metadata again

mongos> db.chunks.find()
{ "_id" : "config.system.sessions-_id_MinKey", "ns" : "config.system.sessions", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard-rs-01", "lastmod" : Timestamp(1, 0), "lastmodEpoch" : ObjectId("5b05133fb00ebe2ebc79980e") }
{ "_id" : "testDB.test_tb1-id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : 2 }, "shard" : "shard-rs-01" }
{ "_id" : "testDB.test_tb1-id_2.0", "lastmod" : Timestamp(3, 2), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 2 }, "max" : { "id" : 23304 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_77674.0", "lastmod" : Timestamp(2, 2), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 77674 }, "max" : { "id" : 116510 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_116510.0", "lastmod" : Timestamp(2, 3), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 116510 }, "max" : { "id" : 167772 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_167772.0", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 167772 }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "shard-rs-01" }
{ "_id" : "testDB.test_tb1-id_23304.0", "lastmod" : Timestamp(3, 3), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 23304 }, "max" : { "id" : 62141 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_62141.0", "lastmod" : Timestamp(3, 4), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 62141 }, "max" : { "id" : 77674 }, "shard" : "shard-rs-02" }
mongos>

