Like a Replica Set, a Sharded Cluster needs an arbiter node, but Sharding additionally requires config nodes and routing nodes. Of the three ways to build a MongoDB cluster, this is the most complex. The deployment layout is as follows:
A MongoDB Sharding Cluster requires three roles:
(1) Shard Server: the shards that store the actual data. Each shard can be a single mongod instance or a Replica Set composed of several mongod instances.
(2) Config Server: stores the configuration information of all shard nodes, the shard key range of each chunk, the distribution of chunks across the shards, and the sharding configuration of every DB and collection in the cluster.
(3) Route Process: the front-end router that clients connect to. It asks the Config Servers which shard a query or write should go to, connects to that shard to perform the operation, and returns the result to the client. All of this is transparent to the client, which does not need to care which shard a record is stored on.
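The routing step can be sketched as a toy lookup. This is a conceptual illustration only, not mongos internals; the chunk ranges and shard names are invented for the example:

```javascript
// Conceptual sketch, not mongos internals: the router keeps a table of
// chunks, each owning a half-open shard-key range [min, max), and sends
// an operation to the shard owning the chunk that contains the key.
const chunks = [
  { min: -Infinity, max: 1000, shard: "shard_a" },
  { min: 1000, max: Infinity, shard: "shard_b" },
];

function routeToShard(shardKey) {
  // find the chunk whose range contains the key, then return its shard
  const chunk = chunks.find(c => shardKey >= c.min && shardKey < c.max);
  return chunk.shard;
}

console.log(routeToShard(42));   // shard_a
console.log(routeToShard(5000)); // shard_b
```

The client only ever talks to the router, which is why the shard layout stays invisible to it.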
Below we configure and test Sharding:
I. Machine Environment
1.Shard Server
shard_a
192.168.110.71:10000
192.168.100.90:10000
192.168.100.110:10000 (Arbiter Server)
shard_b
192.168.110.71:10001
192.168.100.90:10001
192.168.100.110:10001 (Arbiter Server)
2.Config Server
192.168.110.71:20000
3.Route Process
192.168.100.110:30000
II. Configuration
1. Create the data directories
Create the log and data directories under the mongodb installation directory on all three machines:
mkdir -p logs
mkdir -p conf
mkdir -p data/shard_a
mkdir -p data/shard_b
Create the config data directory on 192.168.110.71:
mkdir -p data/config
2. Create the configuration files
shard_a.conf
port=10000
pidfilepath=/home/slim/mongodb-2.6.8/data/shard_a.pid
dbpath=/home/slim/mongodb-2.6.8/data/shard_a
directoryperdb=true
logpath=/home/slim/mongodb-2.6.8/logs/shard_a.log
logappend=true
fork=true
profile=1
slowms = 5
noprealloc=false
replSet=shard_a
oplogSize=100
shardsvr=true
shard_b.conf
port=10001
pidfilepath=/home/slim/mongodb-2.6.8/data/shard_b.pid
dbpath=/home/slim/mongodb-2.6.8/data/shard_b
directoryperdb=true
logpath=/home/slim/mongodb-2.6.8/logs/shard_b.log
logappend=true
fork=true
profile=1
slowms = 5
noprealloc=false
replSet=shard_b
oplogSize=100
shardsvr=true
config.conf
port=20000
pidfilepath=/home/slim/mongodb-2.6.8/data/config.pid
dbpath=/home/slim/mongodb-2.6.8/data/config
directoryperdb=true
logpath=/home/slim/mongodb-2.6.8/logs/config.log
logappend=true
fork=true
profile=0
configsvr=true
mongos.conf
port=30000
logpath=/home/slim/mongodb-2.6.8/logs/mongos.log
logappend=true
fork=true
maxConns=1000
chunkSize=1
configdb=192.168.110.71:20000
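The tutorial never shows starting the shard mongod processes themselves. Assuming the four configuration files above are saved under conf/ (as the `mkdir -p conf` step suggests), they would be started on each machine in the same way as the config server later on; a sketch, adjust paths to your own layout:

```shell
# On each of the three machines, from the mongodb installation directory.
# Assumes the files above were saved as conf/shard_a.conf and
# conf/shard_b.conf; these paths follow this tutorial's layout, not a
# MongoDB requirement.
./bin/mongod -f conf/shard_a.conf
./bin/mongod -f conf/shard_b.conf
```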
3. Configure the Replica Sets
1. Configure shard_a
With the shard mongod instances running, log in to 192.168.100.90:10000:
./bin/mongo 192.168.100.90:10000
MongoDB shell version: 2.0.6
connecting to: 192.168.100.90:10000/test
> use admin;
switched to db admin
> config_shard_a={_id : 'shard_a', members:[{_id:0, host:'192.168.110.71:10000'},{_id:1, host:'192.168.100.90:10000'},{_id:2,host:'192.168.100.110:10000',arbiterOnly:true}]};
{
"_id" : "shard_a",
"members" : [
{
"_id" : 0,
"host" : "192.168.110.71:10000"
},
{
"_id" : 1,
"host" : "192.168.100.90:10000"
},
{
"_id" : 2,
"host" : "192.168.100.110:10000",
"arbiterOnly" : true
}
]
}
> rs.initiate(config_shard_a);
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
> rs.status();
{
"set" : "shard_a",
"date" : ISODate("2015-03-17T05:10:38Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "192.168.110.71:10000",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 88,
"optime" : {
"t" : 1426568948000,
"i" : 1
},
"optimeDate" : ISODate("2015-03-17T05:09:08Z"),
"lastHeartbeat" : ISODate("2015-03-17T05:10:37Z"),
"lastHeartbeatRecv" : ISODate("2015-03-17T05:10:37Z"),
"pingMs" : 0,
"syncingTo" : "192.168.100.90:10000"
},
{
"_id" : 1,
"name" : "192.168.100.90:10000",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 6289,
"optime" : {
"t" : 1426568948000,
"i" : 1
},
"optimeDate" : ISODate("2015-03-17T05:09:08Z"),
"electionTime" : {
"t" : 1426568958000,
"i" : 1
},
"electionDate" : ISODate("2015-03-17T05:09:18Z"),
"self" : true
},
{
"_id" : 2,
"name" : "192.168.100.110:10000",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 88,
"lastHeartbeat" : ISODate("2015-03-17T05:10:37Z"),
"lastHeartbeatRecv" : ISODate("2015-03-17T05:10:37Z"),
"pingMs" : 0
}
],
"ok" : 1
}
2. Configure shard_b
Log in to 192.168.100.90:10001:
./bin/mongo 192.168.100.90:10001
MongoDB shell version: 2.0.6
connecting to: 192.168.100.90:10001/test
> use admin;
switched to db admin
> config_shard_b={_id : 'shard_b', members:[{_id:0, host:'192.168.110.71:10001'},{_id:1, host:'192.168.100.90:10001'},{_id:2,host:'192.168.100.110:10001',arbiterOnly:true}]};
{
"_id" : "shard_b",
"members" : [
{
"_id" : 0,
"host" : "192.168.110.71:10001"
},
{
"_id" : 1,
"host" : "192.168.100.90:10001"
},
{
"_id" : 2,
"host" : "192.168.100.110:10001",
"arbiterOnly" : true
}
]
}
> rs.initiate(config_shard_b);
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
STARTUP2> rs.status();
{
"set" : "shard_b",
"date" : ISODate("2015-03-17T05:17:48Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "192.168.110.71:10001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 128,
"optime" : {
"t" : 1426569457000,
"i" : 1
},
"optimeDate" : ISODate("2015-03-17T05:17:37Z"),
"electionTime" : {
"t" : 1426569463000,
"i" : 1
},
"electionDate" : ISODate("2015-03-17T05:17:43Z"),
"self" : true
},
{
"_id" : 1,
"name" : "192.168.100.90:10001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 11,
"optime" : {
"t" : 1426569457000,
"i" : 1
},
"optimeDate" : ISODate("2015-03-17T05:17:37Z"),
"lastHeartbeat" : ISODate("2015-03-17T05:17:47Z"),
"lastHeartbeatRecv" : ISODate("2015-03-17T05:17:48Z"),
"pingMs" : 1,
"lastHeartbeatMessage" : "syncing to: 192.168.110.71:10001",
"syncingTo" : "192.168.110.71:10001"
},
{
"_id" : 2,
"name" : "192.168.100.110:10001",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 11,
"lastHeartbeat" : ISODate("2015-03-17T05:17:47Z"),
"lastHeartbeatRecv" : ISODate("2015-03-17T05:17:47Z"),
"pingMs" : 0
}
],
"ok" : 1
}
Note: do not connect to the arbiter when configuring a shard's replica set, or the following error is thrown:
"errmsg" : "couldn't initiate : initiation and reconfiguration of a replica set must be sent to a node that can become primary"
III. Configure Sharding
1. Start the config server
./bin/mongod -f conf/config.conf &
2. Start the router (mongos)
./bin/mongos -f conf/mongos.conf &
3. Configure the routing node
./bin/mongo 192.168.100.110:30000
MongoDB shell version: 2.0.6
connecting to: 192.168.100.110:30000/test
mongos> use admin;
switched to db admin
mongos> db.runCommand({addshard:"shard_a/192.168.110.71:10000,192.168.100.90:10000",name:"shard_a"});
{ "shardAdded" : "shard_a", "ok" : 1 }
mongos> db.runCommand({addshard:"shard_b/192.168.110.71:10001,192.168.100.90:10001",name:"shard_b"});
{ "shardAdded" : "shard_b", "ok" : 1 }
mongos> db.adminCommand({listshards:1});
{
"shards" : [
{
"_id" : "shard_a",
"host" : "shard_a/192.168.100.90:10000,192.168.110.71:10000"
},
{
"_id" : "shard_b",
"host" : "shard_b/192.168.100.90:10001,192.168.110.71:10001"
}
],
"ok" : 1
}
The replica set + sharding setup is now complete. Note: even though the cluster is configured, you still have to declare which databases and collections are to be sharded.
Note:
If the following error is reported when starting the router service:
BadValue need either 1 or 3 configdbs
try './bin/mongos --help' for more information
Cause and fix:
The configdb parameter in mongos.conf is wrong: the number of config servers listed in configdb must be either 1 or 3, e.g. configdb=192.168.110.71:20000
4. Declare the databases and collections to shard
Enable sharding on the database that will hold the sharded data:
mongos> db.runCommand({enablesharding:"shard"});
{ "ok" : 1 }
Set the collection to be sharded; a shard key must be specified:
mongos> db.runCommand({shardcollection:"shard.user", key:{_id:1}});
{ "collectionsharded" : "shard.user", "ok" : 1 }
Check the sharding status:
mongos> db.printShardingStatus();
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 4,
"minCompatibleVersion" : 4,
"currentVersion" : 5,
"clusterId" : ObjectId("5507bbdfd9b4a799a27e0120")
}
shards:
{ "_id" : "shard_a", "host" : "shard_a/192.168.100.90:10000,192.168.110.71:10000" }
{ "_id" : "shard_b", "host" : "shard_b/192.168.100.90:10001,192.168.110.71:10001" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "shard", "partitioned" : true, "primary" : "shard_b" }
shard.user chunks:
shard_b 1
{ "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard_b { "t" : 1000, "i" : 0 }
Check the database stats:
mongos> use shard;
mongos> db.stats();
{
"raw" : {
"shard_b/192.168.100.90:10001,192.168.110.71:10001" : {
"db" : "shard",
"collections" : 4,
"objects" : 6,
"avgObjSize" : 122.66666666666667,
"dataSize" : 736,
"storageSize" : 1073152,
"numExtents" : 4,
"indexes" : 1,
"indexSize" : 8176,
"fileSize" : 67108864,
"nsSizeMB" : 16,
"dataFileVersion" : {
"major" : 4,
"minor" : 5
},
"extentFreeList" : {
"num" : 0,
"totalSize" : 0
},
"ok" : 1
}
},
"objects" : 6,
"avgObjSize" : 122,
"dataSize" : 736,
"storageSize" : 1073152,
"numExtents" : 4,
"indexes" : 1,
"indexSize" : 8176,
"fileSize" : 67108864,
"extentFreeList" : {
"num" : 0,
"totalSize" : 0
},
"ok" : 1
}
Check the collection stats:
mongos> db.user.stats();
{
"sharded" : true,
"systemFlags" : 1,
"userFlags" : 1,
"ns" : "shard.user",
"count" : 0,
"numExtents" : 1,
"size" : 0,
"storageSize" : 8192,
"totalIndexSize" : 8176,
"indexSizes" : {
"_id_" : 8176
},
"avgObjSize" : 0,
"nindexes" : 1,
"nchunks" : 1,
"shards" : {
"shard_b" : {
"ns" : "shard.user",
"count" : 0,
"size" : 0,
"storageSize" : 8192,
"numExtents" : 1,
"nindexes" : 1,
"lastExtentSize" : 8192,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 1,
"totalIndexSize" : 8176,
"indexSizes" : {
"_id_" : 8176
},
"ok" : 1
}
},
"ok" : 1
}
Notes:
1. The following error means sharding has not been enabled on the database yet:
{ "ok" : 0, "errmsg" : "sharding not enabled for db" }
2. The following error means the shard key is set incorrectly:
"errmsg" : "Unsupported shard key pattern. Pattern must either be a single hashed field, or a list of ascending fields."
IV. Test
Log in to the mongos service and run the following script:
for (var i = 1; i <= 20000; i++) db.user.save({id:i,name:"jack"+i,sex:"male",age:27,value:"test"});
Check the result:
mongos> db.user.stats();
{
"sharded" : true,
"systemFlags" : 0,
"userFlags" : 1,
"ns" : "shard.user",
"count" : 20000,
"numExtents" : 9,
"size" : 2240000,
"storageSize" : 3489792,
"totalIndexSize" : 670432,
"indexSizes" : {
"_id_" : 670432
},
"avgObjSize" : 112,
"nindexes" : 1,
"nchunks" : 4,
"shards" : {
"shard_a" : {
"ns" : "shard.user",
"count" : 15058,
"size" : 1686496,
"avgObjSize" : 112,
"storageSize" : 2793472,
"numExtents" : 5,
"nindexes" : 1,
"lastExtentSize" : 2097152,
"paddingFactor" : 1,
"systemFlags" : 0,
"userFlags" : 1,
"totalIndexSize" : 498736,
"indexSizes" : {
"_id_" : 498736
},
"ok" : 1
},
"shard_b" : {
"ns" : "shard.user",
"count" : 4942,
"size" : 553504,
"avgObjSize" : 112,
"storageSize" : 696320,
"numExtents" : 4,
"nindexes" : 1,
"lastExtentSize" : 524288,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 1,
"totalIndexSize" : 171696,
"indexSizes" : {
"_id_" : 171696
},
"ok" : 1
}
},
"ok" : 1
}
From the result we can see the per-shard document counts:
shard_a: 15058
shard_b: 4942
The split is uneven because the shard key is the auto-generated _id: ObjectId values increase monotonically, so every insert targets the chunk at the top of the key range, and the balancer only evens things out by migrating chunks afterwards.
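A toy model of range-based chunk splitting illustrates why a monotonically increasing shard key (such as auto-generated ObjectIds) skews writes: every new key is greater than all previous keys, so it always lands in the rightmost chunk. The split threshold here is invented; real MongoDB splits chunks by size in MB, not document count:

```javascript
// Toy model of range-based chunking, not MongoDB's real split logic.
// Chunks own half-open ranges [min, max); a chunk that exceeds MAX_DOCS
// is split at its median key. With a monotonic key every insert hits the
// rightmost chunk, making it a write hotspot.
const chunks = [{ min: -Infinity, max: Infinity, keys: [] }];
const MAX_DOCS = 1000; // invented threshold for the demo

function insert(key) {
  const i = chunks.findIndex(c => key >= c.min && key < c.max);
  const hitRightmost = i === chunks.length - 1;
  const c = chunks[i];
  c.keys.push(key);
  if (c.keys.length > MAX_DOCS) {
    // split the full chunk at its median key
    const sorted = [...c.keys].sort((a, b) => a - b);
    const mid = sorted[Math.floor(sorted.length / 2)];
    chunks.splice(i, 1,
      { min: c.min, max: mid, keys: sorted.filter(k => k < mid) },
      { min: mid, max: c.max, keys: sorted.filter(k => k >= mid) });
  }
  return hitRightmost;
}

let rightmostHits = 0;
for (let k = 1; k <= 20000; k++) {
  if (insert(k)) rightmostHits++;
}
// With a monotonic key, all 20000 inserts land in the rightmost chunk.
console.log(chunks.length, rightmostHits);
```

A hashed shard key (e.g. {_id:"hashed"}) avoids this hotspot by scattering consecutive keys across the chunk ranges, at the cost of losing efficient range queries on the key.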