Building a Highly Available Sharded Cluster with MongoDB Replica Sets: the Replica Sets + Sharding Setup Process

References:  http://mongodb.blog.51cto.com/1071559/740131  http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/#sharding-setup-shard-collection

Thanks to Mr.Sharp, who gave me many useful suggestions.

Concept Overview
A sharded cluster has the following components: shards, query routers and config servers.

Shards:
A shard is a MongoDB instance that holds a subset of a collection's data. Each shard is either a single
mongod instance or a replica set. In production, all shards are replica sets.

Config Servers:
Each config server is a mongod instance that holds metadata about the cluster. The metadata
maps chunks to shards.

Routing Instances:
Each router is a mongos instance that routes the reads and writes from applications to the shards.
Applications do not access the shards directly.

Sharding is the only solution for some classes of deployments. Use sharded clusters if:
1. your data set approaches or exceeds the storage capacity of a single MongoDB instance;
2. the size of your system's active working set will soon exceed the capacity of your system's maximum RAM;
3. a single MongoDB instance cannot meet the demands of your write operations, and all other approaches have not reduced contention.
A rough way to check points 1 and 2 is sketched below.
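For a rough check of points 1 and 2 on a live instance, you can compare data and index sizes against the machine's RAM from a mongo shell (a quick heuristic, not part of the original post):
 > db.stats()              // dataSize, indexSize and storageSize for the current database
 > db.serverStatus().mem   // resident, virtual and mapped memory of the mongod process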

Starting the Deployment

Deploy a Sharded Cluster
1 Start the config server database instances
 Each config server is a mongod instance holding the cluster metadata; all three must be up before any mongos can start. In this setup the three config servers run on 20.10x.91.119 at ports 37017, 37018 and 37019.
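 The original post never shows the config server startup commands; a minimal sketch, assuming these dbpath and log locations (--configsvr marks a mongod as a config server):
 /db/mongodb/bin/mongod --configsvr --dbpath /db/config1 --port 37017 --fork --logpath /var/log/config1.log
 /db/mongodb/bin/mongod --configsvr --dbpath /db/config2 --port 37018 --fork --logpath /var/log/config2.log
 /db/mongodb/bin/mongod --configsvr --dbpath /db/config3 --port 37019 --fork --logpath /var/log/config3.log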
2 Start the mongos instances
 The mongos instances are lightweight and do not require data directories. You can run a mongos instance on a system that runs other cluster components, such as on an application server or a server running a mongod process. By default, a mongos instance runs on port 27017.
 /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019
 or run it in the background:
 nohup /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 &
 
 2.1 Start mongos on the default port 27017
 [root@472322 ~]# nohup /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 &
 Tue Aug  6 07:40:09 /db/mongodb/bin/mongos db version v2.0.1, pdfile version 4.5 starting (--help for usage)
 Tue Aug  6 07:40:09 git version: 3a5cf0e2134a830d38d2d1aae7e88cac31bdd684
 Tue Aug  6 07:40:09 build info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
 Tue Aug  6 07:40:09 SyncClusterConnection connecting to [20.10x.91.119:37017]
 Tue Aug  6 07:40:09 SyncClusterConnection connecting to [20.10x.91.119:37018]
 Tue Aug  6 07:40:09 SyncClusterConnection connecting to [20.10x.91.119:37019]
 Tue Aug  6 07:40:09 [Balancer] about to contact config servers and shards
 Tue Aug  6 07:40:09 [websvr] admin web console waiting for connections on port 28017
 Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37017]
 Tue Aug  6 07:40:09 [mongosMain] waiting for connections on port 27017
 Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37018]
 Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37019]
 Tue Aug  6 07:40:09 [Balancer] config servers and shards contacted successfully
 Tue Aug  6 07:40:09 [Balancer] balancer id: 472322.ea.com:27017 started at Aug  6 07:40:09
 Tue Aug  6 07:40:09 [Balancer] created new distributed lock for balancer on 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37017]
 Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37018]
 Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37019]
 Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37017]
 Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37018]
 Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37019]
 Tue Aug  6 07:40:09 [LockPinger] creating distributed lock ping thread for 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 and process 472322.ea.com:27017:1375774809:1804289383 (sleeping for 30000ms)
 Tue Aug  6 07:40:10 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a8598caa3e21e9888bd3
 Tue Aug  6 07:40:10 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.

 Tue Aug  6 07:40:20 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a8648caa3e21e9888bd4
 Tue Aug  6 07:40:20 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.
 Tue Aug  6 07:40:30 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a86e8caa3e21e9888bd5
 Tue Aug  6 07:40:30 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.
 Tue Aug  6 07:40:40 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a8788caa3e21e9888bd6
 Tue Aug  6 07:40:40 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.

 2.2 Start the other mongos instances on ports 27018 and 27019
 (--chunkSize 1 sets the chunk size to 1 MB so that splits and migrations happen quickly while testing; --fork already daemonizes mongos, so the surrounding nohup ... & is redundant but harmless.)
 nohup /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 --port 27018 --chunkSize 1 --logpath /var/log/mongos2.log --fork &
 nohup /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 --port 27019 --chunkSize 1 --logpath /var/log/mongos3.log --fork &

 

3 Connect to one of the mongos instances:
 [root@472322 ~]# /db/mongodb/bin/mongo --host 20.10x.91.119 --port 27017
 The replica sets have not been created yet, so that is the next step.

 

4 Prepare the replica sets
 A shard can be a standalone mongod or a replica set. In a production environment, each shard should be a replica set. The post skips the replica set setup commands themselves; a sketch follows.
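 A minimal sketch of bringing up rpl1 (dbpath and log locations are assumptions; ports match the errors and status output below):
 /db/mongodb/bin/mongod --replSet rpl1 --dbpath /db/rpl1_1 --port 27027 --fork --logpath /var/log/rpl1_1.log
 /db/mongodb/bin/mongod --replSet rpl1 --dbpath /db/rpl1_2 --port 27028 --fork --logpath /var/log/rpl1_2.log
 /db/mongodb/bin/mongod --replSet rpl1 --dbpath /db/rpl1_3 --port 27029 --fork --logpath /var/log/rpl1_3.log
 Then, from a mongo shell connected to one member, initiate the set:
 > rs.initiate({ _id: "rpl1", members: [
     { _id: 0, host: "20.10x.91.119:27027" },
     { _id: 1, host: "20.10x.91.119:27028" },
     { _id: 2, host: "20.10x.91.119:27029" } ] })
 rpl2 is built the same way on ports 27037 and 27038 (per the printShardingStatus output below). Note that initiating the members with 127.0.0.1 hosts is exactly what causes the addShard failures in 4.2.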
 4.1 From a mongo shell, connect to the mongos instance. Issue a command using the following syntax:
 [root@472322 mongodb]# /db/mongodb/bin/mongo --host 20.10x.91.119 --port 27017
 MongoDB shell version: 2.0.1
 connecting to: 20.10x.91.119:27017/test
 mongos> 
    
 4.2 Add the replica set rpl1 as a shard
 sh.addShard( "rpl1/127.0.0.1:27027" );
 mongos> sh.addShard( "rpl1/127.0.0.1:27027" );
 command failed: {
   "ok" : 0,
   "errmsg" : "can't use localhost as a shard since all shards need to communicate. either use all shards and configdbs in localhost or all in actual IPs  host: 127.0.0.1:27027 isLocalHost:1"
 }
 Tue Aug  6 09:41:16 uncaught exception: error { "$err" : "can't find a shard to put new db on", "code" : 10185 }
 mongos> 
 localhost or 127.0.0.1 cannot be used here, so change it to the server's real IP address, 20.10x.91.119.
 mongos> sh.addShard( "rpl1/20.10x.91.119:27027" );
 command failed: {
   "ok" : 0,
   "errmsg" : "in seed list rpl1/20.10x.91.119:27027, host 20.10x.91.119:27027 does not belong to replica set rpl1"
 }
 Tue Aug  6 09:42:26 uncaught exception: error { "$err" : "can't find a shard to put new db on", "code" : 10185 }
 mongos> 
 sh.addShard( "rpl1/20.10x.91.119:27027,20.10x.91.119:27028,20.10x.91.119:27029" );
 
 
So the replica set members were configured as 127.0.0.1 and cannot be recognized under the real address 20.10x.91.119; the replica set configuration is wrong and must be redone with real IPs.
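 One way to repair it, run on the rpl1 primary (a sketch; the member indexes assume the three-member layout sketched above):
 PRIMARY> cfg = rs.conf()
 PRIMARY> cfg.members[0].host = "20.10x.91.119:27027"
 PRIMARY> cfg.members[1].host = "20.10x.91.119:27028"
 PRIMARY> cfg.members[2].host = "20.10x.91.119:27029"
 PRIMARY> rs.reconfig(cfg)
 After the reconfig, sh.addShard( "rpl1/20.10x.91.119:27027,20.10x.91.119:27028,20.10x.91.119:27029" ) succeeds; rpl2 is added the same way, and db.adminCommand( { listShards: 1 } ) should list both shards.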
6 Enable sharding for the database
 Before you can shard a collection, you must enable sharding for the collection's database.
 Enabling sharding for a database does not redistribute data, but makes it possible to shard the collections in that database.
   6.1 From a mongo shell, connect to the mongos instance. Issue a command using the following syntax:
 sh.enableSharding("<database>")

   6.2 Optionally, you can enable sharding for a database using the enableSharding command, which uses the following syntax:
 db.runCommand( { enableSharding: "<database>" } )
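 Here the database is test (the printShardingStatus output below shows {  "_id" : "test",  "partitioned" : true }), so the call presumably issued was:
 mongos> sh.enableSharding("test")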

 

7 Convert a Replica Set to a Replicated Sharded Cluster
 db.adminCommand( { listShards: 1 } ) looks OK. But when I inserted data for tickets directly on port 27027 of the first shard, the data was never moved to the second shard.
 Why? I asked HYG, and he told me that data must be inserted through mongos; otherwise it cannot be distributed across the shards. So I tried that next.
 
 7.7 Insert data in the mongos command window
 db.tc2.ensureIndex({xx:1});
 db.runCommand( { shardCollection : "test.tc2", key : {"xx":1} })
 for( var i = 1; i < 2000000; i++ ) db.tc2.insert({ "xx":i, "fapp" : "84eb9fb556074d6481e31915ac2427f0",  "dne" : "ueeDEhIB6tmP4cfY43NwWvAenzKWx19znmbheAuBl4j39U8uFXS1QGi2GCMHO7L21szgeF6Iquqmnw8kfJbvZUs/11RyxcoRm+otbUJyPPxFkevzv4SrI3kGxczG6Lsd19NBpyskaElCTtVKxwvQyBNgciXYq6cO/8ntV2C6cwQ=",  "eml" : "q5x68h3qyVBqp3ollJrY3XEkXECjPEncXhbJjga+3hYoa4zYNhrNmBN91meL3o7jsBI/N6qe2bb2BOnOJNAnBMDzhNmPqJKG/ZVLfT9jpNkUD/pQ4oJENMv72L2GZoiyym2IFT+oT3N0KFhcv08b9ke9tm2EHTGcBsGg1R40Ah+Y/5z89OI4ERmI/48qjvaw",  "uid" : "sQt92NUPr3CpCVnbpotU2lqRNVfZD6k/9TGW62UT7ExZYF8Dp1cWIVQoYNQVyFRLkxjmCoa8m6DiLiL/fPdG1k7WYGUH4ueXXK2yfVn/AGUk3pQbIuh7nFbqZCrAQtEY7gU0aIGC4sotAE8kghvCa5qWnSX0SWTViAE/esaWORo=",  "agt" : "PHP/eaSSOPlugin",  "sd" : "S15345853557133877",  "pt" : 3795,  "ip" : "211.223.160.34",  "av" : "http://secure.download.dm.origin.com/production/avatar/prod/1/599/40x40.JPEG",  "nex" : ISODate("2013-01-18T04:00:32.41Z"),  "exy" : ISODate("2013-01-18T01:16:32.015Z"),  "chk" : "sso4648609868740971",  "aid" : "Ciyvab0tregdVsBtboIpeChe4G6uzC1v5_-SIxmvSLLINJClUkXJhNvWkSUnajzi8xsv1DGYm0D3V46LLFI-61TVro9-HrZwyRwyTf9NwYIyentrgAY_qhs8fh7unyfB",  "tid" : "rUqFhONysi0yA==13583853927872",  "date" : ISODate("2012-02-17T01:16:32.787Z"),  "v" : "2.0.0",  "scope" : [],  "rug" : false,  "schk" : true,  "fjs" : false,  "sites" : [{      "name" : "Origin.com",      "id" : "nb09xrt8384147bba27c2f12e112o8k9",      "last" : ISODate("2013-01-17T01:16:32.787Z"),      "_id" : ObjectId("50f750f06a56028661000f20")    }]});
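 Note that the shardCollection command must be issued against the admin database (as discovered in 7.8.2 below); the sh.shardCollection() helper takes care of that and is equivalent here:
 mongos> sh.shardCollection( "test.tc2", { "xx": 1 } )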
 
 
Watch the sharding status with the command "db.printShardingStatus();":
       too many chunks to print, use verbose if you want to force print
     test.tickets chunks:
         rpl1    1
       { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
   {  "_id" : "adin",  "partitioned" : false,  "primary" : "rpl1" }

 mongos> 
 
 To see the detail behind "too many chunks to print, use verbose if you want to force print", pass a verbose argument; any truthy value works (the documented form is db.printShardingStatus(true)):
 db.printShardingStatus("vvvv");
 mongos> db.printShardingStatus("vvvv")
 --- Sharding Status --- 
   sharding version: { "_id" : 1, "version" : 3 }
   shards:
   {  "_id" : "rpl1",  "host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028" }
   {  "_id" : "rpl2",  "host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038" }
   databases:
   {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
   {  "_id" : "test",  "partitioned" : true,  "primary" : "rpl1" }
     test.tc chunks:
         rpl1    1
       { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
     test.tc2 chunks:
         rpl1    62
         rpl2    61
       { "xx" : { $minKey : 1 } } -->> { "xx" : 1 } on : rpl1 { "t" : 59000, "i" : 1 }
       { "xx" : 1 } -->> { "xx" : 2291 } on : rpl1 { "t" : 2000, "i" : 2 }
       { "xx" : 2291 } -->> { "xx" : 4582 } on : rpl1 { "t" : 2000, "i" : 4 }
       { "xx" : 4582 } -->> { "xx" : 6377 } on : rpl1 { "t" : 2000, "i" : 6 }
       { "xx" : 6377 } -->> { "xx" : 8095 } on : rpl1 { "t" : 2000, "i" : 8 }
       { "xx" : 8095 } -->> { "xx" : 9813 } on : rpl1 { "t" : 2000, "i" : 10 }
       { "xx" : 9813 } -->> { "xx" : 11919 } on : rpl1 { "t" : 2000, "i" : 12 }
       { "xx" : 11919 } -->> { "xx" : 14210 } on : rpl1 { "t" : 2000, "i" : 14 }
       { "xx" : 14210 } -->> { "xx" : 18563 } on : rpl1 { "t" : 2000, "i" : 16 }
       { "xx" : 18563 } -->> { "xx" : 23146 } on : rpl1 { "t" : 2000, "i" : 18 }
       { "xx" : 23146 } -->> { "xx" : 27187 } on : rpl1 { "t" : 2000, "i" : 20 }
       { "xx" : 27187 } -->> { "xx" : 31770 } on : rpl1 { "t" : 2000, "i" : 22 }
       { "xx" : 31770 } -->> { "xx" : 35246 } on : rpl1 { "t" : 2000, "i" : 24 }
       { "xx" : 35246 } -->> { "xx" : 38683 } on : rpl1 { "t" : 2000, "i" : 26 }
       { "xx" : 38683 } -->> { "xx" : 42120 } on : rpl1 { "t" : 2000, "i" : 28 }
       { "xx" : 42120 } -->> { "xx" : 45557 } on : rpl1 { "t" : 2000, "i" : 30 }
       { "xx" : 45557 } -->> { "xx" : 48994 } on : rpl1 { "t" : 2000, "i" : 32 }
       { "xx" : 48994 } -->> { "xx" : 53242 } on : rpl1 { "t" : 2000, "i" : 34 }
       { "xx" : 53242 } -->> { "xx" : 62409 } on : rpl1 { "t" : 2000, "i" : 36 }
       { "xx" : 62409 } -->> { "xx" : 71576 } on : rpl1 { "t" : 2000, "i" : 38 }
       { "xx" : 71576 } -->> { "xx" : 80743 } on : rpl1 { "t" : 2000, "i" : 40 }
       { "xx" : 80743 } -->> { "xx" : 89910 } on : rpl1 { "t" : 2000, "i" : 42 }
       { "xx" : 89910 } -->> { "xx" : 99077 } on : rpl1 { "t" : 2000, "i" : 44 }
       { "xx" : 99077 } -->> { "xx" : 119382 } on : rpl1 { "t" : 2000, "i" : 46 }
       { "xx" : 119382 } -->> { "xx" : 133133 } on : rpl1 { "t" : 2000, "i" : 48 }
       { "xx" : 133133 } -->> { "xx" : 146884 } on : rpl1 { "t" : 2000, "i" : 50 }
       { "xx" : 146884 } -->> { "xx" : 160635 } on : rpl1 { "t" : 2000, "i" : 52 }
       { "xx" : 160635 } -->> { "xx" : 178457 } on : rpl1 { "t" : 2000, "i" : 54 }
       { "xx" : 178457 } -->> { "xx" : 200508 } on : rpl1 { "t" : 2000, "i" : 56 }
       { "xx" : 200508 } -->> { "xx" : 214259 } on : rpl1 { "t" : 2000, "i" : 58 }
       { "xx" : 214259 } -->> { "xx" : 228010 } on : rpl1 { "t" : 2000, "i" : 60 }
       { "xx" : 228010 } -->> { "xx" : 241761 } on : rpl1 { "t" : 2000, "i" : 62 }
       { "xx" : 241761 } -->> { "xx" : 255512 } on : rpl1 { "t" : 2000, "i" : 64 }
       { "xx" : 255512 } -->> { "xx" : 279630 } on : rpl1 { "t" : 2000, "i" : 66 }
       { "xx" : 279630 } -->> { "xx" : 301723 } on : rpl1 { "t" : 2000, "i" : 68 }
       { "xx" : 301723 } -->> { "xx" : 317196 } on : rpl1 { "t" : 2000, "i" : 70 }
       { "xx" : 317196 } -->> { "xx" : 336533 } on : rpl1 { "t" : 2000, "i" : 72 }
       { "xx" : 336533 } -->> { "xx" : 359500 } on : rpl1 { "t" : 2000, "i" : 74 }
       { "xx" : 359500 } -->> { "xx" : 385354 } on : rpl1 { "t" : 2000, "i" : 76 }
       { "xx" : 385354 } -->> { "xx" : 400837 } on : rpl1 { "t" : 2000, "i" : 78 }
       { "xx" : 400837 } -->> { "xx" : 422259 } on : rpl1 { "t" : 2000, "i" : 80 }
       { "xx" : 422259 } -->> { "xx" : 444847 } on : rpl1 { "t" : 2000, "i" : 82 }
       { "xx" : 444847 } -->> { "xx" : 472084 } on : rpl1 { "t" : 2000, "i" : 84 }
       { "xx" : 472084 } -->> { "xx" : 490796 } on : rpl1 { "t" : 2000, "i" : 86 }
       { "xx" : 490796 } -->> { "xx" : 509498 } on : rpl1 { "t" : 2000, "i" : 88 }
       { "xx" : 509498 } -->> { "xx" : 534670 } on : rpl1 { "t" : 2000, "i" : 90 }
       { "xx" : 534670 } -->> { "xx" : 561927 } on : rpl1 { "t" : 2000, "i" : 92 }
       { "xx" : 561927 } -->> { "xx" : 586650 } on : rpl1 { "t" : 2000, "i" : 94 }
       { "xx" : 586650 } -->> { "xx" : 606316 } on : rpl1 { "t" : 2000, "i" : 96 }
       { "xx" : 606316 } -->> { "xx" : 632292 } on : rpl1 { "t" : 2000, "i" : 98 }
       { "xx" : 632292 } -->> { "xx" : 650179 } on : rpl1 { "t" : 2000, "i" : 100 }
       { "xx" : 650179 } -->> { "xx" : 670483 } on : rpl1 { "t" : 2000, "i" : 102 }
       { "xx" : 670483 } -->> { "xx" : 695898 } on : rpl1 { "t" : 2000, "i" : 104 }
       { "xx" : 695898 } -->> { "xx" : 719853 } on : rpl1 { "t" : 2000, "i" : 106 }
       { "xx" : 719853 } -->> { "xx" : 734751 } on : rpl1 { "t" : 2000, "i" : 108 }
       { "xx" : 734751 } -->> { "xx" : 757143 } on : rpl1 { "t" : 2000, "i" : 110 }
       { "xx" : 757143 } -->> { "xx" : 773800 } on : rpl1 { "t" : 2000, "i" : 112 }
       { "xx" : 773800 } -->> { "xx" : 796919 } on : rpl1 { "t" : 2000, "i" : 114 }
       { "xx" : 796919 } -->> { "xx" : 814262 } on : rpl1 { "t" : 2000, "i" : 116 }
       { "xx" : 814262 } -->> { "xx" : 837215 } on : rpl1 { "t" : 2000, "i" : 118 }
       { "xx" : 837215 } -->> { "xx" : 855766 } on : rpl1 { "t" : 2000, "i" : 120 }
       { "xx" : 855766 } -->> { "xx" : 869517 } on : rpl1 { "t" : 2000, "i" : 122 }
       { "xx" : 869517 } -->> { "xx" : 883268 } on : rpl2 { "t" : 59000, "i" : 0 }
       { "xx" : 883268 } -->> { "xx" : 897019 } on : rpl2 { "t" : 58000, "i" : 0 }
       { "xx" : 897019 } -->> { "xx" : 919595 } on : rpl2 { "t" : 57000, "i" : 0 }
       { "xx" : 919595 } -->> { "xx" : 946611 } on : rpl2 { "t" : 56000, "i" : 0 }
       { "xx" : 946611 } -->> { "xx" : 966850 } on : rpl2 { "t" : 55000, "i" : 0 }
       { "xx" : 966850 } -->> { "xx" : 989291 } on : rpl2 { "t" : 54000, "i" : 0 }
       { "xx" : 989291 } -->> { "xx" : 1008580 } on : rpl2 { "t" : 53000, "i" : 0 }
       { "xx" : 1008580 } -->> { "xx" : 1022331 } on : rpl2 { "t" : 52000, "i" : 0 }
       { "xx" : 1022331 } -->> { "xx" : 1036082 } on : rpl2 { "t" : 51000, "i" : 0 }
       { "xx" : 1036082 } -->> { "xx" : 1060888 } on : rpl2 { "t" : 50000, "i" : 0 }
       { "xx" : 1060888 } -->> { "xx" : 1088121 } on : rpl2 { "t" : 49000, "i" : 0 }
       { "xx" : 1088121 } -->> { "xx" : 1101872 } on : rpl2 { "t" : 48000, "i" : 0 }
       { "xx" : 1101872 } -->> { "xx" : 1122160 } on : rpl2 { "t" : 47000, "i" : 0 }
       { "xx" : 1122160 } -->> { "xx" : 1143537 } on : rpl2 { "t" : 46000, "i" : 0 }
       { "xx" : 1143537 } -->> { "xx" : 1168372 } on : rpl2 { "t" : 45000, "i" : 0 }
       { "xx" : 1168372 } -->> { "xx" : 1182123 } on : rpl2 { "t" : 44000, "i" : 0 }
       { "xx" : 1182123 } -->> { "xx" : 1201952 } on : rpl2 { "t" : 43000, "i" : 0 }
       { "xx" : 1201952 } -->> { "xx" : 1219149 } on : rpl2 { "t" : 42000, "i" : 0 }
       { "xx" : 1219149 } -->> { "xx" : 1232900 } on : rpl2 { "t" : 41000, "i" : 0 }
       { "xx" : 1232900 } -->> { "xx" : 1247184 } on : rpl2 { "t" : 40000, "i" : 0 }
       { "xx" : 1247184 } -->> { "xx" : 1270801 } on : rpl2 { "t" : 39000, "i" : 0 }
       { "xx" : 1270801 } -->> { "xx" : 1294343 } on : rpl2 { "t" : 38000, "i" : 0 }
       { "xx" : 1294343 } -->> { "xx" : 1313250 } on : rpl2 { "t" : 37000, "i" : 0 }
       { "xx" : 1313250 } -->> { "xx" : 1336332 } on : rpl2 { "t" : 36000, "i" : 0 }
       { "xx" : 1336332 } -->> { "xx" : 1358840 } on : rpl2 { "t" : 35000, "i" : 0 }
       { "xx" : 1358840 } -->> { "xx" : 1372591 } on : rpl2 { "t" : 34000, "i" : 0 }
       { "xx" : 1372591 } -->> { "xx" : 1386342 } on : rpl2 { "t" : 33000, "i" : 0 }
       { "xx" : 1386342 } -->> { "xx" : 1400093 } on : rpl2 { "t" : 32000, "i" : 0 }
       { "xx" : 1400093 } -->> { "xx" : 1418372 } on : rpl2 { "t" : 31000, "i" : 0 }
       { "xx" : 1418372 } -->> { "xx" : 1440590 } on : rpl2 { "t" : 30000, "i" : 0 }
       { "xx" : 1440590 } -->> { "xx" : 1461034 } on : rpl2 { "t" : 29000, "i" : 0 }
       { "xx" : 1461034 } -->> { "xx" : 1488305 } on : rpl2 { "t" : 28000, "i" : 0 }
       { "xx" : 1488305 } -->> { "xx" : 1510326 } on : rpl2 { "t" : 27000, "i" : 0 }
       { "xx" : 1510326 } -->> { "xx" : 1531986 } on : rpl2 { "t" : 26000, "i" : 0 }
       { "xx" : 1531986 } -->> { "xx" : 1545737 } on : rpl2 { "t" : 25000, "i" : 0 }
       { "xx" : 1545737 } -->> { "xx" : 1559488 } on : rpl2 { "t" : 24000, "i" : 0 }
       { "xx" : 1559488 } -->> { "xx" : 1576755 } on : rpl2 { "t" : 23000, "i" : 0 }
       { "xx" : 1576755 } -->> { "xx" : 1596977 } on : rpl2 { "t" : 22000, "i" : 0 }
       { "xx" : 1596977 } -->> { "xx" : 1619863 } on : rpl2 { "t" : 21000, "i" : 0 }
       { "xx" : 1619863 } -->> { "xx" : 1633614 } on : rpl2 { "t" : 20000, "i" : 0 }
       { "xx" : 1633614 } -->> { "xx" : 1647365 } on : rpl2 { "t" : 19000, "i" : 0 }
       { "xx" : 1647365 } -->> { "xx" : 1668372 } on : rpl2 { "t" : 18000, "i" : 0 }
       { "xx" : 1668372 } -->> { "xx" : 1682123 } on : rpl2 { "t" : 17000, "i" : 0 }
       { "xx" : 1682123 } -->> { "xx" : 1695874 } on : rpl2 { "t" : 16000, "i" : 0 }
       { "xx" : 1695874 } -->> { "xx" : 1711478 } on : rpl2 { "t" : 15000, "i" : 0 }
       { "xx" : 1711478 } -->> { "xx" : 1738684 } on : rpl2 { "t" : 14000, "i" : 0 }
       { "xx" : 1738684 } -->> { "xx" : 1758083 } on : rpl2 { "t" : 13000, "i" : 0 }
       { "xx" : 1758083 } -->> { "xx" : 1773229 } on : rpl2 { "t" : 12000, "i" : 0 }
       { "xx" : 1773229 } -->> { "xx" : 1794767 } on : rpl2 { "t" : 11000, "i" : 0 }
       { "xx" : 1794767 } -->> { "xx" : 1808518 } on : rpl2 { "t" : 10000, "i" : 0 }
       { "xx" : 1808518 } -->> { "xx" : 1834892 } on : rpl2 { "t" : 9000, "i" : 0 }
       { "xx" : 1834892 } -->> { "xx" : 1848643 } on : rpl2 { "t" : 8000, "i" : 0 }
       { "xx" : 1848643 } -->> { "xx" : 1873297 } on : rpl2 { "t" : 7000, "i" : 0 }
       { "xx" : 1873297 } -->> { "xx" : 1887048 } on : rpl2 { "t" : 6000, "i" : 2 }
       { "xx" : 1887048 } -->> { "xx" : 1911702 } on : rpl2 { "t" : 6000, "i" : 3 }
       { "xx" : 1911702 } -->> { "xx" : 1918372 } on : rpl2 { "t" : 5000, "i" : 0 }
       { "xx" : 1918372 } -->> { "xx" : 1932123 } on : rpl2 { "t" : 7000, "i" : 2 }
       { "xx" : 1932123 } -->> { "xx" : 1959185 } on : rpl2 { "t" : 7000, "i" : 3 }
       { "xx" : 1959185 } -->> { "xx" : 1972936 } on : rpl2 { "t" : 9000, "i" : 2 }
       { "xx" : 1972936 } -->> { "xx" : 1999999 } on : rpl2 { "t" : 9000, "i" : 3 }
       { "xx" : 1999999 } -->> { "xx" : { $maxKey : 1 } } on : rpl2 { "t" : 2000, "i" : 0 }
     test.tickets chunks:
         rpl1    1
       { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
   {  "_id" : "adin",  "partitioned" : false,  "primary" : "rpl1" }

 mongos> 
 
 7.7.2 Verify that we are connected to the sharded cluster
 [root@472322 ~]# /db/mongodb/bin/mongo --port 27027
 MongoDB shell version: 2.0.1
 connecting to: 127.0.0.1:27027/test
 PRIMARY> db.runCommand({ isdbgrid:1 });
 {
   "errmsg" : "no such cmd: isdbgrid",
   "bad cmd" : {
     "isdbgrid" : 1
   },
   "ok" : 0
 }
 PRIMARY> 
 The error message shows this was run in the wrong window; isdbgrid has to be run in the mongos window.
 
 [root@472322 ~]# /db/mongodb/bin/mongo --port 27017
 MongoDB shell version: 2.0.1
 connecting to: 127.0.0.1:27017/test
 mongos> db.runCommand({ isdbgrid:1 });
 { "isdbgrid" : 1, "hostname" : "472322.ea.com", "ok" : 1 }
 mongos> 
 OK, "isdbgrid" : 1 means we are connected to a mongos, i.e. the sharded cluster.
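 An equivalent check, not used in the original post, is db.isMaster(): when connected to a mongos the reply carries "msg" : "isdbgrid", while a plain mongod's reply does not:
 mongos> db.isMaster().msg
 isdbgrid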
 
 7.8 Shard an unsharded collection
 7.8.1 Prepare a collection in the mongos window.
 for( var i = 1; i < 2000000; i++ ) db.tc3.insert({ "xx":i, "fapp" : "84eb9fb556074d6481e31915ac2427f0",  "dne" : "ueeDEhIB6tmP4cfY43NwWvAenzKWx19znmbheAuBl4j39U8uFXS1QGi2GCMHO7L21szgeF6Iquqmnw8kfJbvZUs/11RyxcoRm+otbUJyPPxFkevzv4SrI3kGxczG6Lsd19NBpyskaElCTtVKxwvQyBNgciXYq6cO/8ntV2C6cwQ=",  "eml" : "q5x68h3qyVBqp3ollJrY3XEkXECjPEncXhbJjga+3hYoa4zYNhrNmBN91meL3o7jsBI/N6qe2bb2BOnOJNAnBMDzhNmPqJKG/ZVLfT9jpNkUD/pQ4oJENMv72L2GZoiyym2IFT+oT3N0KFhcv08b9ke9tm2EHTGcBsGg1R40Ah+Y/5z89OI4ERmI/48qjvaw",  "uid" : "sQt92NUPr3CpCVnbpotU2lqRNVfZD6k/9TGW62UT7ExZYF8Dp1cWIVQoYNQVyFRLkxjmCoa8m6DiLiL/fPdG1k7WYGUH4ueXXK2yfVn/AGUk3pQbIuh7nFbqZCrAQtEY7gU0aIGC4sotAE8kghvCa5qWnSX0SWTViAE/esaWORo=",  "agt" : "PHP/eaSSOPlugin",  "sd" : "S15345853557133877",  "pt" : 3795,  "ip" : "211.223.160.34",  "av" : "http://secure.download.dm.origin.com/production/avatar/prod/1/599/40x40.JPEG",  "nex" : ISODate("2013-01-18T04:00:32.41Z"),  "exy" : ISODate("2013-01-18T01:16:32.015Z"),  "chk" : "sso4648609868740971",  "aid" : "Ciyvab0tregdVsBtboIpeChe4G6uzC1v5_-SIxmvSLLINJClUkXJhNvWkSUnajzi8xsv1DGYm0D3V46LLFI-61TVro9-HrZwyRwyTf9NwYIyentrgAY_qhs8fh7unyfB",  "tid" : "rUqFhONysi0yA==13583853927872",  "date" : ISODate("2012-02-17T01:16:32.787Z"),  "v" : "2.0.0",  "scope" : [],  "rug" : false,  "schk" : true,  "fjs" : false,  "sites" : [{      "name" : "Origin.com",      "id" : "nb09xrt8384147bba27c2f12e112o8k9",      "last" : ISODate("2013-01-17T01:16:32.787Z"),      "_id" : ObjectId("50f750f06a56028661000f20")    }]});
 db.tc3.ensureIndex({xx:1});
 Check whether the collection is sharded:
 db.tc3.stats();
 mongos> db.tc3.stats();
 {
   "sharded" : false,
   "primary" : "rpl1",
   "ns" : "test.tc3",
   "count" : 1999999,
   "size" : 2439998940,
   "avgObjSize" : 1220.00008000004,
   "storageSize" : 2480209920,
   "numExtents" : 29,
   "nindexes" : 2,
   "lastExtentSize" : 417480704,
   "paddingFactor" : 1,
   "flags" : 1,
   "totalIndexSize" : 115232544,
   "indexSizes" : {
     "_id_" : 64909264,
     "xx_1" : 50323280
   },
   "ok" : 1
 }
 mongos> 
 
 7.8.2 Enable sharding: shard the collection using a hashed shard key on "xx"
 sh.shardCollection( "test.tc3", { xx: "hashed" } )  or  db.runCommand( { shardCollection : "test.tc3", key : {"xx":1} })
 mongos>  sh.shardCollection( "test.tc3", { xx: "hashed" } ) ;
 command failed: { "ok" : 0, "errmsg" : "shard keys must all be ascending" }
 shard keys must all be ascending
 mongos> 
 Why did it fail?
 I checked http://docs.mongodb.org/manual/reference/method/sh.shardCollection/
 and found "New in version 2.4: Use the form {field: "hashed"} to create a hashed shard key. Hashed shard keys may not be compound indexes.",
 and the current version is 2.0.1, so I have to enable sharding the other way.
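 To confirm the server version from the shell (a quick check, not shown in the post):
 mongos> db.version()
 2.0.1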
 mongos> db.runCommand( { shardCollection : "test.tc3", key : {"xx":1} })
 { "ok" : 0, "errmsg" : "access denied - use admin db" }
 mongos> use admin
 switched to db admin
 mongos> db.runCommand( { shardCollection : "test.tc3", key : {"xx":1} })
 { "collectionsharded" : "test.tc3", "ok" : 1 }
 mongos> 
 Nice, sharding was enabled successfully.
 
 7.8.3 Check the sharding status again
 mongos> use test;
 switched to db test
 mongos> db.tc3.stats();
 {
   "sharded" : true,  -- nice, it is a sharding now
   "flags" : 1,
   "ns" : "test.tc3",
   "count" : 2005360,
   "numExtents" : 48,
   "size" : 2446539448,
   "storageSize" : 2979864576,
   "totalIndexSize" : 116573408,
   "indexSizes" : {
     "_id_" : 65105488,
     "xx_1" : 51467920
   },
   "avgObjSize" : 1220.0001236685682,
   "nindexes" : 2,
   "nchunks" : 73,
   "shards" : {
     "rpl1" : {
       "ns" : "test.tc3",
       "count" : 1647809,
       "size" : 2010326980,
       "avgObjSize" : 1220,
       "storageSize" : 2480209920,
       "numExtents" : 29,
       "nindexes" : 2,
       "lastExtentSize" : 417480704,
       "paddingFactor" : 1,
       "flags" : 1,
       "totalIndexSize" : 94956064,
       "indexSizes" : {
         "_id_" : 53487392,
         "xx_1" : 41468672
       },
       "ok" : 1
     },
     "rpl2" : {
       "ns" : "test.tc3",
       "count" : 357551,
       "size" : 436212468,
       "avgObjSize" : 1220.0006936073455,
       "storageSize" : 499654656,
       "numExtents" : 19,
       "nindexes" : 2,
       "lastExtentSize" : 89817088,
       "paddingFactor" : 1,
       "flags" : 1,
       "totalIndexSize" : 21617344,
       "indexSizes" : {
         "_id_" : 11618096,
         "xx_1" : 9999248
       },
       "ok" : 1
     }
   },
   "ok" : 1
 }
 mongos>  
 
 mongos> db.printShardingStatus();
 --- Sharding Status --- 
   sharding version: { "_id" : 1, "version" : 3 }
   shards:
   {  "_id" : "rpl1",  "host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028" }
   {  "_id" : "rpl2",  "host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038" }
   databases:
   {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
   {  "_id" : "test",  "partitioned" : true,  "primary" : "rpl1" }
     test.tc chunks:
         rpl1    1
       { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
     test.tc2 chunks:
         rpl1    62
         rpl2    61
       too many chunks to print, use verbose if you want to force print
     test.tc3 chunks:
         rpl2    19
         rpl1    54
       too many chunks to print, use verbose if you want to force print  
      -- note: there are already 19 chunks on rpl2, so data is being migrated to rpl2 now.
     test.tickets chunks:
         rpl1    1

       { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey

Reposted from: http://blog.itpub.net/26230597/viewspace-1098147/
