MongoDB: adding a new shard replica set online (continued)

 

 

The earlier attempt to add a new shard replica set failed because of a version incompatibility (see: http://blog.csdn.net/mchdba/article/details/51867303). With the versions now unified, we continue adding the new shard replica set to the live cluster.

 

1. Create three mongod instances on master2

 

                Prepare three mongod processes; first create the data and log directories:

                mkdir -p /data/mongodb/shard37017

                mkdir -p /data/mongodb/shard37027

                mkdir -p /data/mongodb/shard37037

                mkdir -p /data/mongodb/logs

               

                The three startup commands:

                /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37017 --dbpath /data/mongodb/shard37017 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37017.log --logappend --fork

                /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37027 --dbpath /data/mongodb/shard37027 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37027.log --logappend --fork

                /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37037 --dbpath /data/mongodb/shard37037 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37037.log --logappend --fork
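The three commands above differ only in the port number, so they can be generated with a loop. A minimal sketch (binary path and directories are taken from the commands above; it only prints the commands for review instead of executing them):

```shell
# Sketch: generate the three mongod startup commands, varying only the port.
# MONGO_BIN and the /data/mongodb paths come from the commands above.
MONGO_BIN=/usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod
cmds=""
for port in 37017 37027 37037; do
  cmd="$MONGO_BIN --shardsvr --replSet shard3 --port $port \
--dbpath /data/mongodb/shard$port --oplogSize 2048 \
--logpath /data/mongodb/logs/shard_m2_$port.log --logappend --fork"
  echo "$cmd"          # print instead of executing, for review
  cmds="$cmds $cmd"
done
```

Once the directories exist, replace the `echo "$cmd"` line with `$cmd` to actually start the processes.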

 

 

 

PS: starting the mongod processes can be slow and may appear to hang; be patient. `ps -eaf|grep mongodb` may temporarily show several identical processes for the same port (for example 37037). This is harmless; once startup completes, only a single process for port 37037 remains:

# At first, several processes for port 37037 are running

[root@db_master_2 ~]# ps -eaf|grep mongo

                root     52721   752  0 21:27 pts/0    00:00:00 su - mongodb

                mongodb  52722 52721  0 21:27 pts/0    00:00:00 -bash

                mongodb  56368     1  0 21:36 ?        00:00:00 /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37017 --dbpath /data/mongodb/shard37017 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37017.log --logappend --fork

                mongodb  56421     1  0 21:36 ?        00:00:00 /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37027 --dbpath /data/mongodb/shard37027 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37027.log --logappend --fork

                mongodb  56473 52722  1 21:36 pts/0    00:00:00 /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37037 --dbpath /data/mongodb/shard37037 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37037.log --logappend --fork

                mongodb  56474 56473  0 21:36 ?        00:00:00 /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37037 --dbpath /data/mongodb/shard37037 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37037.log --logappend --fork

                mongodb  56475 56474 17 21:36 ?        00:00:01 /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37037 --dbpath /data/mongodb/shard37037 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37037.log --logappend --fork

                root     56489 55041  0 21:36 pts/1    00:00:00 grep mongo

                [root@db_master_2 ~]#

               

# After a while, once startup finished, the process count settled at the normal three.

                [root@db_master_2 ~]#

                [root@db_master_2 ~]# ps -eaf|grep mongo

                root     52721   752  0 21:27 pts/0    00:00:00 su - mongodb

                mongodb  52722 52721  0 21:27 pts/0    00:00:00 -bash

                mongodb  56368     1  0 21:36 ?        00:00:03 /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37017 --dbpath /data/mongodb/shard37017 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37017.log --logappend --fork

                mongodb  56421     1  0 21:36 ?        00:00:03 /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37027 --dbpath /data/mongodb/shard37027 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37027.log --logappend --fork

                mongodb  56475     1  0 21:36 ?        00:00:05 /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongod --shardsvr --replSet shard3 --port 37037 --dbpath /data/mongodb/shard37037 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2_37037.log --logappend --fork

                root     61460 55041  0 21:49 pts/1    00:00:00 grep mongo

                [root@db_master_2 ~]# 

 

 

                               

 

2. Initialize the new shard replica set

                # Define the new replica-set config

                >config = { _id:"shard3", members:[

                {_id:0,host:"mongodb_shard3:37017",priority:1},

                {_id:1,host:"mongodb_shard3:37027",priority:2},

                {_id:2,host:"mongodb_shard3:37037",arbiterOnly:true}

                ]

                };

 

                # Initialize the replica set

                >rs.initiate(config);

               

                # The session looks like this:

                [mongodb@db_master_2 ~]$ /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongo mongodb_shard3:37017/admin

                MongoDB shell version: 2.4.4

                connecting to: mongodb_shard3:37017/admin

                Server has startup warnings:

                2016-07-08T17:39:51.888+0800 I CONTROL  [initandlisten]

                2016-07-08T17:39:51.888+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

                2016-07-08T17:39:51.888+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

                2016-07-08T17:39:51.888+0800 I CONTROL  [initandlisten]

                2016-07-08T17:39:51.888+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

                2016-07-08T17:39:51.888+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

                2016-07-08T17:39:51.888+0800 I CONTROL  [initandlisten]

                >

                >config = { _id:"shard3", members:[

                ...{_id:0,host:"mongodb_shard3:37017",priority:1},

                ...{_id:1,host:"mongodb_shard3:37027",priority:2},

                ...{_id:2,host:"mongodb_shard3:37037",arbiterOnly:true}

                ...]

                ...};

                {

                                "_id": "shard3",

                                "members": [

                                                {

                                                                "_id": 0,

                                                                "host": "mongodb_shard3:37017",

                                                                "priority": 1

                                                },

                                                {

                                                                "_id": 1,

                                                                "host": "mongodb_shard3:37027",

                                                                "priority": 2

                                                },

                                                {

                                                                "_id": 2,

                                                                "host": "mongodb_shard3:37037",

                                                                "arbiterOnly": true

                                                }

                                ]

                }

                >rs.initiate(config);

                {"ok" : 1 }

                shard3:OTHER>
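The interactive steps above (defining `config`, then calling `rs.initiate()`) can also be scripted: write them into a .js file and pass it to the mongo shell. A sketch; the connection string is the one used above, while the file path /tmp/init_shard3.js is invented for illustration, and the printed command is not executed here since it needs the mongod instances running:

```shell
# Sketch: put the replica-set config and rs.initiate() into a script file.
cat > /tmp/init_shard3.js <<'EOF'
config = { _id: "shard3", members: [
    { _id: 0, host: "mongodb_shard3:37017", priority: 1 },
    { _id: 1, host: "mongodb_shard3:37027", priority: 2 },
    { _id: 2, host: "mongodb_shard3:37037", arbiterOnly: true }
]};
printjson(rs.initiate(config));
EOF
# The command that would run it against the new replica set:
echo /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongo \
     mongodb_shard3:37017/admin /tmp/init_shard3.js
```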

               

 

 

3. Add the new shard shard3 online via mongos

The add command: db.runCommand( { addshard : "shard3/mongodb_shard3:37017,mongodb_shard3:37027,mongodb_shard3:37037" } );

 

                [mongodb@db_m1_slave_1 logs]$ /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongo mongodbs1:30000/admin

                MongoDB shell version: 2.4.4

                connecting to: localhost:30000/admin

                mongos> db.runCommand( { addshard : "shard3/mongodb_shard3:37017,mongodb_shard3:37027,mongodb_shard3:37037" } );

                { "shardAdded" : "shard3", "ok" : 1 }

                mongos>

 

                Equivalent forms:

                db.runCommand( { addShard : "shard3/mongodb_shard3:37017,mongodb_shard3:37027,mongodb_shard3:37037", maxSize : 0, name : "shard3" } );

                sh.addShard("shard3/mongodb_shard3:37017,mongodb_shard3:37027,mongodb_shard3:37037")

                 

The actual session went like this:

[mongodb@db_master_2 ~]$ /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongo mongodbs1:30000/admin

MongoDB shell version: 2.4.4

connecting to: mongodbs1:30000/admin

mongos> db.runCommand( { addshard : "shard3/mongodb_shard3:37017,mongodb_shard3:37027,mongodb_shard3:37037" } );

{

                "ok" : 0,

                "errmsg" : "couldn't connect to new shard socket exception [CONNECT_ERROR] for shard3/mongodb_shard3:37017,mongodb_shard3:37027,mongodb_shard3:37037"

}

mongos> db.runCommand( { removeShard :"shard3" } );

{

                "msg": "draining started successfully",

                "state": "started",

                "shard": "shard3",

                "ok": 1

}

mongos>

 

4. Removing and re-adding the shard

Adding shard3 fails with an error:

mongos> db.runCommand( { addshard : "shard3/mongodb_shard3:37017,mongodb_shard3:37027,mongodb_shard3:37037" } );

{

                "ok": 0,

                "errmsg": "E11000 duplicate key error index: config.shards.$_id_  dup key: { : \"shard3\" }"

}

mongos>
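The duplicate-key error comes from the `config.shards` collection: the earlier removeShard only *started* draining, so the `_id: "shard3"` document was still registered. That document can be inspected directly through mongos; a sketch (the /tmp script path is invented for illustration, and the printed command requires the mongos from the session above):

```shell
# Sketch: query config.shards for the leftover shard3 entry (note the
# "draining" : true flag visible in the status output below).
cat > /tmp/check_shard3.js <<'EOF'
printjson(db.getSiblingDB("config").shards.findOne({ _id: "shard3" }));
EOF
echo /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongo \
     mongodbs1:30000/admin /tmp/check_shard3.js
```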

 

Check the current sharding status:

mongos> db.printShardingStatus();

--- Sharding Status ---

 sharding version: {

                "_id": 1,

                "version": 3,

                "minCompatibleVersion": 3,

                "currentVersion": 4,

                "clusterId": ObjectId("56eec856472f21af28119fdc")

}

 shards:

                {  "_id" : "shard1",  "host" : "shard1/192.168.3.62:27017,192.168.3.63:27017"}

                {  "_id" : "shard2",  "host" : "shard2/192.168.3.62:27018,192.168.3.63:27018"}

                {  "_id" : "shard3",  "draining" : true,  "host" :"shard3/mongodb_shard3:37017,mongodb_shard3:37027" }

 databases:

                {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }

                {  "_id" : "report",  "partitioned" : true,  "primary" : "shard2" }

                                report.print

                                                shard key: { "_id" : 1 }

                                                chunks:

                                                                shard1   1

                                                                shard2   1

                                                { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("5517aac945ce6df1bdf8a508") } on : shard1 { "t" : 2, "i" : 0 }

                                                { "_id" : ObjectId("5517aac945ce6df1bdf8a508") } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 { "t" : 2, "i" : 1 }

                {  "_id" : "screen",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "search",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "traffice",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "wifi",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "nagios",  "partitioned" : true,  "primary" : "shard2" }

                                nagios.last_primary_server

                                                shard key: { "_id" : 1 }

                                                chunks:

                                                                shard2   1

                                                { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 { "t" : 1, "i" : 0 }

                {  "_id" : "office",  "partitioned" : true,  "primary" : "shard1" }

                                office.guard

                                                shard key: { "_id" : 1 }

                                                chunks:

                                                                shard2   1

                                                                shard1   1

                                                { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("554dc72f45ce6df1be127ddc") } on : shard2 { "t" : 2, "i" : 0 }

                                                { "_id" : ObjectId("554dc72f45ce6df1be127ddc") } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 { "t" : 2, "i" : 1 }

                {  "_id" :"receivereceiptdata", "partitioned" : true, "primary" : "shard2" }

                {  "_id" : "test",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "app",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "ibeacon",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "leadvideo",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "parking",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "pos",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "pv",  "partitioned" : false,  "primary" : "shard1" }

                {  "_id" : "im",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "queue",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" :"receiveposinfodata", "partitioned" : false, "primary" : "shard2" }

                {  "_id" : "db",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "ibeancon",  "partitioned" : false,  "primary" : "shard2" }

 

               

shard3 is still listed, so run removeShard once more:

mongos> db.runCommand( { removeShard :"shard3" } );

{

                "msg": "removeshard completed successfully",

                "state": "completed",

                "shard": "shard3",

                "ok": 1

}

mongos>

 

 

Check the shard info again; the shard3 entry is gone:

mongos> db.printShardingStatus();

--- Sharding Status ---

 sharding version: {

                "_id": 1,

                "version": 3,

                "minCompatibleVersion": 3,

                "currentVersion": 4,

                "clusterId": ObjectId("56eec856472f21af28119fdc")

}

 shards:

                {  "_id" : "shard1",  "host" : "shard1/192.168.3.62:27017,192.168.3.63:27017"}

                {  "_id" : "shard2",  "host" : "shard2/192.168.3.62:27018,192.168.3.63:27018"}

 databases:

                {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }

                {  "_id" : "report",  "partitioned" : true,  "primary" : "shard2" }

                                report.print

                                                shard key: { "_id" : 1 }

                                                chunks:

                                                                shard1   1

                                                                shard2   1

                                                { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("5517aac945ce6df1bdf8a508") } on : shard1 { "t" : 2, "i" : 0 }

                                                { "_id" : ObjectId("5517aac945ce6df1bdf8a508") } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 { "t" : 2, "i" : 1 }

                {  "_id" : "screen",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "search",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "traffice",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "wifi",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "nagios",  "partitioned" : true,  "primary" : "shard2" }

                                nagios.last_primary_server

                                                shard key: { "_id" : 1 }

                                                chunks:

                                                                shard2   1

                                                { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 { "t" : 1, "i" : 0 }

                {  "_id" : "office",  "partitioned" : true,  "primary" : "shard1" }

                                office.guard

                                                shard key: { "_id" : 1 }

                                                chunks:

                                                                shard2   1

                                                                shard1   1

                                                { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("554dc72f45ce6df1be127ddc") } on : shard2 { "t" : 2, "i" : 0 }

                                                { "_id" : ObjectId("554dc72f45ce6df1be127ddc") } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 { "t" : 2, "i" : 1 }

                {  "_id" :"receivereceiptdata", "partitioned" : true, "primary" : "shard2" }

                {  "_id" : "test",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "app",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "ibeacon",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "leadvideo",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "parking",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "pos",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "pv",  "partitioned" : false,  "primary" : "shard1" }

                {  "_id" : "im",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "queue",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "receiveposinfodata",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "db",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "ibeancon",  "partitioned" : false,  "primary" : "shard2" }

 

mongos>

 

 

Then add shard3 again; this time it succeeds:

mongos> db.runCommand( { addshard : "shard3/mongodb_shard3:37017,mongodb_shard3:37027,mongodb_shard3:37037" } );

{ "shardAdded" : "shard3", "ok" : 1 }

mongos>

 

 

               

5. Check the latest shard information

mongos> db.printShardingStatus();

--- Sharding Status ---

 sharding version: {

                "_id": 1,

                "version": 3,

                "minCompatibleVersion": 3,

                "currentVersion": 4,

                "clusterId": ObjectId("56eec856472f21af28119fdc")

}

 shards:

                {  "_id" : "shard1",  "host" : "shard1/192.168.3.62:27017,192.168.3.63:27017"}

                {  "_id" : "shard2",  "host" : "shard2/192.168.3.62:27018,192.168.3.63:27018"}

                {  "_id" : "shard3",  "host" :"shard3/mongodb_shard3:37017,mongodb_shard3:37027" }

 databases:

                {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }

                {  "_id" : "report",  "partitioned" : true,  "primary" : "shard2" }

                                report.print

                                                shard key: { "_id" : 1 }

                                                chunks:

                                                                shard1   1

                                                                shard2   1

                                                { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("5517aac945ce6df1bdf8a508") } on : shard1 { "t" : 2, "i" : 0 }

                                                { "_id" : ObjectId("5517aac945ce6df1bdf8a508") } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 { "t" : 2, "i" : 1 }

                {  "_id" : "screen",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "search",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "traffice",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "wifi",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "nagios",  "partitioned" : true,  "primary" : "shard2" }

                                nagios.last_primary_server

                                                shard key: { "_id" : 1 }

                                                chunks:

                                                                shard2   1

                                                { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 { "t" : 1, "i" : 0 }

                {  "_id" : "office",  "partitioned" : true,  "primary" : "shard1" }

                                office.guard

                                                shard key: { "_id" : 1 }

                                                chunks:

                                                                shard2   1

                                                                shard1   1

                                                { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("554dc72f45ce6df1be127ddc") } on : shard2 { "t" : 2, "i" : 0 }

                                                { "_id" : ObjectId("554dc72f45ce6df1be127ddc") } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 { "t" : 2, "i" : 1 }

                {  "_id" : "receivereceiptdata",  "partitioned" : true,  "primary" : "shard2" }

                {  "_id" : "test",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "app",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "ibeacon",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "leadvideo",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "parking",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "pos",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "pv",  "partitioned" : false,  "primary" : "shard1" }

                {  "_id" : "im",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "queue",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" :"receiveposinfodata", "partitioned" : false, "primary" : "shard2" }

                {  "_id" : "db",  "partitioned" : false,  "primary" : "shard2" }

                {  "_id" : "ibeancon",  "partitioned" : false,  "primary" : "shard2" }

 

mongos>

 

6. Configure the balancer

# Check balancer status

mongos> sh.isBalancerRunning();

false

mongos>

 

# Enable the balancer

mongos> sh.setBalancerState(true);

mongos>

mongos> sh.enableBalancing("bg");

mongos>
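Note that `sh.setBalancerState(true)` toggles the cluster-wide balancer, while `sh.enableBalancing()` expects a full collection namespace ("db.collection"), so the `"bg"` argument above presumably needs to name a sharded collection. The status checks can be bundled into a script; a sketch (mongos address is the one used in the sessions above, /tmp path invented for illustration, printed command not executed here):

```shell
# Sketch: balancer status checks collected into a mongo-shell script.
cat > /tmp/balancer_check.js <<'EOF'
print("balancer enabled: " + sh.getBalancerState());
print("balancer running: " + sh.isBalancerRunning());
EOF
echo /usr/local/mongodb-linux-x86_64-2.4.4/bin/mongo \
     mongodbs1:30000/admin /tmp/balancer_check.js
```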

 

Balancer reference: https://docs.mongodb.com/manual/reference/method/sh.getBalancerState/

addShard reference: https://docs.mongodb.com/manual/reference/command/addShard/

 
