Building a MongoDB Sharded Cluster on Windows -- Beginner's Guide

Before building a MongoDB sharded cluster, it helps to understand why sharding exists:

For a single database server, massive data volumes and high-throughput applications are an enormous challenge. Frequent CRUD operations can exhaust the server's CPU, rapid data growth can overwhelm disk storage, and eventually memory can no longer hold the working set, causing heavy I/O and a severely loaded host. To address this, database systems generally take one of two approaches: vertical scaling or sharding (horizontal scaling).

I. Vertical scaling

Simply put, this is the hardware route: scaling up a single server with more CPU, memory, or faster disks.

II. Sharding (the focus of this article)

Sharding, simply put, combines multiple MongoDB replica sets into one large whole and, based on a distribution rule (the shard key), stores data across the different replica sets. The rough structure is sketched below; building a MongoDB sharded cluster therefore starts from the lowest-level pieces.
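
As a simplified sketch of that structure: the application only ever talks to the mongos router, the router reads cluster metadata from the config servers, and the actual data lives in the shard replica sets.

application
    |
    v
mongos (router)  <----  config servers (cluster metadata)
    |
    +----------------+----------------+
    |                |                |
 shard1           shard2           shard3
(replica set)   (replica set)   (replica set)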

1. Create the Mongo directories

This part is entirely up to you. For ease of management, these are the directories I created:


config: directory for configuration files

configServer: directory for the config servers

route: directory for the router (mongos) server

shard1: directory for shard 1 (replica set 1)

shard2: directory for shard 2 (replica set 2)

shard3: directory for shard 3 (replica set 3)

How you organize things inside each folder is up to your own habits, so I won't go into it further.

2. Create the replica sets (replica set 1, replica set 2, replica set 3)

For how to create a replica set, see the earlier post on replica set creation; the configuration files differ slightly here, for example:

logpath=G:\MongoDB3.6.21\shard1\data1\log\mingo1.log
dbpath=G:\MongoDB3.6.21\shard1\data1\db
port=1111
directoryperdb=true
logappend=true
oplogSize=512
replSet=shard1
shardsvr=true
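
The other members of the same replica set use the same options with only the paths and port changed. As a sketch (the data2 directory and mongo2.log names below are my own convention; port 2222 matches the member list used later in this article when adding the shard):

logpath=G:\MongoDB3.6.21\shard1\data2\log\mongo2.log
dbpath=G:\MongoDB3.6.21\shard1\data2\db
port=2222
directoryperdb=true
logappend=true
oplogSize=512
replSet=shard1
shardsvr=true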

replSet: the name of the replica set; each replica set in the cluster must have a distinct name.

shardsvr=true: this option is required (it marks the mongod as a shard member).
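
Remember that after registering and starting all three shard1 members as Windows services, the replica set still has to be initiated (this is covered in the replica set post referenced above). A minimal sketch, assuming the members listen on ports 1111, 2222 and 3333 as they do later in this article:

mongo --host 127.0.0.1 --port 1111
> rs.initiate({
    _id: "shard1",                          // must match replSet=shard1
    members: [
      { _id: 0, host: "127.0.0.1:1111" },
      { _id: 1, host: "127.0.0.1:2222" },
      { _id: 2, host: "127.0.0.1:3333" }
    ]
  })
> rs.status()   // wait until one PRIMARY and two SECONDARY members show up

Repeat the same for shard2 (ports 4444-6666) and shard3 (ports 7777-9999).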

3. Create the config servers

Create the config server configuration files:

#conf1.config
logpath=G:\MongoDB3.6.21\configServer\conf1\log\conf1.log
dbpath=G:\MongoDB3.6.21\configServer\conf1\db
logappend=true
port=30000
configsvr=true
oplogSize=512
replSet=config
-----------------------------------------------
#conf2.config
logpath=G:\MongoDB3.6.21\configServer\conf2\log\conf2.log
dbpath=G:\MongoDB3.6.21\configServer\conf2\db
logappend=true
port=30001
configsvr=true
oplogSize=512
replSet=config
-----------------------------------------------
#conf3.config
logpath=G:\MongoDB3.6.21\configServer\conf3\log\conf3.log
dbpath=G:\MongoDB3.6.21\configServer\conf3\db
logappend=true
port=30002
configsvr=true
oplogSize=512
replSet=config

Register the config servers as Windows services:

mongod --config G:\MongoDB3.6.21\config\conf\conf1.config --serviceName "conf1" --serviceDisplayName "conf1" --install

mongod --config G:\MongoDB3.6.21\config\conf\conf2.config --serviceName "conf2" --serviceDisplayName "conf2" --install

mongod --config G:\MongoDB3.6.21\config\conf\conf3.config --serviceName "conf3" --serviceDisplayName "conf3" --install

Start the config servers:

net start conf1
net start conf2
net start conf3
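
One step the commands above do not include: since MongoDB 3.4 the config servers must run as a replica set themselves, so the config replica set also needs to be initiated before the router can use it. A minimal sketch, assuming the ports above:

mongo --host 127.0.0.1 --port 30000
> rs.initiate({
    _id: "config",            // must match replSet=config in the config files
    configsvr: true,          // marks this as the config server replica set
    members: [
      { _id: 0, host: "127.0.0.1:30000" },
      { _id: 1, host: "127.0.0.1:30001" },
      { _id: 2, host: "127.0.0.1:30002" }
    ]
  })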

4. Configure the router server

Run the following command to register the router (mongos) service:

mongos --configdb config/127.0.0.1:30000,127.0.0.1:30001,127.0.0.1:30002 --port 40004 --logpath G:\MongoDB3.6.21\route\log\route.log --logappend --serviceName "route" --serviceDisplayName "route" --install

--configdb: this points the router at the config servers. The name config is not fixed; it must match whatever you set as replSet=config in the config server configuration files, so make sure they correspond.

Start the router service:

net start route

5. Connect to the router

mongo --host 127.0.0.1 --port 40004

Output like the following means you have connected to the router successfully:

MongoDB shell version v3.6.21
connecting to: mongodb://127.0.0.1:40004/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a310a2fc-1059-4067-be5a-45dcee2dc69a") }
MongoDB server version: 3.6.21
Server has startup warnings:
2021-03-18T13:40:17.693+0800 I CONTROL  [main]
2021-03-18T13:40:17.693+0800 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2021-03-18T13:40:17.693+0800 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2021-03-18T13:40:17.693+0800 I CONTROL  [main]
2021-03-18T13:40:17.693+0800 I CONTROL  [main] ** WARNING: This server is bound to localhost.
2021-03-18T13:40:17.693+0800 I CONTROL  [main] **          Remote systems will be unable to connect to this server.
2021-03-18T13:40:17.693+0800 I CONTROL  [main] **          Start the server with --bind_ip <address> to specify which IP
2021-03-18T13:40:17.693+0800 I CONTROL  [main] **          addresses it should serve responses from, or with --bind_ip_all to
2021-03-18T13:40:17.693+0800 I CONTROL  [main] **          bind to all interfaces. If this behavior is desired, start the
2021-03-18T13:40:17.693+0800 I CONTROL  [main] **          server with --bind_ip 127.0.0.1 to disable this warning.
2021-03-18T13:40:17.694+0800 I CONTROL  [main]
mongos>

6. Add the shard servers, i.e. register the replica sets with the cluster. Be sure to switch to the admin database first, otherwise you won't have the permissions to run these commands.

mongos> use admin

mongos> db.runCommand({addshard:"shard1/127.0.0.1:1111,127.0.0.1:2222,127.0.0.1:3333",name:"shard1"})

If it returns something like this, the shard was added successfully:

{
        "shardAdded" : "shard1",
        "ok" : 1,
        "operationTime" : Timestamp(1616046216, 4),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1616046216, 5),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

Add shard2 and shard3 in the same way:

mongos> db.runCommand({addshard:"shard2/127.0.0.1:4444,127.0.0.1:5555,127.0.0.1:6666",name:"shard2"})

{
        "shardAdded" : "shard2",
        "ok" : 1,
        "operationTime" : Timestamp(1616046249, 3),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1616046249, 3),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

mongos> db.runCommand({addshard:"shard3/127.0.0.1:7777,127.0.0.1:8888,127.0.0.1:9999",name:"shard1"})

{
        "shardAdded" : "shard3",
        "ok" : 1,
        "operationTime" : Timestamp(1616046271, 3),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1616046271, 3),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
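
As a side note, the sh.addShard helper is equivalent to the addshard command used above, and you can list the registered shards at any time with listShards. Shown here only as an alternative, not as an extra step:

mongos> sh.addShard("shard1/127.0.0.1:1111,127.0.0.1:2222,127.0.0.1:3333")
mongos> db.adminCommand({ listShards: 1 })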

7. Check the sharding status

mongos> sh.status()

A successful configuration looks like this:

--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("6052bcde1397a09583e97cf3")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/127.0.0.1:1111,127.0.0.1:2222",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/127.0.0.1:4444,127.0.0.1:5555",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/127.0.0.1:7777,127.0.0.1:8888",  "state" : 1 }
  active mongoses:
        "3.6.21" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

8. Now test whether the cluster actually distributes data across the shards

First look at the current databases: admin is the system database and config belongs to the config servers, so don't put test data in either of them.

mongos> show dbs
admin   0.000GB
config  0.001GB

Create a new database and enable sharding on it:

mongos> use admin

mongos> db.runCommand({enablesharding:"testdb"})

# returned on success
{
        "ok" : 1,
        "operationTime" : Timestamp(1616046772, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1616046772, 10),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
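
For reference, sh.enableSharding("testdb") is the shell helper equivalent of the enablesharding command above (you would run one or the other, not both). Either way, testdb should now be marked as partitioned in the config metadata, which you can confirm with:

mongos> sh.enableSharding("testdb")                  // helper equivalent of the runCommand form above
mongos> db.getSiblingDB("config").databases.find()   // testdb should appear with "partitioned" : true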

Create the sharding rule -- hashed sharding:

mongos> sh.shardCollection("testdb.t",{id:"hashed"})

# returned on success
{
        "collectionsharded" : "testdb.t",
        "collectionUUID" : UUID("1c2b9ee9-e8bc-449e-bfb9-f2a87d60a7d9"),
        "ok" : 1,
        "operationTime" : Timestamp(1616046791, 13),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1616046791, 13),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
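
Because t is empty when shardCollection runs, mongos creates the required hashed index on id automatically. You can verify this from the shell; the output should show a hashed index on id alongside the default _id index:

mongos> use testdb
mongos> db.t.getIndexes()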

Insert some test data:

mongos> use testdb
# output
switched to db testdb
mongos> for(i=1;i<=1000;i++){db.t.insert({id:i,name:"Leo"})}
# output (the shell only prints the result of the last insert)
WriteResult({ "nInserted" : 1 })
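
Inserting one document at a time through mongos is fairly slow. If you prefer, a batched alternative that produces the same data is:

mongos> var docs = [];
mongos> for (var i = 1; i <= 1000; i++) { docs.push({ id: i, name: "Leo" }); }
mongos> db.t.insertMany(docs)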

Check how the data was distributed:

mongos> sh.status()

# partial output
databases:
 {  "_id" : "testdb",  "primary" : "shard3",  "partitioned" : true }
                testdb.t
                        shard key: { "id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  2
                                shard2  2
                                shard3  2
                        { "id" : { "$minKey" : 1 } } -->> { "id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(3, 2)
                        { "id" : NumberLong("-6148914691236517204") } -->> { "id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(3, 3)
                        { "id" : NumberLong("-3074457345618258602") } -->> { "id" : NumberLong(0) } on : shard2 Timestamp(3, 4)
                        { "id" : NumberLong(0) } -->> { "id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(3, 5)
                        { "id" : NumberLong("3074457345618258602") } -->> { "id" : NumberLong("6148914691236517204") } on : shard3 Timestamp(3, 6)
                        { "id" : NumberLong("6148914691236517204") } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(3, 7)

You can see how the data in the testdb database is laid out. If it isn't obvious, you can also connect to Mongo with Navicat and inspect the stored data.

[Screenshot: collection data viewed through the cluster (mongos)]

[Screenshot: collection data viewed on an individual shard]
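
If you'd rather stay in the shell than use Navicat, you can also connect to each shard directly and count the documents it holds; the three counts should add up to roughly 1000. A sketch, assuming the member ports used earlier and connecting to whichever member is the primary of each set:

mongo --host 127.0.0.1 --port 1111
shard1:PRIMARY> use testdb
shard1:PRIMARY> db.t.count()

Then repeat against shard2 (port 4444) and shard3 (port 7777).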

With that, the sharded cluster deployment is basically complete. If you're interested, you can also test a testdb.users collection with the id column as the shard key. To make the test data easier to observe, first lower the chunk size (the value is in MB; the default is 64 MB) so that chunks split and migrate even with a small data set:

mongos> use config
mongos> db.settings.save({_id:"chunksize", value: 1 })
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
mongos> db.settings.find()
{"_id" : "chunksize", "value" : 1}

Next, create the collection, set the shard key, and insert data:

mongos> use admin
mongos> sh.shardCollection("testdb.users",{id:1,name:1})
{
        "collectionsharded" : "testdb.users",
        "collectionUUID" : UUID("01ce680c-7b25-4977-8e34-8e10c10318eb"),
        "ok" : 1,
        "operationTime" : Timestamp(1616046916, 3),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1616046916, 9),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> use testdb
mongos> for(i=1;i<=100000;i++){db.users.insert({id:i,name:"Leo"+i})}
# this takes a while; be patient
WriteResult({ "nInserted" : 1 })

Check the data distribution again:

mongos> sh.status()

# partial output
databases:
 {  "_id" : "testdb",  "primary" : "shard3",  "partitioned" : true }
                testdb.t
                        shard key: { "id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  2
                                shard2  2
                                shard3  2
                        { "id" : { "$minKey" : 1 } } -->> { "id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(3, 2)
                        { "id" : NumberLong("-6148914691236517204") } -->> { "id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(3, 3)
                        { "id" : NumberLong("-3074457345618258602") } -->> { "id" : NumberLong(0) } on : shard2 Timestamp(3, 4)
                        { "id" : NumberLong(0) } -->> { "id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(3, 5)
                        { "id" : NumberLong("3074457345618258602") } -->> { "id" : NumberLong("6148914691236517204") } on : shard3 Timestamp(3, 6)
                        { "id" : NumberLong("6148914691236517204") } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(3, 7)
                testdb.users
                        shard key: { "id" : 1, "name" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  4
                                shard2  3
                                shard3  2
                        { "id" : { "$minKey" : 1 }, "name" : { "$minKey" : 1 } } -->> { "id" : 2, "name" : "Leo2" } on : shard1 Timestamp(2, 0)
                        { "id" : 2, "name" : "Leo2" } -->> { "id" : 20166, "name" : "Leo20166" } on : shard2 Timestamp(5, 1)
                        { "id" : 20166, "name" : "Leo20166" } -->> { "id" : 30248, "name" : "Leo30248" } on : shard3 Timestamp(4, 1)
                        { "id" : 30248, "name" : "Leo30248" } -->> { "id" : 42668, "name" : "Leo42668" } on : shard3 Timestamp(3, 3)
                        { "id" : 42668, "name" : "Leo42668" } -->> { "id" : 52750, "name" : "Leo52750" } on : shard2 Timestamp(4, 2)
                        { "id" : 52750, "name" : "Leo52750" } -->> { "id" : 64041, "name" : "Leo64041" } on : shard2 Timestamp(4, 3)
                        { "id" : 64041, "name" : "Leo64041" } -->> { "id" : 74123, "name" : "Leo74123" } on : shard1 Timestamp(5, 2)
                        { "id" : 74123, "name" : "Leo74123" } -->> { "id" : 85414, "name" : "Leo85414" } on : shard1 Timestamp(5, 3)
                        { "id" : 85414, "name" : "Leo85414" } -->> { "id" : { "$maxKey" : 1 }, "name" : { "$maxKey" : 1 } } on : shard1 Timestamp(5, 4)

That's it. Now go explore and observe the results for yourself.
