MongoDB Architecture in Practice

I. Replica Sets

The following walks through creating multiple mongod instances on a single server to simulate a replica set. Before starting the instances, prepare:

  1. Data file storage paths
[root@master ~]# mkdir -p /data/data/r0
[root@master ~]# mkdir -p /data/data/r1
[root@master ~]# mkdir -p /data/data/r2
  2. Log file path
[root@master ~]# mkdir -p /data/log
  3. Key files for the members. The key file holds the shared secret that identifies the cluster; if the key file contents differ between instances, the members cannot work together.
[root@master ~]# mkdir -p /data/key
[root@master ~]# echo "this is rs1 super secret key" > /data/key/r0 
[root@master ~]# echo "this is rs1 super secret key" > /data/key/r1
[root@master ~]# echo "this is rs1 super secret key" > /data/key/r2
[root@master ~]# chmod 600 /data/key/r*
  4. Start the three instances
[root@master run]# cd mongodb/bin
[root@master bin]# ./mongod --replSet rs1 --keyFile /data/key/r0 --fork --port 28010 --dbpath=/data/data/r0 --logpath=/data/log/r0.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 40878
child process started successfully, parent exiting
[root@master bin]# ./mongod --replSet rs1 --keyFile /data/key/r1 --fork --port 28011 --dbpath=/data/data/r1 --logpath=/data/log/r1.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 40906
child process started successfully, parent exiting
[root@master bin]# ./mongod --replSet rs1 --keyFile /data/key/r2 --fork --port 28012 --dbpath=/data/data/r2 --logpath=/data/log/r2.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 40934
child process started successfully, parent exiting
[root@master bin]# ps aux|grep mongo
root      40878  1.5  5.5 1082596 56108 ?       SLl  19:42   0:01 ./mongod --replSet rs1 --keyFile /data/key/r0 --fork --port 28010 --dbpath=/data/data/r0 --logpath=/data/log/r0.log --logappend
root      40906  2.2  5.2 1082596 53124 ?       SLl  19:42   0:01 ./mongod --replSet rs1 --keyFile /data/key/r1 --fork --port 28011 --dbpath=/data/data/r1 --logpath=/data/log/r1.log --logappend
root      40934  3.0  5.2 1082592 52516 ?       SLl  19:42   0:01 ./mongod --replSet rs1 --keyFile /data/key/r2 --fork --port 28012 --dbpath=/data/data/r2 --logpath=/data/log/r2.log --logappend
root      40961  0.0  0.0 103328   848 pts/2    S+   19:43   0:00 grep mongo
  5. Configure and initialize the Replica Set
[root@master bin]# ./mongo -port 28010    # connect to one of the instances
Define the Replica Set configuration:
> config_rs1={_id:'rs1',members:[
... {_id:0,host:'localhost:28010',priority:1},
... {_id:1,host:'localhost:28011'},
... {_id:2,host:'localhost:28012'}]
... }
{
	"_id" : "rs1",
	"members" : [
		{
			"_id" : 0,
			"host" : "localhost:28010",
			"priority" : 1
		},
		{
			"_id" : 1,
			"host" : "localhost:28011"
		},
		{
			"_id" : 2,
			"host" : "localhost:28012"
		}
	]
}
Initialize the configuration:
> rs.initiate(config_rs1);
{
	"ok" : 1,
	"operationTime" : Timestamp(1539917262, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1539917262, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

  6. Check the replica set status
> rs.status();     // under "members", each instance shows "health" : 1 (healthy), and "stateStr" : "PRIMARY" marks the primary (a one-liner summarizing this is sketched below)
> rs.isMaster();   // another view of the Replica Set status
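A quick way to summarize each member's state from the shell (a minimal sketch built on the fields that rs.status() returns):
> rs.status().members.forEach(function(m){ print(m.name + "  " + m.stateStr + "  health: " + m.health) })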
  7. Set up authentication (with MongoDB 3.6.4, running show dbs without authenticating reports insufficient permissions)
In the admin database, add a user with the userAdminAnyDatabase role (available only in the admin database; it grants the userAdmin privilege on every database):
>use admin
>db.createUser({user:"myUserAdmmin", pwd:"123456", roles:[{role:"userAdminAnyDatabase", db:"admin"}]})
>db.auth("myUserAdmmin", "123456")  返回1表示授权成功

In the database you created yourself, add a user with the readWrite role (allows the user to read and write that database):
>use test
>db.createUser({user:"test", pwd:"123456", roles:[{role:"readWrite", db:"test"}]})
>db.auth("test", "123456") 返回1表示授权成功

With a replica set, cluster operations such as rs.status() and rs.conf() may fail with a permission error; in that case grant the cluster-administration role to myUserAdmin (run from the admin database), as verified in the snippet below:
>db.grantRolesToUser("myUserAdmin", ["clusterAdmin"]);
clusterAdmin is available only in the admin database and grants administrative access to all sharding- and replica-set-related functions.
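To confirm the grant took effect, inspect the user (a minimal check, not part of the original steps):
>use admin
>db.getUser("myUserAdmin")    // the "roles" array should now include clusterAdmin alongside userAdminAnyDatabase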
  8. Read/write separation (by default, secondaries do not accept read operations)
Running the following on a secondary allows it to serve reads, which offloads query traffic from the primary (see the example below):
rs1:SECONDARY> db.getMongo().setSlaveOk()
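For example, after setSlaveOk() the secondary will serve reads (a minimal sketch; it assumes a test.users collection was already written on the primary):
rs1:SECONDARY> use test
rs1:SECONDARY> db.users.find().limit(1)    // without setSlaveOk() this read would fail with a "not master" error
Note that setSlaveOk() applies only to the current shell connection.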
  9. Failover (if a node in the set goes down, the remaining nodes automatically elect a new primary)
Kill the primary's process:
[root@master mongod_config]# ps -ef|grep mongo
root      42914      1  0 01:36 ?        00:00:25 ./mongod -f /etc/mongod_config/mongod_0.conf
root      42990      1  0 01:36 ?        00:00:26 ./mongod -f /etc/mongod_config/mongod_1.conf
root      43069      1  0 01:36 ?        00:00:26 ./mongod -f /etc/mongod_config/mongod_2.conf
root      43166  42179  0 01:37 pts/4    00:00:00 ./mongo -port 28010
root      43171  42520  0 01:39 pts/0    00:00:00 ./mongo -port 28011
root      43176  42539  0 01:39 pts/2    00:00:00 ./mongo -port 28012
root      43397  42123  1 02:41 pts/1    00:00:00 grep mongo
[root@master mongod_config]# kill -9 42914
[root@master mongod_config]# ps -ef|grep mongo
root      42990      1  0 01:36 ?        00:00:26 ./mongod -f /etc/mongod_config/mongod_1.conf
root      43069      1  0 01:36 ?        00:00:27 ./mongod -f /etc/mongod_config/mongod_2.conf
root      43166  42179  0 01:37 pts/4    00:00:00 ./mongo -port 28010
root      43171  42520  0 01:39 pts/0    00:00:00 ./mongo -port 28011
root      43176  42539  0 01:39 pts/2    00:00:00 ./mongo -port 28012
root      43401  42123  0 02:42 pts/1    00:00:00 grep mongo
[root@master mongod_config]# 

Check the cluster status from one of the remaining nodes:
rs1:SECONDARY> rs.status(); 
The killed node now shows "health" : 0 (unhealthy), and one of the remaining nodes has "stateStr" : "PRIMARY" (see the check below).
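To see directly which node was elected:
rs1:SECONDARY> rs.isMaster().primary    // prints the address of the new primary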
  10. Add a node
(1) Add a node via the oplog
Start another instance at host localhost:28013 in the same way as before, then add it on the primary:
rs1:PRIMARY> rs.add("localhost:28013");
rs1:PRIMARY> rs.status();    // the newly added node now appears in the member list
Data written earlier can then be queried on the new node as well; the check below confirms it has caught up.
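To verify the new member has caught up with the primary:
rs1:PRIMARY> rs.printSlaveReplicationInfo()    // shows each secondary's replication lag behind the primary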

(2) Add a node by combining a database snapshot with the oplog (--fastsync)
First copy the physical data files of an existing node to seed the new node's data:
[root@master ~]# scp -r /data/data/r3 /data/data/r4 
[root@master ~]# echo "this is rs1 super secret key" > /data/key/r4 
[root@master ~]# chmod 600 /data/key/r4 
Then start an instance at localhost:28014 with --fastsync appended to the startup command (a sketch follows).
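A sketch of that startup, reusing the flags from the earlier instances; --fastsync tells the new member to skip the initial sync because its data files were seeded from the snapshot (the option exists in this 3.x series but has been removed from newer MongoDB releases):
[root@master bin]# ./mongod --replSet rs1 --keyFile /data/key/r4 --fork --port 28014 --dbpath=/data/data/r4 --logpath=/data/log/r4.log --logappend --fastsync
rs1:PRIMARY> rs.add("localhost:28014")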
  11. Remove a node
rs1:PRIMARY> rs.remove("localhost:28013");
{
	"ok" : 1,
	"operationTime" : Timestamp(1539943659, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1539943659, 1),
		"signature" : {
			"hash" : BinData(0,"h/LLynb73TWPcFhtjGNWsBsxwLA="),
			"keyId" : NumberLong("6613894330375471105")
		}
	}
}

II. Sharding

  • Sharding scales a database cluster horizontally over massive data volumes; the data is partitioned and stored across the sharding nodes.
  • MongoDB splits data into chunks. Each chunk is a contiguous range of records within a collection; once a chunk grows past the maximum size (64 MB by default), a new chunk is created.
  • Building a MongoDB Sharding Cluster requires three roles:
    1. Shard Server. The shard that actually stores the data; the official recommendation is to deploy each Shard as a Replica Set to prevent data loss.
    2. Config Server. Stores the configuration of all shard nodes, the shard key range of every chunk, how chunks are distributed across shards, and the sharding configuration of every DB and collection in the cluster.
    3. Route Process. The front-end router that clients connect to; it asks the config servers which shard should handle a query or write, connects to that shard to perform the operation, and returns the result to the client.
      [Figure: Sharding Cluster architecture diagram]

The figure above shows a simple Sharding Cluster. Now let's build a minimal one on a single machine:

  1. Start the Shard Servers
[root@master ~]# mkdir -p /data/shard/s0	# create the data directory for shard 0
[root@master ~]# mkdir -p /data/shard/s1	# create the data directory for shard 1
[root@master ~]# mkdir -p /data/shard/log	# create the log directory
[root@master ~]# /run/mongodb/bin/mongod --shardsvr --port 20000 --dbpath /data/shard/s0 --fork --logpath /data/shard/log/s0.log --directoryperdb 
about to fork child process, waiting until server is ready for connections.
forked process: 2568	(Shard Server instance 1 started)
child process started successfully, parent exiting
[root@master ~]# /run/mongodb/bin/mongod --shardsvr --port 20001 --dbpath /data/shard/s1 --fork --logpath /data/shard/log/s1.log --directoryperdb  
about to fork child process, waiting until server is ready for connections.
forked process: 2591	(Shard Server instance 2 started)
child process started successfully, parent exiting
  2. Start the Config Server
[root@master ~]# mkdir -p /data/shard/config	# create the config server's data directory
[root@master ~]# /run/mongodb/bin/mongod --configsvr --port 30000 --dbpath /data/shard/config --fork --logpath /data/shard/log/config.log --directoryperdb 
about to fork child process, waiting until server is ready for connections.
forked process: 2617	(Config Server instance started)
child process started successfully, parent exiting
  3. Start the Route Process
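A sketch of the mongos startup command for this step. Note that from MongoDB 3.4 onward the config servers must themselves form a replica set, so --configdb takes a replicaSetName/host:port connection string; the config server started above would therefore also need a --replSet option and an rs.initiate() before this works. The port 40000 and the set name confSet below are illustrative assumptions:
[root@master ~]# /run/mongodb/bin/mongos --port 40000 --configdb confSet/localhost:30000 --fork --logpath /data/shard/log/route.log
The Replica Sets + Sharding setup in the next section builds this out properly across three servers.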

III. Replica Sets + Sharding (MongoDB 3.6.4)

  • Shards: each shard runs as a Replica Set, so every data node has a backup, automatic failover, and automatic recovery.
  • Config: three config servers, to keep the metadata intact.
  • Route: three routing processes, for load balancing and better client connection capacity.

The table below shows the Replica Sets + Sharding layout:

                        Server A            Server B            Server C
Replica Set 1           mongod shard1_1     mongod shard1_2     mongod shard1_3
Replica Set 2           mongod shard2_1     mongod shard2_2     mongod shard2_3
Config servers (x3)     mongod config 1     mongod config 2     mongod config 3
Route processes (x3)    mongos 1            mongos 2            mongos 3
  1. Create data directories on the three servers
On Server A:
[root@master ~]# mkdir -p /data/shard1_1
[root@master ~]# mkdir -p /data/shard2_1
[root@master ~]# mkdir -p /data/config

On Server B:
[root@slave1 ~]# mkdir -p /data/shard1_2
[root@slave1 ~]# mkdir -p /data/shard2_2
[root@slave1 ~]# mkdir -p /data/config

On Server C:
[root@slave2 ~]# mkdir -p /data/shard1_3
[root@slave2 ~]# mkdir -p /data/shard2_3
[root@slave2 ~]# mkdir -p /data/config

  2. Configure the Replica Sets
    Configure the Replica Set used by shard1:
    Start each mongod from a config file. First write mongod_shard1.conf on master, then scp it to slave1 and slave2, changing only the corresponding logpath, dbpath, and pidfilepath.
# mongod_shard1.conf

# where to log
logpath=/data/shard1_1/shard1_1.log

logappend=true

# where to store data
dbpath=/data/shard1_1

# location of pid file
pidfilepath=/data/pidfile/pidfile_shard1_1

# replica set name
replSet=shard1

# the port
port=27017

# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip=0.0.0.0

journal=true

# fork and run in background
fork=true

# declare this is a shard db of a cluster
shardsvr=true
On Server A:
[root@master ~]# /run/mongodb/bin/mongod -f /etc/mongod_config/mongod_shard1.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 2875
child process started successfully, parent exiting

On Server B:
[root@slave1 ~]# /run/mongodb/bin/mongod -f /etc/mongod_config/mongod_shard1.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 2620
child process started successfully, parent exiting

On Server C:
[root@slave2 ~]# /run/mongodb/bin/mongod -f /etc/mongod_config/mongod_shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 2609
child process started successfully, parent exiting

Then connect to the mongod on one of the servers and initialize Replica Set "shard1":
[root@master bin]# ./mongo
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.4
> config={
... }
{ }
> config={_id:"shard1",members:[
... {_id:0,host:"192.168.126.128:27017"},
... {_id:1,host:"192.168.126.129:27017"},
... {_id:2,host:"192.168.126.130:27017"}]
... }
{
	"_id" : "shard1",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.126.128:27017"
		},
		{
			"_id" : 1,
			"host" : "192.168.126.129:27017"
		},
		{
			"_id" : 2,
			"host" : "192.168.126.130:27017"
		}
	]
}

If the following error is reported, check the servers' firewall status; it is usually caused by the firewall not being disabled (a fix is sketched below):
> rs.initiate(config)
{
	"ok" : 0,
	"errmsg" : "replSetInitiate quorum check failed because not all proposed set members responded affirmatively: 192.168.126.129:27017 failed with No route to host, 192.168.126.130:27017 failed with No route to host",
	"code" : 74,
	"codeName" : "NodeNotFound"
}
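A sketch of the fix, assuming CentOS 7 hosts with firewalld (on older releases the equivalent is: service iptables stop):
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Repeat on slave1 and slave2, then re-run rs.initiate(config); it should now return "ok" : 1.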

Configure the Replica Set used by shard2:
As with shard1, write mongod_shard2.conf on master, then scp it to slave1 and slave2, changing only the corresponding logpath, dbpath, and pidfilepath.

# mongod_shard2.conf

# where to log
logpath=/data/shard2_1/shard2_1.log

logappend=true

# where to store data
dbpath=/data/shard2_1

# location of pid file
pidfilepath=/data/pidfile/pidfile_shard2_1

# replica set name
replSet=shard2

# the port
port=27018

# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip=0.0.0.0

journal=true

# fork and run in background
fork=true

# declare this is a shard db of a cluster
shardsvr=true
On Server A:
[root@master ~]# /run/mongodb/bin/mongod -f /etc/mongod_config/mongod_shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 3447
child process started successfully, parent exiting

On Server B:
[root@slave1 ~]# /run/mongodb/bin/mongod -f /etc/mongod_config/mongod_shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 24589
child process started successfully, parent exiting

On Server C:
[root@slave2 ~]# /run/mongodb/bin/mongod -f /etc/mongod_config/mongod_shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 2869
child process started successfully, parent exiting

Then connect to the mongod on one of the servers and initialize Replica Set "shard2":
[root@master bin]# ./mongo --port 27018
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 3.6.4
> config={_id:"shard2",members:[
... {_id:0,host:"192.168.126.128:27018"},
... {_id:1,host:"192.168.126.129:27018"},
... {_id:2,host:"192.168.126.130:27018"}]
... }
{
	"_id" : "shard2",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.126.128:27018"
		},
		{
			"_id" : 1,
			"host" : "192.168.126.129:27018"
		},
		{
			"_id" : 2,
			"host" : "192.168.126.130:27018"
		}
	]
}
> rs.initiate(config);
{
	"ok" : 1,
	"operationTime" : Timestamp(1540203730, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1540203730, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
  3. Configure the three Config Servers
    Start from a config file: write mongod_config.conf on master, then scp it to slave1 and slave2.
# mongod_config.conf

# where to log
logpath=/data/config/config.log

logappend=true

# where to store data
dbpath=/data/config

# replica set name
replSet=confSet

# the port
port=20000

# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip=0.0.0.0

# fork and run in background
fork=true

# declare this is a config db of a cluster
configsvr=true
On Server A:
[root@master ~]# /run/mongodb/bin/mongod -f /etc/mongod_config/mongod_config.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 11168
child process started successfully, parent exiting

On Server B:
[root@slave1 ~]# /run/mongodb/bin/mongod -f /etc/mongod_config/mongod_config.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 27021
child process started successfully, parent exiting

On Server C:
[root@slave2 ~]# /run/mongodb/bin/mongod -f /etc/mongod_config/mongod_config.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 27119
child process started successfully, parent exiting

Then connect to one server's mongo shell and initialize Replica Set "confSet":
[root@master bin]# ./mongo -port 20000
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:20000/
MongoDB server version: 3.6.4
> config = {
...     _id : "confSet",
...     members : [
...         {_id : 0, host : "192.168.126.128:20000" },
...         {_id : 1, host : "192.168.126.129:20000" },
...         {_id : 2, host : "192.168.126.130:20000" }
...     ]
... }
{
	"_id" : "confSet",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.126.128:20000"
		},
		{
			"_id" : 1,
			"host" : "192.168.126.129:20000"
		},
		{
			"_id" : 2,
			"host" : "192.168.126.130:20000"
		}
	]
}
> rs.initiate(config)
{
	"ok" : 1,
	"operationTime" : Timestamp(1540264957, 1),
	"$gleStats" : {
		"lastOpTime" : Timestamp(1540264957, 1),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"$clusterTime" : {
		"clusterTime" : Timestamp(1540264957, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

  4. Configure the three Route Processes
    Start from a config file: write mongos.conf on master, then scp it to slave1 and slave2.
# mongos.conf

# where to log
logpath=/data/mongos.log

logappend=true

#  Connection string for communicating with config servers
configdb=confSet/192.168.126.128:20000,192.168.126.129:20000,192.168.126.130:20000

# the port
port=30000

# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip=0.0.0.0

# fork and run in background
fork=true
On Server A:
[root@master ~]# /run/mongodb/bin/mongos -f /etc/mongod_config/mongos.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 11636
child process started successfully, parent exiting

On Server B:
[root@slave1 ~]# /run/mongodb/bin/mongos -f /etc/mongod_config/mongos.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 27476
child process started successfully, parent exiting

On Server C:
[root@slave2 ~]# /run/mongodb/bin/mongos -f /etc/mongod_config/mongos.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 27500
child process started successfully, parent exiting
  5. Configure the Shard Cluster
[root@master bin]# ./mongo --port 30000
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:30000/
MongoDB server version: 3.6.4
mongos> use admin
switched to db admin

Register the shard replica sets with the router:
mongos> sh.addShard("shard1/192.168.126.128:27017,192.168.126.129:27017,192.168.126.130:27017")
{
	"shardAdded" : "shard1",
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1540266140, 8),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1540266140, 8)
}
mongos> sh.addShard("shard2/192.168.126.128:27018,192.168.126.129:27018,192.168.126.130:27018")
{
	"shardAdded" : "shard2",
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1540266155, 5),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1540266155, 5)
}

Check the cluster status:
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5bce9409f8c24f1a2f1c9344")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.126.128:27017,192.168.126.129:27017,192.168.126.130:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.126.128:27018,192.168.126.129:27018,192.168.126.130:27018",  "state" : 1 }
  active mongoses:
        "3.6.4" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
  6. Test that sharding works
    Connect to a mongos and run the test.
# Set the chunk size to 1 MB for testing (the default is 64 MB)
mongos> use config
switched to db config
mongos> db.settings.save({_id:"chunksize",value:1});
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunkSize" })

# Enable sharding on the test database
mongos> sh.enableSharding("test")

# Specify the collection in the database to shard and its shard key
mongos> use test
switched to db test
mongos> db.users.createIndex({user_id:1})
mongos> use admin
switched to db admin
mongos> sh.shardCollection("test.users",{user_id:1})

# Insert data to verify that sharding works
mongos> for (var i = 1; i <=100000; i++){
... db.users.save({user_id: i, username: "user"+i});
... }
WriteResult({ "nInserted" : 1 })

Check the status after the inserts: the data is now distributed across shard1 and shard2, and the two shards hold different numbers of chunks, which shows sharding succeeded (an alternative check follows the output).
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5bce9409f8c24f1a2f1c9344")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.126.128:27017,192.168.126.129:27017,192.168.126.130:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.126.128:27018,192.168.126.129:27018,192.168.126.130:27018",  "state" : 1 }
  active mongoses:
        "3.6.4" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                6 : Success
                1 : Failed with error 'aborted', from shard2 to shard1
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "test1",  "primary" : "shard1",  "partitioned" : true }
                test1.users
                        shard key: { "user_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	7
                                shard2	8
                               ............
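Besides sh.status(), the distribution of the sharded collection itself can be checked from mongos (the collection name follows the sh.shardCollection() call above):
mongos> use test
mongos> db.users.getShardDistribution()    // prints per-shard data size, document count, and chunk count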
  7. Cluster startup and shutdown order
  • Startup: start the shard servers and config servers first, then the route processes.
  • Shutdown: stop the route processes first, then the config servers and the shards (a minimal shutdown sketch follows).
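One way to shut down cleanly in that order is from the mongo shell, connecting to each process in turn (every mongos first, then each config server, then each shard member); db.shutdownServer() must be run against the admin database:
> use admin
> db.shutdownServer()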