Deploying a MongoDB Sharded Cluster

Sharding Overview

Question:

A replica set stores the same data on every node, no matter how many members we add. Because each server holds a full copy, the complete data set must fit on a single machine. What do we do when the business grows so large that one server can no longer hold it?

Answer:

We can solve this with a sharded cluster.

Sharding is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high-throughput operations. In other words, sharding is the process of splitting data and storing the pieces on different machines; the term partitioning is sometimes used for the same concept. By spreading data over several machines, you can store more data and handle more load without needing a single, extremely powerful computer.


Database systems with large data sets or high-throughput applications can strain the capacity of a single server. For example, high query rates can exhaust a server's CPU, and a working set larger than the system's RAM stresses the I/O capacity of its disk drives.

There are two approaches to accommodating system growth: vertical scaling and horizontal scaling.

Vertical scaling means increasing the capacity of a single server, for example by using a more powerful CPU, adding more RAM, or adding more storage. Limitations in available technology may prevent a single machine from being powerful enough for a given workload, and cloud providers impose hard ceilings based on the hardware configurations they offer. As a result, there is a practical maximum to vertical scaling.

Horizontal scaling means dividing the data set and load across multiple servers, adding servers as needed to increase capacity. Although a single machine may not be especially fast or capacious, each machine handles only a subset of the overall workload, which can be more efficient than a single high-speed, high-capacity server. Expanding capacity only requires adding servers as needed, which can cost less overall than high-end hardware for one machine. The trade-off is increased complexity in infrastructure and deployment maintenance.

MongoDB supports horizontal scaling through sharding.
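To make the idea concrete, here is a small illustrative Python sketch (not MongoDB code) of how a hashed shard key spreads documents across machines. This is only a model: in a real cluster the mongos routers distribute documents by chunk ranges kept on the config servers, not by a bare modulo.

```python
import hashlib

def pick_shard(shard_key_value, num_shards):
    # Hash the shard key value and map it onto one of the shards.
    # Purely illustrative: MongoDB routes by chunk ranges, not a modulo.
    digest = hashlib.md5(str(shard_key_value).encode()).hexdigest()
    return int(digest, 16) % num_shards

# Spread 10,000 hypothetical user IDs across the 2 shards built in this tutorial
counts = {0: 0, 1: 0}
for user_id in range(10_000):
    counts[pick_shard(user_id, 2)] += 1

print(counts)  # roughly balanced between the two shards
```

Each shard then stores and serves only its own subset of the data, which is exactly the horizontal-scaling effect described above.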

I. Creating the Shard Replica Sets

1. Create the replica set directories
##
#Create the directories for data and logs:
##
#Primary nodes
#myshardrs01_27018
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs01_27018/log
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs01_27018/data/db

#myshardrs02_27318
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs02_27318/log 
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs02_27318/data/db
#Secondary nodes
#myshardrs01_27118
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs01_27118/log 
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs01_27118/data/db

#myshardrs02_27418
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs02_27418/log 
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs02_27418/data/db

#Arbiter nodes
#myshardrs01_27218
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs01_27218/log 
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs01_27218/data/db

#myshardrs02_27518
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs02_27518/log 
mkdir -p /usr/local/mongodb/sharded_cluster/myshardrs02_27518/data/db
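The repetitive mkdir commands above can also be scripted. Here is a hedged Python sketch that builds the same layout under any base directory (in the tutorial the base is /usr/local/mongodb/sharded_cluster; a throwaway path is used below for demonstration):

```python
import os

def create_instance_dirs(base):
    # Create log and data/db directories for every mongod instance,
    # mirroring the mkdir -p commands in the tutorial.
    instances = [
        "myshardrs01_27018", "myshardrs01_27118", "myshardrs01_27218",
        "myshardrs02_27318", "myshardrs02_27418", "myshardrs02_27518",
    ]
    for name in instances:
        os.makedirs(os.path.join(base, name, "log"), exist_ok=True)
        os.makedirs(os.path.join(base, name, "data", "db"), exist_ok=True)

# Demo under /tmp; in the tutorial use /usr/local/mongodb/sharded_cluster
create_instance_dirs("/tmp/sharded_cluster_demo")
```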


2. Write the configuration files

Tips:

In both replica sets' configuration files, it is convenient to set bindIp to 0.0.0.0,

so that the files do not need to change when the host machine's IP changes. Do not use 0.0.0.0 when deploying to production.

Configuration file for 27018

#Create the config file for 27018:
vim /usr/local/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf

🚀 Configuration file

systemLog: 
  #Send all MongoDB log output to a file
  destination: file
  #Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/myshardrs01_27018/log/mongod.log"
  #When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage: 
  #Directory where the mongod instance stores its data. storage.dbPath applies only to mongod.
  dbPath: "/usr/local/mongodb/sharded_cluster/myshardrs01_27018/data/db"
  journal: 
    enabled: true
processManagement: 
  #Run the mongos or mongod process in the background as a daemon
  fork: true 
  #Location of the file to which mongos or mongod writes its process ID (PID)
  pidFilePath: "/usr/local/mongodb/sharded_cluster/myshardrs01_27018/log/mongod.pid" 
net:
  #Bind your own IP, e.g. 192.168.28.101; 0.0.0.0 is used here for convenience
  bindIp: 0.0.0.0
  #Bind to port 27018
  port: 27018 
replication: 
  #Name of the replica set
  replSetName: myshardrs01 
sharding: 
  #Role in the sharded cluster
  clusterRole: shardsvr

Tips:

Setting sharding.clusterRole requires the mongod instance to run with replication. To deploy the instance as a replica set member,

use the replSetName setting to specify the replica set's name.

Configuration file for 27118

#Create the config file for 27118:
vim /usr/local/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf

🚀 Configuration file

systemLog: 
  #Send all MongoDB log output to a file
  destination: file
  #Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/myshardrs01_27118/log/mongod.log"
  #When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage: 
  #Directory where the mongod instance stores its data. storage.dbPath applies only to mongod.
  dbPath: "/usr/local/mongodb/sharded_cluster/myshardrs01_27118/data/db"
  journal: 
    enabled: true
processManagement: 
  #Run the mongos or mongod process in the background as a daemon
  fork: true 
  #Location of the file to which mongos or mongod writes its process ID (PID)
  pidFilePath: "/usr/local/mongodb/sharded_cluster/myshardrs01_27118/log/mongod.pid" 
net:
  #Bind your own IP, e.g. 192.168.28.101; 0.0.0.0 is used here for convenience
  bindIp: 0.0.0.0
  #Bind to port 27118
  port: 27118 
replication: 
  #Name of the replica set
  replSetName: myshardrs01 
sharding: 
  #Role in the sharded cluster
  clusterRole: shardsvr

Configuration file for 27218

#Create the config file for 27218:
vim /usr/local/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf

🚀 Configuration file

systemLog: 
  #Send all MongoDB log output to a file
  destination: file
  #Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/myshardrs01_27218/log/mongod.log"
  #When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage: 
  #Directory where the mongod instance stores its data. storage.dbPath applies only to mongod.
  dbPath: "/usr/local/mongodb/sharded_cluster/myshardrs01_27218/data/db"
  journal: 
    enabled: true
processManagement: 
  #Run the mongos or mongod process in the background as a daemon
  fork: true 
  #Location of the file to which mongos or mongod writes its process ID (PID)
  pidFilePath: "/usr/local/mongodb/sharded_cluster/myshardrs01_27218/log/mongod.pid" 
net:
  #Bind your own IP, e.g. 192.168.28.101; 0.0.0.0 is used here for convenience
  bindIp: 0.0.0.0
  #Bind to port 27218
  port: 27218 
replication: 
  #Name of the replica set
  replSetName: myshardrs01 
sharding: 
  #Role in the sharded cluster
  clusterRole: shardsvr

Configuration file for 27318

#Create the config file for 27318:
vim /usr/local/mongodb/sharded_cluster/myshardrs02_27318/mongod.conf

🚀 Configuration file

systemLog: 
  #Send all MongoDB log output to a file
  destination: file
  #Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/myshardrs02_27318/log/mongod.log"
  #When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage: 
  #Directory where the mongod instance stores its data. storage.dbPath applies only to mongod.
  dbPath: "/usr/local/mongodb/sharded_cluster/myshardrs02_27318/data/db"
  journal: 
    enabled: true
processManagement: 
  #Run the mongos or mongod process in the background as a daemon
  fork: true 
  #Location of the file to which mongos or mongod writes its process ID (PID)
  pidFilePath: "/usr/local/mongodb/sharded_cluster/myshardrs02_27318/log/mongod.pid" 
net:
  #Bind your own IP, e.g. 192.168.28.101; 0.0.0.0 is used here for convenience
  bindIp: 0.0.0.0
  #Bind to port 27318
  port: 27318 
replication: 
  #Name of the replica set (the second shard's set, so myshardrs02, not myshardrs01)
  replSetName: myshardrs02 
sharding: 
  #Role in the sharded cluster
  clusterRole: shardsvr

Configuration file for 27418

#Create the config file for 27418:
vim /usr/local/mongodb/sharded_cluster/myshardrs02_27418/mongod.conf

🚀 Configuration file

systemLog: 
  #Send all MongoDB log output to a file
  destination: file
  #Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/myshardrs02_27418/log/mongod.log"
  #When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage: 
  #Directory where the mongod instance stores its data. storage.dbPath applies only to mongod.
  dbPath: "/usr/local/mongodb/sharded_cluster/myshardrs02_27418/data/db"
  journal: 
    enabled: true
processManagement: 
  #Run the mongos or mongod process in the background as a daemon
  fork: true 
  #Location of the file to which mongos or mongod writes its process ID (PID)
  pidFilePath: "/usr/local/mongodb/sharded_cluster/myshardrs02_27418/log/mongod.pid" 
net:
  #Bind your own IP, e.g. 192.168.28.101; 0.0.0.0 is used here for convenience
  bindIp: 0.0.0.0
  #Bind to port 27418
  port: 27418 
replication: 
  #Name of the replica set (the second shard's set, so myshardrs02, not myshardrs01)
  replSetName: myshardrs02 
sharding: 
  #Role in the sharded cluster
  clusterRole: shardsvr

Configuration file for 27518

#Create the config file for 27518:
vim /usr/local/mongodb/sharded_cluster/myshardrs02_27518/mongod.conf

🚀 Configuration file

systemLog: 
  #Send all MongoDB log output to a file
  destination: file
  #Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/myshardrs02_27518/log/mongod.log"
  #When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage: 
  #Directory where the mongod instance stores its data. storage.dbPath applies only to mongod.
  dbPath: "/usr/local/mongodb/sharded_cluster/myshardrs02_27518/data/db"
  journal: 
    enabled: true
processManagement: 
  #Run the mongos or mongod process in the background as a daemon
  fork: true 
  #Location of the file to which mongos or mongod writes its process ID (PID)
  pidFilePath: "/usr/local/mongodb/sharded_cluster/myshardrs02_27518/log/mongod.pid" 
net:
  #Bind your own IP, e.g. 192.168.28.101; 0.0.0.0 is used here for convenience
  bindIp: 0.0.0.0
  #Bind to port 27518
  port: 27518 
replication: 
  #Name of the replica set (the second shard's set, so myshardrs02, not myshardrs01)
  replSetName: myshardrs02 
sharding: 
  #Role in the sharded cluster
  clusterRole: shardsvr

3. Start the myshardrs01 replica set

Start the first replica set: one primary, one secondary, one arbiter.
Start the three mongod services in turn, then check that they are running:

##Start the primary node
[root@localhost ~]# mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 7835
child process started successfully, parent exiting
##Start the secondary node
[root@localhost ~]# mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 7884
child process started successfully, parent exiting
##Start the arbiter node
[root@localhost ~]# mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 7934
child process started successfully, parent exiting
##Check that they started successfully
[root@localhost ~]# ps -ef | grep mongod
root       7835      1  8 22:51 ?        00:00:02 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
root       7884      1 12 22:51 ?        00:00:01 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf
root       7934      1 19 22:51 ?        00:00:01 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf
root       7977   7599  0 22:51 pts/0    00:00:00 grep --color=auto mongod
[root@localhost ~]# 
Initialize the myshardrs01 replica set

Tips:

1) Initialize the replica set and create the primary node:

Connect to any node with the client command:
mongo --host=192.168.28.101 --port=27018
Run the initialization command: rs.initiate()
Check the replica set status: rs.status()

2) View the primary node's configuration: rs.conf()

3) Add the secondary node: rs.add("192.168.28.101:27118")

4) Add the arbiter node: rs.addArb("192.168.28.101:27218")
Check the replica set configuration: rs.status()

🚀 Note: use your own virtual machine's IP here

The session looks like this:

[root@localhost ~]# mongo --host=192.168.28.101 --port=27018
MongoDB shell version v4.2.18
connecting to: mongodb://192.168.28.101:27018/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f5e3d71c-235d-445e-8438-22ee42c3dd67") }
MongoDB server version: 4.2.18
Server has startup warnings: 
.......
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

> rs.initiate()
{
	"info2" : "no configuration specified. Using a default configuration for the set",
	"me" : "192.168.28.101:27018",
	"ok" : 1
}
myshardrs01:SECONDARY> 
#Press Enter again and the prompt changes to PRIMARY
myshardrs01:PRIMARY> 
#Add the secondary node
myshardrs01:PRIMARY> rs.add("192.168.28.101:27118")
{
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648912062, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1648912062, 1)
}
#Add the arbiter node
myshardrs01:PRIMARY> rs.addArb("192.168.28.101:27218")
{
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648912077, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1648912077, 1)
}
#Check the status
myshardrs01:PRIMARY> rs.status()
{
	"set" : "myshardrs01",
	"date" : ISODate("2022-04-02T15:08:12.233Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1648912091, 1),
			"t" : NumberLong(1)
		},
		"lastCommittedWallTime" : ISODate("2022-04-02T15:08:11.986Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1648912091, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityWallTime" : ISODate("2022-04-02T15:08:11.986Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1648912091, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1648912091, 1),
			"t" : NumberLong(1)
		},
		"lastAppliedWallTime" : ISODate("2022-04-02T15:08:11.986Z"),
		"lastDurableWallTime" : ISODate("2022-04-02T15:08:11.986Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1648912091, 1),
	"lastStableCheckpointTimestamp" : Timestamp(1648912091, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2022-04-02T15:07:11.929Z"),
		"electionTerm" : NumberLong(1),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1648912031, 1),
			"t" : NumberLong(-1)
		},
		"numVotesNeeded" : 1,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"newTermStartDate" : ISODate("2022-04-02T15:07:11.973Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2022-04-02T15:07:12.031Z")
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.28.101:27018",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 1007,
			"optime" : {
				"ts" : Timestamp(1648912091, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2022-04-02T15:08:11Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1648912031, 2),
			"electionDate" : ISODate("2022-04-02T15:07:11Z"),
			"configVersion" : 3,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "192.168.28.101:27118",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 30,
			"optime" : {
				"ts" : Timestamp(1648912077, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1648912077, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2022-04-02T15:07:57Z"),
			"optimeDurableDate" : ISODate("2022-04-02T15:07:57Z"),
			"lastHeartbeat" : ISODate("2022-04-02T15:08:11.104Z"),
			"lastHeartbeatRecv" : ISODate("2022-04-02T15:08:10.653Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.28.101:27018",
			"syncSourceHost" : "192.168.28.101:27018",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 3
		},
		{
			"_id" : 2,
			"name" : "192.168.28.101:27218",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 15,
			"lastHeartbeat" : ISODate("2022-04-02T15:08:11.104Z"),
			"lastHeartbeatRecv" : ISODate("2022-04-02T15:08:11.151Z"),
			"pingMs" : NumberLong(1),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"configVersion" : 3
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648912091, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1648912091, 1)
}
#Exit
myshardrs01:PRIMARY> exit
bye
[root@localhost ~]# 

Tips:

An operation has succeeded only when it returns "ok" : 1.
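The "majorityVoteCount" : 2 and "numVotesNeeded" : 1 fields in the rs.status() output above follow the standard majority rule over voting members: the election that followed rs.initiate() had a single voting member, while the finished primary-secondary-arbiter set has three. A minimal Python sketch of that rule (the formula is the usual strict majority; the field names are those from the output above):

```python
def majority(voting_members):
    # Votes needed to win an election (strict majority of voting members)
    return voting_members // 2 + 1

print(majority(1))  # 1 -> "numVotesNeeded" : 1 right after rs.initiate()
print(majority(3))  # 2 -> "majorityVoteCount" : 2 for the PSA set
```

This is also why the arbiter is useful: it adds a vote (so a majority of 2 out of 3 survives one data-node failure) without storing any data.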


4. Start the myshardrs02 replica set

Start the second replica set: one primary, one secondary, one arbiter.
Start the three mongod services in turn, then check that they are running:

[root@localhost ~]# mongod -f /usr/local/mongodb/sharded_cluster/myshardrs02_27318/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8466
child process started successfully, parent exiting
[root@localhost ~]# mongod -f /usr/local/mongodb/sharded_cluster/myshardrs02_27418/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8508
child process started successfully, parent exiting
[root@localhost ~]# mongod -f /usr/local/mongodb/sharded_cluster/myshardrs02_27518/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8551
child process started successfully, parent exiting
[root@localhost ~]# ps -ef | grep mongod
root       7835      1  1 22:51 ?        00:00:06 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
root       7884      1  1 22:51 ?        00:00:06 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf
root       7934      1  1 22:51 ?        00:00:05 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf
root       8466      1  7 22:59 ?        00:00:01 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs02_27318/mongod.conf
root       8508      1 12 22:59 ?        00:00:01 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs02_27418/mongod.conf
root       8551      1 25 22:59 ?        00:00:02 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs02_27518/mongod.conf
root       8593   7599  0 22:59 pts/0    00:00:00 grep --color=auto mongod
[root@localhost ~]# 
Initialize the myshardrs02 replica set

Tips:

1) Initialize the replica set and create the primary node:

Connect to any node with the client command:
mongo --host=192.168.28.101 --port=27318
Run the initialization command: rs.initiate()
Check the replica set status: rs.status()

2) View the primary node's configuration: rs.conf()

3) Add the secondary node: rs.add("192.168.28.101:27418")

4) Add the arbiter node: rs.addArb("192.168.28.101:27518")
Check the replica set configuration: rs.status()

🚀 Note: use your own virtual machine's IP here

The session looks like this:

[root@localhost ~]# mongo --host=192.168.28.101 --port=27318
MongoDB shell version v4.2.18
connecting to: mongodb://192.168.28.101:27318/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("1a927c47-bd43-47ad-8ed0-f98b2107e631") }
MongoDB server version: 4.2.18
Server has startup warnings: 
......
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

> rs.initiate()
{
	"info2" : "no configuration specified. Using a default configuration for the set",
	"me" : "192.168.28.101:27318",
	"ok" : 1
}
myshardrs02:SECONDARY>
#Press Enter again and the prompt changes to PRIMARY
myshardrs02:PRIMARY>
#Add the secondary node
myshardrs02:PRIMARY> rs.add("192.168.28.101:27418")
{
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648912357, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1648912357, 1)
}
#Add the arbiter node
myshardrs02:PRIMARY> rs.addArb("192.168.28.101:27518")
{
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648912368, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1648912368, 1)
}
#Check the replica set status
myshardrs02:PRIMARY> rs.status()
{
	"set" : "myshardrs02",
	"date" : ISODate("2022-04-02T15:13:00.198Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1648912368, 1),
			"t" : NumberLong(1)
		},
		"lastCommittedWallTime" : ISODate("2022-04-02T15:12:48.688Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1648912368, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityWallTime" : ISODate("2022-04-02T15:12:48.688Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1648912368, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1648912368, 1),
			"t" : NumberLong(1)
		},
		"lastAppliedWallTime" : ISODate("2022-04-02T15:12:48.688Z"),
		"lastDurableWallTime" : ISODate("2022-04-02T15:12:48.688Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1648912334, 5),
	"lastStableCheckpointTimestamp" : Timestamp(1648912334, 5),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2022-04-02T15:12:14.400Z"),
		"electionTerm" : NumberLong(1),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1648912334, 1),
			"t" : NumberLong(-1)
		},
		"numVotesNeeded" : 1,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"newTermStartDate" : ISODate("2022-04-02T15:12:14.602Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2022-04-02T15:12:14.677Z")
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.28.101:27318",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 803,
			"optime" : {
				"ts" : Timestamp(1648912368, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2022-04-02T15:12:48Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1648912334, 2),
			"electionDate" : ISODate("2022-04-02T15:12:14Z"),
			"configVersion" : 3,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "192.168.28.101:27418",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 22,
			"optime" : {
				"ts" : Timestamp(1648912368, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1648912368, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2022-04-02T15:12:48Z"),
			"optimeDurableDate" : ISODate("2022-04-02T15:12:48Z"),
			"lastHeartbeat" : ISODate("2022-04-02T15:12:58.706Z"),
			"lastHeartbeatRecv" : ISODate("2022-04-02T15:12:59.715Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.28.101:27318",
			"syncSourceHost" : "192.168.28.101:27318",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 3
		},
		{
			"_id" : 2,
			"name" : "192.168.28.101:27518",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 11,
			"lastHeartbeat" : ISODate("2022-04-02T15:12:58.706Z"),
			"lastHeartbeatRecv" : ISODate("2022-04-02T15:12:58.771Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"configVersion" : 3
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648912368, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1648912368, 1)
}
myshardrs02:PRIMARY> exit
bye
[root@localhost ~]# 

Tips:

An operation has succeeded only when it returns "ok" : 1.


II. Creating the Config Server Replica Set

##
#Create the directories for data and logs:
##
#Primary
mkdir -p /usr/local/mongodb/sharded_cluster/myconfigrs_27019/log 
mkdir -p /usr/local/mongodb/sharded_cluster/myconfigrs_27019/data/db
#Secondary
mkdir -p /usr/local/mongodb/sharded_cluster/myconfigrs_27119/log 
mkdir -p /usr/local/mongodb/sharded_cluster/myconfigrs_27119/data/db
#Secondary
mkdir -p /usr/local/mongodb/sharded_cluster/myconfigrs_27219/log 
mkdir -p /usr/local/mongodb/sharded_cluster/myconfigrs_27219/data/db

1. Create the configuration files

Tips:

In both replica sets' configuration files, it is convenient to set bindIp to 0.0.0.0,

so that the files do not need to change when the host machine's IP changes. Do not use 0.0.0.0 when deploying to production.

Configuration file for 27019

#Create the config file:
vim /usr/local/mongodb/sharded_cluster/myconfigrs_27019/mongod.conf

🚀 Configuration file:

systemLog: 
  #Send all MongoDB log output to a file
  destination: file
  #Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/myconfigrs_27019/log/mongod.log"
  #When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage: 
  #Directory where the mongod instance stores its data. storage.dbPath applies only to mongod.
  dbPath: "/usr/local/mongodb/sharded_cluster/myconfigrs_27019/data/db"
  journal: 
    enabled: true
processManagement: 
  #Run the mongos or mongod process in the background as a daemon
  fork: true 
  #Location of the file to which mongos or mongod writes its process ID (PID)
  pidFilePath: "/usr/local/mongodb/sharded_cluster/myconfigrs_27019/log/mongod.pid" 
net:
  #Binding all IPs has a side effect: during replica set initialization, node names are set to the local hostname instead of the IP
  #bindIpAll: true 
  #Bind your own IP, e.g. 192.168.28.101; 0.0.0.0 is used here for convenience
  bindIp: 0.0.0.0 
  #Port to bind
  port: 27019 
replication: 
  #Name of the replica set
  replSetName: myconfigrs 
sharding: 
  #Role in the sharded cluster
  clusterRole: configsvr

Configuration file for 27119

#Create the config file:
vim /usr/local/mongodb/sharded_cluster/myconfigrs_27119/mongod.conf

🚀 Configuration file:

systemLog: 
  #Send all MongoDB log output to a file
  destination: file
  #Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/myconfigrs_27119/log/mongod.log"
  #When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage: 
  #Directory where the mongod instance stores its data. storage.dbPath applies only to mongod.
  dbPath: "/usr/local/mongodb/sharded_cluster/myconfigrs_27119/data/db"
  journal: 
    enabled: true
processManagement: 
  #Run the mongos or mongod process in the background as a daemon
  fork: true 
  #Location of the file to which mongos or mongod writes its process ID (PID)
  pidFilePath: "/usr/local/mongodb/sharded_cluster/myconfigrs_27119/log/mongod.pid" 
net:
  #Binding all IPs has a side effect: during replica set initialization, node names are set to the local hostname instead of the IP
  #bindIpAll: true 
  #Bind your own IP, e.g. 192.168.28.101; 0.0.0.0 is used here for convenience
  bindIp: 0.0.0.0 
  #Port to bind
  port: 27119 
replication: 
  #Name of the replica set
  replSetName: myconfigrs 
sharding: 
  #Role in the sharded cluster
  clusterRole: configsvr

Configuration file for 27219

#Create the config file:
vim /usr/local/mongodb/sharded_cluster/myconfigrs_27219/mongod.conf

🚀 Configuration file:

systemLog: 
  #Send all MongoDB log output to a file
  destination: file
  #Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/myconfigrs_27219/log/mongod.log"
  #When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage: 
  #Directory where the mongod instance stores its data. storage.dbPath applies only to mongod.
  dbPath: "/usr/local/mongodb/sharded_cluster/myconfigrs_27219/data/db"
  journal: 
    enabled: true
processManagement: 
  #Run the mongos or mongod process in the background as a daemon
  fork: true 
  #Location of the file to which mongos or mongod writes its process ID (PID)
  pidFilePath: "/usr/local/mongodb/sharded_cluster/myconfigrs_27219/log/mongod.pid" 
net:
  #Binding all IPs has a side effect: during replica set initialization, node names are set to the local hostname instead of the IP
  #bindIpAll: true 
  #Bind your own IP, e.g. 192.168.28.101; 0.0.0.0 is used here for convenience
  bindIp: 0.0.0.0 
  #Port to bind
  port: 27219 
replication: 
  #Name of the replica set
  replSetName: myconfigrs 
sharding: 
  #Role in the sharded cluster
  clusterRole: configsvr

2. Start the config server replica set

Tips:

Start the config server replica set: one primary, two secondaries.
Start the three mongod services in turn, then check that they are running:

[root@localhost ~]# mongod -f /usr/local/mongodb/sharded_cluster/myconfigrs_27019/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8882
child process started successfully, parent exiting
[root@localhost ~]# mongod -f /usr/local/mongodb/sharded_cluster/myconfigrs_27119/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8932
child process started successfully, parent exiting
[root@localhost ~]# mongod -f /usr/local/mongodb/sharded_cluster/myconfigrs_27219/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8982
child process started successfully, parent exiting
[root@localhost ~]# ps -ef | grep mongod
root       7835      1  1 22:51 ?        00:00:08 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
root       7884      1  1 22:51 ?        00:00:08 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf
root       7934      1  1 22:51 ?        00:00:08 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf
root       8466      1  1 22:59 ?        00:00:03 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs02_27318/mongod.conf
root       8508      1  1 22:59 ?        00:00:04 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs02_27418/mongod.conf
root       8551      1  1 22:59 ?        00:00:04 mongod -f /usr/local/mongodb/sharded_cluster/myshardrs02_27518/mongod.conf
root       8882      1  7 23:04 ?        00:00:01 mongod -f /usr/local/mongodb/sharded_cluster/myconfigrs_27019/mongod.conf
root       8932      1 13 23:04 ?        00:00:01 mongod -f /usr/local/mongodb/sharded_cluster/myconfigrs_27119/mongod.conf
root       8982      1 24 23:04 ?        00:00:01 mongod -f /usr/local/mongodb/sharded_cluster/myconfigrs_27219/mongod.conf
root       9032   7599  0 23:04 pts/0    00:00:00 grep --color=auto mongod
[root@localhost ~]# 
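Looking ahead: once this config server replica set is running, a mongos router will reference it through a connection string of the form replSetName/host:port,host:port,... (the sharding.configDB setting; the mongos setup itself comes later). A small hypothetical Python sketch of how that string is assembled from the hosts used in this tutorial:

```python
def config_db_string(repl_set_name, hosts):
    # Build the sharding.configDB value a mongos config file expects:
    # <replSetName>/<host:port>,<host:port>,...
    return repl_set_name + "/" + ",".join(hosts)

hosts = ["192.168.28.101:27019", "192.168.28.101:27119", "192.168.28.101:27219"]
print(config_db_string("myconfigrs", hosts))
# myconfigrs/192.168.28.101:27019,192.168.28.101:27119,192.168.28.101:27219
```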

Initialize the config server replica set

Tips:

1) Initialize the replica set and create the primary node. Connect to any node with the client command (preferably the node you intend to be primary):
mongo --host=192.168.28.101 --port=27019
Run the initialization command: rs.initiate()
Check the replica set status: rs.status()

2) View the primary node's configuration: rs.conf()

3) Add the two secondary nodes: rs.add("192.168.28.101:27119") rs.add("192.168.28.101:27219")
Check the replica set configuration: rs.status()

🚀 Note: use your own virtual machine's IP here

The session looks like this:

[root@localhost ~]# mongo --host=192.168.28.101 --port=27019
MongoDB shell version v4.2.18
connecting to: mongodb://192.168.28.101:27019/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("1fbb4b8b-316f-48c1-aa8c-74e518713381") }
MongoDB server version: 4.2.18
Server has startup warnings: 
...
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
#Initialize
> rs.initiate()
{
	"info2" : "no configuration specified. Using a default configuration for the set",
	"me" : "192.168.28.101:27019",
	"ok" : 1,
	"$gleStats" : {
		"lastOpTime" : Timestamp(1648912625, 1),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"lastCommittedOpTime" : Timestamp(0, 0)
}
myconfigrs:OTHER> 
#Press Enter again and the prompt changes to PRIMARY
myconfigrs:PRIMARY>
#Add a secondary node
myconfigrs:PRIMARY> rs.add("192.168.28.101:27119")
{
	"ok" : 1,
	"$gleStats" : {
		"lastOpTime" : {
			"ts" : Timestamp(1648912664, 1),
			"t" : NumberLong(1)
		},
		"electionId" : ObjectId("7fffffff0000000000000001")
	},
	"lastCommittedOpTime" : Timestamp(1648912656, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648912664, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1648912664, 1)
}
#Add a secondary node
myconfigrs:PRIMARY> rs.add("192.168.28.101:27219")
{
	"ok" : 1,
	"$gleStats" : {
		"lastOpTime" : {
			"ts" : Timestamp(1648912676, 2),
			"t" : NumberLong(1)
		},
		"electionId" : ObjectId("7fffffff0000000000000001")
	},
	"lastCommittedOpTime" : Timestamp(1648912664, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648912676, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1648912676, 2)
}
#Check the replica set status
myconfigrs:PRIMARY> rs.status()
{
	"set" : "myconfigrs",
	"date" : ISODate("2022-04-02T15:18:11.941Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"configsvr" : true,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1648912676, 2),
			"t" : NumberLong(1)
		},
		"lastCommittedWallTime" : ISODate("2022-04-02T15:17:56.999Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1648912676, 2),
			"t" : NumberLong(1)
		},
		"readConcernMajorityWallTime" : ISODate("2022-04-02T15:17:56.999Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1648912676, 2),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1648912676, 2),
			"t" : NumberLong(1)
		},
		"lastAppliedWallTime" : ISODate("2022-04-02T15:17:56.999Z"),
		"lastDurableWallTime" : ISODate("2022-04-02T15:17:56.999Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1648912676, 2),
	"lastStableCheckpointTimestamp" : Timestamp(1648912676, 2),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2022-04-02T15:17:05.681Z"),
		"electionTerm" : NumberLong(1),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1648912625, 1),
			"t" : NumberLong(-1)
		},
		"numVotesNeeded" : 1,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"newTermStartDate" : ISODate("2022-04-02T15:17:05.941Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2022-04-02T15:17:06.450Z")
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.28.101:27019",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 838,
			"optime" : {
				"ts" : Timestamp(1648912676, 2),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2022-04-02T15:17:56Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1648912625, 2),
			"electionDate" : ISODate("2022-04-02T15:17:05Z"),
			"configVersion" : 3,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "192.168.28.101:27119",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 27,
			"optime" : {
				"ts" : Timestamp(1648912676, 2),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1648912676, 2),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2022-04-02T15:17:56Z"),
			"optimeDurableDate" : ISODate("2022-04-02T15:17:56Z"),
			"lastHeartbeat" : ISODate("2022-04-02T15:18:11.052Z"),
			"lastHeartbeatRecv" : ISODate("2022-04-02T15:18:10.126Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.28.101:27019",
			"syncSourceHost" : "192.168.28.101:27019",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 3
		},
		{
			"_id" : 2,
			"name" : "192.168.28.101:27219",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 14,
			"optime" : {
				"ts" : Timestamp(1648912676, 2),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1648912676, 2),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2022-04-02T15:17:56Z"),
			"optimeDurableDate" : ISODate("2022-04-02T15:17:56Z"),
			"lastHeartbeat" : ISODate("2022-04-02T15:18:11.053Z"),
			"lastHeartbeatRecv" : ISODate("2022-04-02T15:18:11.889Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"configVersion" : 3
		}
	],
	"ok" : 1,
	"$gleStats" : {
		"lastOpTime" : {
			"ts" : Timestamp(1648912676, 2),
			"t" : NumberLong(1)
		},
		"electionId" : ObjectId("7fffffff0000000000000001")
	},
	"lastCommittedOpTime" : Timestamp(1648912676, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648912676, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1648912676, 2)
}
myconfigrs:PRIMARY> exit
bye
[root@localhost ~]# 

Tips:

An operation has succeeded only when it returns `"ok" : 1`.


3. Creating and operating the router node

1. Creating and operating the mymongos_27017 router node
# Step 1: prepare the directory for the logs:
mkdir -p /usr/local/mongodb/sharded_cluster/mymongos_27017/log

Tips:

The router node's job is dispatching requests; it does not store business data itself, so it needs no data directory.

Create the configuration file

The configuration file for 27017

# mymongos_27017 node:
# Create a new configuration file:
vi /usr/local/mongodb/sharded_cluster/mymongos_27017/mongos.conf

🚀 Configuration file

systemLog: 
  # Send all MongoDB log output to a file
  destination: file
  # Path of the log file to which mongos should send all diagnostic logging information
  path: "/usr/local/mongodb/sharded_cluster/mymongos_27017/log/mongod.log"
  # When the mongos instance restarts, append new entries to the end of the existing log file
  logAppend: true
processManagement: 
  # Enable daemon mode, running the mongos process in the background
  fork: true 
  # Location of the file holding the process ID; mongos writes its PID here
  pidFilePath: "/usr/local/mongodb/sharded_cluster/mymongos_27017/log/mongod.pid" 
net:
  # Binding all IPs has a side effect: during replica set initialization the node
  # name is set to the local hostname instead of the IP
  #bindIpAll: true 
  # Bind to 192.168.28.101 (your own IP); 0.0.0.0 is used here only for convenience
  bindIp: 0.0.0.0 
  # Port to bind
  port: 27017 
sharding: 
  # Config server replica set (replace these IPs with your own machine's IP)
  configDB: myconfigrs/192.168.28.101:27019,192.168.28.101:27119,192.168.28.101:27219

Tips:

If the physical host cannot connect, check whether the IPs here were changed.

Start the mymongos_27017 mongos router

Tips:

If startup fails, check the log file under the log directory for the cause.

[root@localhost ~]# mongos -f /usr/local/mongodb/sharded_cluster/mymongos_27017/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10482
child process started successfully, parent exiting

Tips:

Log in to mongos with the client. At this stage no business data can be written yet; an attempted write returns an error:

[root@localhost ~]# mongo --host=192.168.28.101 --port=27017
MongoDB shell version v4.2.18
connecting to: mongodb://192.168.28.101:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5bd001ce-16ba-464b-993b-9cbd6fbcddaf") }
MongoDB server version: 4.2.18
Server has startup warnings: 
.......
mongos> show dbs
admin   0.000GB
config  0.000GB
#Create a database
mongos> use aabb
switched to db aabb
mongos> 
mongos> db.aa.insert({aa:"zhangsan"})
WriteCommandError({
	"ok" : 0,
	"errmsg" : "unable to initialize targeter for write op for collection aabb.aa :: caused by :: Database aabb could not be created :: caused by :: No shards found",
	"code" : 70,
	"codeName" : "ShardNotFound",
	"operationTime" : Timestamp(1648913224, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648913224, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
})
mongos> 

Tips:

Reason: operations go through the router, which so far is connected only to the config server replica set; no shard data nodes have been added yet (caused by :: No shards found), so business data cannot be written.


2. Configuring sharding on the router node

Tips:

Add a shard with the command:

sh.addShard("[shard name]/[IP]:[PORT],…")

(1) Adding shards

Add the first shard replica set:

mongos> sh.addShard("myshardrs01/192.168.28.101:27018,192.168.28.101:27118,192.168.28.101:27218")
{
	"shardAdded" : "myshardrs01",
	"ok" : 1,
	"operationTime" : Timestamp(1648913541, 6),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648913541, 6),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> 

Tips:

If the returned "ok" value is not 1, adding the shard failed.

On failure, read the error message and adjust the sh.addShard("[shard name]/[IP]:[PORT],…") command accordingly; a common fix is replacing the primary node's IP with its hostname.

Check the sharding status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("624868f214f17e4ba54f23da")
  }
  shards:
        {  "_id" : "myshardrs01",  "host" : "myshardrs01/192.168.28.101:27018,192.168.28.101:27118",  "state" : 1 }
  active mongoses:
        "4.2.18" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

mongos> 

Then add the second shard replica set:

mongos> sh.addShard("myshardrs02/192.168.28.101:27318,192.168.28.101:27418,192.168.28.101:27518")
{
	"shardAdded" : "myshardrs02",
	"ok" : 1,
	"operationTime" : Timestamp(1648913689, 5),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648913689, 5),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

Check the sharding status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("624868f214f17e4ba54f23da")
  }
  shards:
        {  "_id" : "myshardrs01",  "host" : "myshardrs01/192.168.28.101:27018,192.168.28.101:27118",  "state" : 1 }
        {  "_id" : "myshardrs02",  "host" : "myshardrs02/192.168.28.101:27318,192.168.28.101:27418",  "state" : 1 }
  active mongoses:
        "4.2.18" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                4 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                myshardrs01	1020
                                myshardrs02	4
                        too many chunks to print, use verbose if you want to force print

mongos> 

(2) Enabling sharding

After the shards are added, sharding must still be enabled before use, i.e. turned on for a specific database:

sh.enableSharding("<database>")
sh.shardCollection("<database>.<collection>", { "<key>": 1 })

Enable sharding for the users database on mongos:

mongos> sh.enableSharding("users")
{
	"ok" : 1,
	"operationTime" : Timestamp(1648914039, 11),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648914039, 12),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos>

Check the sharding status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("624868f214f17e4ba54f23da")
  }
  shards:
        {  "_id" : "myshardrs01",  "host" : "myshardrs01/192.168.28.101:27018,192.168.28.101:27118",  "state" : 1 }
        {  "_id" : "myshardrs02",  "host" : "myshardrs02/192.168.28.101:27318,192.168.28.101:27418",  "state" : 1 }
  active mongoses:
        "4.2.18" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  yes
        Collections with active migrations: 
                config.system.sessions started at Sat Apr 02 2022 23:42:03 GMT+0800 (CST)
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                347 : Success
  databases:
        {  "_id" : "users",  "primary" : "myshardrs02",  "partitioned" : true,  "version" : {  "uuid" : UUID("c5a85211-06ac-46e4-bd29-47758191f460"),  "lastMod" : 1 } }
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                myshardrs01	676
                                myshardrs02	348
                        too many chunks to print, use verbose if you want to force print

mongos> 
(3) Sharding a collection

To shard a collection you must use the sh.shardCollection() method, specifying the collection and the shard key.

sh.shardCollection(namespace, key, unique)

When sharding a collection you need to choose a shard key: a single indexed field, or an indexed compound field, that every document in the collection must contain. MongoDB partitions the data into chunks by the shard key and distributes the chunks evenly across the shards. To partition data into chunks, MongoDB uses either hash-based sharding (random, even distribution) or range-based sharding.

Any field can serve as the shard key, e.g. nickname, as long as it is present in every document.

🚀 Sharding rule 1: hashed strategy

Tips:

With hash-based sharding, MongoDB computes the hash of one field's value and uses that hash to create the chunks.

In a hash-sharded system the data ends up more evenly dispersed across the shards.
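The routing idea can be sketched in Python. The chunk boundaries below are copied from the sh.status() output further down; the hash function is only a stand-in (MongoDB uses its own md5-based hash, so individual documents may route differently), but the near-even spread it produces is the point:

```python
import hashlib
from bisect import bisect_right

# Chunk boundaries copied from sh.status(); each of the four resulting
# hash ranges is owned by the shard listed at the same position.
BOUNDS = [-4611686018427387902, 0, 4611686018427387902]
OWNER = ["myshardrs01", "myshardrs01", "myshardrs02", "myshardrs02"]

def hashed_key(value):
    # Stand-in hash mapping the value into the signed 64-bit space
    # (not MongoDB's exact hash function, only an illustration).
    return int.from_bytes(hashlib.md5(value.encode()).digest()[:8],
                          "little", signed=True)

def route(value):
    # Find which chunk the hashed value falls into, hence which shard owns it.
    return OWNER[bisect_right(BOUNDS, hashed_key(value))]

# Route the same 1000 nicknames the insert test below uses.
counts = {}
for i in range(1, 1001):
    shard = route("zhangsan%d" % i)
    counts[shard] = counts.get(shard, 0) + 1
print(counts)
```

Because the hash is effectively uniform and each shard owns half of the hash space, each shard receives roughly half of the 1000 nicknames, in the same spirit as the 507/493 split measured in the insert test later.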

First enable sharding on the database, then define the sharding rule, and finally check the sharding status:

# First enable sharding for the users database
mongos> sh.enableSharding("users")
{
	"ok" : 1,
	"operationTime" : Timestamp(1648914317, 10),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648914317, 17),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

# Use nickname as the shard key, sharding the data by the hash of its value
mongos> sh.shardCollection("users.comment",{"nickname":"hashed"})
{
	"collectionsharded" : "users.comment",
	"collectionUUID" : UUID("d9e4283f-fa83-48d4-9dac-ea30a05c8356"),
	"ok" : 1,
	"operationTime" : Timestamp(1648914330, 28),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648914330, 28),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

# Check the sharding status
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("624868f214f17e4ba54f23da")
  }
  shards:
        {  "_id" : "myshardrs01",  "host" : "myshardrs01/192.168.28.101:27018,192.168.28.101:27118",  "state" : 1 }
        {  "_id" : "myshardrs02",  "host" : "myshardrs02/192.168.28.101:27318,192.168.28.101:27418",  "state" : 1 }
  active mongoses:
        "4.2.18" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                512 : Success
  databases:
        {  "_id" : "users",  "primary" : "myshardrs02",  "partitioned" : true,  "version" : {  "uuid" : UUID("c5a85211-06ac-46e4-bd29-47758191f460"),  "lastMod" : 1 } }
                users.comment
                        shard key: { "nickname" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                myshardrs01	2
                                myshardrs02	2
                        { "nickname" : { "$minKey" : 1 } } -->> { "nickname" : NumberLong("-4611686018427387902") } on : myshardrs01 Timestamp(1, 0) 
                        { "nickname" : NumberLong("-4611686018427387902") } -->> { "nickname" : NumberLong(0) } on : myshardrs01 Timestamp(1, 1) 
                        { "nickname" : NumberLong(0) } -->> { "nickname" : NumberLong("4611686018427387902") } on : myshardrs02 Timestamp(1, 2) 
                        { "nickname" : NumberLong("4611686018427387902") } -->> { "nickname" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 3) 
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                myshardrs01	512
                                myshardrs02	512
                        too many chunks to print, use verbose if you want to force print

mongos> 

🚀 Sharding rule 2: range strategy

Tips:

With range-based sharding, MongoDB divides the data by ranges of the shard key. Suppose the shard key is numeric: picture a line running from negative infinity to positive infinity, on which every shard key value marks a point. MongoDB cuts this line into shorter, non-overlapping segments called chunks, each holding the data whose shard key falls within a certain range.

In a range-sharded system, documents with nearby shard key values are likely to be stored in the same chunk, and therefore on the same shard.
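A minimal Python sketch of range routing, assuming hypothetical split points on age (in a real cluster the balancer chooses the boundaries and migrates chunks over time):

```python
from bisect import bisect_right

# Hypothetical split points on "age"; the four resulting chunks are owned
# by the shards listed at the matching positions.
SPLITS = [30, 60, 90]
OWNER = ["myshardrs01", "myshardrs02", "myshardrs01", "myshardrs02"]

def route(age):
    # A chunk covers [lower, upper); bisect_right finds the covering chunk.
    return OWNER[bisect_right(SPLITS, age)]

print(route(25), route(26))  # nearby ages fall in the same chunk/shard
print(route(95))             # a distant age can land on another shard
```

Note how nearby values (25 and 26) route together, which is what makes range queries efficient under this strategy.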

Tips: MongoDB enforces only one shard key per collection, so the comment collection cannot be used again here. Use a new collection, author.

# Use the author's age field as the shard key, sharding by ranges of its value:
mongos> sh.shardCollection("users.author",{"age":1})
{
	"collectionsharded" : "users.author",
	"collectionUUID" : UUID("995c2443-16a2-4412-82e3-64b2c2b8aa28"),
	"ok" : 1,
	"operationTime" : Timestamp(1648914908, 12),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1648914908, 12),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

# Check the sharding status
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("624868f214f17e4ba54f23da")
  }
  shards:
        {  "_id" : "myshardrs01",  "host" : "myshardrs01/192.168.28.101:27018,192.168.28.101:27118",  "state" : 1 }
        {  "_id" : "myshardrs02",  "host" : "myshardrs02/192.168.28.101:27318,192.168.28.101:27418",  "state" : 1 }
  active mongoses:
        "4.2.18" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                512 : Success
  databases:
        {  "_id" : "users",  "primary" : "myshardrs02",  "partitioned" : true,  "version" : {  "uuid" : UUID("c5a85211-06ac-46e4-bd29-47758191f460"),  "lastMod" : 1 } }
                users.author
                        shard key: { "age" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                myshardrs02	1
                        { "age" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 0) 
                users.comment
                        shard key: { "nickname" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                myshardrs01	2
                                myshardrs02	2
                        { "nickname" : { "$minKey" : 1 } } -->> { "nickname" : NumberLong("-4611686018427387902") } on : myshardrs01 Timestamp(1, 0) 
                        { "nickname" : NumberLong("-4611686018427387902") } -->> { "nickname" : NumberLong(0) } on : myshardrs01 Timestamp(1, 1) 
                        { "nickname" : NumberLong(0) } -->> { "nickname" : NumberLong("4611686018427387902") } on : myshardrs02 Timestamp(1, 2) 
                        { "nickname" : NumberLong("4611686018427387902") } -->> { "nickname" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 3) 
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                myshardrs01	512
                                myshardrs02	512
                        too many chunks to print, use verbose if you want to force print

mongos> 

Tips:

1) A collection can have only one shard key; specifying more than one raises an error.

2) Once a collection is sharded, its shard key is immutable: you can neither choose a different shard key for the collection nor update a document's shard key value.

3) Data is distributed according to the index on age.


3. Insert tests after sharding
Test 1 (hashed rule):

Log in to mongos and insert 1000 test documents into comment in a loop:

mongos> use users
switched to db users
mongos> for(var i=1;i<=1000;i++){db.comment.insert({_id:i+"",nickname:"zhangsan"+i})}
WriteResult({ "nInserted" : 1 })
mongos> show collections
author
comment
mongos> show dbs
admin      0.000GB
users      0.000GB
config     0.003GB
mongos> db.comment.count()
1000
mongos> 

Tips:

This is JavaScript syntax, because the mongo shell is a JavaScript shell.

Data inserted through the router must contain the shard key; otherwise the insert is rejected.
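That last rule can be illustrated with a tiny sketch (route_insert is a hypothetical stand-in for the check mongos performs, not a MongoDB API):

```python
# On this deployment, a document written through mongos must carry
# the collection's shard key field.
SHARD_KEY = "nickname"

def route_insert(doc):
    # Refuse documents that lack the shard key, as the router does.
    if SHARD_KEY not in doc:
        raise ValueError("document lacks shard key %r" % SHARD_KEY)
    return "routed by hash of %r" % doc[SHARD_KEY]

ok = route_insert({"_id": "1", "nickname": "zhangsan1"})
try:
    route_insert({"_id": "2"})  # no nickname field
    rejected = False
except ValueError:
    rejected = True
print(ok, rejected)
```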

🚀 Log in to each shard's primary node and count the documents

The first shard replica set:

[root@localhost ~]# mongo --host=192.168.28.101 --port=27018
MongoDB shell version v4.2.18
connecting to: mongodb://192.168.28.101:27018/?
---

myshardrs01:PRIMARY> use users
switched to db users
myshardrs01:PRIMARY> db.comment.count()
507
myshardrs01:PRIMARY> 

The second shard replica set:

[root@localhost ~]# mongo --host=192.168.28.101 --port=27318
MongoDB shell version v4.2.18
connecting to: mongodb://192.168.28.101:27318/?
---

myshardrs02:PRIMARY> use users
switched to db users
myshardrs02:PRIMARY> db.comment.count()
493
myshardrs02:PRIMARY> 

As you can see, the 1000 documents are spread almost evenly across the 2 shards, allocated by the hash of the shard key.

This allocation scheme makes horizontal scaling easy: once the data outgrows the current storage, simply add another shard, which also raises throughput.

Use db.comment.stats() to inspect a single collection in full; run it on mongos to see how that collection's data is distributed across the shards.

Use sh.status() to view the sharding information for all collections in the database.


Test 2 (range rule):

Tips:

If sh.status() shows that no splitting has happened, the likely reasons are:

1) The system is busy and is still splitting.

2) The chunks are not full yet. The default chunk size (chunksize) is 64 MB, and data is only moved into other shards' chunks once a chunk fills up. To speed up testing, shrink it to 1 MB as follows:

mongos> use config
switched to db config
mongos> db.settings.save( { _id:"chunksize", value: 1 } )
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
mongos> 

Log in to mongos and insert 20000 test documents into author in a loop:

mongos> use users
switched to db users
mongos> for(var i=1;i<=20000;i++){db.author.save({"name":"BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo"+i,"age":NumberInt(i%120)})}
WriteResult({ "nInserted" : 1 })
mongos> db.author.count()
20118
mongos> db.author.remove({})
WriteResult({ "nRemoved" : 20118 })
mongos> db.author.count()
0
mongos> for(var i=1;i<=20000;i++){db.author.save({"name":"BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo"+i,"age":NumberInt(i%120)})}
WriteResult({ "nInserted" : 1 })
mongos> db.author.count()
20000
mongos>  

Tips:

If the inserts fail, check that:

1) You are not in the wrong database; the test writes to the users database.

2) The collection does not hold leftover data; db.author.remove({}) empties the collection of all documents.

After the inserts succeed, again check the data on both shard replica sets.

🤖 Sharding result:

# Check the data on myshardrs01
myshardrs01:PRIMARY> db.author.count()
166

# Check the data on myshardrs02
myshardrs02:PRIMARY> db.author.count()
19834
myshardrs02:PRIMARY> db.author.find()
{ "_id" : ObjectId("6248785b60db80fa633b9c15"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo1", "age" : 1 }
{ "_id" : ObjectId("6248785b60db80fa633b9c16"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo2", "age" : 2 }
{ "_id" : ObjectId("6248785b60db80fa633b9c17"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo3", "age" : 3 }
{ "_id" : ObjectId("6248785b60db80fa633b9c18"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo4", "age" : 4 }
{ "_id" : ObjectId("6248785b60db80fa633b9c19"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo5", "age" : 5 }
{ "_id" : ObjectId("6248785b60db80fa633b9c1a"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo6", "age" : 6 }
{ "_id" : ObjectId("6248785b60db80fa633b9c1b"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo7", "age" : 7 }
{ "_id" : ObjectId("6248785b60db80fa633b9c1c"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo8", "age" : 8 }
{ "_id" : ObjectId("6248785b60db80fa633b9c1d"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo9", "age" : 9 }
{ "_id" : ObjectId("6248785b60db80fa633b9c1e"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo10", "age" : 10 }
{ "_id" : ObjectId("6248785b60db80fa633b9c1f"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo11", "age" : 11 }
{ "_id" : ObjectId("6248785b60db80fa633b9c20"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo12", "age" : 12 }
{ "_id" : ObjectId("6248785b60db80fa633b9c21"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo13", "age" : 13 }
{ "_id" : ObjectId("6248785b60db80fa633b9c22"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo14", "age" : 14 }
{ "_id" : ObjectId("6248785b60db80fa633b9c23"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo15", "age" : 15 }
{ "_id" : ObjectId("6248785b60db80fa633b9c24"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo16", "age" : 16 }
{ "_id" : ObjectId("6248785b60db80fa633b9c25"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo17", "age" : 17 }
{ "_id" : ObjectId("6248785b60db80fa633b9c26"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo18", "age" : 18 }
{ "_id" : ObjectId("6248785b60db80fa633b9c27"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo19", "age" : 19 }
{ "_id" : ObjectId("6248785b60db80fa633b9c28"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo20", "age" : 20 }
Type "it" for more
myshardrs02:PRIMARY> it
{ "_id" : ObjectId("6248785b60db80fa633b9c29"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo21", "age" : 21 }
{ "_id" : ObjectId("6248785b60db80fa633b9c2a"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo22", "age" : 22 }
{ "_id" : ObjectId("6248785b60db80fa633b9c2b"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo23", "age" : 23 }
{ "_id" : ObjectId("6248785b60db80fa633b9c2c"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo24", "age" : 24 }
{ "_id" : ObjectId("6248785b60db80fa633b9c2d"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo25", "age" : 25 }
{ "_id" : ObjectId("6248785b60db80fa633b9c2e"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo26", "age" : 26 }
{ "_id" : ObjectId("6248785b60db80fa633b9c2f"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo27", "age" : 27 }
{ "_id" : ObjectId("6248785b60db80fa633b9c30"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo28", "age" : 28 }
{ "_id" : ObjectId("6248785b60db80fa633b9c31"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo29", "age" : 29 }
{ "_id" : ObjectId("6248785b60db80fa633b9c32"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo30", "age" : 30 }
{ "_id" : ObjectId("6248785b60db80fa633b9c33"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo31", "age" : 31 }
{ "_id" : ObjectId("6248785b60db80fa633b9c34"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo32", "age" : 32 }
{ "_id" : ObjectId("6248785b60db80fa633b9c35"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo33", "age" : 33 }
{ "_id" : ObjectId("6248785b60db80fa633b9c36"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo34", "age" : 34 }
{ "_id" : ObjectId("6248785b60db80fa633b9c37"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo35", "age" : 35 }
{ "_id" : ObjectId("6248785b60db80fa633b9c38"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo36", "age" : 36 }
{ "_id" : ObjectId("6248785b60db80fa633b9c39"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo37", "age" : 37 }
{ "_id" : ObjectId("6248785b60db80fa633b9c3a"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo38", "age" : 38 }
{ "_id" : ObjectId("6248785b60db80fa633b9c3b"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo39", "age" : 39 }
{ "_id" : ObjectId("6248785b60db80fa633b9c3c"), "name" : "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo40", "age" : 40 }
Type "it" for more

🚀 After testing, change it back:

mongos> use config
switched to db config
mongos> db.settings.save( { _id:"chunksize", value: 64 } )
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
mongos> 

Tips:

Shrink the chunk size before setting up sharding. If mishandling during the test causes errors, drop the collection, recreate its sharding policy, and insert the test data again.


4. Connecting with a GUI tool

Simply connect to the router node.
