MongoDB Sharded Cluster Setup Walkthrough

For a walkthrough of building a MongoDB replica set, see the previous post: MongoDB replica set setup demo.

Part 1: Background on MongoDB sharding

  1. Why shard

    As data grows, the limits of a single mongod instance become obvious. Replication helps absorb read load, but every replica set member still holds a full copy of the data, and a replica set is capped in size (12 members in early releases, 50 since MongoDB 3.0, with at most 7 voting members), so it does not scale writes or storage. Sharding partitions the data horizontally across machines, which scales throughput and also spreads disk usage.

  2. Benefits of sharding

    a. Transparency
       MongoDB ships with the mongos routing process. mongos routes each client request to the right server or group of servers in the cluster, merges the responses it receives, and returns the result to the client, so the application sees a single logical database.
    b. High availability
      By combining replica sets with sharding, MongoDB partitions the data while keeping replicated copies of every shard, so when a shard's primary goes down one of its secondaries can take over immediately and the cluster keeps working.
    c. Easy scaling
      When the system needs more space or throughput, capacity can be added on demand by adding shards.

    3. Sharded cluster components

Component overview:
Mongos: the access point for applications; all operations go through mongos, and a deployment usually runs several mongos nodes.
Config Server: stores metadata for every node in the cluster, including the shard routing information; three config server nodes are the default recommendation. The router uses this metadata to decide which shard each request goes to, and since MongoDB 3.4 the balancer that migrates and rebalances chunks runs on the config server primary.
Mongod: stores the actual application data; running multiple mongod (shard) nodes is what partitions the data. Each chunk defaults to 64MB; when a chunk grows past that size it is split into new chunks.
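Once the cluster is up, all of these pieces can be inspected from a shell connected to mongos. A quick orientation sketch (sh.status() appears throughout this walkthrough; listShards is a standard admin command):

sh.status()                            // shards, mongos instances, balancer state, chunk ranges
db.adminCommand({ listShards: 1 })     // raw shard list from the config metadata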

In production, every one of these components is itself deployed as a replica set: multiple mongos routers, a three-member config server replica set, and each shard as its own replica set.
(Note: to simplify the demo, the config server replica set uses 2 nodes, the shard replica set uses 2 nodes, and there is a single mongos router. The ports are 27017 for mongos, 37017/37018 for the config servers, 47017/47018 for the first shard, and 57017 for the second shard. Everything runs as a pseudo-cluster on one Alibaba Cloud CentOS 7 server.)

Part 2: Building the sharded cluster

1. Config server replica set
a. conf file for the 37017 node (create every dbpath directory listed in these configs before starting; mongod exits immediately if its data directory does not exist):
# data directory
dbpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/config-37017
# port
port=37017
# log file
logpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/config-37017/37017.log
# run as a daemon
fork=true
# replica set name
replSet=configCluster
# this node is a config server
configsvr=true

conf file for the 37018 node:

dbpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/config-37018
port=37018
logpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/config-37018/37018.log
fork=true
replSet=configCluster
configsvr=true
b. Start the 37017 and 37018 nodes:
[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongod -f conf/conf-37017.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 32692
child process started successfully, parent exiting
[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongod -f conf/conf-37018.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 32726
child process started successfully, parent exiting
[root@hugo mongodb-linux-x86_64-4.0.5]# ps -ef|grep mongo
root     32692     1  5 14:46 ?        00:00:00 ./bin/mongod -f conf/conf-37017.conf
root     32726     1  9 14:46 ?        00:00:00 ./bin/mongod -f conf/conf-37018.conf
root     32761 32621  0 14:46 pts/3    00:00:00 grep --color=auto mongo

Connect a shell to the 37017 node and initialize the config server replica set:

[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongo --port 37017
MongoDB shell version v4.0.5
connecting to: mongodb://127.0.0.1:37017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("434beeea-5179-4a2d-ad8d-01bd094d96a0") }
MongoDB server version: 4.0.5
Server has startup warnings: 
2019-11-15T14:46:04.066+0800 I STORAGE  [initandlisten] 
2019-11-15T14:46:04.066+0800 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-11-15T14:46:04.066+0800 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] 
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] 
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] ** WARNING: This server is bound to localhost.
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] **          Remote systems will be unable to connect to this server. 
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] **          Start the server with --bind_ip <address> to specify which IP 
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] **          addresses it should serve responses from, or with --bind_ip_all to
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] **          bind to all interfaces. If this behavior is desired, start the
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] **          server with --bind_ip 127.0.0.1 to disable this warning.
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] 
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] 
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] 
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] 
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 7230 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.
2019-11-15T14:46:04.782+0800 I CONTROL  [initandlisten] 
> rs.status()
{
	"operationTime" : Timestamp(0, 0),
	"ok" : 0,
	"errmsg" : "no replset config has been received",
	"code" : 94,
	"codeName" : "NotYetInitialized",
	"$gleStats" : {
		"lastOpTime" : Timestamp(0, 0),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"lastCommittedOpTime" : Timestamp(0, 0),
	"$clusterTime" : {
		"clusterTime" : Timestamp(0, 0),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
> var cfg = {
... "_id":"configCluster",
... "protocolVersion":1,
... "members":[
... {
... "_id":0,"host":"127.0.0.1:37017"
... },{
... "_id":1,"host":"127.0.0.1:37018"
... }
... ]
... }
> rs.initiate(cfg)
{
	"ok" : 1,
	"operationTime" : Timestamp(1573800528, 1),
	"$gleStats" : {
		"lastOpTime" : Timestamp(1573800528, 1),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"lastCommittedOpTime" : Timestamp(0, 0),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573800528, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
configCluster:SECONDARY> rs.status()
{
	"set" : "configCluster",
	"date" : ISODate("2019-11-15T06:49:09.813Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"configsvr" : true,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1573800544, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1573800544, 1),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1573800544, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1573800544, 1),
			"t" : NumberLong(1)
		}
	},
	"lastStableCheckpointTimestamp" : Timestamp(1573800540, 1),
	"members" : [
		{
			"_id" : 0,
			"name" : "127.0.0.1:37017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 185,
			"optime" : {
				"ts" : Timestamp(1573800544, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2019-11-15T06:49:04Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1573800539, 1),
			"electionDate" : ISODate("2019-11-15T06:48:59Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "127.0.0.1:37018",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 21,
			"optime" : {
				"ts" : Timestamp(1573800544, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1573800544, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2019-11-15T06:49:04Z"),
			"optimeDurableDate" : ISODate("2019-11-15T06:49:04Z"),
			"lastHeartbeat" : ISODate("2019-11-15T06:49:09.586Z"),
			"lastHeartbeatRecv" : ISODate("2019-11-15T06:49:08.074Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "127.0.0.1:37017",
			"syncSourceHost" : "127.0.0.1:37017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1573800544, 1),
	"$gleStats" : {
		"lastOpTime" : Timestamp(1573800528, 1),
		"electionId" : ObjectId("7fffffff0000000000000001")
	},
	"lastCommittedOpTime" : Timestamp(1573800544, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573800544, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
configCluster:PRIMARY> 
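Before wiring mongos to this replica set, it is worth a quick sanity check that both members are healthy. A minimal sketch run from either member's shell, using the same rs.status() helper as above:

rs.status().members.forEach(function (m) {
    print(m.name + " " + m.stateStr);   // expect 127.0.0.1:37017 PRIMARY and 127.0.0.1:37018 SECONDARY
})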
2. The mongos router
a. conf file for the 27017 router:
# port
port=27017
# log file
logpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/mongos-27017/27017.log
# run as a daemon
fork=true
# configCluster is the config server replica set name configured above; it must match, otherwise mongos will not start
configdb=configCluster/127.0.0.1:37017,127.0.0.1:37018
b. Start the 27017 mongos router:
[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongos -f conf/mongos-27017.conf 
2019-11-15T15:52:52.052+0800 W SHARDING [main] Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.
about to fork child process, waiting until server is ready for connections.
forked process: 2191
child process started successfully, parent exiting
[root@hugo mongodb-linux-x86_64-4.0.5]# ps -ef|grep mongo
root      2191     1  0 15:52 ?        00:00:00 ./bin/mongos -f conf/mongos-27017.conf
root      2297  1471  0 15:56 pts/4    00:00:00 grep --color=auto mongo
root     32692     1  0 14:46 ?        00:00:18 ./bin/mongod -f conf/conf-37017.conf
root     32726     1  0 14:46 ?        00:00:18 ./bin/mongod -f conf/conf-37018.conf
[root@hugo mongodb-linux-x86_64-4.0.5]# 

The router is now up, but it cannot accept writes yet, because no shard replica set has been started or registered:

[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongo
MongoDB shell version v4.0.5
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f1734e03-cc66-44a5-b7cb-e880de610002") }
MongoDB server version: 4.0.5
Server has startup warnings: 
2019-11-15T15:52:52.057+0800 I CONTROL  [main] 
2019-11-15T15:52:52.058+0800 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] 
2019-11-15T15:52:52.058+0800 I CONTROL  [main] ** WARNING: This server is bound to localhost.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          Remote systems will be unable to connect to this server. 
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          Start the server with --bind_ip <address> to specify which IP 
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          addresses it should serve responses from, or with --bind_ip_all to
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          bind to all interfaces. If this behavior is desired, start the
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          server with --bind_ip 127.0.0.1 to disable this warning.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] 
mongos> show dbs;
admin   0.000GB
config  0.000GB
mongos> use hugo;
switched to db hugo
mongos> db.emp.insert({"name":"张三"})
WriteCommandError({
	"ok" : 0,
	"errmsg" : "unable to initialize targeter for write op for collection hugo.emp :: caused by :: Database hugo not found :: caused by :: No shards found",
	"code" : 70,
	"codeName" : "ShardNotFound",
	"operationTime" : Timestamp(1573804857, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573804857, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
})

3. Shard replica set
a. Nodes 47017 and 47018 form our first shard replica set; configure them as follows.

conf file for 47017:

# data directory
dbpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/shard-47017
# port
port=47017
# log file
logpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/shard-47017/47017.log
# run as a daemon
fork=true
# replica set name
replSet=shardCluster
# this node is a shard server
shardsvr=true

conf file for 47018:

dbpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/shard-47018
port=47018
logpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/shard-47018/47018.log
fork=true
replSet=shardCluster
shardsvr=true
b. Start 47017 and 47018:
[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongod -f conf/shard-47017.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 2827
child process started successfully, parent exiting
[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongod -f conf/shard-47018.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 2909
child process started successfully, parent exiting
[root@hugo mongodb-linux-x86_64-4.0.5]# ps -ef|grep mongo
root      2191     1  0 15:52 ?        00:00:01 ./bin/mongos -f conf/mongos-27017.conf
root      2827     1  0 16:14 ?        00:00:00 ./bin/mongod -f conf/shard-47017.conf
root      2909     1  3 16:15 ?        00:00:00 ./bin/mongod -f conf/shard-47018.conf
root      2952  1471  0 16:16 pts/4    00:00:00 grep --color=auto mongo
root     32692     1  0 14:46 ?        00:00:23 ./bin/mongod -f conf/conf-37017.conf
root     32726     1  0 14:46 ?        00:00:24 ./bin/mongod -f conf/conf-37018.conf
[root@hugo mongodb-linux-x86_64-4.0.5]#
c. Initialize the 47017/47018 shard replica set:
[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongo --port 47017
MongoDB shell version v4.0.5
connecting to: mongodb://127.0.0.1:47017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("7be3f20d-c3cf-4ee9-bb24-ae9f1695db6c") }
MongoDB server version: 4.0.5
Server has startup warnings: 
2019-11-15T16:14:42.422+0800 I STORAGE  [initandlisten] 
2019-11-15T16:14:42.422+0800 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-11-15T16:14:42.422+0800 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] 
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] 
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] ** WARNING: This server is bound to localhost.
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] **          Remote systems will be unable to connect to this server. 
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] **          Start the server with --bind_ip <address> to specify which IP 
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] **          addresses it should serve responses from, or with --bind_ip_all to
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] **          bind to all interfaces. If this behavior is desired, start the
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] **          server with --bind_ip 127.0.0.1 to disable this warning.
2019-11-15T16:14:43.147+0800 I CONTROL  [initandlisten] 
2019-11-15T16:14:43.148+0800 I CONTROL  [initandlisten] 
2019-11-15T16:14:43.148+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-11-15T16:14:43.148+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-11-15T16:14:43.148+0800 I CONTROL  [initandlisten] 
2019-11-15T16:14:43.148+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-11-15T16:14:43.148+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-11-15T16:14:43.148+0800 I CONTROL  [initandlisten] 
2019-11-15T16:14:43.148+0800 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 7230 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.
2019-11-15T16:14:43.148+0800 I CONTROL  [initandlisten] 
> rs.status()
{
	"operationTime" : Timestamp(0, 0),
	"ok" : 0,
	"errmsg" : "no replset config has been received",
	"code" : 94,
	"codeName" : "NotYetInitialized",
	"$clusterTime" : {
		"clusterTime" : Timestamp(0, 0),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
> var cfg = {
... "_id":"shardCluster",
... "protocolVersion":1,
... "members":[
... {
... "_id":0,"host":"127.0.0.1:47017"
... },{
... "_id":1,"host":"127.0.0.1:47018"
... }
... ]
... }
> rs.initiate(cfg)
{
	"ok" : 1,
	"operationTime" : Timestamp(1573806134, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573806134, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
shardCluster:SECONDARY> rs.status()
{
	"set" : "shardCluster",
	"date" : ISODate("2019-11-15T08:23:17.712Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1573806188, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1573806188, 1),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1573806188, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1573806188, 1),
			"t" : NumberLong(1)
		}
	},
	"lastStableCheckpointTimestamp" : Timestamp(1573806148, 1),
	"members" : [
		{
			"_id" : 0,
			"name" : "127.0.0.1:47017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 515,
			"optime" : {
				"ts" : Timestamp(1573806188, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2019-11-15T08:23:08Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1573806146, 1),
			"electionDate" : ISODate("2019-11-15T08:22:26Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "127.0.0.1:47018",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 62,
			"optime" : {
				"ts" : Timestamp(1573806188, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1573806188, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2019-11-15T08:23:08Z"),
			"optimeDurableDate" : ISODate("2019-11-15T08:23:08Z"),
			"lastHeartbeat" : ISODate("2019-11-15T08:23:16.294Z"),
			"lastHeartbeatRecv" : ISODate("2019-11-15T08:23:17.074Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "127.0.0.1:47017",
			"syncSourceHost" : "127.0.0.1:47017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1573806188, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573806188, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
shardCluster:PRIMARY> 

The primary (47017) is up and the set is initialized; now get the secondary (47018) ready too:

[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongo --port 47018
MongoDB shell version v4.0.5
connecting to: mongodb://127.0.0.1:47018/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b0c34434-cb24-49df-958b-c1fd3e819900") }
MongoDB server version: 4.0.5
Server has startup warnings: 
2019-11-15T16:15:46.416+0800 I STORAGE  [initandlisten] 
2019-11-15T16:15:46.416+0800 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-11-15T16:15:46.416+0800 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] 
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] 
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] ** WARNING: This server is bound to localhost.
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] **          Remote systems will be unable to connect to this server. 
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] **          Start the server with --bind_ip <address> to specify which IP 
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] **          addresses it should serve responses from, or with --bind_ip_all to
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] **          bind to all interfaces. If this behavior is desired, start the
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] **          server with --bind_ip 127.0.0.1 to disable this warning.
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] 
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] 
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] 
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] 
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 7230 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.
2019-11-15T16:15:47.135+0800 I CONTROL  [initandlisten] 
shardCluster:SECONDARY> show dbs;
2019-11-15T16:25:34.407+0800 E QUERY    [js] Error: listDatabases failed:{
	"operationTime" : Timestamp(1573806328, 1),
	"ok" : 0,
	"errmsg" : "not master and slaveOk=false",
	"code" : 13435,
	"codeName" : "NotMasterNoSlaveOk",
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573806328, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:124:1
shellHelper.show@src/mongo/shell/utils.js:876:19
shellHelper@src/mongo/shell/utils.js:766:15
@(shellhelp2):1:1
shardCluster:SECONDARY> rs.slaveOk()
shardCluster:SECONDARY> show dbs;
admin   0.000GB
config  0.000GB
local   0.000GB
shardCluster:SECONDARY>
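rs.slaveOk() enables reads on this secondary for the current shell session. While here, an optional check that the secondary is keeping up with the primary; rs.printSlaveReplicationInfo() is a standard shell helper in this version:

rs.printSlaveReplicationInfo()   // reports each secondary's replication lag behind the primary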
d. Register the 47017/47018 shard replica set with the 27017 router

At this point the router, the config server replica set, and one shard replica set are all running.
But a shell on the 27017 router shows that no shard has been registered yet:

[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongo
MongoDB shell version v4.0.5
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("4cc305cf-5800-427f-9865-da500d12f281") }
MongoDB server version: 4.0.5
Server has startup warnings: 
2019-11-15T15:52:52.057+0800 I CONTROL  [main] 
2019-11-15T15:52:52.058+0800 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] 
2019-11-15T15:52:52.058+0800 I CONTROL  [main] ** WARNING: This server is bound to localhost.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          Remote systems will be unable to connect to this server. 
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          Start the server with --bind_ip <address> to specify which IP 
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          addresses it should serve responses from, or with --bind_ip_all to
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          bind to all interfaces. If this behavior is desired, start the
2019-11-15T15:52:52.058+0800 I CONTROL  [main] **          server with --bind_ip 127.0.0.1 to disable this warning.
2019-11-15T15:52:52.058+0800 I CONTROL  [main] 
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5dce4a5cfd68eca59be538e4")
  }
  shards:
  active mongoses:
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

mongos> 

So register the 47017/47018 shard with the router (add the shard):
(Note: the shardCluster prefix in sh.addShard("shardCluster/127.0.0.1:47017,127.0.0.1:47018") is the replica set name we configured for 47017/47018 above.)

mongos> sh.addShard("shardCluster/127.0.0.1:47017,127.0.0.1:47018")
{
	"shardAdded" : "shardCluster",
	"ok" : 1,
	"operationTime" : Timestamp(1573807105, 8),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573807105, 8),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5dce4a5cfd68eca59be538e4")
  }
  shards:
        {  "_id" : "shardCluster",  "host" : "shardCluster/127.0.0.1:47017,127.0.0.1:47018",  "state" : 1 }
  active mongoses:
        "4.0.5" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

mongos>

The shard replica set now appears under shards.

e. Add a second shard to the cluster

One shard replica set (47017/47018) is now part of the cluster. Next, add a second shard on 57017; to keep the demo short this one is a standalone rather than a replica set, though in production every shard should be a replica set.

conf file for 57017:

dbpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/shard-57017
port=57017
logpath=/root/Hugo/tools/mongodb/mongodb-linux-x86_64-4.0.5/data/shard-57017/57017.log
fork=true
shardsvr=true

Start the 57017 node:

[root@hugo mongodb-linux-x86_64-4.0.5]# ./bin/mongod -f conf/shard-57017.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 4111
child process started successfully, parent exiting
[root@hugo mongodb-linux-x86_64-4.0.5]# ps -ef|grep mongo
root      2191     1  0 15:52 ?        00:00:03 ./bin/mongos -f conf/mongos-27017.conf
root      2827     1  0 16:14 ?        00:00:08 ./bin/mongod -f conf/shard-47017.conf
root      2909     1  0 16:15 ?        00:00:08 ./bin/mongod -f conf/shard-47018.conf
root      3387  1471  0 16:29 pts/4    00:00:00 ./bin/mongo
root      4111     1  6 16:50 ?        00:00:00 ./bin/mongod -f conf/shard-57017.conf
root      4138 32621  0 16:50 pts/3    00:00:00 grep --color=auto mongo
root     32692     1  0 14:46 ?        00:00:32 ./bin/mongod -f conf/conf-37017.conf
root     32726     1  0 14:46 ?        00:00:32 ./bin/mongod -f conf/conf-37018.conf
[root@hugo mongodb-linux-x86_64-4.0.5]#

Register 57017 with the cluster:

mongos> sh.addShard("127.0.0.1:57017")
{
	"shardAdded" : "shard0001",
	"ok" : 1,
	"operationTime" : Timestamp(1573808031, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573808031, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5dce4a5cfd68eca59be538e4")
  }
  shards:
        {  "_id" : "shard0001",  "host" : "127.0.0.1:57017",  "state" : 1 }
        {  "_id" : "shardCluster",  "host" : "shardCluster/127.0.0.1:47017,127.0.0.1:47018",  "state" : 1 }
  active mongoses:
        "4.0.5" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shardCluster	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shardCluster Timestamp(1, 0) 

mongos>

The cluster now lists two shards under shards; that part is done.
Next, let's put some data through it.

4. Working with sharded data
a. Enable sharding for a database

We will use a new database named hugo with a collection named emp.

mongos> sh.enableSharding("hugo")
{
	"ok" : 1,
	"operationTime" : Timestamp(1573808573, 5),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573808573, 5),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5dce4a5cfd68eca59be538e4")
  }
  shards:
        {  "_id" : "shard0001",  "host" : "127.0.0.1:57017",  "state" : 1 }
        {  "_id" : "shardCluster",  "host" : "shardCluster/127.0.0.1:47017,127.0.0.1:47018",  "state" : 1 }
  active mongoses:
        "4.0.5" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shardCluster	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shardCluster Timestamp(1, 0) 
        {  "_id" : "hugo",  "primary" : "shard0001",  "partitioned" : true,  "version" : {  "uuid" : UUID("8175e28f-8335-4d35-bcf9-c40a937dc344"),  "lastMod" : 1 } }

mongos>

Under databases you can see that the hugo database has been registered.

b. Enable sharding for a specific collection

Enable sharding for the emp collection, using _id as the shard key:

mongos> sh.shardCollection("hugo.emp",{"_id":1})
{
	"collectionsharded" : "hugo.emp",
	"collectionUUID" : UUID("6027c2a3-e8d1-49c4-bbe9-b57eacaf3201"),
	"ok" : 1,
	"operationTime" : Timestamp(1573808814, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1573808814, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5dce4a5cfd68eca59be538e4")
  }
  shards:
        {  "_id" : "shard0001",  "host" : "127.0.0.1:57017",  "state" : 1 }
        {  "_id" : "shardCluster",  "host" : "shardCluster/127.0.0.1:47017,127.0.0.1:47018",  "state" : 1 }
  active mongoses:
        "4.0.5" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shardCluster	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shardCluster Timestamp(1, 0) 
        {  "_id" : "hugo",  "primary" : "shard0001",  "partitioned" : true,  "version" : {  "uuid" : UUID("8175e28f-8335-4d35-bcf9-c40a937dc344"),  "lastMod" : 1 } }
                hugo.emp
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard0001	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 Timestamp(1, 0) 
mongos> show dbs;
admin   0.000GB
config  0.001GB
hugo    0.000GB
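A note on the shard key choice: with an ascending key like _id and monotonically increasing values, new inserts always target the chunk holding the current maximum, so writes concentrate on one shard until the balancer migrates chunks (you can watch that happen below). A hashed shard key usually spreads inserts more evenly from the start; a sketch of that alternative, not what this walkthrough uses:

sh.shardCollection("hugo.emp", { "_id": "hashed" })   // hashed variant; a collection can only be sharded once, so this is an either/or choice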
c. Write data to the cluster and watch it shard

The default chunk size in a sharded cluster is 64MB; to see splits and migrations without writing gigabytes, lower it to 1MB via the config database:

mongos> use config;
switched to db config
mongos> db.settings.find()
mongos> db.settings.save({_id:"chunksize",value:1})
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
mongos> use config;
switched to db config
mongos> show tables;
changelog
chunks
collections
databases
lockpings
locks
migrations
mongos
settings
shards
tags
transactions
version
mongos> db.settings.find()
{ "_id" : "chunksize", "value" : 1 }
mongos> 
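This setting is cluster-wide and persists in the config database. To restore the default after experimenting, write the value back with the same mechanism as above:

use config
db.settings.save({ _id: "chunksize", value: 64 })   // back to the 64MB default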

The emp collection is currently empty:

mongos> use hugo;
switched to db hugo
mongos> db.emp.count()
0

To trigger splitting and balancing, insert 100,000 documents into the emp collection:

mongos> use hugo;
switched to db hugo
mongos>  for(var i=1;i<=100000;i++){
...      db.emp.insert({"_id":i,"name":"copy"+i});
... }
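The one-document-per-insert loop above makes 100,000 round trips, which is why the counts below crawl upward for a while. A faster equivalent sketch using batched insertMany, which this shell version supports (a hypothetical alternative, same data):

var batch = [];
for (var i = 1; i <= 100000; i++) {
    batch.push({ "_id": i, "name": "copy" + i });
    if (batch.length === 1000) {        // flush every 1000 documents
        db.emp.insertMany(batch);
        batch = [];
    }
}
if (batch.length > 0) db.emp.insertMany(batch);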

Watch the data arrive and the chunks split while the loop runs:

mongos> db.emp.count()
0
mongos> db.emp.count()
4453
mongos> db.emp.count()
6693
mongos> db.emp.count()
8587
mongos> db.emp.count()
10460
mongos> db.emp.count()
22249
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5dce4a5cfd68eca59be538e4")
  }
  shards:
        {  "_id" : "shard0001",  "host" : "127.0.0.1:57017",  "state" : 1 }
        {  "_id" : "shardCluster",  "host" : "shardCluster/127.0.0.1:47017,127.0.0.1:47018",  "state" : 1 }
  active mongoses:
        "4.0.5" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                1 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shardCluster	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shardCluster Timestamp(1, 0) 
        {  "_id" : "hugo",  "primary" : "shard0001",  "partitioned" : true,  "version" : {  "uuid" : UUID("8175e28f-8335-4d35-bcf9-c40a937dc344"),  "lastMod" : 1 } }
                hugo.emp
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard0001	2
                                shardCluster	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : 2 } on : shardCluster Timestamp(2, 0) 
                        { "_id" : 2 } -->> { "_id" : 28340 } on : shard0001 Timestamp(2, 1) 
                        { "_id" : 28340 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 Timestamp(1, 3) 

mongos>

The collection has already been split into three chunks, and one of them has been migrated over to shardCluster.
The inserts are still running at this point; wait for them to finish, then look at the final state:

mongos> db.emp.count()
89701
mongos> db.emp.count()
89701
mongos> db.emp.count()
89701
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5dce4a5cfd68eca59be538e4")
  }
  shards:
        {  "_id" : "shard0001",  "host" : "127.0.0.1:57017",  "state" : 1 }
        {  "_id" : "shardCluster",  "host" : "shardCluster/127.0.0.1:47017,127.0.0.1:47018",  "state" : 1 }
  active mongoses:
        "4.0.5" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Collections with active migrations: 
                hugo.emp started at Fri Nov 15 2019 17:28:21 GMT+0800 (CST)
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                2 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shardCluster	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shardCluster Timestamp(1, 0) 
        {  "_id" : "hugo",  "primary" : "shard0001",  "partitioned" : true,  "version" : {  "uuid" : UUID("8175e28f-8335-4d35-bcf9-c40a937dc344"),  "lastMod" : 1 } }
                hugo.emp
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard0001	3
                                shardCluster	4
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : 2 } on : shardCluster Timestamp(2, 0) 
                        { "_id" : 2 } -->> { "_id" : 28340 } on : shard0001 Timestamp(3, 1) 
                        { "_id" : 28340 } -->> { "_id" : 42509 } on : shard0001 Timestamp(2, 2) 
                        { "_id" : 42509 } -->> { "_id" : 59897 } on : shard0001 Timestamp(2, 3) 
                        { "_id" : 59897 } -->> { "_id" : 74066 } on : shardCluster Timestamp(3, 2) 
                        { "_id" : 74066 } -->> { "_id" : 89699 } on : shardCluster Timestamp(3, 3) 
                        { "_id" : 89699 } -->> { "_id" : { "$maxKey" : 1 } } on : shardCluster Timestamp(3, 4) 

mongos>

The data ends up spread fairly evenly across the two shards: 3 chunks on shard0001 and 4 on shardCluster. (As an aside, db.emp.count() can return inaccurate totals on a sharded collection while chunk migrations are in flight, which is why the counts above wobble.)
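For a more direct per-shard breakdown than sh.status(), the shell's getShardDistribution() helper reports document counts and data size per shard; a quick sketch from the mongos shell:

use hugo
db.emp.getShardDistribution()   // per-shard data size, document count, and chunk count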
To be continued... off to dinner.
