Installing and Deploying MongoDB on CentOS 7.5

About MongoDB (features, advantages, how it works)
Introduction: MongoDB is a database built on distributed file storage, written in C++. It provides a scalable, high-performance data storage solution for web applications.
Features: high performance, easy to deploy, easy to use, and very convenient data storage.
Its main features include:

  1. Collection-oriented storage, well suited to storing object-style data
  2. Schema-free
  3. Dynamic queries
  4. Full index support, including on embedded objects
  5. Query support
  6. Replication and failover support
  7. Efficient binary data storage, including large objects
  8. Drivers for many languages, such as Java, C++, and PHP
  9. Accessible over the network
    How it works:
    "Collection-oriented" means that data is grouped into data sets called collections. Each collection has a unique name within its database and can contain an unlimited number of documents. A collection is similar to a table in a relational database (RDBMS), except that it does not require any schema to be defined.
    Schema-free means that for documents stored in a MongoDB database, we do not need to know anything about their structure in advance. If needed, documents with completely different structures can be stored in the same collection.
    Documents in a collection are stored as sets of key-value pairs. Keys are strings (the `_id` key uniquely identifies a document), while values can be various complex types. This storage format is called BSON (Binary JSON).

Environment: CentOS 7.5
Server addresses:
192.168.5.61, 192.168.5.62, 192.168.5.63
Start building the cluster. On 192.168.5.61:
First, create two directories:

[root@localhost ~]# mkdir /opt/software
[root@localhost ~]# mkdir /opt/apps

Copy the installation package to /opt/software, change into that directory, and extract the archive:

[root@localhost ~]# cp mongodb-linux-x86_64-4.0.2.tgz /opt/software/
[root@localhost ~]# cd /opt/software/
[root@localhost software]# tar zxf mongodb-linux-x86_64-4.0.2.tgz -C /opt/apps/

Change into /opt/apps and create the symlink:

[root@localhost software]# cd ../apps/
[root@localhost apps]# ln -s mongodb-linux-x86_64-4.0.2 mongodb

Create the data and log directories:

[root@localhost apps]# cd
[root@localhost ~]# mkdir -p /data/mongodb/conf
[root@localhost ~]# mkdir -p /data/mongodb/{mongos,config,shard1,shard2,shard3}/{log,data}
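The brace expansion above creates the whole directory tree in one command. A quick sketch of what it expands to, run under a scratch root instead of /data so it can be tried anywhere:

```shell
#!/bin/bash
# Demonstrate the brace expansion used above, under a scratch root
# (the real deployment uses /data/mongodb).
root=$(mktemp -d)
mkdir -p "$root"/{mongos,config,shard1,shard2,shard3}/{log,data}
# 5 components x 2 subdirectories = 10 leaf directories
find "$root" -mindepth 2 -type d | sort
```

Note that brace expansion is a bash feature; plain POSIX sh would create a single literal directory instead.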

Configure the environment variables:

[root@localhost ~]# vim /etc/profile

(The original post shows a screenshot here. Typically you append the MongoDB bin directory to PATH at the end of /etc/profile, e.g. `export PATH=$PATH:/opt/apps/mongodb/bin`.)
Apply the environment variables immediately:

[root@localhost ~]# source /etc/profile

Add the config server configuration file:

[root@localhost ~]# vim /data/mongodb/conf/config.conf
pidfilepath = /data/mongodb/config/log/mongosrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true

bind_ip = 0.0.0.0
port = 21000
fork = true

#declare this is a config db of a cluster;
configsvr = true

# replica set name
replSet=configs

# maximum number of connections
maxConns=20000
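The INI-style options above also have YAML equivalents (MongoDB has preferred YAML configuration files since version 2.6). A sketch of the same config server settings written as YAML; the file is written to a temp location here, whereas the real deployment would use /data/mongodb/conf:

```shell
#!/bin/bash
# Same config-server settings as above, in MongoDB's YAML config format.
# conf_dir is /data/mongodb/conf in the real deployment.
conf_dir=${CONF_DIR:-$(mktemp -d)}
cat > "$conf_dir/config.yaml" <<'EOF'
systemLog:
  destination: file
  path: /data/mongodb/config/log/configsrv.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/config/log/mongosrv.pid
net:
  bindIp: 0.0.0.0
  port: 21000
  maxIncomingConnections: 20000
storage:
  dbPath: /data/mongodb/config/data
replication:
  replSetName: configs
sharding:
  clusterRole: configsvr
EOF
cat "$conf_dir/config.yaml"
```

It would be started the same way: `mongod -f "$conf_dir/config.yaml"`.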

Start the config server on 192.168.5.61:

[root@localhost ~]# mongod -f /data/mongodb/conf/config.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 4707
child process started successfully, parent exiting

Start the config server on each server in turn (repeat the steps above on the other servers).
On 192.168.5.62:
First, create two directories:

[root@localhost ~]# mkdir /opt/software
[root@localhost ~]# mkdir /opt/apps

Copy the installation package to /opt/software, change into that directory, and extract the archive:

[root@localhost ~]# cp mongodb-linux-x86_64-4.0.2.tgz /opt/software/
[root@localhost ~]# cd /opt/software/
[root@localhost software]# tar zxf mongodb-linux-x86_64-4.0.2.tgz -C /opt/apps/

Change into /opt/apps and create the symlink:

[root@localhost software]# cd ../apps/
[root@localhost apps]# ln -s mongodb-linux-x86_64-4.0.2 mongodb

Create the data and log directories:

[root@localhost apps]# cd
[root@localhost ~]# mkdir -p /data/mongodb/conf
[root@localhost ~]# mkdir -p /data/mongodb/{mongos,config,shard1,shard2,shard3}/{log,data}

Configure the environment variables:

[root@localhost ~]# vim /etc/profile

(The original post shows a screenshot here. Typically you append the MongoDB bin directory to PATH at the end of /etc/profile, e.g. `export PATH=$PATH:/opt/apps/mongodb/bin`.)
Apply the environment variables immediately:

[root@localhost ~]# source /etc/profile

Add the config server configuration file:

[root@localhost ~]# vim /data/mongodb/conf/config.conf
pidfilepath = /data/mongodb/config/log/mongosrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true

bind_ip = 0.0.0.0
port = 21000
fork = true

#declare this is a config db of a cluster;
configsvr = true

# replica set name
replSet=configs

# maximum number of connections
maxConns=20000

Start the config server on 192.168.5.62:

[root@localhost ~]# mongod -f /data/mongodb/conf/config.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 4707
child process started successfully, parent exiting

Start the config server on each server in turn (repeat the steps above on the other servers).
On 192.168.5.63:
First, create two directories:

[root@localhost ~]# mkdir /opt/software
[root@localhost ~]# mkdir /opt/apps

Copy the installation package to /opt/software, change into that directory, and extract the archive:

[root@localhost ~]# cp mongodb-linux-x86_64-4.0.2.tgz /opt/software/
[root@localhost ~]# cd /opt/software/
[root@localhost software]# tar zxf mongodb-linux-x86_64-4.0.2.tgz -C /opt/apps/

Change into /opt/apps and create the symlink:

[root@localhost software]# cd ../apps/
[root@localhost apps]# ln -s mongodb-linux-x86_64-4.0.2 mongodb

Create the data and log directories:

[root@localhost apps]# cd
[root@localhost ~]# mkdir -p /data/mongodb/conf
[root@localhost ~]# mkdir -p /data/mongodb/{mongos,config,shard1,shard2,shard3}/{log,data}

Configure the environment variables:

[root@localhost ~]# vim /etc/profile

(The original post shows a screenshot here. Typically you append the MongoDB bin directory to PATH at the end of /etc/profile, e.g. `export PATH=$PATH:/opt/apps/mongodb/bin`.)
Apply the environment variables immediately:

[root@localhost ~]# source /etc/profile

Add the config server configuration file:

[root@localhost ~]# vim /data/mongodb/conf/config.conf
pidfilepath = /data/mongodb/config/log/mongosrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true

bind_ip = 0.0.0.0
port = 21000
fork = true

#declare this is a config db of a cluster;
configsvr = true

# replica set name
replSet=configs

# maximum number of connections
maxConns=20000

Start the config server on 192.168.5.63:

[root@localhost ~]# mongod -f /data/mongodb/conf/config.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 4707
child process started successfully, parent exiting

Configure the replica set and initialize it; connect on any one of the servers (I connect on 5.61):

[root@localhost ~]# mongo --port 21000
MongoDB shell version v4.0.2
connecting to: mongodb://127.0.0.1:21000/
MongoDB server version: 4.0.2
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
	http://docs.mongodb.org/
Questions? Try the support group
	http://groups.google.com/group/mongodb-user
Server has startup warnings: 
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] 
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] 
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] 
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] 
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] 
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 15078 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.
2021-06-08T09:07:53.836+0800 I CONTROL  [initandlisten]
> use admin
switched to db admin
> config={_id:"configs",members:[{_id:0,host:"192.168.5.61:21000"},{_id:1,host:"192.168.5.62:21000"},{_id:2,host:"192.168.5.63:21000"}]}
{
	"_id" : "configs",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.5.61:21000"
		},
		{
			"_id" : 1,
			"host" : "192.168.5.62:21000"
		},
		{
			"_id" : 2,
			"host" : "192.168.5.63:21000"
		}
	]
}
> rs.initiate(config)
{
	"ok" : 1,
	"operationTime" : Timestamp(1623115680, 1),
	"$gleStats" : {
		"lastOpTime" : Timestamp(1623115680, 1),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"lastCommittedOpTime" : Timestamp(0, 0),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1623115680, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
configs:SECONDARY> rs.initiate(config);
{
	"operationTime" : Timestamp(1623115715, 1),
	"ok" : 0,
	"errmsg" : "already initialized",
	"code" : 23,
	"codeName" : "AlreadyInitialized",
	"$gleStats" : {
		"lastOpTime" : Timestamp(1623115680, 1),
		"electionId" : ObjectId("7fffffff0000000000000001")
	},
	"lastCommittedOpTime" : Timestamp(1623115715, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1623115715, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
configs:PRIMARY> rs.status()
{
	"set" : "configs",
	"date" : ISODate("2021-06-08T01:29:51.279Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"configsvr" : true,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1623115785, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1623115785, 1),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1623115785, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1623115785, 1),
			"t" : NumberLong(1)
		}
	},
	"lastStableCheckpointTimestamp" : Timestamp(1623115755, 1),
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.5.61:21000",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 1319,
			"optime" : {
				"ts" : Timestamp(1623115785, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2021-06-08T01:29:45Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1623115691, 1),
			"electionDate" : ISODate("2021-06-08T01:28:11Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "192.168.5.62:21000",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 110,
			"optime" : {
				"ts" : Timestamp(1623115785, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1623115785, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2021-06-08T01:29:45Z"),
			"optimeDurableDate" : ISODate("2021-06-08T01:29:45Z"),
			"lastHeartbeat" : ISODate("2021-06-08T01:29:49.535Z"),
			"lastHeartbeatRecv" : ISODate("2021-06-08T01:29:49.318Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.5.61:21000",
			"syncSourceHost" : "192.168.5.61:21000",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		},
		{
			"_id" : 2,
			"name" : "192.168.5.63:21000",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 110,
			"optime" : {
				"ts" : Timestamp(1623115785, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1623115785, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2021-06-08T01:29:45Z"),
			"optimeDurableDate" : ISODate("2021-06-08T01:29:45Z"),
			"lastHeartbeat" : ISODate("2021-06-08T01:29:49.536Z"),
			"lastHeartbeatRecv" : ISODate("2021-06-08T01:29:49.861Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.5.61:21000",
			"syncSourceHost" : "192.168.5.61:21000",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1623115785, 1),
	"$gleStats" : {
		"lastOpTime" : Timestamp(1623115680, 1),
		"electionId" : ObjectId("7fffffff0000000000000001")
	},
	"lastCommittedOpTime" : Timestamp(1623115785, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1623115785, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

Set up the first shard replica set (shard1) on 192.168.5.61. (Note: `rs.initiate` can only succeed once the shard's mongod is running on all three servers, so create the same config files and start the processes on 192.168.5.62 and 192.168.5.63, as shown further below, before initiating.)

[root@localhost ~]# vim /data/mongodb/conf/shard1.conf
pidfilepath = /data/mongodb/shard1/log/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true

bind_ip = 0.0.0.0
port = 27001
fork = true


# replica set name
replSet=shard1

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Start mongod with shard1.conf:

[root@localhost ~]# mongod -f /data/mongodb/conf/shard1.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 6126
child process started successfully, parent exiting
[root@localhost ~]# mongo --port 27001
> use admin;
switched to db admin
> config={_id:"shard1",members:[{_id:0,host:"192.168.5.61:27001"},{_id:1,host:"192.168.5.62:27001"},{_id:2,host:"192.168.5.63:27001",arbiterOnly:true}]}
{
	"_id" : "shard1",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.5.61:27001"
		},
		{
			"_id" : 1,
			"host" : "192.168.5.62:27001"
		},
		{
			"_id" : 2,
			"host" : "192.168.5.63:27001",
			"arbiterOnly" : true
		}
	]
}
> rs.initiate(config);
{ "ok" : 1 }

Configure shard2.conf:

[root@localhost ~]# vim /data/mongodb/conf/shard2.conf
pidfilepath = /data/mongodb/shard2/log/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true

bind_ip = 0.0.0.0
port = 27002
fork = true

# replica set name
replSet=shard2

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Start mongod with shard2.conf:

[root@localhost ~]# mongod -f /data/mongodb/conf/shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 6126
child process started successfully, parent exiting
[root@localhost ~]# mongo --port 27002
> use admin;
switched to db admin
> config={_id:"shard2",members:[{_id:0,host:"192.168.5.61:27002"},{_id:1,host:"192.168.5.62:27002",arbiterOnly:true},{_id:2,host:"192.168.5.63:27002"}]}
{
	"_id" : "shard2",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.5.61:27002"
		},
		{
			"_id" : 1,
			"host" : "192.168.5.62:27002",
			"arbiterOnly" : true
		},
		{
			"_id" : 2,
			"host" : "192.168.5.63:27002"
		}
	]
}
> rs.initiate(config);
{ "ok" : 1 }

Configure shard3.conf:

[root@localhost ~]# vim /data/mongodb/conf/shard3.conf
pidfilepath = /data/mongodb/shard3/log/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true

bind_ip = 0.0.0.0
port = 27003
fork = true

# replica set name
replSet=shard3

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Start mongod with shard3.conf:

[root@localhost ~]# mongod -f /data/mongodb/conf/shard3.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 6126
child process started successfully, parent exiting
[root@localhost ~]# mongo --port 27003
> use admin;
switched to db admin
> config={_id:"shard3",members:[{_id:0,host:"192.168.5.61:27003",arbiterOnly:true},{_id:1,host:"192.168.5.62:27003"},{_id:2,host:"192.168.5.63:27003"}]}
{
	"_id" : "shard3",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.5.61:27003",
			"arbiterOnly" : true
		},
		{
			"_id" : 1,
			"host" : "192.168.5.62:27003"
		},
		{
			"_id" : 2,
			"host" : "192.168.5.63:27003"
		}
	]
}
> rs.initiate(config);
{ "ok" : 1 }
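With three nearly identical shard config files per server, it is easy to mistype a port or replica set name. A small sketch that sanity-checks the shardN.conf naming convention used in this guide (the function is hypothetical helper code; here it is exercised against generated demo files, but on the servers it would be pointed at /data/mongodb/conf):

```shell
#!/bin/bash
# Sanity-check that each shardN.conf pairs replSet=shardN with port = 2700N.
check_shard_confs() {
  for n in 1 2 3; do
    f="$1/shard$n.conf"
    if grep -q "replSet=shard$n" "$f" && grep -q "port = 2700$n" "$f"; then
      echo "shard$n.conf OK"
    else
      echo "shard$n.conf MISMATCH"
    fi
  done
}

# Demo against generated files; on a real server: check_shard_confs /data/mongodb/conf
demo=$(mktemp -d)
for n in 1 2 3; do
  printf 'replSet=shard%s\nport = 2700%s\n' "$n" "$n" > "$demo/shard$n.conf"
done
check_shard_confs "$demo"
```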

Set up the shard replica sets on 192.168.5.62 (same configuration files and startup steps as on 5.61):

[root@localhost ~]# vim /data/mongodb/conf/shard1.conf
pidfilepath = /data/mongodb/shard1/log/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true

bind_ip = 0.0.0.0
port = 27001
fork = true


# replica set name
replSet=shard1

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Start mongod with shard1.conf:

[root@localhost ~]# mongod -f /data/mongodb/conf/shard1.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 6126
child process started successfully, parent exiting
(The shard1 replica set was already initiated from 192.168.5.61; connecting here with `mongo --port 27001` and running `rs.initiate` again would only return "already initialized", so no initialization step is needed on this server.)

Configure shard2.conf:

[root@localhost ~]# vim /data/mongodb/conf/shard2.conf
pidfilepath = /data/mongodb/shard2/log/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true

bind_ip = 0.0.0.0
port = 27002
fork = true

# replica set name
replSet=shard2

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Start mongod with shard2.conf:

[root@localhost ~]# mongod -f /data/mongodb/conf/shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 6126
child process started successfully, parent exiting
(The shard2 replica set was already initiated from 192.168.5.61; connecting here with `mongo --port 27002` and running `rs.initiate` again would only return "already initialized", so no initialization step is needed on this server.)

Configure shard3.conf:

[root@localhost ~]# vim /data/mongodb/conf/shard3.conf
pidfilepath = /data/mongodb/shard3/log/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true

bind_ip = 0.0.0.0
port = 27003
fork = true

# replica set name
replSet=shard3

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Start mongod with shard3.conf:

[root@localhost ~]# mongod -f /data/mongodb/conf/shard3.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 6126
child process started successfully, parent exiting
(The shard3 replica set was already initiated from 192.168.5.61; connecting here with `mongo --port 27003` and running `rs.initiate` again would only return "already initialized", so no initialization step is needed on this server.)

Set up the shard replica sets on 192.168.5.63 (same configuration files and startup steps as on 5.61):

[root@localhost ~]# vim /data/mongodb/conf/shard1.conf
pidfilepath = /data/mongodb/shard1/log/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true

bind_ip = 0.0.0.0
port = 27001
fork = true


# replica set name
replSet=shard1

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Start mongod with shard1.conf:

[root@localhost ~]# mongod -f /data/mongodb/conf/shard1.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 6126
child process started successfully, parent exiting
(The shard1 replica set was already initiated from 192.168.5.61; connecting here with `mongo --port 27001` and running `rs.initiate` again would only return "already initialized", so no initialization step is needed on this server.)

Configure shard2.conf:

[root@localhost ~]# vim /data/mongodb/conf/shard2.conf
pidfilepath = /data/mongodb/shard2/log/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true

bind_ip = 0.0.0.0
port = 27002
fork = true

# replica set name
replSet=shard2

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Start mongod with shard2.conf:

[root@localhost ~]# mongod -f /data/mongodb/conf/shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 6126
child process started successfully, parent exiting
(The shard2 replica set was already initiated from 192.168.5.61; connecting here with `mongo --port 27002` and running `rs.initiate` again would only return "already initialized", so no initialization step is needed on this server.)

Configure shard3.conf:

[root@localhost ~]# vim /data/mongodb/conf/shard3.conf
pidfilepath = /data/mongodb/shard3/log/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true

bind_ip = 0.0.0.0
port = 27003
fork = true

# replica set name
replSet=shard3

# declare this is a shard db of a cluster
shardsvr = true

# maximum number of connections
maxConns=20000

Start mongod with shard3.conf:

[root@localhost ~]# mongod -f /data/mongodb/conf/shard3.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 6126
child process started successfully, parent exiting
(The shard3 replica set was already initiated from 192.168.5.61; connecting here with `mongo --port 27003` and running `rs.initiate` again would only return "already initialized", so no initialization step is needed on this server.)

Configure the mongos router service on all 3 servers.
Configuration on 192.168.5.61:

[root@localhost ~]# vim /data/mongodb/conf/mongos.conf 
pidfilepath = /data/mongodb/mongos/log/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true

bind_ip = 0.0.0.0
port = 20000
fork = true

# config servers to connect to; there must be exactly 1 or 3. 'configs' is the config server replica set name
configdb = configs/192.168.5.61:21000,192.168.5.62:21000,192.168.5.63:21000

# maximum number of connections
maxConns=20000
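The configdb value is just the config server replica set name, a slash, and a comma-separated host:port list. A sketch that builds it from the host list used in this guide, which keeps the string consistent if hosts are added later:

```shell
#!/bin/bash
# Build the mongos `configdb` connection string from the host list.
replset=configs
port=21000
hosts="192.168.5.61 192.168.5.62 192.168.5.63"
members=""
for h in $hosts; do
  members="$members$h:$port,"      # append host:port plus a trailing comma
done
echo "configdb = $replset/${members%,}"   # ${members%,} strips the last comma
```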

Start mongos with mongos.conf:

[root@localhost ~]# mongos -f /data/mongodb/conf/mongos.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 7568
child process started successfully, parent exiting

Check port 20000; if it is listening, mongos started successfully (for example: netstat -lntp | grep 20000).

Configuration on 192.168.5.62:

[root@localhost ~]# vim /data/mongodb/conf/mongos.conf 
pidfilepath = /data/mongodb/mongos/log/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true

bind_ip = 0.0.0.0
port = 20000
fork = true

# config servers to connect to; there must be exactly 1 or 3. 'configs' is the config server replica set name
configdb = configs/192.168.5.61:21000,192.168.5.62:21000,192.168.5.63:21000

# maximum number of connections
maxConns=20000

Start mongos with mongos.conf:

[root@localhost ~]# mongos -f /data/mongodb/conf/mongos.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 7568
child process started successfully, parent exiting

Check port 20000; if it is listening, mongos started successfully (for example: netstat -lntp | grep 20000).
Configuration on 192.168.5.63:

[root@localhost ~]# vim /data/mongodb/conf/mongos.conf 
pidfilepath = /data/mongodb/mongos/log/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true

bind_ip = 0.0.0.0
port = 20000
fork = true

# config servers to connect to; there must be exactly 1 or 3. 'configs' is the config server replica set name
configdb = configs/192.168.5.61:21000,192.168.5.62:21000,192.168.5.63:21000

# maximum number of connections
maxConns=20000

Start mongos with mongos.conf:

[root@localhost ~]# mongos -f /data/mongodb/conf/mongos.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 7568
child process started successfully, parent exiting

Check port 20000; if it is listening, mongos started successfully (for example: netstat -lntp | grep 20000).
Enable sharding
We now have the MongoDB config servers, the router servers, and the shard servers running, but an application connected to a mongos router cannot use sharding yet: the shards still have to be registered in the cluster configuration for sharding to take effect.
Log in to any mongos (I do this on 192.168.5.61):

[root@localhost ~]# mongo --port 20000
MongoDB shell version v4.0.2
connecting to: mongodb://127.0.0.1:20000/
MongoDB server version: 4.0.2
Server has startup warnings: 
2021-06-08T14:14:38.202+0800 I CONTROL  [main] 
2021-06-08T14:14:38.203+0800 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2021-06-08T14:14:38.203+0800 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2021-06-08T14:14:38.203+0800 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2021-06-08T14:14:38.203+0800 I CONTROL  [main] 
mongos> use admin;
mongos> sh.addShard("shard1/192.168.5.61:27001,192.168.5.62:27001,192.168.5.63:27001")
mongos> sh.addShard("shard2/192.168.5.61:27002,192.168.5.62:27002,192.168.5.63:27002")
mongos> sh.addShard("shard3/192.168.5.61:27003,192.168.5.62:27003,192.168.5.63:27003")
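The three sh.addShard() calls all follow one pattern: shard name, a slash, then the member host:port list. A sketch that generates them from the hosts and port convention used in this guide, which keeps things consistent if more shards are added later:

```shell
#!/bin/bash
# Generate the sh.addShard() commands for shards 1..3.
hosts="192.168.5.61 192.168.5.62 192.168.5.63"
for n in 1 2 3; do
  members=""
  for h in $hosts; do
    members="$members$h:2700$n,"   # each shardN listens on port 2700N
  done
  echo "sh.addShard(\"shard$n/${members%,}\")"
done
```

The output could then be piped into a mongos shell, e.g. `... | mongo --port 20000`.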

Check the cluster status:

mongos> sh.status();
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("60bec7ae3d7b8f5ac1fdb68c")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.5.61:27001,192.168.5.62:27001,192.168.5.63:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.5.61:27002,192.168.5.62:27002,192.168.5.63:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.5.61:27003,192.168.5.62:27003,192.168.5.63:27003",  "state" : 1 }
  active mongoses:
        "4.0.2" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "testdb",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("1f37fbc0-542e-43f0-a6f0-c05b17747009"),  "lastMod" : 1 } }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2	1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) 

Testing
The config service, router service, shard services, and replica sets are now all wired together, but the goal is that inserted data gets sharded automatically. While connected to mongos, we enable sharding for a specific database and collection.
In the mongo shell session, run the following command.
# Enable sharding for testdb (still operating on the 5.61 machine):

mongos> db.runCommand({enablesharding:"testdb"});

Specify the collection in the database that needs sharding, and its shard key:

mongos> db.runCommand({shardcollection:"testdb.table1",key:{id:1}});

This configures testdb's table1 collection to be sharded, automatically distributing documents across shard1, shard2, and shard3 by the id field. This explicit setup is required because not all MongoDB databases and collections need to be sharded!
Test the sharding result (still on the 5.61 machine):

[root@localhost ~]# mongo 127.0.0.1:20000
MongoDB shell version v4.0.2
connecting to: mongodb://127.0.0.1:20000/test
MongoDB server version: 4.0.2
Server has startup warnings: 
2021-06-08T11:35:45.272+0800 I CONTROL  [main] 
2021-06-08T11:35:45.272+0800 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2021-06-08T11:35:45.272+0800 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2021-06-08T11:35:45.272+0800 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2021-06-08T11:35:45.272+0800 I CONTROL  [main] 
mongos> use testdb;
switched to db testdb
mongos> for (var i = 1;i <= 10000;i++) db.table1.save({id:i,"test1":"testval1"});
WriteResult({ "nInserted" : 1 })
mongos> db.table1.stats();
{
	"sharded" : true,
	"capped" : false,
	"wiredTiger" : {
		"metadata" : {
			"formatVersion" : 1
		},
	"shards" : {
		"shard2" : {
			"ns" : "testdb.table1",
			"size" : 6480000,
			"count" : 120000,
			"avgObjSize" : 54,
			"storageSize" : 1191936,
			"capped" : false,
			"wiredTiger" : {
				"metadata" : {
					"formatVersion" : 1
				},

Important note!
Startup and shutdown
The MongoDB startup order is: start the config servers first, then the shards, and finally mongos.

mongod -f /data/mongodb/conf/config.conf
mongod -f /data/mongodb/conf/shard1.conf
mongod -f /data/mongodb/conf/shard2.conf
mongod -f /data/mongodb/conf/shard3.conf
mongos -f /data/mongodb/conf/mongos.conf
To shut down, just kill all the processes with killall:
killall mongod
killall mongos
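The ordering rule above can be captured in a tiny helper script. A sketch using the paths from this guide; the `run`/`DRY_RUN` wrapper is an addition of mine so the ordering can be checked without actually launching the daemons:

```shell
#!/bin/bash
# Start components in dependency order: config server, shards, then mongos.
conf_dir=/data/mongodb/conf
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

start_all() {
  for c in config shard1 shard2 shard3; do
    run mongod -f "$conf_dir/$c.conf"
  done
  run mongos -f "$conf_dir/mongos.conf"
}

stop_all() {
  # Stop in reverse: the router first, then the mongod processes.
  run killall mongos
  run killall mongod
}

# Dry run just prints the commands in order; drop DRY_RUN to execute them.
DRY_RUN=1 start_all
```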
