Building a MongoDB Sharded Cluster with Replica Sets, Adding Security Authentication, and Integrating Spring Boot with MongoDB (long post, proceed with patience)

Requirements

(1) Build a sharded cluster as shown in the diagram; the replica set of every shard must contain one arbiter node.

(2) Enable access control: for the database mamba, create an account named rwUser with password rwUser that has read/write access to that database.

(3) Use Spring Boot to access the sharded cluster and insert data into the nba_star collection of the mamba database.

I. Basic MongoDB Environment Setup

Special notes:

1. Many nodes are involved. Test each small step before moving on; for example, when adding the replica set configuration, configure one node first, make sure it starts cleanly, then copy and modify the file for the others.

2. Before enabling authentication, it is recommended to verify the connection from the Java program first, and only then configure authentication.

1. Download the MongoDB package and upload it to CentOS

Omitted.

2. Extract the package and rename the directory (this assignment uses the directory homework_mongodb_shard_auth)

[root@localhost Downloads]# tar -xvf mongodb-linux-x86_64-amazon-4.2.8.tgz
​
[root@localhost Downloads]# ll
drwxr-xr-x.  3 root root         4096 Jun 21 16:47 mongodb-linux-x86_64-amazon-4.2.8
-rw-r--r--.  1 root root    132750301 Jun 20 18:52 mongodb-linux-x86_64-amazon-4.2.8.tgz
​
[root@localhost Downloads]# mv mongodb-linux-x86_64-amazon-4.2.8 homework_mongodb_shard_auth
[root@localhost Downloads]# ll
drwxr-xr-x.  3 root root         4096 Jun 21 16:47 homework_mongodb_shard_auth
-rw-r--r--.  1 root root    132750301 Jun 20 18:52 mongodb-linux-x86_64-amazon-4.2.8.tgz

3. Upgrade OpenSSL (required by the ./bin/mongod command)

[root@localhost Downloads]# yum -y install openssl

4. Create separate directories for the config server cluster (3 nodes), the shard clusters (4×4 nodes), and the router (1 node) to keep their config files, data, and logs apart

[root@localhost Downloads]# cd homework_mongodb_shard_auth/
​
[root@localhost homework_mongodb_shard_auth]# mkdir config_cluster/config_dbpath1 config_cluster/config_dbpath2 config_cluster/config_dbpath3 config_cluster/logs -p
​
[root@localhost homework_mongodb_shard_auth]# mkdir shard_cluster/shard1/shard_dbpath1 shard_cluster/shard1/shard_dbpath2 shard_cluster/shard1/shard_dbpath3 shard_cluster/shard1/shard_dbpath4 shard_cluster/shard1/logs shard_cluster/shard2/shard_dbpath1 shard_cluster/shard2/shard_dbpath2 shard_cluster/shard2/shard_dbpath3 shard_cluster/shard2/shard_dbpath4 shard_cluster/shard2/logs shard_cluster/shard3/shard_dbpath1 shard_cluster/shard3/shard_dbpath2 shard_cluster/shard3/shard_dbpath3 shard_cluster/shard3/shard_dbpath4 shard_cluster/shard3/logs shard_cluster/shard4/shard_dbpath1 shard_cluster/shard4/shard_dbpath2 shard_cluster/shard4/shard_dbpath3 shard_cluster/shard4/shard_dbpath4 shard_cluster/shard4/logs -p
​
[root@localhost homework_mongodb_shard_auth]# mkdir route route/logs -p
​
[root@localhost homework_mongodb_shard_auth]# ll
total 316
drwxr-xr-x. 2 root root   4096 Jun 21 16:47 bin
drwxr-xr-x. 6 root root     80 Jun 21 17:16 config_cluster
-rw-rw-r--. 1  500  500  30608 Jun 11 09:33 LICENSE-Community.txt
-rw-rw-r--. 1  500  500  16726 Jun 11 09:33 MPL-2
-rw-rw-r--. 1  500  500   2617 Jun 11 09:33 README
drwxr-xr-x. 2 root root      6 Jun 21 17:16 route
drwxr-xr-x. 7 root root     97 Jun 21 17:16 shard_cluster
-rw-rw-r--. 1  500  500  75405 Jun 11 09:33 THIRD-PARTY-NOTICES
-rw-rw-r--. 1  500  500 183512 Jun 11 09:35 THIRD-PARTY-NOTICES.gotools

 

II. Config Server Replica Set Setup

1. Config server replica set overview

None of the 3 nodes in the config server replica set may be an arbiter, otherwise MongoDB reports an error (Arbiters are not allowed in replica set configurations being used for config servers).

Config server replica set node    Role             Node directory
192.168.127.128:17011             config server    /homework_mongodb_shard_auth/config_cluster/config_dbpath1
192.168.127.128:17013             config server    /homework_mongodb_shard_auth/config_cluster/config_dbpath2
192.168.127.128:17015             config server    /homework_mongodb_shard_auth/config_cluster/config_dbpath3

2. Create a config file for each node in the corresponding directory (in vi, press i to enter insert mode and type the content, then press Esc, type :wq and press Enter to save)

[root@localhost config_cluster]# pwd
/home/fanxuebo/Downloads/homework_mongodb_shard_auth/config_cluster
[root@localhost config_cluster]# vi config1_17011.cfg
# dbpath for the three config nodes: config_cluster/config_dbpath1, config_cluster/config_dbpath2, config_cluster/config_dbpath3
dbpath=config_cluster/config_dbpath1
# logpath for the three config nodes: config_cluster/logs/config1_17011.log, config_cluster/logs/config2_17013.log, config_cluster/logs/config3_17015.log
logpath=config_cluster/logs/config1_17011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# port for the three config nodes: 17011, 17013, 17015
port=17011
configsvr=true
replSet=configsvr

3. Copy the config1_17011.cfg you just created and adjust the settings in the copies (see the comments in the config file above; a sed-based alternative is sketched after the listing)

[root@localhost config_cluster]# cp config1_17011.cfg config2_17013.cfg
[root@localhost config_cluster]# cp config1_17011.cfg config3_17015.cfg
​
[root@localhost config_cluster]# ll
total 12
-rw-r--r--. 1 root root 168 Jun 21 17:21 config1_17011.cfg
-rw-r--r--. 1 root root 168 Jun 21 17:30 config2_17013.cfg
-rw-r--r--. 1 root root 168 Jun 21 17:30 config3_17015.cfg
drwxr-xr-x. 2 root root   6 Jun 21 17:16 config_dbpath1
drwxr-xr-x. 2 root root   6 Jun 21 17:16 config_dbpath2
drwxr-xr-x. 2 root root   6 Jun 21 17:16 config_dbpath3
drwxr-xr-x. 2 root root   6 Jun 21 17:16 logs
​
[root@localhost config_cluster]# vim config2_17013.cfg
[root@localhost config_cluster]# vim config3_17015.cfg
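Instead of editing each copy line by line in vim, the differing values can also be adjusted with sed (a sketch, assuming the file names and contents shown above):

[root@localhost config_cluster]# sed -i 's/config_dbpath1/config_dbpath2/; s/config1_17011.log/config2_17013.log/; s/port=17011/port=17013/' config2_17013.cfg
[root@localhost config_cluster]# sed -i 's/config_dbpath1/config_dbpath3/; s/config1_17011.log/config3_17015.log/; s/port=17011/port=17015/' config3_17015.cfg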

4. Return to the MongoDB directory and start each node with its config file

[root@localhost config_cluster]# cd ..
​
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f config_cluster/config1_17011.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 5165
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f config_cluster/config2_17013.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 5364
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f config_cluster/config3_17015.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 5421
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]#

5. Connect to any of the nodes

After initialization, the prompt first shows configsvr:SECONDARY>; press Enter again after a moment and the node will have been promoted to primary, showing configsvr:PRIMARY>.

[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 17011
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:17011/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f8860698-2e46-45f1-8789-aa0cbfb98f76") }
MongoDB server version: 4.2.8
Server has startup warnings:
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-06-21T17:37:49.579-0700 I  CONTROL  [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
​
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
​
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
​
>

6. Define the replica set configuration ("_id":"configsvr" must match replSet=configsvr in the config files), initialize it with rs.initiate(cfg), then verify the member states with rs.status(); here there is one primary and two secondaries, since config server replica sets cannot contain an arbiter.

> var cfg = {
...     "_id":"configsvr",
...     "protocolVersion":1,
...     "members":[
...         {
...             "_id":1,
...             "host":"192.168.127.128:17011",
...             "priority":10
...         },
...         {
...             "_id":2,
...             "host":"192.168.127.128:17013"
...         },
...         {
...             "_id":3,
...             "host":"192.168.127.128:17015"
...         }]
... }
> rs.initiate(cfg)
{
        "ok" : 1,
        "$gleStats" : {
                "lastOpTime" : Timestamp(1592787125, 1),
                "electionId" : ObjectId("000000000000000000000000")
        },
        "lastCommittedOpTime" : Timestamp(0, 0),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592787125, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1592787125, 1)
}
configsvr:SECONDARY>
configsvr:PRIMARY> rs.status()
{
        "set" : "configsvr",
        "date" : ISODate("2020-06-22T08:09:34.501Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "configsvr" : true,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "majorityVoteCount" : 2,
        "writeMajorityCount" : 2,
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1592813363, 1),
                        "t" : NumberLong(1)
                },
                "lastCommittedWallTime" : ISODate("2020-06-22T08:09:23.300Z"),
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1592813363, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityWallTime" : ISODate("2020-06-22T08:09:23.300Z"),
                "appliedOpTime" : {
                        "ts" : Timestamp(1592813363, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1592813363, 1),
                        "t" : NumberLong(1)
                },
                "lastAppliedWallTime" : ISODate("2020-06-22T08:09:23.300Z"),
                "lastDurableWallTime" : ISODate("2020-06-22T08:09:23.300Z")
        },
        "lastStableRecoveryTimestamp" : Timestamp(1592813319, 1),
        "lastStableCheckpointTimestamp" : Timestamp(1592813319, 1),
        "electionCandidateMetrics" : {
                "lastElectionReason" : "electionTimeout",
                "lastElectionDate" : ISODate("2020-06-22T06:44:37.923Z"),
                "electionTerm" : NumberLong(1),
                "lastCommittedOpTimeAtElection" : {
                        "ts" : Timestamp(0, 0),
                        "t" : NumberLong(-1)
                },
                "lastSeenOpTimeAtElection" : {
                        "ts" : Timestamp(1592808266, 1),
                        "t" : NumberLong(-1)
                },
                "numVotesNeeded" : 2,
                "priorityAtElection" : 10,
                "electionTimeoutMillis" : NumberLong(10000),
                "numCatchUpOps" : NumberLong(0),
                "newTermStartDate" : ISODate("2020-06-22T06:44:37.936Z"),
                "wMajorityWriteAvailabilityDate" : ISODate("2020-06-22T06:44:39.310Z")
        },
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.127.128:17011",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 5175,
                        "optime" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T08:09:23Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1592808277, 1),
                        "electionDate" : ISODate("2020-06-22T06:44:37Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 2,
                        "name" : "192.168.127.128:17013",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 5107,
                        "optime" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T08:09:23Z"),
                        "optimeDurableDate" : ISODate("2020-06-22T08:09:23Z"),
                        "lastHeartbeat" : ISODate("2020-06-22T08:09:33.271Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T08:09:32.844Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.127.128:17011",
                        "syncSourceHost" : "192.168.127.128:17011",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 3,
                        "name" : "192.168.127.128:17015",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 5107,
                        "optime" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1592813363, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T08:09:23Z"),
                        "optimeDurableDate" : ISODate("2020-06-22T08:09:23Z"),
                        "lastHeartbeat" : ISODate("2020-06-22T08:09:33.271Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T08:09:33.251Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.127.128:17011",
                        "syncSourceHost" : "192.168.127.128:17011",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "$gleStats" : {
                "lastOpTime" : Timestamp(1592808266, 1),
                "electionId" : ObjectId("7fffffff0000000000000001")
        },
        "lastCommittedOpTime" : Timestamp(1592813363, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592813363, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1592813363, 1)
}
configsvr:PRIMARY>

 

III. Shard Replica Set Setup

1. Shard replica set overview

Shard replica set 1 node    Role          Node directory
192.168.127.128:37011       shard node    /homework_mongodb_shard_auth/shard_cluster/shard1/shard_dbpath1
192.168.127.128:37013       shard node    /homework_mongodb_shard_auth/shard_cluster/shard1/shard_dbpath2
192.168.127.128:37015       shard node    /homework_mongodb_shard_auth/shard_cluster/shard1/shard_dbpath3
192.168.127.128:37017       arbiter       /homework_mongodb_shard_auth/shard_cluster/shard1/shard_dbpath4

Shard replica set 2 node    Role          Node directory
192.168.127.128:47011       shard node    /homework_mongodb_shard_auth/shard_cluster/shard2/shard_dbpath1
192.168.127.128:47013       shard node    /homework_mongodb_shard_auth/shard_cluster/shard2/shard_dbpath2
192.168.127.128:47015       shard node    /homework_mongodb_shard_auth/shard_cluster/shard2/shard_dbpath3
192.168.127.128:47017       arbiter       /homework_mongodb_shard_auth/shard_cluster/shard2/shard_dbpath4

Shard replica set 3 node    Role          Node directory
192.168.127.128:57011       shard node    /homework_mongodb_shard_auth/shard_cluster/shard3/shard_dbpath1
192.168.127.128:57013       shard node    /homework_mongodb_shard_auth/shard_cluster/shard3/shard_dbpath2
192.168.127.128:57015       shard node    /homework_mongodb_shard_auth/shard_cluster/shard3/shard_dbpath3
192.168.127.128:57017       arbiter       /homework_mongodb_shard_auth/shard_cluster/shard3/shard_dbpath4

Shard replica set 4 node    Role          Node directory
192.168.127.128:58011       shard node    /homework_mongodb_shard_auth/shard_cluster/shard4/shard_dbpath1
192.168.127.128:58013       shard node    /homework_mongodb_shard_auth/shard_cluster/shard4/shard_dbpath2
192.168.127.128:58015       shard node    /homework_mongodb_shard_auth/shard_cluster/shard4/shard_dbpath3
192.168.127.128:58017       arbiter       /homework_mongodb_shard_auth/shard_cluster/shard4/shard_dbpath4

2. Create a config file for each node in the corresponding directory (using shard replica set 1 as the example)

[root@localhost shard1]# pwd
/home/fanxuebo/Downloads/homework_mongodb_shard_auth/shard_cluster/shard1
[root@localhost shard1]# vi shard1_37011.cfg
# dbpath for the four shard nodes: shard_cluster/shard1/shard_dbpath1, shard_cluster/shard1/shard_dbpath2, shard_cluster/shard1/shard_dbpath3, shard_cluster/shard1/shard_dbpath4
dbpath=shard_cluster/shard1/shard_dbpath1
# logpath for the four shard nodes: shard_cluster/shard1/logs/shard1_37011.log, shard_cluster/shard1/logs/shard2_37013.log, shard_cluster/shard1/logs/shard3_37015.log, shard_cluster/shard1/logs/shard4_37017.log
logpath=shard_cluster/shard1/logs/shard1_37011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# port for the four shard nodes: 37011, 37013, 37015, 37017
port=37011
shardsvr=true
replSet=shard1svr

3. Copy the shard1_37011.cfg you just created and adjust the settings in the copies (see the comments in the config file above)

[root@localhost shard1]# cp shard1_37011.cfg shard2_37013.cfg
[root@localhost shard1]# cp shard1_37011.cfg shard3_37015.cfg
[root@localhost shard1]# cp shard1_37011.cfg shard4_37017.cfg
​
[root@localhost shard1]# ll
total 16
drwxr-xr-x. 2 root root   6 Jun 21 19:08 logs
-rw-r--r--. 1 root root 547 Jun 21 19:30 shard1_37011.cfg
-rw-r--r--. 1 root root 547 Jun 21 19:35 shard2_37013.cfg
-rw-r--r--. 1 root root 547 Jun 21 19:35 shard3_37015.cfg
-rw-r--r--. 1 root root 547 Jun 21 19:35 shard4_37017.cfg
drwxr-xr-x. 2 root root   6 Jun 21 19:08 shard_dbpath1
drwxr-xr-x. 2 root root   6 Jun 21 19:08 shard_dbpath2
drwxr-xr-x. 2 root root   6 Jun 21 19:08 shard_dbpath3
drwxr-xr-x. 2 root root   6 Jun 21 19:08 shard_dbpath4
​
[root@localhost shard1]# vim shard2_37013.cfg
[root@localhost shard1]# vim shard3_37015.cfg
[root@localhost shard1]# vim shard4_37017.cfg

4. Return to the MongoDB directory and start each node with its config file

[root@localhost shard1]# cd ../..
​
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f shard_cluster/shard1/shard1_37011.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 47212
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f shard_cluster/shard1/shard2_37013.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 47266
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f shard_cluster/shard1/shard3_37015.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 47333
child process started successfully, parent exiting
[root@localhost homework_mongodb_shard_auth]# ./bin/mongod -f shard_cluster/shard1/shard4_37017.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 47398
child process started successfully, parent exiting

5. Connect to any of the nodes

[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 37011
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:37011/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("78606909-f485-4e45-ae95-1a80afa4cc72") }
MongoDB server version: 4.2.8
Server has startup warnings:
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-06-22T00:16:40.255-0700 I  CONTROL  [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
​
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
​
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
​
>

6. Define the replica set configuration ("_id":"shard1svr" must match replSet=shard1svr in the config files), initialize it with rs.initiate(cfg), then verify the member states with rs.status(); one of the members is an arbiter.

After initialization, the prompt first shows shard1svr:SECONDARY>; press Enter again after a moment and the node becomes the primary, showing shard1svr:PRIMARY>.

​
>var cfg = {
...     "_id":"shard1svr",
...     "protocolVersion":1,
...     "members":[
...         {
...             "_id":1,
...             "host":"192.168.127.128:37011",
...             "priority":10
...         },
...         {
...             "_id":2,
...             "host":"192.168.127.128:37013"
...         },
...         {
...             "_id":3,
...             "host":"192.168.127.128:37015"
...         },
... {
...             "_id":4,
... "arbiterOnly":true,
...             "host":"192.168.127.128:37017"
...         }]
... }
> rs.initiate(cfg)
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592810509, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1592810509, 1)
}
shard1svr:SECONDARY>
shard1svr:PRIMARY> rs.status()
{
        "set" : "shard1svr",
        "date" : ISODate("2020-06-22T07:22:05.401Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "majorityVoteCount" : 3,
        "writeMajorityCount" : 3,
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1592810520, 3),
                        "t" : NumberLong(1)
                },
                "lastCommittedWallTime" : ISODate("2020-06-22T07:22:00.476Z"),
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1592810520, 3),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityWallTime" : ISODate("2020-06-22T07:22:00.476Z"),
                "appliedOpTime" : {
                        "ts" : Timestamp(1592810520, 3),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1592810520, 3),
                        "t" : NumberLong(1)
                },
                "lastAppliedWallTime" : ISODate("2020-06-22T07:22:00.476Z"),
                "lastDurableWallTime" : ISODate("2020-06-22T07:22:00.476Z")
        },
        "lastStableRecoveryTimestamp" : Timestamp(1592810520, 3),
        "lastStableCheckpointTimestamp" : Timestamp(1592810520, 3),
        "electionCandidateMetrics" : {
                "lastElectionReason" : "electionTimeout",
                "lastElectionDate" : ISODate("2020-06-22T07:22:00.454Z"),
                "electionTerm" : NumberLong(1),
                "lastCommittedOpTimeAtElection" : {
                        "ts" : Timestamp(0, 0),
                        "t" : NumberLong(-1)
                },
                "lastSeenOpTimeAtElection" : {
                        "ts" : Timestamp(1592810509, 1),
                        "t" : NumberLong(-1)
                },
                "numVotesNeeded" : 3,
                "priorityAtElection" : 10,
                "electionTimeoutMillis" : NumberLong(10000),
                "numCatchUpOps" : NumberLong(0),
                "newTermStartDate" : ISODate("2020-06-22T07:22:00.476Z"),
                "wMajorityWriteAvailabilityDate" : ISODate("2020-06-22T07:22:01.495Z")
        },
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.127.128:37011",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 326,
                        "optime" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T07:22:00Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1592810520, 1),
                        "electionDate" : ISODate("2020-06-22T07:22:00Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 2,
                        "name" : "192.168.127.128:37013",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 15,
                        "optime" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T07:22:00Z"),
                        "optimeDurableDate" : ISODate("2020-06-22T07:22:00Z"),
                        "lastHeartbeat" : ISODate("2020-06-22T07:22:04.464Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T07:22:03.562Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.127.128:37011",
                        "syncSourceHost" : "192.168.127.128:37011",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 3,
                        "name" : "192.168.127.128:37015",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 15,
                        "optime" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1592810520, 3),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-06-22T07:22:00Z"),
                        "optimeDurableDate" : ISODate("2020-06-22T07:22:00Z"),
                        "lastHeartbeat" : ISODate("2020-06-22T07:22:04.464Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T07:22:03.561Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "192.168.127.128:37011",
                        "syncSourceHost" : "192.168.127.128:37011",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 4,
                        "name" : "192.168.127.128:37017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 15,
                        "lastHeartbeat" : ISODate("2020-06-22T07:22:04.464Z"),
                        "lastHeartbeatRecv" : ISODate("2020-06-22T07:22:04.297Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592810520, 3),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1592810520, 3)
}
shard1svr:PRIMARY>

7. Shard replica sets 2, 3 and 4 are configured the same way; their config files follow, and an initiation sketch comes after them

# dbpath for the four shard nodes: shard_cluster/shard2/shard_dbpath1, shard_cluster/shard2/shard_dbpath2, shard_cluster/shard2/shard_dbpath3, shard_cluster/shard2/shard_dbpath4
dbpath=shard_cluster/shard2/shard_dbpath1
# logpath for the four shard nodes: shard_cluster/shard2/logs/shard1_47011.log, shard_cluster/shard2/logs/shard2_47013.log, shard_cluster/shard2/logs/shard3_47015.log, shard_cluster/shard2/logs/shard4_47017.log
logpath=shard_cluster/shard2/logs/shard1_47011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# port for the four shard nodes: 47011, 47013, 47015, 47017
port=47011
shardsvr=true
replSet=shard2svr

# dbpath for the four shard nodes: shard_cluster/shard3/shard_dbpath1, shard_cluster/shard3/shard_dbpath2, shard_cluster/shard3/shard_dbpath3, shard_cluster/shard3/shard_dbpath4
dbpath=shard_cluster/shard3/shard_dbpath1
# logpath for the four shard nodes: shard_cluster/shard3/logs/shard1_57011.log, shard_cluster/shard3/logs/shard2_57013.log, shard_cluster/shard3/logs/shard3_57015.log, shard_cluster/shard3/logs/shard4_57017.log
logpath=shard_cluster/shard3/logs/shard1_57011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# port for the four shard nodes: 57011, 57013, 57015, 57017
port=57011
shardsvr=true
replSet=shard3svr

# dbpath for the four shard nodes: shard_cluster/shard4/shard_dbpath1, shard_cluster/shard4/shard_dbpath2, shard_cluster/shard4/shard_dbpath3, shard_cluster/shard4/shard_dbpath4
dbpath=shard_cluster/shard4/shard_dbpath1
# logpath for the four shard nodes: shard_cluster/shard4/logs/shard1_58011.log, shard_cluster/shard4/logs/shard2_58013.log, shard_cluster/shard4/logs/shard3_58015.log, shard_cluster/shard4/logs/shard4_58017.log
logpath=shard_cluster/shard4/logs/shard1_58011.log
logappend=true
fork=true
bind_ip=0.0.0.0
# port for the four shard nodes: 58011, 58013, 58015, 58017
port=58011
shardsvr=true
replSet=shard4svr
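For reference, once the four shard 2 mongod processes have been started with their config files, the shard2svr replica set is initiated exactly like shard 1; a minimal sketch (shards 3 and 4 are analogous, with ports 57011-57017 / 58011-58017 and replica set names shard3svr / shard4svr):

[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 47011
> var cfg = {
...     "_id":"shard2svr",
...     "protocolVersion":1,
...     "members":[
...         { "_id":1, "host":"192.168.127.128:47011", "priority":10 },
...         { "_id":2, "host":"192.168.127.128:47013" },
...         { "_id":3, "host":"192.168.127.128:47015" },
...         { "_id":4, "host":"192.168.127.128:47017", "arbiterOnly":true }
...     ]
... }
> rs.initiate(cfg)
> rs.status()   // expect one PRIMARY, two SECONDARYs and one ARBITER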

 

IV. Router Node Setup

1. Router node overview

Router node               Role           Node directory
192.168.127.128:27017     router node    /homework_mongodb_shard_auth/route

2. Create the config file in the route directory

[root@localhost route]# pwd
/home/fanxuebo/Downloads/homework_mongodb_shard_auth/route
[root@localhost route]# vi route_27017.cfg
port=27017
bind_ip=0.0.0.0
fork=true
logpath=route/logs/route.log
configdb=configsvr/192.168.127.128:17011,192.168.127.128:17013,192.168.127.128:17015

3. Start the router node (note: this uses the ./bin/mongos command), connect to it, and check the status

[root@localhost route]# cd ..
[root@localhost homework_mongodb_shard_auth]# ./bin/mongos -f route/route_27017.cfg
about to fork child process, waiting until server is ready for connections.
forked process: 55471
child process started successfully, parent exiting
​
[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 27017
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("741de845-787f-4b02-9b2d-403cae0a6f39") }
MongoDB server version: 4.2.8
Server has startup warnings:
2020-06-22T01:17:05.023-0700 I  CONTROL  [main]
2020-06-22T01:17:05.023-0700 I  CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2020-06-22T01:17:05.023-0700 I  CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2020-06-22T01:17:05.023-0700 I  CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-22T01:17:05.023-0700 I  CONTROL  [main]
mongos>
​
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5ef05356794ed4730f838170")
  }
  shards:
  active mongoses:
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
​
mongos>

4. Add the shards by running the following commands one by one

mongos> use admin
switched to db admin
mongos> sh.addShard("shard1svr/192.168.127.128:37011,192.168.127.128:37013,192.168.127.128:37015,192.168.127.128:37017")
{
        "shardAdded" : "shard1svr",
        "ok" : 1,
        "operationTime" : Timestamp(1592817739, 6),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817739, 6),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.addShard("shard2svr/192.168.127.128:47011,192.168.127.128:47013,192.168.127.128:47015,192.168.127.128:47017")
{
        "shardAdded" : "shard2svr",
        "ok" : 1,
        "operationTime" : Timestamp(1592817772, 4),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817772, 4),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.addShard("shard3svr/192.168.127.128:57011,192.168.127.128:57013,192.168.127.128:57015,192.168.127.128:57017")
{
        "shardAdded" : "shard3svr",
        "ok" : 1,
        "operationTime" : Timestamp(1592817779, 6),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817779, 6),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.addShard("shard4svr/192.168.127.128:58011,192.168.127.128:58013,192.168.127.128:58015,192.168.127.128:58017")
{
        "shardAdded" : "shard4svr",
        "ok" : 1,
        "operationTime" : Timestamp(1592817786, 7),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817786, 7),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos>
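To double-check that all four shards were registered, the listShards command can be run against the mongos (a minimal sketch):

mongos> db.adminCommand({ listShards: 1 })   // should list shard1svr, shard2svr, shard3svr and shard4svr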

5. Enable sharding for the database and the collection

mongos> sh.enableSharding("mamba")
{
        "ok" : 1,
        "operationTime" : Timestamp(1592817890, 19),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592817890, 19),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.shardCollection("mamba.nba_star",{"name":"hashed"})
{
        "collectionsharded" : "mamba.nba_star",
        "collectionUUID" : UUID("e751fa47-8a76-4b69-9dcb-7e938b90ce42"),
        "ok" : 1,
        "operationTime" : Timestamp(1592902476, 42),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592902476, 42),
                "signature" : {
                        "hash" : BinData(0,"+7+d0Ju/k8flMhGOmApOOkdhSo8="),
                        "keyId" : NumberLong("6841059462808076305")
                }
        }
}
mongos>

V. Security Authentication Configuration

1. Connect to the router and create the admin user and the regular users (switch to the target database first; a user is created in the database it belongs to)

[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 27017
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b0b0acb4-32fe-4ec5-a6c3-ee3fe5e082bb") }
MongoDB server version: 4.2.8
Server has startup warnings:
2020-06-22T02:51:30.598-0700 I  CONTROL  [main]
2020-06-22T02:51:30.598-0700 I  CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2020-06-22T02:51:30.598-0700 I  CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2020-06-22T02:51:30.598-0700 I  CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-22T02:51:30.598-0700 I  CONTROL  [main]
mongos>
mongos> use admin
mongos> db.createUser({"user":"root","pwd":"root",roles:[{"role":"root","db":"admin"}]})
Successfully added user: {
        "user" : "root",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}
mongos> use mamba
mongos> db.createUser({"user":"rwUser","pwd":"rwUser",roles:[{"role":"readWrite","db":"mamba"}]})
Successfully added user: {
        "user" : "rwUser",
        "roles" : [
                {
                        "role" : "readWrite",
                        "db" : "mamba"
                }
        ]
}
mongos> db.createUser({"user":"rUser","pwd":"rUser",roles:[{"role":"read","db":"mamba"}]})
Successfully added user: {
        "user" : "rUser",
        "roles" : [
                {
                        "role" : "read",
                        "db" : "mamba"
                }
        ]
}
mongos> 

2. Shut down all config nodes, shard nodes, and the router node

See Part VI.

3. Generate the key file and change its permissions

[root@localhost homework_mongodb_shard_auth]# mkdir data/mongodb/keyFile -p
[root@localhost homework_mongodb_shard_auth]# openssl rand -base64 756 > data/mongodb/keyFile/testKeyFile.file
[root@localhost homework_mongodb_shard_auth]# chmod 600 data/mongodb/keyFile/testKeyFile.file
[root@localhost homework_mongodb_shard_auth]#

4. In the config files of the config server replica set (3 nodes) and the shard replica sets (4×4 nodes), append the settings that enable authentication and specify the key file

auth=true
keyFile=data/mongodb/keyFile/testKeyFile.file

5. In the router's config file, append only the key file setting

keyFile=data/mongodb/keyFile/testKeyFile.file
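Rather than editing every file by hand, the lines can be appended with a small shell loop (a sketch, assuming the directory layout used above; note that the router only gets keyFile, not auth=true):

[root@localhost homework_mongodb_shard_auth]# for f in config_cluster/*.cfg shard_cluster/shard*/*.cfg; do
>     echo "auth=true" >> "$f"
>     echo "keyFile=data/mongodb/keyFile/testKeyFile.file" >> "$f"
> done
[root@localhost homework_mongodb_shard_auth]# echo "keyFile=data/mongodb/keyFile/testKeyFile.file" >> route/route_27017.cfg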

6. Start all config nodes, shard nodes, and the router node

See Part VI.

7. Authentication test

See the video demo of the assignment.
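For reference, a minimal check from the mongo shell against the mongos could look like this (a sketch; output abbreviated):

[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 27017
mongos> use mamba
switched to db mamba
mongos> db.nba_star.find()               // rejected with an authentication error before db.auth()
mongos> db.auth("rwUser","rwUser")
1
mongos> db.nba_star.find()               // now allowed (returns nothing while the collection is still empty)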

VI. Shell Scripts to Start/Stop All Services in Batch, Plus a Recommended Tool for Killing Processes in Bulk

1. Create and edit the startup script (note the order: config nodes, shard nodes, then the router)

[root@localhost homework_mongodb_shard_auth]# vi startup.sh
./bin/mongod -f config_cluster/config1_17011.cfg
./bin/mongod -f config_cluster/config2_17013.cfg
./bin/mongod -f config_cluster/config3_17015.cfg
​
./bin/mongod -f shard_cluster/shard1/shard1_37011.cfg
./bin/mongod -f shard_cluster/shard1/shard2_37013.cfg
./bin/mongod -f shard_cluster/shard1/shard3_37015.cfg
./bin/mongod -f shard_cluster/shard1/shard4_37017.cfg
​
./bin/mongod -f shard_cluster/shard2/shard1_47011.cfg
./bin/mongod -f shard_cluster/shard2/shard2_47013.cfg
./bin/mongod -f shard_cluster/shard2/shard3_47015.cfg
./bin/mongod -f shard_cluster/shard2/shard4_47017.cfg
​
./bin/mongod -f shard_cluster/shard3/shard1_57011.cfg
./bin/mongod -f shard_cluster/shard3/shard2_57013.cfg
./bin/mongod -f shard_cluster/shard3/shard3_57015.cfg
./bin/mongod -f shard_cluster/shard3/shard4_57017.cfg
​
./bin/mongod -f shard_cluster/shard4/shard1_58011.cfg
./bin/mongod -f shard_cluster/shard4/shard2_58013.cfg
./bin/mongod -f shard_cluster/shard4/shard3_58015.cfg
./bin/mongod -f shard_cluster/shard4/shard4_58017.cfg
​
./bin/mongos -f route/route_27017.cfg

2. Make startup.sh executable; ll will show the script in green, which means it worked

[root@localhost homework_mongodb_shard_auth]# chmod +x startup.sh

3. Run the script to start all config nodes, shard nodes, and the router in order

./startup.sh
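To confirm that everything came up, you can count the running mongod/mongos processes; with 3 config nodes, 16 shard nodes and 1 mongos there should be 20 in total (a sketch):

[root@localhost homework_mongodb_shard_auth]# ps -ef | grep -E 'mongod|mongos' | grep -v grep | wc -l
20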

4. Create and edit the shutdown script

[root@localhost homework_mongodb_shard_auth]# vi shutdown.sh
./bin/mongod --shutdown --config config_cluster/config1_17011.cfg
./bin/mongod --shutdown --config config_cluster/config2_17013.cfg
./bin/mongod --shutdown --config config_cluster/config3_17015.cfg
​
./bin/mongod --shutdown --config shard_cluster/shard1/shard1_37011.cfg
./bin/mongod --shutdown --config shard_cluster/shard1/shard2_37013.cfg
./bin/mongod --shutdown --config shard_cluster/shard1/shard3_37015.cfg
./bin/mongod --shutdown --config shard_cluster/shard1/shard4_37017.cfg
​
./bin/mongod --shutdown --config shard_cluster/shard2/shard1_47011.cfg
./bin/mongod --shutdown --config shard_cluster/shard2/shard2_47013.cfg
./bin/mongod --shutdown --config shard_cluster/shard2/shard3_47015.cfg
./bin/mongod --shutdown --config shard_cluster/shard2/shard4_47017.cfg
​
./bin/mongod --shutdown --config shard_cluster/shard3/shard1_57011.cfg
./bin/mongod --shutdown --config shard_cluster/shard3/shard2_57013.cfg
./bin/mongod --shutdown --config shard_cluster/shard3/shard3_57015.cfg
./bin/mongod --shutdown --config shard_cluster/shard3/shard4_57017.cfg
​
./bin/mongod --shutdown --config shard_cluster/shard4/shard1_58011.cfg
./bin/mongod --shutdown --config shard_cluster/shard4/shard2_58013.cfg
./bin/mongod --shutdown --config shard_cluster/shard4/shard3_58015.cfg
./bin/mongod --shutdown --config shard_cluster/shard4/shard4_58017.cfg
​
./bin/mongod --shutdown --config route/route_27017.cfg

5. Make shutdown.sh executable; ll will show the script in green, which means it worked

[root@localhost homework_mongodb_shard_auth]# chmod +x shutdown.sh

6. Run the shutdown script to stop all services at once

./shutdown.sh

7. Download and install the package for killing processes in bulk

yum install psmisc

8. Run these commands to kill all the processes at once

killall mongod
killall mongos

VII. Miscellaneous

1. If the Java program cannot connect to the MongoDB instance on the virtual machine, turn off the firewall

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld.service

2. If, after authenticating, an operation reports "too many users are authenticated"

Too many users are authenticated in the current session. Type exit to quit the shell, reconnect to MongoDB, and authenticate again.
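Alternatively, if you only need to switch accounts within the same session, logging out of the current database first should also avoid the error (a minimal sketch):

mongos> use mamba
switched to db mamba
mongos> db.logout()
{ "ok" : 1 }
mongos> db.auth("rwUser","rwUser")
1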

VIII. Integrating Spring Boot with MongoDB

1. Add the dependencies to the pom file

<properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <spring-boot.version>2.3.1.RELEASE</spring-boot.version>
    </properties>
​
    <dependencies>
​
        <!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-web -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <version>${spring-boot.version}</version>
        </dependency>
​
        <!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-test -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <version>${spring-boot.version}</version>
            <scope>test</scope>
        </dependency>
​
        <!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-data-mongodb -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-mongodb</artifactId>
            <version>${spring-boot.version}</version>
        </dependency>
​
    </dependencies>
​
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <testSource>1.8</testSource>
                    <testTarget>1.8</testTarget>
                </configuration>
            </plugin>
        </plugins>
    </build>

2. Spring configuration file application.properties

spring.data.mongodb.host=192.168.127.128
spring.data.mongodb.port=27017
spring.data.mongodb.database=mamba
spring.data.mongodb.username=rwUser
spring.data.mongodb.password=rwUser
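The same connection can also be expressed as a single URI (a sketch; use it instead of the host/port/database/username/password properties above rather than alongside them). Since rwUser was created in the mamba database, mamba is also the authentication database:

spring.data.mongodb.uri=mongodb://rwUser:rwUser@192.168.127.128:27017/mamba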

3. Define the entity class

@Document("nba_star") // this annotation is required when using MongoRepository
public class NbaStar {
​
    private String id;
    private String name;
    private String city;
    private Date birthday;
    private double expectSalary;
​
    // constructor, getters/setters and toString omitted (a sketch follows below)
}
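For completeness, a minimal sketch of what the full class might look like, assuming an all-args constructor and a toString() that match the test code further below (the field names are taken from the original):

import java.util.Date;

import org.springframework.data.mongodb.core.mapping.Document;

@Document("nba_star") // required when using MongoRepository
public class NbaStar {

    private String id;
    private String name;
    private String city;
    private Date birthday;
    private double expectSalary;

    public NbaStar() {
    }

    // all-args constructor assumed by the insert test below
    public NbaStar(String id, String name, String city, Date birthday, double expectSalary) {
        this.id = id;
        this.name = name;
        this.city = city;
        this.birthday = birthday;
        this.expectSalary = expectSalary;
    }

    // getters and setters omitted for brevity

    @Override
    public String toString() {
        return "NbaStar{id='" + id + "', name='" + name + "', city='" + city
                + "', birthday=" + birthday + ", expectSalary=" + expectSalary + "}";
    }
}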

4. Define the DAO-layer interface

@Repository
public interface NbaStarRepository extends MongoRepository<NbaStar, String> {
}
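Derived query methods can also be declared on the interface, and Spring Data builds the query from the method name; a hypothetical example (findByName is not part of the original):

import java.util.List;

import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface NbaStarRepository extends MongoRepository<NbaStar, String> {

    // hypothetical derived query: matches documents on the "name" field
    List<NbaStar> findByName(String name);
}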

5. Write the Spring Boot application class and the test class (CRUD methods can be defined as you like)

@SpringBootApplication
public class MongoApplication {
​
    public static void main(String[] args) {
        SpringApplication.run(MongoApplication.class, args);
    }
}
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = MongoApplication.class)
public class MongoAuthTest {

    @Autowired
    private NbaStarRepository nbaStarRepository;

    @Test
    public void testFind() {
        List<NbaStar> nbaStarList = nbaStarRepository.findAll();
        nbaStarList.forEach(nbaStar -> {
            System.out.println(nbaStar.toString());
        });
    }

    @Test
    public void testInsert() {
        NbaStar kyrie = new NbaStar(null, "kyrie", "us", new Date(), 33320000);
        NbaStar insert = nbaStarRepository.insert(kyrie);
        System.out.println(insert.toString());
    }
}

6. Log in to the router node, authenticate, run the insert loop, and verify how the data is distributed across the shards (this can also be done from the program, but the shell is quicker). The db.nba_star.stats() output is very long, so parts of it are omitted.

for(var i = 1; i <= 1000; i++) {
    db.nba_star.insert(
        {"name":"koke" + i, "city":"us", "birthday":new Date(), "expectSalary":12345678900000}
    )
}
[root@localhost homework_mongodb_shard_auth]# ./bin/mongo --port 27017
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("36789eea-5923-44cc-ac42-1afd82f784be") }
MongoDB server version: 4.2.8
​
mongos> use mamba
switched to db mamba
​
mongos> db.auth("rwUser","rwUser")
1
​
mongos> for(var i = 1; i <= 1000; i++) {
... db.nba_star.insert(
... {"name":"koke" + i, "city":"us", "birthday":new Date(), "expectSalary":12345678900000}
... )
... }
WriteResult({ "nInserted" : 1 })
mongos>
mongos> db.nba_star.stats()
{
        "sharded" : true,
        "capped" : false,
        "wiredTiger" : {
                "metadata" : {
                        "formatVersion" : 1
                },
                "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
                "type" : "file",
                "uri" : "statistics:table:collection-0-1725845957864137391",
                "LSM" : {
                        "bloom filter false positives" : 0,
                        "bloom filter hits" : 0,
                        "bloom filter misses" : 0,
                        "bloom filter pages evicted from cache" : 0,
                        "bloom filter pages read into cache" : 0,
                        "bloom filters in the LSM tree" : 0,
                        "chunks in the LSM tree" : 0,
                        "highest merge generation in the LSM tree" : 0,
                        "queries that could have benefited from a Bloom filter that did not exist" : 0,
                        "sleep for LSM checkpoint throttle" : 0,
                        "sleep for LSM merge throttle" : 0,
                        "total size of bloom filters" : 0
                },
                "block-manager" : {
                        "allocations requiring file extension" : 4,
                        "blocks allocated" : 4,
                        "blocks freed" : 0,
                        "checkpoint size" : 8192,
                        "file allocation unit size" : 4096,
                        "file bytes available for reuse" : 0,
                        "file magic number" : 120897,
                        "file major version number" : 1,
                        "file size in bytes" : 24576,
                        "minor version number" : 0
                },
                "btree" : {
                        "btree checkpoint generation" : 97,
                        "column-store fixed-size leaf pages" : 0,
                        "column-store internal pages" : 0,
                        "column-store variable-size RLE encoded values" : 0,
                        "column-store variable-size deleted values" : 0,
                        "column-store variable-size leaf pages" : 0,
                        "fixed-record size" : 0,
                        "maximum internal page key size" : 368,
                        "maximum internal page size" : 4096,
                        "maximum leaf page key size" : 2867,
                        "maximum leaf page size" : 32768,
                        "maximum leaf page value size" : 67108864,
                        "maximum tree depth" : 3,
                        "number of key/value pairs" : 0,
                        "overflow pages" : 0,
                        "pages rewritten by compaction" : 0,
                        "row-store empty values" : 0,
                        "row-store internal pages" : 0,
                        "row-store leaf pages" : 0
                },
                "cache" : {
                        "bytes currently in the cache" : 50947,
                        "bytes dirty in the cache cumulative" : 908,
                        "bytes read into cache" : 0,
                        "bytes written from cache" : 24111,
                        "checkpoint blocked page eviction" : 0,
                        "data source pages selected for eviction unable to be evicted" : 0,
                        "eviction walk passes of a file" : 0,
                        "eviction walk target pages histogram - 0-9" : 0,
                        "eviction walk target pages histogram - 10-31" : 0,
                        "eviction walk target pages histogram - 128 and higher" : 0,
                        "eviction walk target pages histogram - 32-63" : 0,
                        "eviction walk target pages histogram - 64-128" : 0,
                        "eviction walks abandoned" : 0,
                        "eviction walks gave up because they restarted their walk twice" : 0,
                        "eviction walks gave up because they saw too many pages and found no candidates" : 0,
                        "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
                        "eviction walks reached end of tree" : 0,
                        "eviction walks started from root of tree" : 0,
                        "eviction walks started from saved location in tree" : 0,
                        "hazard pointer blocked page eviction" : 0,
                        "in-memory page passed criteria to be split" : 0,
                        "in-memory page splits" : 0,
                        "internal pages evicted" : 0,
                        "internal pages split during eviction" : 0,
                        "leaf pages split during eviction" : 0,
                        "modified pages evicted" : 0,
                        "overflow pages read into cache" : 0,
                        "page split during eviction deepened the tree" : 0,
                        "page written requiring cache overflow records" : 0,
                        "pages read into cache" : 0,
                        "pages read into cache after truncate" : 1,
                        "pages read into cache after truncate in prepare state" : 0,
                        "pages read into cache requiring cache overflow entries" : 0,
                        "pages requested from the cache" : 246,
                        "pages seen by eviction walk" : 0,
                        "pages written from cache" : 2,
                        "pages written requiring in-memory restoration" : 0,
                        "tracked dirty bytes in the cache" : 0,
                        "unmodified pages evicted" : 0
                },
                "cache_walk" : {
                        "Average difference between current eviction generation when the page was last considered" : 0,
                        "Average on-disk page image size seen" : 0,
                        "Average time in cache for pages that have been visited by the eviction server" : 0,
                        "Average time in cache for pages that have not been visited by the eviction server" : 0,
                        "Clean pages currently in cache" : 0,
                        "Current eviction generation" : 0,
                        "Dirty pages currently in cache" : 0,
                        "Entries in the root page" : 0,
                        "Internal pages currently in cache" : 0,
                        "Leaf pages currently in cache" : 0,
                        "Maximum difference between current eviction generation when the page was last considered" : 0,
                        "Maximum page size seen" : 0,
                        "Minimum on-disk page image size seen" : 0,
                        "Number of pages never visited by eviction server" : 0,
                        "On-disk page image sizes smaller than a single allocation unit" : 0,
                        "Pages created in memory and never written" : 0,
                        "Pages currently queued for eviction" : 0,
                        "Pages that could not be queued for eviction" : 0,
                        "Refs skipped during cache traversal" : 0,
                        "Size of the root page" : 0,
                        "Total number of pages currently in cache" : 0
                },
                "compression" : {
                        "compressed page maximum internal page size prior to compression" : 4096,
                        "compressed page maximum leaf page size prior to compression " : 131072,
                        "compressed pages read" : 0,
                        "compressed pages written" : 1,
                        "page written failed to compress" : 0,
                        "page written was too small to compress" : 1
                },
                "cursor" : {
                        "bulk loaded cursor insert calls" : 0,
                        "cache cursors reuse count" : 245,
                        "close calls that result in cache" : 0,
                        "create calls" : 2,
                        "insert calls" : 246,
                        "insert key and value bytes" : 23282,
                        "modify" : 0,
                        "modify key and value bytes affected" : 0,
                        "modify value bytes modified" : 0,
                        "next calls" : 0,
                        "open cursor count" : 0,
                        "operation restarted" : 0,
                        "prev calls" : 1,
                        "remove calls" : 0,
                        "remove key bytes removed" : 0,
                        "reserve calls" : 0,
                        "reset calls" : 494,
                        "search calls" : 0,
                        "search near calls" : 0,
                        "truncate calls" : 0,
                        "update calls" : 0,
                        "update key and value bytes" : 0,
                        "update value size change" : 0
                },
                "reconciliation" : {
                        "dictionary matches" : 0,
                        "fast-path pages deleted" : 0,
                        "internal page key bytes discarded using suffix compression" : 0,
                        "internal page multi-block writes" : 0,
                        "internal-page overflow keys" : 0,
                        "leaf page key bytes discarded using prefix compression" : 0,
                        "leaf page multi-block writes" : 0,
                        "leaf-page overflow keys" : 0,
                        "maximum blocks required for a page" : 1,
                        "overflow values written" : 0,
                        "page checksum matches" : 0,
                        "page reconciliation calls" : 2,
                        "page reconciliation calls for eviction" : 0,
                        "pages deleted" : 0
                },
                "session" : {
                        "object compaction" : 0
                },
                "transaction" : {
                        "update conflicts" : 0
                }
        },
        "ns" : "mamba.nba_star",
        "count" : 1000,
        "size" : 92893,
        "storageSize" : 118784,
        "totalIndexSize" : 208896,
        "indexSizes" : {
                "_id_" : 102400,
                "name_hashed" : 106496
        },
        "avgObjSize" : 92,
        "maxSize" : NumberLong(0),
        "nindexes" : 2,
        "nchunks" : 8,
        "shards" : {
                "shard2svr" : {
                        "ns" : "mamba.nba_star",
                        "size" : 22853,
                        "count" : 246,
                        "avgObjSize" : 92,
                        "storageSize" : 24576,
                        "capped" : false,
                        "wiredTiger" : {},
                        "nindexes" : 2,
                        "indexBuilds" : [ ],
                        "totalIndexSize" : 40960,
                        "indexSizes" : {
                                "_id_" : 20480,
                                "name_hashed" : 20480
                        },
                        "scaleFactor" : 1,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : {
                                        "ts" : Timestamp(1592902941, 81),
                                        "t" : NumberLong(29)
                                },
                                "electionId" : ObjectId("7fffffff000000000000001d")
                        },
                        "lastCommittedOpTime" : Timestamp(1592903056, 1),
                        "$configServerState" : {
                                "opTime" : {
                                        "ts" : Timestamp(1592903054, 1),
                                        "t" : NumberLong(27)
                                }
                        },
                        "$clusterTime" : {
                                "clusterTime" : Timestamp(1592903056, 1),
                                "signature" : {
                                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                                        "keyId" : NumberLong(0)
                                }
                        },
                        "operationTime" : Timestamp(1592903056, 1)
                },
                "shard4svr" : {
                        "ns" : "mamba.nba_star",
                        "size" : 22197,
                        "count" : 239,
                        "avgObjSize" : 92,
                        "storageSize" : 40960,
                        "capped" : false,
                        "wiredTiger" : {},
                        "nindexes" : 2,
                        "indexBuilds" : [ ],
                        "totalIndexSize" : 73728,
                        "indexSizes" : {
                                "_id_" : 36864,
                                "name_hashed" : 36864
                        },
                        "scaleFactor" : 1,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : {
                                        "ts" : Timestamp(1592902941, 79),
                                        "t" : NumberLong(29)
                                },
                                "electionId" : ObjectId("7fffffff000000000000001d")
                        },
                        "lastCommittedOpTime" : Timestamp(1592903057, 1),
                        "$configServerState" : {
                                "opTime" : {
                                        "ts" : Timestamp(1592903054, 1),
                                        "t" : NumberLong(27)
                                }
                        },
                        "$clusterTime" : {
                                "clusterTime" : Timestamp(1592903058, 1),
                                "signature" : {
                                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                                        "keyId" : NumberLong(0)
                                }
                        },
                        "operationTime" : Timestamp(1592903057, 1)
                },
                "shard3svr" : {
                        "ns" : "mamba.nba_star",
                        "size" : 22207,
                        "count" : 239,
                        "avgObjSize" : 92,
                        "storageSize" : 28672,
                        "capped" : false,
                        "wiredTiger" : {},
                        "nindexes" : 2,
                        "indexBuilds" : [ ],
                        "totalIndexSize" : 49152,
                        "indexSizes" : {
                                "_id_" : 24576,
                                "name_hashed" : 24576
                        },
                        "scaleFactor" : 1,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : {
                                        "ts" : Timestamp(1592902941, 71),
                                        "t" : NumberLong(28)
                                },
                                "electionId" : ObjectId("7fffffff000000000000001c")
                        },
                        "lastCommittedOpTime" : Timestamp(1592903058, 1),
                        "$configServerState" : {
                                "opTime" : {
                                        "ts" : Timestamp(1592903054, 1),
                                        "t" : NumberLong(27)
                                }
                        },
                        "$clusterTime" : {
                                "clusterTime" : Timestamp(1592903058, 1),
                                "signature" : {
                                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                                        "keyId" : NumberLong(0)
                                }
                        },
                        "operationTime" : Timestamp(1592903058, 1)
                },
                "shard1svr" : {
                        "ns" : "mamba.nba_star",
                        "size" : 25636,
                        "count" : 276,
                        "avgObjSize" : 92,
                        "storageSize" : 24576,
                        "capped" : false,
                        "wiredTiger" : {},
                        "nindexes" : 2,
                        "indexBuilds" : [ ],
                        "totalIndexSize" : 45056,
                        "indexSizes" : {
                                "_id_" : 20480,
                                "name_hashed" : 24576
                        },
                        "scaleFactor" : 1,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : {
                                        "ts" : Timestamp(1592902941, 78),
                                        "t" : NumberLong(32)
                                },
                                "electionId" : ObjectId("7fffffff0000000000000020")
                        },
                        "lastCommittedOpTime" : Timestamp(1592903056, 2),
                        "$configServerState" : {
                                "opTime" : {
                                        "ts" : Timestamp(1592903054, 1),
                                        "t" : NumberLong(27)
                                }
                        },
                        "$clusterTime" : {
                                "clusterTime" : Timestamp(1592903056, 2),
                                "signature" : {
                                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                                        "keyId" : NumberLong(0)
                                }
                        },
                        "operationTime" : Timestamp(1592903056, 2)
                }
        },
        "ok" : 1,
        "operationTime" : Timestamp(1592903058, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1592903058, 1),
                "signature" : {
                        "hash" : BinData(0,"pvr8niN99t8ff0SnIltPWwWYLfs="),
                        "keyId" : NumberLong("6841059462808076305")
                }
        }
}
mongos>  