MongoDB Scaling Options / Using a Replica Set
1. Add a hard disk
- Agung says the machine uses RAID1, so a new disk cannot be added; option 1 is ruled out.
2. Add a server as a MongoDB slave node and run in master/slave mode
- A slave cannot automatically be promoted to master, so failures cannot be failed over.
- A slave cannot accept writes, only reads, so the master must still serve both reads and writes and carries too much load.
- Recent versions of MongoDB no longer support master/slave mode.
For the long term we should therefore choose the third option: replica set mode.
3. Use MongoDB's replica set mode
Reference: https://gitee.com/myShoe/icat-mongodb?_from=gitee_search
Official tutorial (with an arbiter): <https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/>
About MongoDB clusters: there are three main ways to build a MongoDB cluster: master/slave mode, replica set mode, and sharding mode. Each has its strengths and suits different scenarios. Replica sets are the most widely used; master/slave is rarely used nowadays; sharding is the most complete but also the most complex to configure and maintain.
A replica set needs at least three mongod instances: a primary, a secondary, and an arbiter. Their config files are shown below (one per server; if all three run on one host, give each its own dbpath and port).
#master.conf
dbpath=/usr/local/mongodb/data
logpath=/usr/local/mongodb/logs/mongodb.log
pidfilepath=/usr/local/mongodb/master.pid
directoryperdb=true
logappend=true
replSet=testdb
port=27017
oplogSize=100
fork=true
noprealloc=true
#slave.conf
dbpath=/usr/local/mongodb/data
logpath=/usr/local/mongodb/logs/mongodb.log
pidfilepath=/usr/local/mongodb/slave.pid
directoryperdb=true
logappend=true
replSet=testdb
port=27017
oplogSize=100
fork=true
noprealloc=true
#arbiter.conf
dbpath=/usr/local/mongodb/data
logpath=/usr/local/mongodb/logs/mongodb.log
pidfilepath=/usr/local/mongodb/arbiter.pid
directoryperdb=true
logappend=true
replSet=testdb
port=27017
oplogSize=100
fork=true
noprealloc=true
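The three files above differ only in a few fields. A Python sketch (the base directory, the role names, and the port map are illustrative, not taken from the original setup) that renders one conf per role from a shared template:

```python
import os
import tempfile

# Shared template; only dbpath, pidfile, role, and port vary per node.
TEMPLATE = """dbpath={dbpath}
logpath={logdir}/mongodb_{role}.log
pidfilepath={base}/{role}.pid
directoryperdb=true
logappend=true
replSet=testdb
port={port}
oplogSize=100
fork=true
noprealloc=true
"""

def write_confs(base, role_ports):
    """Write one mongod conf file per role; return {role: conf_path}."""
    os.makedirs(os.path.join(base, "logs"), exist_ok=True)
    paths = {}
    for role, port in role_ports.items():
        dbpath = os.path.join(base, "data", role)
        os.makedirs(dbpath, exist_ok=True)  # mongod refuses to start without it
        text = TEMPLATE.format(dbpath=dbpath, logdir=os.path.join(base, "logs"),
                               base=base, role=role, port=port)
        path = os.path.join(base, "mongodb_%s.conf" % role)
        with open(path, "w") as f:
            f.write(text)
        paths[role] = path
    return paths

base = tempfile.mkdtemp()
confs = write_confs(base, {"master": 27017, "slave": 27018, "arbiter": 27019})
```

Generating the files this way also pre-creates the per-node data directories, which avoids the "error number 100" startup failure discussed later.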
For reference, the main mongod startup options:
--quiet # quiet output
--port arg # service port, default 27017
--bind_ip arg # IP to bind; binding 127.0.0.1 allows local access only; default is all local IPs
--logpath arg # MongoDB log file; note this is a file, not a directory
--logappend # append to the log file instead of overwriting
--pidfilepath arg # full path of the PID file; no PID file if unset
--keyFile arg # full path of the cluster key file; only meaningful for replica sets
--unixSocketPrefix arg # alternative directory for UNIX domain sockets (default /tmp)
--fork # run MongoDB as a daemon
--auth # enable authentication
--cpu # periodically show CPU utilization and iowait
--dbpath arg # database data path
--diaglog arg # diaglog option: 0=off 1=W 2=R 3=both 7=W+some reads
--directoryperdb # store each database in its own directory
--journal # enable journaling; data operations are written to files under the journal directory
--journalOptions arg # journal diagnostic options
--ipv6 # enable IPv6
--jsonp # allow JSONP access over HTTP (has security implications)
--maxConns arg # maximum number of simultaneous connections, default 2000
--noauth # disable authentication
--nohttpinterface # disable the HTTP interface (by default on port 28017)
--noprealloc # disable data file preallocation (often hurts performance)
--noscripting # disable the scripting engine
--notablescan # forbid table scans
--nounixsocket # disable UNIX socket listening
--nssize arg (=16) # size of each database's .ns file (MB)
--objcheck # validate client data on receipt
--profile arg # profiling level: 0=off 1=slow 2=all
--quota # limit the number of files per database, default 8
--quotaFiles arg # number of files allowed per db; requires --quota
--rest # enable the simple REST API
--repair # run repair on all databases
--repairpath arg # directory for files generated during repair, defaults to dbpath
--slowms arg (=100) # threshold in ms for "slow" in profiling and the console log
--smallfiles # use smaller default file sizes
--syncdelay arg (=60) # seconds between flushes of data to disk (0=never, not recommended)
--sysinfo # print some diagnostic system information
--upgrade # upgrade the database if needed
--fastsync # start replication from a dbpath holding a snapshot of the master; useful for fast initial sync
--autoresync # automatically resync if the slave falls far behind the master
--oplogSize arg # size of the oplog (MB)
--master # master mode
--slave # slave mode
--source arg # for a slave: address (host:port) of the master
--only arg # replicate only the specified database
--slavedelay arg # delay in seconds for the slave to apply the master's operations
--replSet arg # replica set name
--configsvr # run as a cluster config server; default port 27019, default directory /data/configdb
--shardsvr # run as a cluster shard; default port 27018
--noMoveParanoia # turn off paranoid saving of moveChunk data
With the nodes configured, mongod can be started; cd into the bin directory:
./mongod -f /etc/mongodb_master.conf
./mongod -f /etc/mongodb_slave.conf
./mongod -f /etc/mongodb_arbiter.conf
Configure the nodes. Finally, set up the primary, secondary, and arbiter. First connect to one of the servers:
./mongo --port 27017
use admin
cfg={
    _id:"testdb",
    members:[
        {_id:0,host:'10.100.1.101:27017',priority:2},
        {_id:1,host:'10.100.1.102:27017',priority:1},
        {_id:2,host:'10.100.1.103:27017',arbiterOnly:true}
    ]
};
rs.initiate(cfg) # apply the configuration; if all goes well the setup is basically done, and rs.status() shows the details
rs.conf() # view the configuration
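The document passed to rs.initiate() is ordinary JSON, so it can also be built programmatically when scripting the setup. A minimal Python sketch (hosts and priorities copied from the example above; the helper name is ours):

```python
def make_rs_config(set_name, data_hosts, priorities, arbiter_host):
    """Build a replica-set config document: data nodes plus one arbiter."""
    members = [
        {"_id": i, "host": h, "priority": p}
        for i, (h, p) in enumerate(zip(data_hosts, priorities))
    ]
    # The arbiter votes in elections but stores no data.
    members.append({"_id": len(members), "host": arbiter_host,
                    "arbiterOnly": True})
    return {"_id": set_name, "members": members}

cfg = make_rs_config(
    "testdb",
    ["10.100.1.101:27017", "10.100.1.102:27017"],
    [2, 1],
    "10.100.1.103:27017",
)
```

The node with the highest priority (2 here) is preferred in elections, which is why it becomes primary when all members are healthy.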
4. Backing up and restoring mongo data
- Backup
mongodump -h dbhost -d dbname -o dbdirectory
-h: address of the mongodb server
-d: database instance to back up
-o: directory in which to store the backup
example: sudo mongodump -h localhost:27017 -d test -o /home/mongodump/
- Restore
mongorestore -h host:port -d dbname --dir dbdirectory
-h: address of the mongodb server, default localhost:27017
-d: database to restore into
--dir: directory holding the backup
--drop: drop the current data before restoring the backup
example: sudo mongorestore -h localhost:27017 -d test --dir /home/mongodump/
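The same invocations can be assembled programmatically, which avoids shell-quoting mistakes when paths contain spaces; a sketch (host and paths are the example values above, not fixed):

```python
def dump_cmd(host, db, out_dir):
    """argv list for mongodump: back up one database into out_dir."""
    return ["mongodump", "-h", host, "-d", db, "-o", out_dir]

def restore_cmd(host, db, dump_dir, drop=False):
    """argv list for mongorestore; drop=True wipes current data first."""
    cmd = ["mongorestore", "-h", host, "-d", db, "--dir", dump_dir]
    if drop:
        cmd.append("--drop")
    return cmd

# e.g. subprocess.run(dump_cmd("localhost:27017", "test", "/home/mongodump/"))
```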
5. Exporting and importing mongo data
- Export
mongoexport -h dbhost -d dbname -c collectionname -o file --type json/csv -f field
-h: address of the mongodb server
-d: database instance to export from
-c: collection name
-o: output file name
--type: output format, json by default
-f: fields to output; required when --type is csv
example: mongoexport -d test -c users -o /home/python/Desktop/mongoDB/users.json --type json -f "_id,user_id,user_name,age,status"
- Import [tested: in replica set mode, only the primary accepts manual imports; secondaries can only be written via replication from the primary]
mongoimport -h host:port -d dbname -c collectionname --file filename --headerline --type json/csv -f field
-h: address of the mongodb server, default localhost:27017
-d: database to import into
-c: collection name
--file: file to import
--headerline: for csv input, use the first line as the field names
--type: input format, json by default
example: sudo mongoimport -h localhost:12332 -d test -c users --file /tmp/test/users.json --type json
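One detail worth encoding is the rule that csv output needs an explicit field list; a sketch of a mongoexport command builder enforcing it (the function name is ours):

```python
def export_cmd(host, db, coll, out_file, fmt="json", fields=None):
    """argv list for mongoexport; csv output requires a -f field list."""
    if fmt == "csv" and not fields:
        raise ValueError("--type csv requires -f with the field names")
    cmd = ["mongoexport", "-h", host, "-d", db, "-c", coll,
           "-o", out_file, "--type", fmt]
    if fields:
        cmd += ["-f", ",".join(fields)]
    return cmd
```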
6. Local deployment notes
- error number 1
[root@docker-test etc]# mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25545
ERROR: child process failed, exited with error number 1
Fix: delete the old mongod.lock and check the write permissions on the log file and any other files mongod writes.
- error number 100
[root@docker-test mongo]# mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 26920
ERROR: child process failed, exited with error number 100
- A note on shutting down mongod correctly: either kill -2 <pid>, or in the shell run use admin followed by db.shutdownServer(); anything else risks leaving the data files locked.
- Starting from the config file kept failing with error number 100.
Fix: create the db directories by hand and start from the command line: mongod --dbpath /data/mongo/mongo_primary/ --logpath /data/mongo/log/mongo_primary.log --port 27017 --replSet testdb --fork
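The kill -2 route can be scripted against the pidfilepath set in the conf files; a small sketch (the pidfile location is whatever your conf sets, and the helper name is ours):

```python
import os
import signal

def graceful_shutdown(pidfile):
    """Send SIGINT (the same as `kill -2`) to the mongod pid in pidfile."""
    with open(pidfile) as f:
        pid = int(f.read().strip())
    # On SIGINT mongod shuts down cleanly and releases its lock file.
    os.kill(pid, signal.SIGINT)
```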
Three mongod instances are now running. Next:
use admin
switched to db admin
cfg={_id:"testdb",members:[{_id:0,host:'127.0.0.1:27017',priority:1},{_id:1,host:'127.0.0.1:12331',priority:2},{_id:2,host:'127.0.0.1:12332',arbiterOnly:true}]};
{
"_id" : "testdb",
"members" : [
{
"_id" : 0,
"host" : "127.0.0.1:27017",
"priority" : 1
},
{
"_id" : 1,
"host" : "127.0.0.1:12331",
"priority" : 2
},
{
"_id" : 2,
"host" : "127.0.0.1:12332",
"arbiterOnly" : true
}
]
}
rs.initate(cfg)
2020-03-19T15:05:17.611+0800 E QUERY [thread1] TypeError: rs.initate is not a function :
@(shell):1:1
rs.initiate(cfg)
{ “ok” : 1 }
testdb:OTHER> rs.conf()
{
"_id" : "testdb",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "127.0.0.1:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "127.0.0.1:12331",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 2,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "127.0.0.1:12332",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5e7319c48ab104a5f965524f")
}
}
testdb:SECONDARY>
A MongoDB replica set with an arbiter is now up and running.
Testing:
- Insert data into the primary
[root@docker-test mongo]# mongo --port 12331
MongoDB shell version: 3.2.22
connecting to: 127.0.0.1:12331/test
Server has startup warnings:
2020-03-19T15:00:22.553+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten]
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten]
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten]
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten]
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 7270 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.
2020-03-19T15:00:22.554+0800 I CONTROL [initandlisten]
testdb:PRIMARY> for(var i=1;i<=1000;i++) db.users.insert({id:i,addr_1:"Beijing",addr_2:"Shanghai"});
WriteResult({ "nInserted" : 1 })
testdb:PRIMARY> show dbs
local 0.000GB
test 0.000GB
testdb:PRIMARY> use test
switched to db test
testdb:PRIMARY> show collections
users
testdb:PRIMARY> db.users.find()
{ "_id" : ObjectId("5e731d735e79442c06446bb5"), "id" : 1, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bb6"), "id" : 2, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bb7"), "id" : 3, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bb8"), "id" : 4, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bb9"), "id" : 5, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bba"), "id" : 6, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bbb"), "id" : 7, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bbc"), "id" : 8, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bbd"), "id" : 9, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bbe"), "id" : 10, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bbf"), "id" : 11, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bc0"), "id" : 12, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bc1"), "id" : 13, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bc2"), "id" : 14, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bc3"), "id" : 15, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bc4"), "id" : 16, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bc5"), "id" : 17, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bc6"), "id" : 18, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bc7"), "id" : 19, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bc8"), "id" : 20, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
Type "it" for more
testdb:PRIMARY>
- Logging into the secondary then fails:
testdb:SECONDARY> show dbs
2020-03-19T15:23:41.995+0800 E QUERY [thread1] Error: listDatabases failed:{ "ok" : 0, "errmsg" : "not master and slaveOk=false", "code" : 13435 } :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1
shellHelper.show@src/mongo/shell/utils.js:781:19
shellHelper@src/mongo/shell/utils.js:671:15
@(shellhelp2):1:1
- The fix is to run rs.slaveOk(true) on the secondary, after which it can be read.
[root@docker-test data]# mongo --port 27017
MongoDB shell version: 3.2.22
connecting to: 127.0.0.1:27017/test
Server has startup warnings:
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten]
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten]
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten]
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten]
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 7270 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.
2020-03-19T15:31:07.663+0800 I CONTROL [initandlisten]
testdb:SECONDARY> rs.slaveOk(true)
testdb:SECONDARY> show dbs;
local 0.000GB
test 0.000GB
testdb:SECONDARY> show collections
users
testdb:SECONDARY> db.users.find()
{ "_id" : ObjectId("5e731d735e79442c06446bb5"), "id" : 1, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bb6"), "id" : 2, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bce"), "id" : 26, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bd0"), "id" : 28, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bdb"), "id" : 39, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446bdf"), "id" : 43, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446be5"), "id" : 49, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446be8"), "id" : 52, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c11"), "id" : 93, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c20"), "id" : 108, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c26"), "id" : 114, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c2c"), "id" : 120, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c38"), "id" : 132, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c48"), "id" : 148, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c67"), "id" : 179, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c69"), "id" : 181, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c70"), "id" : 188, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c7d"), "id" : 201, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446c85"), "id" : 209, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
{ "_id" : ObjectId("5e731d735e79442c06446cad"), "id" : 249, "addr_1" : "Beijing", "addr_2" : "Shanghai" }
Type "it" for more
- Test the arbiter's role in failover. First shut down the primary process.
testdb:PRIMARY> db.shutdownServer()
server should be down...
2020-03-19T15:37:24.851+0800 I NETWORK [thread1] trying reconnect to 127.0.0.1:12331 (127.0.0.1) failed
2020-03-19T15:37:24.852+0800 I NETWORK [thread1] reconnect 127.0.0.1:12331 (127.0.0.1) failed failed
2020-03-19T15:37:24.854+0800 I NETWORK [thread1] trying reconnect to 127.0.0.1:12331 (127.0.0.1) failed
2020-03-19T15:37:24.973+0800 I NETWORK [thread1] Socket recv() errno:104 Connection reset by peer 127.0.0.1:12331
2020-03-19T15:37:24.973+0800 I NETWORK [thread1] SocketException: remote: (NONE):0 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:12331]
2020-03-19T15:37:24.974+0800 I NETWORK [thread1] reconnect 127.0.0.1:12331 (127.0.0.1) failed failed
> quit()
2020-03-19T15:37:32.612+0800 I NETWORK [thread1] trying reconnect to 127.0.0.1:12331 (127.0.0.1) failed
2020-03-19T15:37:32.612+0800 W NETWORK [thread1] Failed to connect to 127.0.0.1:12331, in(checking socket for error after poll), reason: errno:111 Connection refused
2020-03-19T15:37:32.612+0800 I NETWORK [thread1] reconnect 127.0.0.1:12331 (127.0.0.1) failed failed
2020-03-19T15:37:32.612+0800 I QUERY [thread1] Failed to kill cursor 36376178079 due to Location9001: socket exception [CONNECT_ERROR] for couldn't connect to server 127.0.0.1:12331, connection attempt failed
- Check rs.status():
testdb:SECONDARY> show dbs;
local 0.000GB
test 0.000GB
testdb:PRIMARY> rs.status()
{
"set" : "testdb",
"date" : ISODate("2020-03-19T07:39:08.941Z"),
"myState" : 1,
"term" : NumberLong(3),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "127.0.0.1:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 482,
"optime" : {
"ts" : Timestamp(1584603454, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2020-03-19T07:37:34Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1584603453, 1),
"electionDate" : ISODate("2020-03-19T07:37:33Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "127.0.0.1:12331",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2020-03-19T07:39:07.832Z"),
"lastHeartbeatRecv" : ISODate("2020-03-19T07:37:24.558Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Connection refused",
"configVersion" : -1
},
{
"_id" : 2,
"name" : "127.0.0.1:12332",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 481,
"lastHeartbeat" : ISODate("2020-03-19T07:39:07.816Z"),
"lastHeartbeatRecv" : ISODate("2020-03-19T07:39:04.486Z"),
"pingMs" : NumberLong(0),
"configVersion" : 1
}
],
"ok" : 1
}
testdb:PRIMARY>