MongoDB Replica Set (One Primary, Two Secondaries): Read/Write Splitting and Failover Deployment Notes

Contents

Preface

How a MongoDB replica set works

1. Environment Preparation

2. Installing MongoDB and Configuring the Replica Set

1) Create a replica set test directory on each of the three nodes, to hold all of the replica set's files

2) Install MongoDB on each of the three nodes

3) Start mongod on each node (pass --bind_ip at startup; it defaults to 127.0.0.1 and must be set to the node's own IP, otherwise remote connections will fail)

4) Initialize the replica set

3. Testing Replica Set Data Replication

1) Connect to a shell on the primary node 172.16.60.205

2) Connect to mongodb on the secondaries 172.16.60.206 and 172.16.60.207 and check whether the data has been replicated

4. Testing Replica Set Failover

1) Stop mongod on the original primary 172.16.60.205 to simulate a failure

2) Log in to mongodb on either of the two healthy secondaries, 172.16.60.206 or 172.16.60.207, and check the replica set status

3) Create test data on the new primary 172.16.60.206

4) Log in to mongodb on the other secondary 172.16.60.207 and check

5) Restart mongod on the original primary 172.16.60.205

5. MongoDB Read/Write Splitting
Preface

MongoDB is a NoSQL (non-relational) database. Non-relational databases arose to address large data volumes, high scalability, high performance, flexible data models, and high availability. MongoDB officially no longer recommends master-slave replication; the replacement is the replica set. Master-slave mode is essentially a single-copy deployment with poor scalability and fault tolerance, whereas a replica set keeps multiple copies of the data for fault tolerance: if one copy is lost, others remain, and if the primary node goes down, the cluster fails over automatically.

How a MongoDB replica set works

Clients connect to the replica set as a whole and do not need to care whether any individual node is down. The primary handles all reads and writes for the replica set, and the members synchronize data periodically as backups. If the primary fails, the secondaries detect this through the heartbeat mechanism, hold an election within the cluster, and automatically choose a new primary; none of this requires any involvement from the application servers.

A replica set clearly looks attractive, so the deployment process is demonstrated below. MongoDB officially recommends at least three nodes for a replica set, so three nodes are used here: one primary and two secondaries, with no arbiter for now.

1. Environment Preparation

  ip address      hostname          role
  172.16.60.205   mongodb-master01  replica set primary
  172.16.60.206   mongodb-slave01   replica set secondary
  172.16.60.207   mongodb-slave02   replica set secondary

  Set the hostname on each of the three nodes and add the following hosts entries:
  [root@mongodb-master01 ~]# cat /etc/hosts
  ............
  172.16.60.205 mongodb-master01
  172.16.60.206 mongodb-slave01
  172.16.60.207 mongodb-slave02

  Disable SELinux on all three nodes; for convenience in this test, also stop iptables:
  [root@mongodb-master01 ~]# setenforce 0
  [root@mongodb-master01 ~]# cat /etc/sysconfig/selinux
  ...........
  SELINUX=disabled
  [root@mongodb-master01 ~]# iptables -F
  [root@mongodb-master01 ~]# /etc/init.d/iptables stop
  iptables: Setting chains to policy ACCEPT: filter [ OK ]
  iptables: Flushing firewall rules: [ OK ]
  iptables: Unloading modules: [ OK ]

2. Installing MongoDB and Configuring the Replica Set

1) Create a replica set test directory on each of the three nodes, to hold all of the replica set's files

  [root@mongodb-master01 ~]# mkdir -p /data/mongodb/data/replset/

2) Install MongoDB on each of the three nodes

  Download page: https://www.mongodb.org/dl/linux/x86_64-rhel62
  [root@mongodb-master01 ~]# wget http://downloads.mongodb.org/linux/mongodb-linux-x86_64-rhel62-v3.6-latest.tgz
  [root@mongodb-master01 ~]# tar -zvxf mongodb-linux-x86_64-rhel62-v3.6-latest.tgz

3) Start mongod on each node (pass --bind_ip at startup; it defaults to 127.0.0.1 and must be set to the node's own IP, otherwise remote connections will fail)

  [root@mongodb-master01 ~]# mv mongodb-linux-x86_64-rhel62-3.6.11-rc0-2-g2151d1d219 /usr/local/mongodb
  [root@mongodb-master01 ~]# nohup /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017 &
  [root@mongodb-master01 ~]# ps -ef|grep mongodb
  root 7729 6977 1 15:10 pts/1 00:00:01 /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset
  root 7780 6977 0 15:11 pts/1 00:00:00 grep mongodb
  [root@mongodb-master01 ~]# lsof -i:27017
  COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
  mongod 7729 root 10u IPv4 6554476 0t0 TCP localhost:27017 (LISTEN)

4) Initialize the replica set

  Run the following on any one of the three nodes (here, the 172.16.60.205 node).
  Log in to mongodb:
  [root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
  .........
  # switch to the admin database
  > use admin
  switched to db admin
  # Define the replica set configuration variable. The _id:"repset" here must match the "-replSet repset" startup parameter above.
  > config = { _id:"repset", members:[{_id:0,host:"172.16.60.205:27017"},{_id:1,host:"172.16.60.206:27017"},{_id:2,host:"172.16.60.207:27017"}]}
  {
      "_id" : "repset",
      "members" : [
          {
              "_id" : 0,
              "host" : "172.16.60.205:27017"
          },
          {
              "_id" : 1,
              "host" : "172.16.60.206:27017"
          },
          {
              "_id" : 2,
              "host" : "172.16.60.207:27017"
          }
      ]
  }
  # Initialize the replica set with this configuration
  > rs.initiate(config);
  {
      "ok" : 1,
      "operationTime" : Timestamp(1551166191, 1),
      "$clusterTime" : {
          "clusterTime" : Timestamp(1551166191, 1),
          "signature" : {
              "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
              "keyId" : NumberLong(0)
          }
      }
  }
  # Check the status of the cluster members
  repset:SECONDARY> rs.status();
  {
      "set" : "repset",
      "date" : ISODate("2019-02-26T07:31:07.766Z"),
      "myState" : 1,
      "term" : NumberLong(1),
      "syncingTo" : "",
      "syncSourceHost" : "",
      "syncSourceId" : -1,
      "heartbeatIntervalMillis" : NumberLong(2000),
      "optimes" : {
          "lastCommittedOpTime" : {
              "ts" : Timestamp(1551166263, 1),
              "t" : NumberLong(1)
          },
          "readConcernMajorityOpTime" : {
              "ts" : Timestamp(1551166263, 1),
              "t" : NumberLong(1)
          },
          "appliedOpTime" : {
              "ts" : Timestamp(1551166263, 1),
              "t" : NumberLong(1)
          },
          "durableOpTime" : {
              "ts" : Timestamp(1551166263, 1),
              "t" : NumberLong(1)
          }
      },
      "members" : [
          {
              "_id" : 0,
              "name" : "172.16.60.205:27017",
              "health" : 1,
              "state" : 1,
              "stateStr" : "PRIMARY",
              "uptime" : 270,
              "optime" : {
                  "ts" : Timestamp(1551166263, 1),
                  "t" : NumberLong(1)
              },
              "optimeDate" : ISODate("2019-02-26T07:31:03Z"),
              "syncingTo" : "",
              "syncSourceHost" : "",
              "syncSourceId" : -1,
              "infoMessage" : "could not find member to sync from",
              "electionTime" : Timestamp(1551166202, 1),
              "electionDate" : ISODate("2019-02-26T07:30:02Z"),
              "configVersion" : 1,
              "self" : true,
              "lastHeartbeatMessage" : ""
          },
          {
              "_id" : 1,
              "name" : "172.16.60.206:27017",
              "health" : 1,
              "state" : 2,
              "stateStr" : "SECONDARY",
              "uptime" : 76,
              "optime" : {
                  "ts" : Timestamp(1551166263, 1),
                  "t" : NumberLong(1)
              },
              "optimeDurable" : {
                  "ts" : Timestamp(1551166263, 1),
                  "t" : NumberLong(1)
              },
              "optimeDate" : ISODate("2019-02-26T07:31:03Z"),
              "optimeDurableDate" : ISODate("2019-02-26T07:31:03Z"),
              "lastHeartbeat" : ISODate("2019-02-26T07:31:06.590Z"),
              "lastHeartbeatRecv" : ISODate("2019-02-26T07:31:06.852Z"),
              "pingMs" : NumberLong(0),
              "lastHeartbeatMessage" : "",
              "syncingTo" : "172.16.60.205:27017",
              "syncSourceHost" : "172.16.60.205:27017",
              "syncSourceId" : 0,
              "infoMessage" : "",
              "configVersion" : 1
          },
          {
              "_id" : 2,
              "name" : "172.16.60.207:27017",
              "health" : 1,
              "state" : 2,
              "stateStr" : "SECONDARY",
              "uptime" : 76,
              "optime" : {
                  "ts" : Timestamp(1551166263, 1),
                  "t" : NumberLong(1)
              },
              "optimeDurable" : {
                  "ts" : Timestamp(1551166263, 1),
                  "t" : NumberLong(1)
              },
              "optimeDate" : ISODate("2019-02-26T07:31:03Z"),
              "optimeDurableDate" : ISODate("2019-02-26T07:31:03Z"),
              "lastHeartbeat" : ISODate("2019-02-26T07:31:06.589Z"),
              "lastHeartbeatRecv" : ISODate("2019-02-26T07:31:06.958Z"),
              "pingMs" : NumberLong(0),
              "lastHeartbeatMessage" : "",
              "syncingTo" : "172.16.60.205:27017",
              "syncSourceHost" : "172.16.60.205:27017",
              "syncSourceId" : 0,
              "infoMessage" : "",
              "configVersion" : 1
          }
      ],
      "ok" : 1,
      "operationTime" : Timestamp(1551166263, 1),
      "$clusterTime" : {
          "clusterTime" : Timestamp(1551166263, 1),
          "signature" : {
              "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
              "keyId" : NumberLong(0)
          }
      }
  }

  The output above shows that the replica set is configured: 172.16.60.205 is the PRIMARY and 172.16.60.206/207 are SECONDARY members.
  health: 1 means the member is up, 0 means it is down
  state: 1 means the member is the primary, 2 means it is a secondary
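The document passed to rs.initiate() above is plain JSON, so it can be built and sanity-checked programmatically before touching the cluster. A minimal sketch in JavaScript (the mongo shell's scripting language); the helper names below are my own illustration, not a MongoDB API:

```javascript
// Build the same config document that rs.initiate() receives above.
// buildReplSetConfig/checkReplSetConfig are hypothetical helpers, not MongoDB APIs.
function buildReplSetConfig(setName, hosts) {
  return {
    _id: setName, // must match the -replSet startup parameter
    members: hosts.map(function (host, i) {
      return { _id: i, host: host }; // member _ids must be unique
    })
  };
}

// Basic sanity checks before running rs.initiate(config) in the shell.
function checkReplSetConfig(config) {
  var ids = config.members.map(function (m) { return m._id; });
  var unique = new Set(ids).size === ids.length;
  var odd = config.members.length % 2 === 1; // odd member counts avoid tied elections
  return unique && odd;
}

var config = buildReplSetConfig("repset", [
  "172.16.60.205:27017",
  "172.16.60.206:27017",
  "172.16.60.207:27017"
]);
console.log(checkReplSetConfig(config)); // true
console.log(JSON.stringify(config.members[0])); // {"_id":0,"host":"172.16.60.205:27017"}
```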

3. Testing Replica Set Data Replication

(By default MongoDB reads and writes on the primary; reads are not allowed on secondary members until explicitly enabled.)

1) Connect to a shell on the primary node 172.16.60.205

  [root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
  ................
  # create the test database
  repset:PRIMARY> use test;
  switched to db test
  # insert test data into the testdb collection
  repset:PRIMARY> db.testdb.insert({"test1":"testval1"})
  WriteResult({ "nInserted" : 1 })

2) Connect to mongodb on the secondaries 172.16.60.206 and 172.16.60.207 and check whether the data has been replicated.

  Checking here on the secondary 172.16.60.206:
  [root@mongodb-slave01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.206:27017
  ................
  repset:SECONDARY> use test;
  switched to db test
  repset:SECONDARY> show tables;
  2019-02-26T15:37:46.446+0800 E QUERY [thread1] Error: listCollections failed: {
      "operationTime" : Timestamp(1551166663, 1),
      "ok" : 0,
      "errmsg" : "not master and slaveOk=false",
      "code" : 13435,
      "codeName" : "NotMasterNoSlaveOk",
      "$clusterTime" : {
          "clusterTime" : Timestamp(1551166663, 1),
          "signature" : {
              "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
              "keyId" : NumberLong(0)
          }
      }
  } :
  _getErrorWithCode@src/mongo/shell/utils.js:25:13
  DB.prototype._getCollectionInfosCommand@src/mongo/shell/db.js:941:1
  DB.prototype.getCollectionInfos@src/mongo/shell/db.js:953:19
  DB.prototype.getCollectionNames@src/mongo/shell/db.js:964:16
  shellHelper.show@src/mongo/shell/utils.js:853:9
  shellHelper@src/mongo/shell/utils.js:750:15
  @(shellhelp2):1:1

  The command above fails! This is because MongoDB reads and writes on the primary by default; a secondary rejects reads until it is explicitly allowed to serve them:
  repset:SECONDARY> db.getMongo().setSlaveOk();
  repset:SECONDARY> db.testdb.find();
  { "_id" : ObjectId("5c74ec9267d8c3d06506449b"), "test1" : "testval1" }
  repset:SECONDARY> show tables;
  testdb
  The test data now shows up on the secondary, i.e. it has been replicated over from the primary.
  (Run the same steps on the other secondary, 172.16.60.207.)

4. Testing Replica Set Failover

First stop the primary 172.16.60.205 and then check the replica set status: after a round of voting, 172.16.60.206 is elected the new primary, and 172.16.60.207 switches to syncing its data from 172.16.60.206.
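The election behavior tested below follows from simple majority arithmetic: a primary can only be elected while a strict majority of voting members is reachable, which is why a three-node set tolerates exactly one failure. A quick illustrative sketch (the helper names are my own, not a MongoDB API):

```javascript
// Strict majority of voting members needed to elect a primary.
function majorityNeeded(totalVoters) {
  return Math.floor(totalVoters / 2) + 1;
}

// Can the set still elect a primary with this many voters reachable?
function canElectPrimary(totalVoters, votersUp) {
  return votersUp >= majorityNeeded(totalVoters);
}

console.log(majorityNeeded(3));     // 2
console.log(canElectPrimary(3, 2)); // true  -> one node down, failover succeeds
console.log(canElectPrimary(3, 1)); // false -> two nodes down, no primary can be elected
```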

1) Stop mongod on the original primary 172.16.60.205 to simulate a failure

  [root@mongodb-master01 ~]# ps -ef|grep mongodb|grep -v grep|awk '{print $2}'|xargs kill -9
  [root@mongodb-master01 ~]# lsof -i:27017
  [root@mongodb-master01 ~]#

2) Log in to mongodb on either of the two healthy secondaries, 172.16.60.206 or 172.16.60.207, and check the replica set status

  [root@mongodb-slave01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.206:27017
  .................
  repset:PRIMARY> rs.status();
  {
      "set" : "repset",
      "date" : ISODate("2019-02-26T08:06:02.996Z"),
      "myState" : 1,
      "term" : NumberLong(2),
      "syncingTo" : "",
      "syncSourceHost" : "",
      "syncSourceId" : -1,
      "heartbeatIntervalMillis" : NumberLong(2000),
      "optimes" : {
          "lastCommittedOpTime" : {
              "ts" : Timestamp(1551168359, 1),
              "t" : NumberLong(2)
          },
          "readConcernMajorityOpTime" : {
              "ts" : Timestamp(1551168359, 1),
              "t" : NumberLong(2)
          },
          "appliedOpTime" : {
              "ts" : Timestamp(1551168359, 1),
              "t" : NumberLong(2)
          },
          "durableOpTime" : {
              "ts" : Timestamp(1551168359, 1),
              "t" : NumberLong(2)
          }
      },
      "members" : [
          {
              "_id" : 0,
              "name" : "172.16.60.205:27017",
              "health" : 0,
              "state" : 8,
              "stateStr" : "(not reachable/healthy)",
              "uptime" : 0,
              "optime" : {
                  "ts" : Timestamp(0, 0),
                  "t" : NumberLong(-1)
              },
              "optimeDurable" : {
                  "ts" : Timestamp(0, 0),
                  "t" : NumberLong(-1)
              },
              "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
              "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
              "lastHeartbeat" : ISODate("2019-02-26T08:06:02.917Z"),
              "lastHeartbeatRecv" : ISODate("2019-02-26T08:03:37.492Z"),
              "pingMs" : NumberLong(0),
              "lastHeartbeatMessage" : "Connection refused",
              "syncingTo" : "",
              "syncSourceHost" : "",
              "syncSourceId" : -1,
              "infoMessage" : "",
              "configVersion" : -1
          },
          {
              "_id" : 1,
              "name" : "172.16.60.206:27017",
              "health" : 1,
              "state" : 1,
              "stateStr" : "PRIMARY",
              "uptime" : 2246,
              "optime" : {
                  "ts" : Timestamp(1551168359, 1),
                  "t" : NumberLong(2)
              },
              "optimeDate" : ISODate("2019-02-26T08:05:59Z"),
              "syncingTo" : "",
              "syncSourceHost" : "",
              "syncSourceId" : -1,
              "infoMessage" : "",
              "electionTime" : Timestamp(1551168228, 1),
              "electionDate" : ISODate("2019-02-26T08:03:48Z"),
              "configVersion" : 1,
              "self" : true,
              "lastHeartbeatMessage" : ""
          },
          {
              "_id" : 2,
              "name" : "172.16.60.207:27017",
              "health" : 1,
              "state" : 2,
              "stateStr" : "SECONDARY",
              "uptime" : 2169,
              "optime" : {
                  "ts" : Timestamp(1551168359, 1),
                  "t" : NumberLong(2)
              },
              "optimeDurable" : {
                  "ts" : Timestamp(1551168359, 1),
                  "t" : NumberLong(2)
              },
              "optimeDate" : ISODate("2019-02-26T08:05:59Z"),
              "optimeDurableDate" : ISODate("2019-02-26T08:05:59Z"),
              "lastHeartbeat" : ISODate("2019-02-26T08:06:02.861Z"),
              "lastHeartbeatRecv" : ISODate("2019-02-26T08:06:02.991Z"),
              "pingMs" : NumberLong(0),
              "lastHeartbeatMessage" : "",
              "syncingTo" : "172.16.60.206:27017",
              "syncSourceHost" : "172.16.60.206:27017",
              "syncSourceId" : 1,
              "infoMessage" : "",
              "configVersion" : 1
          }
      ],
      "ok" : 1,
      "operationTime" : Timestamp(1551168359, 1),
      "$clusterTime" : {
          "clusterTime" : Timestamp(1551168359, 1),
          "signature" : {
              "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
              "keyId" : NumberLong(0)
          }
      }
  }

  After the original primary 172.16.60.205 went down, an election promoted the former secondary 172.16.60.206 to be the new primary.

3) Create test data on the new primary 172.16.60.206

  repset:PRIMARY> use kevin;
  switched to db kevin
  repset:PRIMARY> db.kevin.insert({"shibo":"hahaha"})
  WriteResult({ "nInserted" : 1 })

4) Log in to mongodb on the other secondary 172.16.60.207 and check

  [root@mongodb-slave02 ~]# /usr/local/mongodb/bin/mongo 172.16.60.207:27017
  ................
  repset:SECONDARY> use kevin;
  switched to db kevin
  repset:SECONDARY> db.getMongo().setSlaveOk();
  repset:SECONDARY> show tables;
  kevin
  repset:SECONDARY> db.kevin.find();
  { "_id" : ObjectId("5c74f42bb0b339ed6eb68e9c"), "shibo" : "hahaha" }

  The secondary 172.16.60.207 replicates data from the new primary 172.16.60.206.

5) Restart mongod on the original primary 172.16.60.205

  [root@mongodb-master01 ~]# nohup /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017 &
  [root@mongodb-master01 ~]# ps -ef|grep mongodb
  root 9162 6977 4 16:14 pts/1 00:00:01 /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017
  root 9244 6977 0 16:14 pts/1 00:00:00 grep mongodb

  Log in again to mongodb on any of the three nodes and check the replica set status:
  [root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
  ....................
  repset:SECONDARY> rs.status();
  {
      "set" : "repset",
      "date" : ISODate("2019-02-26T08:16:11.741Z"),
      "myState" : 2,
      "term" : NumberLong(2),
      "syncingTo" : "172.16.60.206:27017",
      "syncSourceHost" : "172.16.60.206:27017",
      "syncSourceId" : 1,
      "heartbeatIntervalMillis" : NumberLong(2000),
      "optimes" : {
          "lastCommittedOpTime" : {
              "ts" : Timestamp(1551168969, 1),
              "t" : NumberLong(2)
          },
          "readConcernMajorityOpTime" : {
              "ts" : Timestamp(1551168969, 1),
              "t" : NumberLong(2)
          },
          "appliedOpTime" : {
              "ts" : Timestamp(1551168969, 1),
              "t" : NumberLong(2)
          },
          "durableOpTime" : {
              "ts" : Timestamp(1551168969, 1),
              "t" : NumberLong(2)
          }
      },
      "members" : [
          {
              "_id" : 0,
              "name" : "172.16.60.205:27017",
              "health" : 1,
              "state" : 2,
              "stateStr" : "SECONDARY",
              "uptime" : 129,
              "optime" : {
                  "ts" : Timestamp(1551168969, 1),
                  "t" : NumberLong(2)
              },
              "optimeDate" : ISODate("2019-02-26T08:16:09Z"),
              "syncingTo" : "172.16.60.206:27017",
              "syncSourceHost" : "172.16.60.206:27017",
              "syncSourceId" : 1,
              "infoMessage" : "",
              "configVersion" : 1,
              "self" : true,
              "lastHeartbeatMessage" : ""
          },
          {
              "_id" : 1,
              "name" : "172.16.60.206:27017",
              "health" : 1,
              "state" : 1,
              "stateStr" : "PRIMARY",
              "uptime" : 127,
              "optime" : {
                  "ts" : Timestamp(1551168969, 1),
                  "t" : NumberLong(2)
              },
              "optimeDurable" : {
                  "ts" : Timestamp(1551168969, 1),
                  "t" : NumberLong(2)
              },
              "optimeDate" : ISODate("2019-02-26T08:16:09Z"),
              "optimeDurableDate" : ISODate("2019-02-26T08:16:09Z"),
              "lastHeartbeat" : ISODate("2019-02-26T08:16:10.990Z"),
              "lastHeartbeatRecv" : ISODate("2019-02-26T08:16:11.518Z"),
              "pingMs" : NumberLong(0),
              "lastHeartbeatMessage" : "",
              "syncingTo" : "",
              "syncSourceHost" : "",
              "syncSourceId" : -1,
              "infoMessage" : "",
              "electionTime" : Timestamp(1551168228, 1),
              "electionDate" : ISODate("2019-02-26T08:03:48Z"),
              "configVersion" : 1
          },
          {
              "_id" : 2,
              "name" : "172.16.60.207:27017",
              "health" : 1,
              "state" : 2,
              "stateStr" : "SECONDARY",
              "uptime" : 127,
              "optime" : {
                  "ts" : Timestamp(1551168969, 1),
                  "t" : NumberLong(2)
              },
              "optimeDurable" : {
                  "ts" : Timestamp(1551168969, 1),
                  "t" : NumberLong(2)
              },
              "optimeDate" : ISODate("2019-02-26T08:16:09Z"),
              "optimeDurableDate" : ISODate("2019-02-26T08:16:09Z"),
              "lastHeartbeat" : ISODate("2019-02-26T08:16:10.990Z"),
              "lastHeartbeatRecv" : ISODate("2019-02-26T08:16:11.655Z"),
              "pingMs" : NumberLong(0),
              "lastHeartbeatMessage" : "",
              "syncingTo" : "172.16.60.206:27017",
              "syncSourceHost" : "172.16.60.206:27017",
              "syncSourceId" : 1,
              "infoMessage" : "",
              "configVersion" : 1
          }
      ],
      "ok" : 1,
      "operationTime" : Timestamp(1551168969, 1),
      "$clusterTime" : {
          "clusterTime" : Timestamp(1551168969, 1),
          "signature" : {
              "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
              "keyId" : NumberLong(0)
          }
      }
  }

  After recovering from the failure, the original primary 172.16.60.205 came back as a secondary of the new primary 172.16.60.206.
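In the rs.status() output above, member health is reported through numeric state codes. A small lookup sketch covering just the codes seen in this walkthrough (meanings as reported in the stateStr fields above; the helper name is my own):

```javascript
// Member state codes observed in the rs.status() output above.
var REPL_STATES = {
  1: "PRIMARY",
  2: "SECONDARY",
  8: "DOWN (not reachable/healthy)"
};

// Summarize one member entry from rs.status().members.
function describeMember(member) {
  var state = REPL_STATES[member.state] || "OTHER";
  return member.name + " health=" + member.health + " " + state;
}

console.log(describeMember({ name: "172.16.60.205:27017", health: 0, state: 8 }));
// 172.16.60.205:27017 health=0 DOWN (not reachable/healthy)
console.log(describeMember({ name: "172.16.60.206:27017", health: 1, state: 1 }));
// 172.16.60.206:27017 health=1 PRIMARY
```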

5. MongoDB Read/Write Splitting

So far, the MongoDB replica set handles failover well. What about excessive read/write load on the primary? The common solution is read/write splitting.

  Under typical workloads there are far more reads than writes, so in this replica set the single primary handles writes while the two secondaries handle reads.

  1) To set up read/write splitting, first run setSlaveOk on the SECONDARY members.
  2) Then route reads to the secondaries in the application, as in the following code:
  import java.util.ArrayList;
  import java.util.List;

  import com.mongodb.BasicDBObject;
  import com.mongodb.DB;
  import com.mongodb.DBCollection;
  import com.mongodb.DBObject;
  import com.mongodb.MongoClient;
  import com.mongodb.ReadPreference;
  import com.mongodb.ServerAddress;

  public class TestMongoDBReplSetReadSplit {
      public static void main(String[] args) {
          try {
              // Seed list: all three replica set members
              List<ServerAddress> addresses = new ArrayList<ServerAddress>();
              addresses.add(new ServerAddress("172.16.60.205", 27017));
              addresses.add(new ServerAddress("172.16.60.206", 27017));
              addresses.add(new ServerAddress("172.16.60.207", 27017));
              MongoClient client = new MongoClient(addresses);
              DB db = client.getDB("test");
              DBCollection coll = db.getCollection("testdb");
              // Query for the document inserted in section 3
              BasicDBObject query = new BasicDBObject();
              query.append("test1", "testval1");
              // Route the read to a secondary member
              ReadPreference preference = ReadPreference.secondary();
              DBObject dbObject = coll.findOne(query, null, preference);
              System.out.println(dbObject);
          } catch (Exception e) {
              e.printStackTrace();
          }
      }
  }

Besides secondary, there are five read preference modes in total: primary, primaryPreferred, secondary, secondaryPreferred, and nearest.
primary: the default; all reads go to the primary only.
primaryPreferred: reads go to the primary, falling back to a secondary only when the primary is unavailable.
secondary: reads go only to secondaries; the trade-off is that a secondary's data may be staler than the primary's.
secondaryPreferred: reads prefer secondaries, falling back to the primary when no secondary is available.
nearest: reads go to whichever member, primary or secondary, has the lowest network latency.
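How the five modes route a read can be illustrated with a toy selector. This is purely illustrative: real drivers implement the routing internally (including latency measurement for nearest), and the function below is not a driver API:

```javascript
// Toy router: which members are eligible for a read under each preference mode?
function eligibleForRead(mode, members) {
  var primaries = members.filter(function (m) { return m.state === "PRIMARY"; });
  var secondaries = members.filter(function (m) { return m.state === "SECONDARY"; });
  switch (mode) {
    case "primary":            return primaries;
    case "primaryPreferred":   return primaries.length ? primaries : secondaries;
    case "secondary":          return secondaries;
    case "secondaryPreferred": return secondaries.length ? secondaries : primaries;
    case "nearest":            return members.slice(); // all members; lowest latency wins
    default: throw new Error("unknown read preference: " + mode);
  }
}

// The post-failover topology from section 4.
var members = [
  { host: "172.16.60.205:27017", state: "SECONDARY" },
  { host: "172.16.60.206:27017", state: "PRIMARY" },
  { host: "172.16.60.207:27017", state: "SECONDARY" }
];
console.log(eligibleForRead("secondary", members).length); // 2
console.log(eligibleForRead("primary", members)[0].host);  // 172.16.60.206:27017
```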

With read/write splitting in place, traffic can be split and load reduced, which answers the earlier question about excessive read/write pressure on the primary. However, as the number of secondaries grows, the primary's replication load grows too. Is there a way around that? MongoDB's answer is the arbiter node: in a MongoDB replica set, an arbiter stores no data and only votes during failover elections, so it adds no data replication load. Beyond primary, secondary, and arbiter members, there are also Secondary-Only, Hidden, Delayed, and Non-Voting members:
Secondary-Only: can never become primary, only remain a secondary; used to keep low-performance nodes from being elected primary.
Hidden: invisible to clients (cannot be referenced by client connections) and cannot become primary, but can vote; generally used for backups.
Delayed: syncs from the primary with a configured delay; used mainly for backups, since with real-time replication an accidental delete is immediately replicated to the secondaries and cannot be undone.
Non-Voting: a secondary with no vote in elections; a pure backup-data node.
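These special member types map to fields in the replica set member configuration. A hedged sketch of the corresponding member documents (the field names priority, hidden, slaveDelay, and votes are the MongoDB 3.6-era config fields; the host names are placeholders, and you should verify the fields against your server version before use):

```javascript
// Member documents illustrating the special member types (MongoDB 3.6-era fields).
// Host names below are placeholders, not part of the deployment in this article.
var secondaryOnly = { _id: 3, host: "node4:27017", priority: 0 };               // never primary
var hidden        = { _id: 4, host: "node5:27017", priority: 0, hidden: true }; // invisible to clients
var delayed       = { _id: 5, host: "node6:27017", priority: 0, hidden: true,
                      slaveDelay: 3600 };                                       // lags primary by 1 hour
var nonVoting     = { _id: 6, host: "node7:27017", priority: 0, votes: 0 };     // no election vote

// A delayed member must also have priority 0 (and is usually hidden).
console.log(delayed.slaveDelay); // 3600
console.log(nonVoting.votes);    // 0
```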
