One master, one slave
How master-slave replication works:
The master records every write operation; these operations are stored in the oplog.$main collection of the local database.
The slave connects to the master at intervals, requests the master's operation log, and replays the same operations on its own copy of the data to stay in sync.
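As a small illustration (a hypothetical shell session, not taken from the setup below), the oplog can be inspected directly on the master. In master-slave mode the collection name contains a $, so it has to be opened with getCollection:
# mongo
> use local
> db.getCollection('oplog.$main').find().sort({$natural: -1}).limit(3)
// each entry records ts (timestamp), op (operation type: i=insert, u=update, d=delete), ns (namespace)
// and o (the document or change) - exactly what the slave replays to stay in sync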
Master IP: 192.168.26.134:27017
Slave IP: 192.168.26.136:27017
Configuration files:
Master
#vim /mongodb/mongodb.cnf
dbpath=/mongodb/data/db
logpath=/mongodb/data/log/mongodb.log
logappend=true
port=27017
journal=true # write-ahead journaling, so data can be recovered after a crash
#pidfile=/mongodb/data/run/mongodb.pid # this line kept the service from starting (the option is actually pidfilepath, as used in the replica set configs below)
fork=true # run mongod as a background daemon
master=true
oplogSize=2048 # size of the capped collection that holds the write log, usually about 5% of free disk space; when it fills up, old entries are overwritten, and a slave that falls too far behind can end up out of sync
#mongod -f /mongodb/mongodb.cnf & # start with the config file, then check that the process is running
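Two ordinary Linux checks for that (assuming mongod uses the port from the config above):
# ps -ef | grep mongod # the mongod process should be listed
# netstat -ntlp | grep 27017 # and it should be listening on port 27017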
Slave
#vim /mongodb/mongodb.cnf
dbpath=/mongodb/data/db
logpath=/mongodb/data/log/mongodb.log
logappend=true
port=27017
journal=true
fork=true
slave=true
source=192.168.26.134:27017
# mongod -f /mongodb/mongodb.cnf # start with the config file
Master:
# mongo
> db.printReplicationInfo() // output like the following means replication is set up correctly
configured oplog size: 2048MB
log length start to end: 24secs (0.01hrs)
oplog first event time: Mon Nov 27 2017 22:37:41 GMT-0500 (EST)
oplog last event time: Mon Nov 27 2017 22:38:05 GMT-0500 (EST)
now: Mon Nov 27 2017 22:38:09 GMT-0500 (EST)
>db.testdb.insert({"name":"test"})
> show dbs
admin 0.000GB
local 0.000GB
test 0.000GB
> exit
Slave
# mongo
> rs.slaveOk() // without this, reads fail with: "errmsg" : "not master and slaveOk=false"
> db.testdb.find()
{ "_id" :ObjectId("59cb574c7b724f89e3a1e013"), "name" :"test" }
> exit // rs.slaveOk() only applies to the current session; it no longer takes effect after exiting, because a slave/secondary refuses reads and writes by default
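To see how far the slave lags behind the master, the shell helper db.printSlaveReplicationInfo() can also be run on the slave (a small sketch; the exact output format depends on the MongoDB version):
# mongo
> db.printSlaveReplicationInfo() // prints the sync source (192.168.26.134:27017) and how many seconds the slave is behind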
Question: does this avoid a single point of failure?
No. The slave only provides a backup copy; it will not take over if the master fails.
Cluster replication (replica set)
A two-node cluster, intended to avoid the single point of failure:
when one node goes down, the other takes its place.
The two servers' IPs: 192.168.26.136 and 192.168.26.134
Configuration files:
#cat /mongodb/mongodb.cnf
dbpath=/mongodb/data/db
logpath=/mongodb/data/log/mongodb.log
logappend=true
port=27017
httpinterface=true
fork=true
rest=true
pidfilepath=/mongodb/data/run/mongo.pid # PID file, makes it easier to stop mongodb
replSet=mydbCenter # key setting: the name of the replica set
oplogSize=512
Adding the following two lines kept mongod from starting (the log check sketched after this config file shows the exact reason):
directoryperdb=true # store each database's files in a separate directory named after the database
noprealloc=true # do not preallocate data files (the original note had the misspelling noperalloc, which by itself is enough to stop mongod from starting)
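When mongod refuses to start like this, the reason is written to the log file named in the config, so the quickest check is a sketch like:
# tail -n 20 /mongodb/data/log/mongodb.log # look for the startup error that names the rejected option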
#cat /mongodb/mongodb.cnf
dbpath=/mongodb/data/db
logpath=/mongodb/data/log/mongodb.log
logappend=true
port=27017
httpinterface=true
pidfilepath=/mongodb/data/run/mongodb.pid
fork=true
rest=true
replSet=mydbCenter # key setting
oplogSize=512
#echo never > /………………
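The truncated echo line above is presumably the usual step of disabling transparent huge pages before starting mongod; on most Linux systems that would be (an assumption, since the original path is cut off):
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo never > /sys/kernel/mm/transparent_hugepage/defrag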
Start mongod with its config file on each of the two servers:
#mongod -f /mongodb/mongodb.cnf &
Once both are up, connect to either node with the mongo shell.
> rs.status() // check the status
Configure the members:
> cfg={_id:'mydbCenter',members:[
... {_id:0,host:'192.168.26.136:27017'},
... {_id:1,host:'192.168.26.134:27017'}]
... }
>rs.initiate(cfg)
Error:
"errmsg" : "replSetInitiate quorum check failed because not all proposed set members responded affirmatively: 192.168.221.135:27017 failed with No route to host"
Fix: disabled the firewall, but then rs.initiate(cfg) failed with "has data already, cannot initiate set".
Fix for that: cleared out the database files on the other host that reported the error (a rough sketch of both steps follows).
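A rough sketch of those two fixes, assuming CentOS 7 with firewalld and the dbpath from the config above (clearing the dbpath destroys all data on that node):
# systemctl stop firewalld && systemctl disable firewalld # open the network path between the nodes
# mongod --shutdown --dbpath /mongodb/data/db # cleanly stop mongod on the node with leftover data
# rm -rf /mongodb/data/db/* # wipe the dbpath so the node can join as an empty member
# mongod -f /mongodb/mongodb.cnf # start it again, then re-run rs.initiate(cfg) on the other node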
> rs.status() // the member information is now visible
// at this point this node's prompt becomes:
mydbCenter:PRIMARY>
and the other node's prompt becomes:
mydbCenter:SECONDARY> # no read/write access by default, much like the slave above. If the primary's service is stopped, the secondary stays a secondary; only after the primary's service is restarted does the secondary become primary and the old primary become a secondary. The reason is that in a two-member set a single surviving node cannot reach the majority of votes needed to win an election (see the sketch below).
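A quick way to see the member count from the shell (rs.conf() is the standard helper for reading the current replica set configuration):
> rs.conf().members.length // returns 2: a majority needs 2 votes, so one surviving node can never elect itself primary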
Adding an arbiter node solves the single point of failure.
Arbiter: 192.168.26.140:27017
Its configuration file is the same as above.
Start it.
On the primary:
> rs.addArb("192.168.26.140:27017")
>rs.status();
"stateStr": "(not reachable/healthy)" /出这样的错
解决:查看primary日志
# tail -n20 /mongodb/data/log/mongodb.log
Failed to connect to 192.168.26.140:27017 - HostUnreachable: No route to host
or: Failed to connect to 192.168.26.140:27017 - HostUnreachable: Connection refused
Cause: the firewall on the arbiter had not been disabled (same firewall fix as above).
Check the status from the arbiter:
>rs.status()
"stateStr" : "ARBITER" now appears // OK
Explanation of the full output:
mydbCenter:PRIMARY> rs.status()
{
"set": "mydbCenter",
"date": ISODate("2017-09-28T06:47:24.526Z"),
"myState": 1,
"term": NumberLong(8),
"heartbeatIntervalMillis": NumberLong(2000),
"optimes": {
"lastCommittedOpTime": {
"ts": Timestamp(1506581235, 1),
"t": NumberLong(8)
},
"appliedOpTime": {
"ts": Timestamp(1506581235, 1),
"t": NumberLong(8)
},
"durableOpTime": {
"ts": Timestamp(1506581235, 1),
"t": NumberLong(8)
}
},
"members": [
{
"_id": 0,
"name": "192.168.26.136:27017",
"health": 1,
"state": 2,
"stateStr": "SECONDARY",
"uptime": 2949,
"optime": {
"ts" :Timestamp(1506581235, 1),
"t": NumberLong(8)
},
"optimeDurable": {
"ts": Timestamp(1506581235, 1),
"t": NumberLong(8)
},
"optimeDate": ISODate("2017-09-28T06:47:15Z"),
"optimeDurableDate": ISODate("2017-09-28T06:47:15Z"),
"lastHeartbeat": ISODate("2017-09-28T06:47:23.621Z"),
"lastHeartbeatRecv": ISODate("2017-09-28T06:47:23.651Z"),
"pingMs": NumberLong(1),
"syncingTo": "192.168.26.134:27017",
"configVersion": 16
},
{
"_id": 1,
"name": "192.168.26.134:27017",
"health": 1,
"state": 1,
"stateStr": "PRIMARY",
"uptime": 10631,
"optime": {
"ts": Timestamp(1506581235, 1),
"t": NumberLong(8)
},
"optimeDate": ISODate("2017-09-28T06:47:15Z"),
"electionTime": Timestamp(1506576930, 1),
"electionDate": ISODate("2017-09-28T05:35:30Z"),
"configVersion": 16,
"self": true
},
{
"_id": 2,
"name": "192.168.26.140:27017",
"health": 1,
"state": 7,
"stateStr": "ARBITER",
"uptime": 60,
"lastHeartbeat": ISODate("2017-09-28T06:47:23.652Z"),
"lastHeartbeatRecv": ISODate("2017-09-28T06:47:22.944Z"),
"pingMs": NumberLong(1),
"syncingTo": "192.168.26.134:27017",
"configVersion": 16
}
],
"ok": 1
}
mydbCenter:PRIMARY>
Test: kill the primary's mongod process.
After a short while, the secondary is automatically promoted to the new primary.
Restart the killed node: the original primary rejoins as a secondary (a small verification sketch follows).
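A small sketch of that check, assuming the original primary is 192.168.26.134 and the shell is opened on the surviving node 192.168.26.136:
# pkill mongod # run on 192.168.26.134 to simulate the primary dying
# mongo # on 192.168.26.136
> rs.status().members.forEach(function (m) { print(m.name, m.stateStr); })
// after the election, 192.168.26.136:27017 should report PRIMARY and the killed node "(not reachable/healthy)"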