MongoDB Replica Set Setup with Authentication (Docker Compose + Native Deployment)

What Is a Replica Set

A MongoDB replica set is a data-redundancy and failover mechanism: it maintains one or more copies of the same data and provides automatic failover and recovery. A replica set is a group of MongoDB instances (called members), one of which is elected primary while the others serve as secondaries or arbiters.

Replica set roles:

  • Primary: handles client reads and writes and applies data changes to its local data files. It also records every change in a special log called the oplog (operations log), which is replicated asynchronously to the secondaries.
  • Secondary: replicates data from the primary and stores a local copy. By default a secondary does not serve client reads, but it can be configured to do so, which spreads read load and improves performance. If the primary fails, a secondary can be elected primary.
  • Arbiter: only votes in failover elections and stores no data. It is typically used to keep an odd number of voting members (so a majority can always be reached) while saving storage cost.

How a replica set works:

  • Data replication: the primary's oplog changes are replicated asynchronously to the secondaries.
  • Failover: if the primary becomes unavailable, the remaining members elect a new primary, based on heartbeat detection and majority voting.
  • Read routing: clients can send reads to the primary or, if so configured, to secondaries (sketched just below).
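Read routing is controlled per connection. A minimal mongosh sketch (the collection name test is only an illustration):

// Prefer secondaries for reads, falling back to the primary if none is reachable.
db.getMongo().setReadPref("secondaryPreferred")
db.test.find()   // this query may now be served by a secondary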

A replica set needs at least 3 members and can be configured as one primary plus two secondaries.
If the primary goes down, the two secondaries hold a new election and one of them is promoted to primary.
In some cases (for example, you already have one primary and one secondary, but cost constraints rule out adding another secondary), you can add an instance to the replica set configured as an arbiter. An arbiter participates in elections but holds no data (i.e. it provides no data redundancy).
During an election the primary may be demoted to a secondary and a secondary may be promoted to primary, but an arbiter always remains an arbiter.
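If you ever want to trigger such a demotion by hand, a minimal mongosh sketch (run on the current primary; 60 is just an example window):

// Step down voluntarily; this member will not seek re-election for 60 seconds,
// and the remaining voting members elect a new primary.
rs.stepDown(60)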

How Data Synchronization Works

The primary writes the data; a secondary reads the primary's oplog to learn what to replicate, copies the data, and then writes the same entries into its own oplog. If an operation fails, the secondary stops replicating from its current sync source. If a secondary goes down for some reason, then after restarting it automatically resumes syncing from the last operation in its oplog; once caught up, it records the entries in its own oplog. Because the data is copied first and the oplog entry written afterwards, the same operation may occasionally be synced twice. This is safe because oplog operations are idempotent: applying the same oplog entry multiple times has exactly the same effect as applying it once.

After the primary completes a data operation, each secondary performs a sequence of steps to keep the data in sync (see the mongosh sketch after this list):

  1. Check the oplog.rs collection in its own local database and find the latest timestamp.
  2. Query the primary's local.oplog.rs for entries newer than that timestamp.
  3. Insert those entries into its own oplog.rs and apply the operations.
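A quick mongosh sketch of the collection involved (run on any data-bearing member):

// Newest oplog entry on this member; sorting by $natural: -1 walks the
// capped collection backwards from the most recent write.
db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1)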

A secondary pulls the log from the primary and replays the recorded operations locally, strictly in order (the oplog does not record queries). This log is the oplog.rs collection in the local database. On 64-bit machines it defaults to a fairly large size, about 5% of free disk space; you can set the size at startup with --oplogSize 1000 (in MB).
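To inspect or change the oplog size on a running member, a sketch (replSetResizeOplog has existed since MongoDB 3.6; 2000 is only an example value):

rs.printReplicationInfo()                               // configured oplog size and time window
db.adminCommand({ replSetResizeOplog: 1, size: 2000 })  // resize to 2000 MB at runtime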

Note: in a replica set, if all the secondaries go down and only the primary is left, the primary can no longer see a majority of the voting members, so it steps down to secondary and stops serving writes.
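A sketch of what clients then observe (the collection name test and the 5-second wtimeout are illustrative):

db.test.insertOne({ x: 1 }, { writeConcern: { w: "majority", wtimeout: 5000 } })
// during the transition this times out waiting for majority acknowledgement;
// once the primary has stepped down, writes are rejected with NotWritablePrimary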


Deployment

Docker Compose Deployment

Create the directories; do this on every host:

mkdir -p /docker/mongo/{data,logs}

Write the docker-compose.yml file; it lives in the /docker/mongo directory:

services:
  mongo:
    image: mongo:7.0
    container_name: mongo
    restart: always
    volumes:
      - /docker/mongo/data:/data/db
      - /docker/mongo/logs:/var/log/mongodb
      - /docker/mongo/mongodb-keyfile.key:/etc/mongodb-keyfile.key
    ports:
      - "27017:27017"
    environment:
      # Note: the official mongo image documents only MONGO_INITDB_ROOT_USERNAME,
      # MONGO_INITDB_ROOT_PASSWORD and MONGO_INITDB_DATABASE, and its init scripts
      # are skipped when `command` is overridden as below, so the root user may
      # still need to be created manually (see "Check the cluster status").
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
      MONGO_INITDB_REPLICA_SET_NAME: rs0
      MONGO_INITDB_DATABASE: admin
    command:
      - /bin/sh
      - -c
      - |
        # mongod refuses to start if the keyfile is group/world-readable,
        # so fix permissions and ownership (uid 999 = mongodb in the image)
        chmod 400 /etc/mongodb-keyfile.key
        chown 999:999 /etc/mongodb-keyfile.key
        mongod --replSet rs0 --bind_ip_all --auth --keyFile /etc/mongodb-keyfile.key
    networks:
      - mongoNet

networks:
  mongoNet:
    driver: bridge

Generate the keyfile on any one of the hosts:

openssl rand -base64 756 > /docker/mongo/mongodb-keyfile.key

Distribute it to the other hosts:

scp /docker/mongo/mongodb-keyfile.key slave@192.168.142.156:/home/slave
scp /docker/mongo/mongodb-keyfile.key slave02@192.168.142.155:/home/slave02
scp /docker/mongo/mongodb-keyfile.key slave03@192.168.142.158:/home/slave03

On each of the other hosts, move it into place:

mv /home/<user>/mongodb-keyfile.key /docker/mongo

Change the file's ownership:

chown root:root /docker/mongo/mongodb-keyfile.key

Start the container:

docker compose up -d

Of course, this command must be run from the /docker/mongo directory, where docker-compose.yml lives.
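To confirm the container came up cleanly, you can tail its logs (the service name mongo is the one defined in the compose file):

docker compose logs -f mongo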
Check the cluster status

Enter the container. If docker exec -it mongo mongosh -u root -p 123456 logs you in directly, you can skip the root-creation steps below.

Log in to mongo without authentication and create root (the localhost exception allows creating the first user even with --auth enabled):

docker exec -it mongo mongosh
use admin
rs.initiate()
db.createUser({user:"root",pwd:"123456",roles:[{role:"root",db:"admin"}]})
db.auth("root","123456")

Add the other members:

rs.add({host:"192.168.142.156:27017",priority: 2})  // 添加第一个从节点
rs.add({host:"192.168.142.155:27017",priority: 3})  // 添加第二个从节点
rs.add({host:"192.168.142.158:27017",arbiterOnly: true})  // 添加第三个从节点 [仲裁节点]

Check the cluster status:

rs.status()

=========================================================================================================

The following is the alternative way to set up the cluster, for when root can already log in directly.

Run this on any one host; note that it is the member priorities below, not the host you run it on, that decide which node ends up as the primary:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "192.168.142.155:27017", priority: 3 },
    { _id: 1, host: "192.168.142.156:27017", priority: 2 },
    { _id: 2, host: "192.168.142.157:27017", priority: 1 },
    { _id: 3, host: "192.168.142.158:27017", arbiterOnly: true }
  ]
})

arbiterOnly: true marks the arbiter.

Check the cluster status:

rs.status()
{
  set: 'rs0',
  date: ISODate('2024-10-14T09:02:08.996Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 3,
  writeMajorityCount: 3,
  votingMembersCount: 4,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1728896521, i: 6 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-10-14T09:02:01.369Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1728896521, i: 6 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1728896521, i: 6 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1728896521, i: 6 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-10-14T09:02:01.369Z'),
    lastDurableWallTime: ISODate('2024-10-14T09:02:01.369Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1728896510, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-10-14T09:02:00.824Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1728896510, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1728896510, i: 1 }), t: Long('-1') },
    numVotesNeeded: 3,
    priorityAtElection: 3,
    electionTimeoutMillis: Long('10000'),
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-10-14T09:02:00.862Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-10-14T09:02:01.348Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.142.155:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 18,
      optime: { ts: Timestamp({ t: 1728896521, i: 6 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1728896521, i: 6 }), t: Long('1') },
      optimeDate: ISODate('2024-10-14T09:02:01.000Z'),
      optimeDurableDate: ISODate('2024-10-14T09:02:01.000Z'),
      lastAppliedWallTime: ISODate('2024-10-14T09:02:01.369Z'),
      lastDurableWallTime: ISODate('2024-10-14T09:02:01.369Z'),
      lastHeartbeat: ISODate('2024-10-14T09:02:08.835Z'),
      lastHeartbeatRecv: ISODate('2024-10-14T09:02:07.834Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.142.157:27017',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 1,
      name: '192.168.142.156:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 18,
      optime: { ts: Timestamp({ t: 1728896521, i: 6 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1728896521, i: 6 }), t: Long('1') },
      optimeDate: ISODate('2024-10-14T09:02:01.000Z'),
      optimeDurableDate: ISODate('2024-10-14T09:02:01.000Z'),
      lastAppliedWallTime: ISODate('2024-10-14T09:02:01.369Z'),
      lastDurableWallTime: ISODate('2024-10-14T09:02:01.369Z'),
      lastHeartbeat: ISODate('2024-10-14T09:02:08.835Z'),
      lastHeartbeatRecv: ISODate('2024-10-14T09:02:07.834Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.142.157:27017',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.142.157:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 399,
      optime: { ts: Timestamp({ t: 1728896521, i: 6 }), t: Long('1') },
      optimeDate: ISODate('2024-10-14T09:02:01.000Z'),
      lastAppliedWallTime: ISODate('2024-10-14T09:02:01.369Z'),
      lastDurableWallTime: ISODate('2024-10-14T09:02:01.369Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1728896520, i: 1 }),
      electionDate: ISODate('2024-10-14T09:02:00.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 3,
      name: '192.168.142.158:27017',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 18,
      lastHeartbeat: ISODate('2024-10-14T09:02:08.836Z'),
      lastHeartbeatRecv: ISODate('2024-10-14T09:02:08.836Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1728896521, i: 6 }),
    signature: {
      hash: Binary.createFromBase64('nE33lqZTBHPPFvNa74TCaIKfazk=', 0),
      keyId: Long('7425554011568209926')
    }
  },
  operationTime: Timestamp({ t: 1728896521, i: 6 })
}

Pay particular attention to the members array:

_id: 0, name: '192.168.142.155:27017', health: 1, state: 2, stateStr: 'SECONDARY'
_id: 1, name: '192.168.142.156:27017', health: 1, state: 2, stateStr: 'SECONDARY'
_id: 2, name: '192.168.142.157:27017', health: 1, state: 1, stateStr: 'PRIMARY'
_id: 3, name: '192.168.142.158:27017', health: 1, state: 7, stateStr: 'ARBITER'

If these fields match, our replica set has finally been built successfully.

Of the two ways shown above to initialize the replica set, pick either one.

Also note: rs.initiate() is executed only once, on a single node. Until a node has received the replica set configuration (i.e. before initiation, or before it has been added to the set), running rs.status() on it fails with:

MongoServerError[NotYetInitialized]: no replset config has been received

Native Deployment

First, we need to install MongoDB on every host. Here I install it with a script; if you need it, I shared my script in an earlier post on single-node MongoDB deployment:

链接:https://blog.csdn.net/qq_62866151/article/details/142662644?spm=1001.2014.3001.5501

First create a mongo directory with a data subdirectory:

mkdir -p /mongo/data

The data directory is for the data files; we just cd to its parent:

cd /mongo

Once you have the script, make it executable:

chmod u+x mongo_setup.sh

Run the script:

./mongo_setup.sh

After the installation finishes, run the commands the script prints at the end, one after another. In the end:

root@master:/mongo# systemctl status mongod
● mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enab>
     Active: active (running) since Tue 2024-10-01 13:54:32 UTC; 4min 53s ago
       Docs: https://docs.mongodb.org/manual
   Main PID: 98050 (mongod)
     Memory: 72.9M
        CPU: 1.150s
     CGroup: /system.slice/mongod.service
             └─98050 /usr/bin/mongod --config /etc/mongod.conf

Oct 01 13:54:32 master systemd[1]: Started MongoDB Database Server.
Oct 01 13:54:32 master mongod[98050]: {"t":{"$date":"2024-10-01T13:54:32.853Z"},"s":"I>

This means MongoDB was deployed successfully.

Next, let's edit the relevant config file:

vim /etc/mongod.conf
storage:
  dbPath: /var/lib/mongodb   # or /mongo/data, if you prefer the directory created above
  directoryPerDB: true
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
systemLog:
  verbosity: 5
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 0.0.0.0
  maxIncomingConnections: 65536
  wireObjectCheck: true 
  ipv6: false
  unixDomainSocket:
     enabled: true
     pathPrefix: /tmp
     filePermissions: 755
processManagement:
  fork: false
  timeZoneInfo: /usr/share/zoneinfo
security:
  authorization: enabled
  keyFile: "/mongo/mongo.key"   # must match the keyfile generated below
  clusterAuthMode: "keyFile"
replication:
  replSetName: "rs0"
  oplogSizeMB: 5000

After editing, don't forget the keyfile; remember we generated a mongodb-keyfile for the Docker deployment, too. Generate one here as well:

openssl rand -base64 666 > /mongo/mongo.key
chmod 400 /mongo/mongo.key
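The keyfile must be identical on every member, so distribute it the same way as in the Docker section (hosts and users as before):

scp /mongo/mongo.key slave@192.168.142.156:/home/slave
scp /mongo/mongo.key slave02@192.168.142.155:/home/slave02
scp /mongo/mongo.key slave03@192.168.142.158:/home/slave03
# then, on each host: move it to /mongo, keep permissions at 400, chown mongodb:mongodb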

Create the mongodb system user:

useradd -M mongodb
passwd mongodb
<choose a password>
chown mongodb:mongodb -R /mongo

Restart the service:

systemctl restart mongod.service

Note that this keyfile setup differs slightly from the Docker one above (different path and owner).
Enter the MongoDB shell with the mongosh command.
First, initialize:

use admin
rs.initiate()

Then create a user:

admin> db.createUser({user:"root",pwd:"123456",roles:[{role:"root",db:"admin"}]})

Log in:

db.auth("root","123456")

Once you are in, run use admin again, and then, same routine as before, add every node except the one you are on:

rs.add({host:"192.168.142.156:27017",priority: 2})  // 添加第一个从节点
rs.add({host:"192.168.142.155:27017",priority: 3})  // 添加第二个从节点
rs.add({host:"192.168.142.158:27017",arbiterOnly: true})  // 添加第三个从节点 [仲裁节点]

Or, after logging in, run the initialization and the member additions in a single step:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "192.168.142.155:27017", priority: 3 },
    { _id: 1, host: "192.168.142.156:27017", priority: 2 },
    { _id: 2, host: "192.168.142.157:27017", priority: 1 },
    { _id: 3, host: "192.168.142.158:27017", arbiterOnly: true }
  ]
})

That should work just as well; let's check the result:

rs.status()
(Output omitted here; it is identical to the rs.status() example shown in the Docker Compose section above.)

Verify the Cluster

Insert data on the primary:

db.abc.insertOne({a:"b",c:1})

Check it on a secondary:

db.auth("root","123456")
db.abc.find()

The replica set is now complete.
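For applications, use a replica-set connection string so the driver discovers the topology and follows failover automatically; a sketch using the hosts and credentials from this guide (the arbiter need not be listed):

mongosh "mongodb://root:123456@192.168.142.155:27017,192.168.142.156:27017,192.168.142.157:27017/?replicaSet=rs0&authSource=admin"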

Managing the Replica Set

Viewing the replica set

# view the configuration
rs.config()

# view the replica set status
rs.status()

# check which member is the primary
rs.isMaster()
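Two more handy checks, sketched here (db.hello() is the non-deprecated successor to rs.isMaster() in recent MongoDB versions):

rs.printSecondaryReplicationInfo()   // replication lag of each secondary behind the primary
db.hello().primary                   // host:port of the current primary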

Managing replica set members

Removing a member

  1. Stop the database instance on the member you want to remove:
# log in to that node and stop the instance
rs0:SECONDARY> use admin;
switched to db admin
rs0:SECONDARY> db.shutdownServer()
server should be down...

  2. On the primary, remove the member:
rs0:PRIMARY> rs.remove("10.1.56.85:27017")
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1641217135, 1),
                "signature" : {
                        "hash" : BinData(0,"nhyPvAHXJmE62L0LvIC3J8hAQl8="),
                        "keyId" : NumberLong("7048958896664215556")
                }
        },
        "operationTime" : Timestamp(1641217135, 1)
}
  3. Check that the member was removed from the cluster properly:
rs.status()

Adding a member

  1. Configure and start mongod on the new member:
# /etc/mongod.conf
security:
  authorization: enabled
  clusterAuthMode: "keyFile"
  keyFile: "/data/mongo/mongodb.key"

replication:
  replSetName: "rs0"
  oplogSizeMB: 5000

# if the node held data previously, clear it first
rm -rf /var/lib/mongo/*

# start mongod
systemctl start mongod
  2. Add the member to the replica set:
# run the following on the primary
rs0:PRIMARY> rs.add({host: "10.1.56.85:27017", priority: 0, votes: 0})
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1641217386, 1),
                "signature" : {
                        "hash" : BinData(0,"KLr++3ulixdXPfZhFFAqMhuZuuM="),
                        "keyId" : NumberLong("7048958896664215556")
                }
        },
        "operationTime" : Timestamp(1641217386, 1)
}

  3. Reconfigure priority and votes:

    Once the newly added member has transitioned to SECONDARY, use rs.reconfig() to update its priority and votes if needed.

rs0:PRIMARY> var cfg=rs.conf();
rs0:PRIMARY> cfg.members[2].priority = 1
1
rs0:PRIMARY> cfg.members[2].votes = 1
1
rs0:PRIMARY> rs.reconfig(cfg)
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1641217534, 1),
                "signature" : {
                        "hash" : BinData(0,"lj6U6EcieJCS5xJ/eAokxptiNfA="),
                        "keyId" : NumberLong("7048958896664215556")
                }
        },
        "operationTime" : Timestamp(1641217534, 1)
}


After the configuration above is done, check the cluster status again with rs.status().

Appendix

Troubleshooting

Running rs.conf() throws the following exception:

rs0:PRIMARY> rs.conf()
uncaught exception: Error: Could not retrieve replica set config: {
        "ok" : 0,
        "errmsg" : "not authorized on admin to execute command { replSetGetConfig: 1.0, lsid: { id: UUID(\"4a7cd439-f949-41dc-acef-2838267a908a\") }, $clusterTime: { clusterTime: Timestamp(1641214558, 1), signature: { hash: BinData(0, D8E6570EDE34381A0866EB5600A32747FE4BD4DB), keyId: 7048958896664215556 } }, $db: \"admin\" }",
        "code" : 13,
        "codeName" : "Unauthorized",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1641214701, 2),
                "signature" : {
                        "hash" : BinData(0,"y18DwhtkerhqWMwPKLy2m8JXsdk="),
                        "keyId" : NumberLong("7048958896664215556")
                }
        },
        "operationTime" : Timestamp(1641214701, 2)
} :
rs.conf@src/mongo/shell/utils.js:1713:11
@(shell):1:1

The cause is that the current user lacks sufficient privileges; it needs super-administrator (root) rights:

use admin
db.grantRolesToUser("admin",[{role: "root", db: "admin"}])
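You can verify that the grant took effect (a small sketch; admin is the user from the error above):

db.getUser("admin")   // the roles array should now include { role: 'root', db: 'admin' }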

