10. MongoDB sharded cluster operations

I. Deploying the sharded cluster on k8s

1. Plan:

Role                      IP address        Port
cluster01-router01        192.168.86.21     27017
cluster01-router02        192.168.86.22     27017
cluster01-router03        192.168.86.23     27017
cluster01-configsvr01     192.168.86.21     27018
cluster01-configsvr02     192.168.86.22     27018
cluster01-configsvr03     192.168.86.23     27018
shard01-shardsvr01        192.168.86.21     27019
shard01-shardsvr02        192.168.86.22     27019
shard01-shardsvr03        192.168.86.23     27019
shard02-shardsvr01        192.168.86.24     27019
shard02-shardsvr02        192.168.86.25     27019
shard02-shardsvr03        192.168.86.26     27019

2. Test procedure:

1> Deploy one 3-member replica set with password authentication and write some data

2> Extend that replica set into a single-shard cluster managed through sharding

3> Deploy a second replica set, add it to the sharded cluster, and let the data rebalance

3. Deploy a 3-member replica set across the 3 nodes

1> cat /etc/kubernetes/manifests/shard01-shardsvr01.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: shard01-shardsvr01
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: mongo
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongod
    - --replSet
    - shard01
    - --shardsvr
    - --wiredTigerCacheSizeGB=2
    - --bind_ip_all
    - --port=27019
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/db
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/shard01-shardsvr

2> cat /etc/kubernetes/manifests/shard01-shardsvr02.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: shard01-shardsvr02
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: mongo
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongod
    - --replSet
    - shard01
    - --shardsvr
    - --wiredTigerCacheSizeGB=2
    - --bind_ip_all
    - --port=27019
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/db
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/shard01-shardsvr

3> cat /etc/kubernetes/manifests/shard01-shardsvr03.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: shard01-shardsvr03
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: mongo
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongod
    - --replSet
    - shard01
    - --shardsvr
    - --wiredTigerCacheSizeGB=2
    - --bind_ip_all
    - --port=27019
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/db
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/shard01-shardsvr
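
These are static pod manifests, so kubelet on each node starts the pods on its own. A quick check through the API server (a sketch, assuming kubectl access; mirror pods of static pods show up with the node name appended):
kubectl get pods -o wide -l app=mongodb
# expect the three shard01-shardsvr pods in Running state, one per node, on the host network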

4> Initialize the cluster

Install the mongo client on CentOS 7
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.2.14.tgz -P /opt/
tar -xf /opt/mongodb-linux-x86_64-rhel70-4.2.14.tgz -C /opt/
export PATH=$PATH:/opt/mongodb-linux-x86_64-rhel70-4.2.14/bin/

Install the client on Ubuntu 18.04
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1804-4.2.11.tgz
tar -xf mongodb-linux-x86_64-ubuntu1804-4.2.11.tgz -C /opt/
export PATH=$PATH:/opt/mongodb-linux-x86_64-ubuntu1804-4.2.11/bin/

Initialize the replica set
mongo 192.168.86.21:27019
use admin
rs.initiate({ _id: "shard01", members: [ { _id: 0, host : "192.168.86.21:27019" } ] } )
rs.add('192.168.86.22:27019')
rs.add('192.168.86.23:27019')
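
Before continuing, it is worth confirming the replica set formed correctly (run in the same mongo shell):
rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })
// expect 192.168.86.21:27019 as PRIMARY and the other two members reaching SECONDARY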

Create the admin user
mongo 192.168.86.21:27019
use admin
db = db.getSiblingDB("admin");db.createUser({user:"root",pwd:"rootPassw0rd",roles:["root"]});

Enable keyfile authentication
openssl rand -base64 745 >> key
chmod 600 key
scp key root@common01:/data/shard01-shardsvr/
scp key root@common02:/data/shard01-shardsvr/
scp key root@common03:/data/shard01-shardsvr/

# On each node, edit the yaml to add the keyFile option
sed -i '/port/a\    - --keyFile=/data/db/key' /etc/kubernetes/manifests/shard01-*.yaml

Verify that password login works
mongo mongodb://root:rootPassw0rd@192.168.86.21:27019,192.168.86.22:27019,192.168.86.23:27019/admin?replicaSet=shard01

Insert test data
use duanshuaixing-mongodb
for(i=1; i<=5000;i++){
  db.user.insert( {id:'user'+i, level:i} )
}
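
A quick sanity check that the 5000 documents from the loop above were written (same shell):
db.user.count()
// expected result: 5000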

4. Extend the replica set into a single-shard cluster managed through sharding

1> Deploy the 3-node configsvr replica set

Deployment (the startup arguments specify the configsvr role)
cat cluster01-configsvr0{1,2,3}.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: cluster01-configsvr0{1,2,3}
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: configsvr
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongod
    - --replSet
    - configsvr
    - --configsvr
    - --wiredTigerCacheSizeGB=2
    - --bind_ip_all
    - --port=27018
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/configdb
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/cluster01-configsvr

Initialize the configsvr replica set
mongo 192.168.86.21:27018
use admin
rs.initiate({ _id: "configsvr", members: [ { _id: 0, host : "192.168.86.21:27018" } ] } )
rs.add('192.168.86.22:27018')
rs.add('192.168.86.23:27018')

Create the admin user
mongo 192.168.86.21:27018
use admin
db = db.getSiblingDB("admin");db.createUser({user:"root",pwd:"rootPassw0rd",roles:["root"]});

# Copy the replica set's keyfile and, on each node, edit the yaml to add the keyFile option
cp -a /data/shard01-shardsvr/key /data/cluster01-configsvr/
sed -i '/port/a\    - --keyFile=/data/configdb/key' /etc/kubernetes/manifests/cluster01-configsvr*.yaml

Verify that password login works
mongo mongodb://root:rootPassw0rd@192.168.86.21:27018,192.168.86.22:27018,192.168.86.23:27018/admin?replicaSet=configsvr

2> Deploy a single-node router (mongos)
Notes on the router
In MongoDB the router role only provides an entry point and stores no data; when multiple routers are configured, any one of them can serve the data.

The most important router setting is the configsvr address, given as the config replica set name followed by its ip:port list.

cat /etc/kubernetes/manifests/cluster01-router01.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mongodb
  name: cluster01-router01
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: mongo
    image: harbor.y9000p.chandz.com/infra/mongo:4.2.14
    command:
    - mongos
    - --configdb
    - configsvr/192.168.86.21:27018,192.168.86.22:27018,192.168.86.23:27018
    - --bind_ip_all
    - --port=27017
    - --keyFile=/data/db/key
    resources:
      limits:
        memory: 2Gi
        cpu: 1000m
      requests:
        memory: 1Gi
        cpu: 500m
    volumeMounts:
    - name: data
      mountPath: /data/db
  hostNetwork: true
  volumes:
  - name: data
    hostPath:
      path: /data/cluster01-router

3> Sharded cluster configuration

mongo 192.168.86.21:27017  (authentication uses the credentials created on the configsvr replica set)
use admin
sh.addShard("shard01/192.168.86.21:27019,192.168.86.22:27019,192.168.86.23:27019")
sh.status()

5. Add the second shard and rebalance the data
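
A minimal sketch of this step, assuming shard02 is deployed on 192.168.86.24/25/26:27019 the same way as shard01 (same manifests with the shard01 name replaced by shard02, and the same keyfile copied into its data directory):

# initialize the shard02 replica set
mongo 192.168.86.24:27019
rs.initiate({ _id: "shard02", members: [ { _id: 0, host : "192.168.86.24:27019" } ] } )
rs.add('192.168.86.25:27019')
rs.add('192.168.86.26:27019')

# register it through any mongos, then watch the balancer move chunks
mongo mongodb://root:rootPassw0rd@192.168.86.21:27017/admin
sh.addShard("shard02/192.168.86.24:27019,192.168.86.25:27019,192.168.86.26:27019")
sh.status()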

II. Sharded cluster management

0. Using the sharded cluster

Enable sharding on the database
use admin
db.runCommand( { enablesharding : "duanshuaixing01" } )
Create the index
use duanshuaixing01
db.user01.ensureIndex( { userid: 1 },{unique:true})
Enable sharding on the collection, using that index as the shard key
use admin
db.runCommand( { shardcollection : "duanshuaixing01.user01",key : {userid: 1} } )

Insert 500,000 documents
use duanshuaixing01
for(i=1; i<=500000;i++){ db.user01.insert( {name:'man'+i, userid:i} ) }
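
After the load, the chunk distribution can be checked from mongos with getShardDistribution (also listed in the balancer section below):
use duanshuaixing01
db.user01.getShardDistribution()
// with a single shard everything sits on shard01; once shard02 joins, the balancer
// spreads chunks and both shards appear in this output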

1. Cluster management

1> List the shards
use admin
db.runCommand( { listshards : 1 } )
or
use config
db.shards.find()

2> Check whether the cluster is sharded
use admin
db.runCommand({ isdbgrid : 1})

3> View the overall sharding status
sh.status()
db.printShardingStatus()

4> Scale shards out/in
# add a shard
use admin
sh.addShard("shardxxx/192.168.86.xxx:27019,192.168.86.xxx:27019,192.168.86.xxx:27019")
# remove a shard; its unsharded data must be migrated off first, which can take a long time when that data is large
use admin
db.adminCommand( { removeShard: "shard01" } )
db.runCommand( { movePrimary:"testdb", to: "shard2" })
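
removeShard drains asynchronously; rerunning the same command reports the progress. A sketch of the standard behaviour:
db.adminCommand( { removeShard: "shard01" } )
// the first call returns state "started"; later calls return state "ongoing" with a
// remaining.chunks counter, and finally state "completed" once the shard can be shut down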

5> List databases with sharding enabled
use config
db.databases.find()

6> View a collection's shard key
use config
db.collections.find()

7> To add or remove mongod replica-set members inside a shard, change the membership directly inside the replica set; mongos detects the new members automatically and syncs data (the cluster status reflects the change roughly a minute later).
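
A minimal sketch, assuming a new member at 192.168.86.27:27019 (hypothetical address) is added to shard01:
# on the shard01 primary
mongo mongodb://root:rootPassw0rd@192.168.86.21:27019/admin?replicaSet=shard01
rs.add('192.168.86.27:27019')    # 192.168.86.27 is a placeholder for the new node
# from any mongos, the new member shows up under shard01 shortly afterwards
sh.status()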

2. Balancer management

1> Check whether the balancer is running
sh.getBalancerState()

2> Stop and start the balancer (it must be stopped during backups)
sh.stopBalancer()
sh.startBalancer()

3> Restrict the balancer window to 1:00-6:00 AM
use config
db.settings.update({ _id : "balancer" }, { $set : { activeWindow : { start : "1:00", stop : "6:00" } } }, true )

4> View the configured balancer window
sh.getBalancerWindow()

5> Disable the balancer for a specific collection
sh.disableBalancing("<db>.<collection>")
6> Re-enable the balancer for a specific collection
sh.enableBalancing("<db>.<collection>")

7> Check whether a collection is sharded
use dbname
db.collectionName.stats().sharded 

8> View data distribution across shards
db.collectionName.getShardDistribution()

9> View collection storage sizes, in MB
mongo mongodb://dbusername:dbpassword@host:27017/databases --eval 'var collectionNames= db.getCollectionNames(); for (var i = 0; i < collectionNames.length; i++) { var coll = db.getCollection(collectionNames[i]); var stats = coll.stats(1024 * 1024); print(stats.ns, stats.storageSize); }'

10> List collections larger than 1000 MB
mongo mongodb://dbusername:dbpassword@host:27017/databases --eval 'var collectionNames= db.getCollectionNames(); for (var i = 0; i < collectionNames.length; i++) { var coll = db.getCollection(collectionNames[i]); var stats = coll.stats(1024 * 1024); print(stats.ns, stats.storageSize); }' |awk '{print $2 "M  "$1}'|sort -n -r|grep '^[0-9]'|awk -F 'M' '$1 > 1000'

11> Count the documents in every collection of a database
mongo mongodb://dbusername:dbpassword@host:27017/databases --eval 'var collectionNames= db.getCollectionNames(); for (var i = 0; i < collectionNames.length; i++) { var coll = db.getCollection(collectionNames[i]); var stats = coll.find().count(); print(coll, stats); }'

12> List all indexes in the database
db.getCollectionNames().forEach(function(collection) {
   indexes = db[collection].getIndexes();
   print("Indexes for " + collection + ":");
   printjson(indexes);
});

3. FAQ

1. mongos hangs. The workaround is to switch the configsvr replica set's primary: roughly every three months, before the current expiresAt is reached, switching the config primary generates a new one and extends the validity period by another 3-6 months.
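
A sketch of the stepdown, connecting to the current configsvr primary (rs.stepDown() forces an election):
mongo mongodb://root:rootPassw0rd@192.168.86.21:27018,192.168.86.22:27018,192.168.86.23:27018/admin?replicaSet=configsvr
rs.stepDown()
rs.status()    // confirm a different member is now PRIMARY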

2. Check whether a query goes through an index

db.user01.find( {userid:1} ).explain(true)
Check whether winningPlan contains an indexBounds field

3. Live migration of a sharded cluster: use mongoshake for a smooth migration

4. mongodump against the mongos Service (svc) reports an error

Error message: Failed: error writing data for collection `operator.usage` to disk: error reading collection: (CursorNotFound) Cursor not found (id: 6959853928523587904).

Cause: each mongos instance has to be reached through its own FQDN and cannot be load-balanced, because cursors are not shared between mongos instances.

Fix: publish a dedicated Service for the backup job that is bound to a single mongos pod; a sketch follows.
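
A sketch of such a Service, assuming one router pod is given a distinguishing label (the manifests above only set app: mongodb, so the role: mongos-backup label and the Service name are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: mongos-backup            # hypothetical name
spec:
  selector:
    role: mongos-backup          # assumed label added to exactly one mongos pod
  ports:
  - port: 27017
    targetPort: 27017
Pointing mongodump at this Service keeps every cursor on the same mongos instance.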

5. Changing the membership of a shard (node migration scenario)
Add or remove members directly inside the shard's replica set; mongos automatically picks up the membership change, as in the sketch below.
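
A minimal migration sequence, assuming the member 192.168.86.23:27019 is being replaced by a new node at 192.168.86.28:27019 (hypothetical address):
# 1. start the new shardsvr pod with the same keyfile, then on the shard01 primary:
rs.add('192.168.86.28:27019')
# 2. wait until rs.status() shows the new member as SECONDARY (initial sync done)
# 3. remove the old member
rs.remove('192.168.86.23:27019')
# mongos picks up the new membership automatically; sh.status() reflects it shortly after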
