Deploying a MongoDB 4.4 Sharded Cluster from the Binary Tarball

node1: 39.103.204.27
node2: 49.232.197.39
node3: 43.138.41.190

Role layout per node:

          node1                   node2                   node3
shard1    PRIMARY                 SECONDARY               arbiterOnly / hidden
shard2    PRIMARY                 SECONDARY               arbiterOnly / hidden
shard3    PRIMARY                 SECONDARY               arbiterOnly / hidden
config    config1 PRIMARY         config2 SECONDARY       config3 arbiterOnly / hidden
mongos    mongos1                 mongos2                 mongos3

Port allocation:

mongos    20000
config    21000
shard1    27001
shard2    27002
shard3    27003

1. Single-node deployment and environment preparation

1.1 Single-node deployment and directory layout

/data/mongodb/config/{data,log}       # data and log directories for the config server
/data/mongodb/mongos/log              # log directory for mongos
/data/mongodb/shard1/{data,log}       # data and log directories for each shard
/data/logs/mongodb                    # log directory for the config server
/data/apps/mongodb/config             # configuration files for all nodes
/data/apps/mongodb/pid                # pid files for all nodes
root@bugaoxing:/data/apps# tar xf mongodb-linux-x86_64-ubuntu2004-4.4.10.tgz
root@bugaoxing:/data/apps# tar xf mongodb-database-tools-ubuntu2004-x86_64-100.5.1.tgz
root@bugaoxing:/data/apps# mv mongodb-linux-x86_64-ubuntu2004-4.4.10 mongodb
root@bugaoxing:/data/apps# mv mongodb-database-tools-ubuntu2004-x86_64-100.5.1 mongodb-database
echo "export MONGODB_HOME=/data/apps/mongodb" >> /etc/profile
echo 'export PATH=$PATH:$MONGODB_HOME/bin' >> /etc/profile
echo "export MONGODB_DATABASE_HOME=/data/apps/mongodb-database" >> /etc/profile
echo 'export PATH=$PATH:$MONGODB_DATABASE_HOME/bin' >> /etc/profile
root@bugaoxing:/data/apps# source /etc/profile
# Create the required directories
mkdir -p /data/mongodb/config/{data,log}
mkdir -p /data/mongodb/mongos/log
mkdir -p /data/mongodb/shard1/{data,log}
mkdir -p /data/mongodb/shard2/{data,log}
mkdir -p /data/mongodb/shard3/{data,log}
mkdir -p /data/logs/mongodb
mkdir -p /data/apps/mongodb/{config,pid}
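
A quick sanity check that the binaries are on PATH before going further (optional; the versions are simply the ones unpacked above):

# Should report v4.4.10 for the server binaries and 100.5.1 for the database tools
mongod --version
mongos --version
mongodump --version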

1.2 Configure the config server nodes

Create config.conf as follows; note the bindIp setting on each node.

systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb/config.log
storage:
  #dbPath: /data/mongodb
  dbPath: /data/mongodb/config/data
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /data/apps/mongodb/pid/config.pid  # location of pidfile
net:
  port: 21000
  #bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.
  bindIp: 0.0.0.0  # Listen on all interfaces
sharding:
  clusterRole: configsvr
replication:
   oplogSizeMB: 100
   replSetName: configs

Start the config server on all three nodes:

root@VM-24-2-ubuntu:/data/apps/mongodb/config# mongod -f /data/apps/mongodb/config/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 2154
child process started successfully, parent exiting
##  Log in to any one of the config servers and initialize the replica set
root@VM-24-2-ubuntu:/data/apps/mongodb/config# mongo 127.0.0.1:21000
# After connecting, run the following; remember to adjust the node IPs and open the firewall ports
config = {_id: 'configs', members: [
       {_id: 0, host: '39.103.204.27:21000'},
       {_id: 1, host: '49.232.197.39:21000'},
       {_id: 2, host: '43.138.41.190:21000'}]
   }
# Initialize the replica set with this config
rs.initiate(config)
# "ok" : 1 in the result means success
> config = {_id: 'configs', members: [
...                          {_id: 0, host: '39.103.204.27:21000'},
...                          {_id: 1, host: '49.232.197.39:21000'},
...                          {_id: 2, host: '43.138.41.190:21000'}]
...           }
   {
 "_id" : "configs",
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:21000"
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:21000"
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:21000"
  }
 ]
}
>           
>                    
> rs.initiate(config)
{
 "ok" : 1,
 "$gleStats" : {
  "lastOpTime" : Timestamp(1665399949, 1),
  "electionId" : ObjectId("000000000000000000000000")
 },
 "lastCommittedOpTime" : Timestamp(0, 0)
}
configs:SECONDARY> 
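
The prompt switching to configs:SECONDARY> (and to configs:PRIMARY> once the election finishes) already shows the set is up; a quick role check from any member:

// one member should report PRIMARY, the other two SECONDARY
rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr); })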

In Alibaba Cloud's MongoDB replica sets, the hidden node is used to take over as a new Secondary when an existing Secondary node fails.

The difference between hidden and arbiterOnly: both kinds of members take part in elections, but a hidden member replicates data like a SECONDARY while remaining invisible to clients, so it never serves client traffic; an arbiterOnly member holds no data at all and only votes.

Notes on configuring a hidden secondary:

A hidden member must always have priority 0, because a hidden member cannot become primary.

Clients never route read traffic to hidden members; apart from basic replication, these members receive no traffic.

A hidden member can still vote in replica set elections.

Configuration: run the following on the primary node.

cfg=rs.conf() 
cfg.members[2].priority=0
cfg.members[2].hidden=true
rs.reconfig(cfg) 
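
A simple way to confirm the change took effect: the hidden member stays in rs.conf() but drops out of the host list advertised to drivers. For example, on the primary:

// the hidden member no longer appears here
db.isMaster().hosts
// ...but it is still part of the replica set configuration
rs.conf().members.map(function (m) { return m.host + (m.hidden ? " (hidden)" : ""); })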

2. Deploy the shard nodes and mongos

2.1 Deploy shard1

root@bugaoxing:/data/apps/mongodb/config# cat shard1.conf 
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/shard1/log/shard1.log
storage:
  #dbPath: /var/lib/mongo
  dbPath: /data/mongodb/shard1/data
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /data/apps/mongodb/pid/shard1.pid  # location of pidfile
net:
  port: 27001
  #bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.
  bindIp: 0.0.0.0  # Listen on all interfaces
sharding:
  clusterRole: shardsvr
replication:
   replSetName: shard1
root@bugaoxing:/data/apps/mongodb/config# 
root@bugaoxing:/data/apps/mongodb/config# mongod -f /data/apps/mongodb/config/shard1.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 2991
child process started successfully, parent exiting
# Log in to any one of the shard nodes and initialize the replica set
root@bugaoxing:/data/apps/mongodb/config# mongo 127.0.0.1:27001
# After connecting, run the following; remember to adjust the node IPs and open the firewall ports
# Configuration with an arbiter node:
config = {_id: 'shard1', members: [
        {_id: 0, host: '39.103.204.27:27001'},
        {_id: 1, host: '49.232.197.39:27001'},
        {_id: 2, host: '43.138.41.190:27001' , arbiterOnly: true}]
    }
# Configuration without an arbiter node:
config = {_id: 'shard1', members: [
        {_id: 0, host: '39.103.204.27:27001'},
        {_id: 1, host: '49.232.197.39:27001'},
        {_id: 2, host: '43.138.41.190:27001'}]
    }
# Then initialize the replica set
rs.initiate(config)
# "ok" : 1 in the result means success
> config = {_id: 'shard1', members: [
...                           {_id: 0, host: '39.103.204.27:27001'},
...                           {_id: 1, host: '49.232.197.39:27001'},
...                           {_id: 2, host: '43.138.41.190:27001' , arbiterOnly: true}]
...            }
{
 "_id" : "shard1",
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:27001"
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:27001"
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:27001",
   "arbiterOnly" : true
  }
 ]
}
> 
> 
> 
> rs.initiate(config)
{ "ok" : 1 }

Set up the hidden member

shard1:PRIMARY> cfg=rs.conf()
{
 "_id" : "shard1",
 "version" : 1,
 "protocolVersion" : NumberLong(1),
 "writeConcernMajorityJournalDefault" : true,
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:27001",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:27001",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:27001",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  }
 ],
 "settings" : {
  "chainingAllowed" : true,
  "heartbeatIntervalMillis" : 2000,
  "heartbeatTimeoutSecs" : 10,
  "electionTimeoutMillis" : 10000,
  "catchUpTimeoutMillis" : -1,
  "catchUpTakeoverDelayMillis" : 30000,
  "getLastErrorModes" : {
   
  },
  "getLastErrorDefaults" : {
   "w" : 1,
   "wtimeout" : 0
  },
  "replicaSetId" : ObjectId("6346736d3a10bf51153b640b")
 }
}
shard1:PRIMARY> cfg.members[2].priority=0
0
shard1:PRIMARY> cfg.members[2].hidden=true
true
shard1:PRIMARY> rs.reconfig(cfg) 
{
 "ok" : 1,
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665562650, 1),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 },
 "operationTime" : Timestamp(1665562650, 1)
}
shard1:PRIMARY> cfg=rs.conf()
{
 "_id" : "shard1",
 "version" : 2,
 "protocolVersion" : NumberLong(1),
 "writeConcernMajorityJournalDefault" : true,
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:27001",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:27001",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:27001",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : true,
   "priority" : 0,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  }
 ],
 "settings" : {
  "chainingAllowed" : true,
  "heartbeatIntervalMillis" : 2000,
  "heartbeatTimeoutSecs" : 10,
  "electionTimeoutMillis" : 10000,
  "catchUpTimeoutMillis" : -1,
  "catchUpTakeoverDelayMillis" : 30000,
  "getLastErrorModes" : {
   
  },
  "getLastErrorDefaults" : {
   "w" : 1,
   "wtimeout" : 0
  },
  "replicaSetId" : ObjectId("6346736d3a10bf51153b640b")
 }
}
shard1:PRIMARY> 

2.2 Deploy shard2

Note: when configuring shard2, if you assign the first machine as the arbiter node, do not log in to that first machine to run the initialization; it will fail with an error. Just run it from one of the other machines instead.

root@bugaoxing:/data/apps/mongodb/config# cat shard2.conf 
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/shard2/log/shard2.log
storage:
  #dbPath: /var/lib/mongo
  dbPath: /data/mongodb/shard2/data
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /data/apps/mongodb/pid/shard2.pid  # location of pidfile
net:
  port: 27002
  #bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.
  bindIp: 0.0.0.0  # Listen on all interfaces
sharding:
  clusterRole: shardsvr
replication:
   replSetName: shard2
root@bugaoxing:/data/apps/mongodb/config# 
root@bugaoxing:/data/apps/mongodb/config# mongod -f /data/apps/mongodb/config/shard2.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 2991
child process started successfully, parent exiting
# Log in to any one of the shard nodes and initialize the replica set
root@bugaoxing:/data/apps/mongodb/config# mongo 127.0.0.1:27002
# After connecting, run the following; remember to adjust the node IPs and open the firewall ports
# Configuration with an arbiter node:
config = {_id: 'shard2', members: [
        {_id: 0, host: '39.103.204.27:27002'},
        {_id: 1, host: '49.232.197.39:27002'},
        {_id: 2, host: '43.138.41.190:27002' , arbiterOnly: true}]
    }
# Configuration without an arbiter node:
config = {_id: 'shard2', members: [
        {_id: 0, host: '39.103.204.27:27002'},
        {_id: 1, host: '49.232.197.39:27002'},
        {_id: 2, host: '43.138.41.190:27002'}]
    }
# Then initialize the replica set
rs.initiate(config)
# "ok" : 1 in the result means success
> config = {_id: 'shard2', members: [                          
... 						{_id: 0, host: '39.103.204.27:27002'},                         
... 						{_id: 1, host: '49.232.197.39:27002'},                          
... 						{_id: 2, host: '43.138.41.190:27002', arbiterOnly: true}]           
... 		}

{
 "_id" : "shard2",
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:27002"
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:27002"
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:27002",
   "arbiterOnly" : true
  }
 ]
}
> 
> 
> rs.initiate(config)
{ "ok" : 1 }

Set up the hidden member

shard2:SECONDARY> cfg=rs.conf() 
{
 "_id" : "shard2",
 "version" : 1,
 "protocolVersion" : NumberLong(1),
 "writeConcernMajorityJournalDefault" : true,
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:27002",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:27002",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:27002",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  }
 ],
 "settings" : {
  "chainingAllowed" : true,
  "heartbeatIntervalMillis" : 2000,
  "heartbeatTimeoutSecs" : 10,
  "electionTimeoutMillis" : 10000,
  "catchUpTimeoutMillis" : -1,
  "catchUpTakeoverDelayMillis" : 30000,
  "getLastErrorModes" : {
   
  },
  "getLastErrorDefaults" : {
   "w" : 1,
   "wtimeout" : 0
  },
  "replicaSetId" : ObjectId("634676c55899be79cf346ce7")
 }
}
shard2:PRIMARY> cfg.members[2].priority=0
0
shard2:PRIMARY> cfg.members[2].hidden=true
true
shard2:PRIMARY> rs.reconfig(cfg)
{
 "ok" : 1,
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665562354, 1),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 },
 "operationTime" : Timestamp(1665562354, 1)
}
shard2:PRIMARY> 
shard2:PRIMARY> 
shard2:PRIMARY> 
shard2:PRIMARY> cfg=rs.conf()
{
 "_id" : "shard2",
 "version" : 2,
 "protocolVersion" : NumberLong(1),
 "writeConcernMajorityJournalDefault" : true,
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:27002",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:27002",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:27002",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : true,
   "priority" : 0,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  }
 ],
 "settings" : {
  "chainingAllowed" : true,
  "heartbeatIntervalMillis" : 2000,
  "heartbeatTimeoutSecs" : 10,
  "electionTimeoutMillis" : 10000,
  "catchUpTimeoutMillis" : -1,
  "catchUpTakeoverDelayMillis" : 30000,
  "getLastErrorModes" : {
   
  },
  "getLastErrorDefaults" : {
   "w" : 1,
   "wtimeout" : 0
  },
  "replicaSetId" : ObjectId("634676c55899be79cf346ce7")
 }
}
shard2:PRIMARY> 
shard2:PRIMARY> exit

2.3 Deploy shard3

root@bugaoxing:/data/apps/mongodb/config# cat shard3.conf 
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/shard3/log/shard3.log
storage:
  #dbPath: /var/lib/mongo
  dbPath: /data/mongodb/shard3/data
  journal:
    enabled: true
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /data/apps/mongodb/pid/shard3.pid  # location of pidfile
net:
  port: 27003
  #bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.
  bindIp: 0.0.0.0  # Listen on all interfaces
sharding:
  clusterRole: shardsvr
replication:
   replSetName: shard3
root@bugaoxing:/data/apps/mongodb/config# 
root@bugaoxing:/data/apps/mongodb/config# mongod -f /data/apps/mongodb/config/shard3.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 2991
child process started successfully, parent exiting
# Log in to any one of the shard nodes and initialize the replica set
root@bugaoxing:/data/apps/mongodb/config# mongo 127.0.0.1:27003
# After connecting, run the following; remember to adjust the node IPs and open the firewall ports
# Configuration with an arbiter node:
config = {_id: 'shard3', members: [
        {_id: 0, host: '39.103.204.27:27003'},
        {_id: 1, host: '49.232.197.39:27003'},
        {_id: 2, host: '43.138.41.190:27003' , arbiterOnly: true}]
    }
# Configuration without an arbiter node:
config = {_id: 'shard3', members: [
        {_id: 0, host: '39.103.204.27:27003'},
        {_id: 1, host: '49.232.197.39:27003'},
        {_id: 2, host: '43.138.41.190:27003'}]
    }
# Then initialize the replica set
rs.initiate(config)
# "ok" : 1 in the result means success
> config = {_id: 'shard3', members: [
...                           {_id: 0, host: '39.103.204.27:27003'},
...                           {_id: 1, host: '49.232.197.39:27003'},
...                           {_id: 2, host: '43.138.41.190:27003', arbiterOnly: true}]
...            }
{
 "_id" : "shard3",
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:27003"
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:27003"
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:27003",
   "arbiterOnly" : true
  }
 ]
}
> rs.initiate(config)
{
 "ok" : 1,
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665461256, 1),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 },
 "operationTime" : Timestamp(1665461256, 1)
}

Set up the hidden secondary

shard3:SECONDARY> cfg=rs.conf()
{
 "_id" : "shard3",
 "version" : 1,
 "protocolVersion" : NumberLong(1),
 "writeConcernMajorityJournalDefault" : true,
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:27003",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:27003",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:27003",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  }
 ],
 "settings" : {
  "chainingAllowed" : true,
  "heartbeatIntervalMillis" : 2000,
  "heartbeatTimeoutSecs" : 10,
  "electionTimeoutMillis" : 10000,
  "catchUpTimeoutMillis" : -1,
  "catchUpTakeoverDelayMillis" : 30000,
  "getLastErrorModes" : {
   
  },
  "getLastErrorDefaults" : {
   "w" : 1,
   "wtimeout" : 0
  },
  "replicaSetId" : ObjectId("63467947ea6c526d7bea181a")
 }
}
shard3:PRIMARY> cfg.members[2].priority=0
0
shard3:PRIMARY> cfg.members[2].hidden=true
true
shard3:PRIMARY> rs.reconfig(cfg)
{
 "ok" : 1,
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665563016, 1),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 },
 "operationTime" : Timestamp(1665563016, 1)
}
shard3:PRIMARY> cfg=rs.conf()
{
 "_id" : "shard3",
 "version" : 2,
 "protocolVersion" : NumberLong(1),
 "writeConcernMajorityJournalDefault" : true,
 "members" : [
  {
   "_id" : 0,
   "host" : "39.103.204.27:27003",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 1,
   "host" : "49.232.197.39:27003",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 2,
   "host" : "43.138.41.190:27003",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : true,
   "priority" : 0,
   "tags" : {
    
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  }
 ],
 "settings" : {
  "chainingAllowed" : true,
  "heartbeatIntervalMillis" : 2000,
  "heartbeatTimeoutSecs" : 10,
  "electionTimeoutMillis" : 10000,
  "catchUpTimeoutMillis" : -1,
  "catchUpTakeoverDelayMillis" : 30000,
  "getLastErrorModes" : {
   
  },
  "getLastErrorDefaults" : {
   "w" : 1,
   "wtimeout" : 0
  },
  "replicaSetId" : ObjectId("63467947ea6c526d7bea181a")
 }
}
shard3:PRIMARY> 

2.4 Deploy the mongos routers

root@VM-24-2-ubuntu:/data/apps/mongodb/config# cat mongos.conf 
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/mongos/log/mongos.log
  #dbPath: /var/lib/mongo
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /data/apps/mongodb/pid/mongos.pid  # location of pidfile
  #timeZoneInfo: /usr/share/zoneinfo
net:
  port: 20000
  #bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.
  bindIp: 0.0.0.0  # Listen on all interfaces
# Config servers to connect to (only 1 or 3 are allowed); "configs" is the replica set name of the config servers
sharding:
  configDB: configs/39.103.204.27:21000,49.232.197.39:21000,43.138.41.190:21000
##  Start it with the mongos binary, not mongod
root@VM-24-2-ubuntu:/data/apps/mongodb/config# mongos -f /data/apps/mongodb/config/mongos.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 195471
child process started successfully, parent exiting
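
At this point each node runs a config server, three shard mongod processes, and a mongos. Before adding shards, it is worth confirming that everything is listening on the planned ports (a quick check; ss ships with iproute2 on Ubuntu, netstat works just as well):

# all five ports from the allocation table should show LISTEN
ss -lntp | grep -E '20000|21000|27001|27002|27003'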

3. Enable sharding

The config servers, the mongos routers, and the individual shard servers are now deployed, but an application connecting to mongos cannot take advantage of sharding yet; the sharding configuration still has to be applied so that it actually takes effect.

A shard key is a key of the collection that MongoDB uses to split the data. For example, if you shard on "username", MongoDB partitions the documents by user name. Choosing a shard key amounts to choosing an order for the data in the collection; the idea is similar to an index, and as the collection keeps growing the shard key becomes its most important index. Only an indexed key can be used as a shard key.
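
As an illustration of the syntax only (the commands actually used for this cluster follow in section 3.2; mydb and people are made-up names), sharding a collection on username in the mongos shell would look like this:

use mydb
db.people.createIndex({ username: 1 })            // the shard key must be indexed
sh.enableSharding("mydb")                         // enable sharding for the database
sh.shardCollection("mydb.people", { username: 1 })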

With the arbiter layout:

sh.addShard("shard1/39.103.204.27:27001,49.232.197.39:27001,43.138.41.190:27001")

sh.addShard("shard2/39.103.204.27:27002,49.232.197.39:27002,43.138.41.190:27002")

sh.addShard("shard3/39.103.204.27:27003,49.232.197.39:27003,43.138.41.190:27003")

With the hidden-member layout:

sh.addShard("shard1/39.103.204.27:27001,49.232.197.39:27001")

sh.addShard("shard2/39.103.204.27:27002,49.232.197.39:27002")

sh.addShard("shard3/39.103.204.27:27003,49.232.197.39:27003")

3.1 Add the shards

The following is the arbiter-node variant:

root@bugaoxing:/data/apps/mongodb/config# mongo 127.0.0.1:20000
MongoDB shell version v4.4.10
connecting to: mongodb://127.0.0.1:20000/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a3741440-8529-45f3-a183-5948e3b10983") }
MongoDB server version: 4.4.10
---
The server generated these startup warnings when booting: 
        2022-10-11T14:00:06.725+08:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
        2022-10-11T14:00:06.725+08:00: You are running this process as the root user, which is not recommended
---
mongos> 

mongos> use admin
switched to db admin
mongos> 
mongos> sh.addShard("shard1/39.103.204.27:27001,49.232.197.39:27001,43.138.41.190:27001")
{
 "shardAdded" : "shard1",
 "ok" : 1,
 "operationTime" : Timestamp(1665468514, 4),
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665468514, 4),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 }
}
mongos> sh.addShard("shard2/39.103.204.27:27002,49.232.197.39:27002,43.138.41.190:27002")
{
 "shardAdded" : "shard2",
 "ok" : 1,
 "operationTime" : Timestamp(1665468527, 3),
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665468527, 4),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 }
}
mongos> sh.addShard("shard3/39.103.204.27:27003,49.232.197.39:27003,43.138.41.190:27003")
{
 "shardAdded" : "shard3",
 "ok" : 1,
 "operationTime" : Timestamp(1665468535, 5),
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665468535, 5),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 }
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
   "_id" : 1,
   "minCompatibleVersion" : 5,
   "currentVersion" : 6,
   "clusterId" : ObjectId("6343fc9741b232fb9010faab")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/39.103.204.27:27001,49.232.197.39:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/39.103.204.27:27002,49.232.197.39:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/39.103.204.27:27003,49.232.197.39:27003",  "state" : 1 }
  active mongoses:
        "4.4.10" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

The following is the hidden-member variant:

mongos> sh.addShard("shard1/39.103.204.27:27001,49.232.197.39:27001")
{
 "shardAdded" : "shard1",
 "ok" : 1,
 "operationTime" : Timestamp(1665563739, 8),
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665563739, 8),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 }
}
mongos> sh.addShard("shard2/39.103.204.27:27002,49.232.197.39:27002")
{
 "shardAdded" : "shard2",
 "ok" : 1,
 "operationTime" : Timestamp(1665563776, 3),
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665563776, 3),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 }
}
mongos> sh.addShard("shard3/39.103.204.27:27003,49.232.197.39:27003")
{
 "shardAdded" : "shard3",
 "ok" : 1,
 "operationTime" : Timestamp(1665563789, 5),
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665563789, 5),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 }
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
   "_id" : 1,
   "minCompatibleVersion" : 5,
   "currentVersion" : 6,
   "clusterId" : ObjectId("63466fe028ab153ff68f35e2")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/39.103.204.27:27001,49.232.197.39:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/39.103.204.27:27002,49.232.197.39:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/39.103.204.27:27003,49.232.197.39:27003",  "state" : 1 }
  active mongoses:
        "4.4.10" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

3.2 Testing

The config service, routers, shards and replica sets are now all wired together, but the goal is that inserted data is sharded automatically. Connect to mongos and enable sharding for the chosen database and collection.

Set the chunk size for sharding:

root@bugaoxing:/data/apps/mongodb/config# mongo mongodb://39.103.204.27:20000,49.232.197.39:20000,43.138.41.190:20000/
MongoDB shell version v4.4.10
connecting to: mongodb://39.103.204.27:20000,49.232.197.39:20000,43.138.41.190:20000/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("94567086-d2b1-4b85-8b1c-acb8290c110b") }
MongoDB server version: 4.4.10
---
The server generated these startup warnings when booting: 
        2022-10-12T16:33:18.306+08:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
        2022-10-12T16:33:18.306+08:00: You are running this process as the root user, which is not recommended
---
mongos> 
mongos> use config
switched to db config
mongos> db.settings.save({ "_id" : "chunksize", "value" : 1 })
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
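
The value is in megabytes, so this sets a 1 MB chunk size purely to make splitting visible with small test data (the default is 64 MB). db.settings.save() still works in the 4.4 shell; an equivalent explicit upsert looks like this:

use config
db.settings.updateOne(
   { _id: "chunksize" },
   { $set: { value: 1 } },
   { upsert: true }
)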

Enable sharding on the database:

Syntax: db.runCommand( { enablesharding : "<database name>" } )
mongos> db.runCommand( { enablesharding : "test" } )
or: mongos> sh.enableSharding("test")

## Run this from the admin database, otherwise it reports an error
mongos> use admin
switched to db admin
mongos> sh.enableSharding("test")
{
 "ok" : 1,
 "operationTime" : Timestamp(1665566334, 7),
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665566334, 7),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 }
}
mongos> 

Specify the collection to shard and its shard key:

db.runCommand( { shardcollection : "test.users", key : { user_id: 1 } } )
or: sh.shardCollection("test.users", { user_id: 1 })

mongos> use test
switched to db test
mongos> 
mongos> 
mongos> db.users.createIndex({user_id : 1})
{
 "raw" : {
  "shard2/39.103.204.27:27002,49.232.197.39:27002" : {
   "createdCollectionAutomatically" : true,
   "numIndexesBefore" : 1,
   "numIndexesAfter" : 2,
   "commitQuorum" : "votingMembers",
   "ok" : 1
  }
 },
 "ok" : 1,
 "operationTime" : Timestamp(1665566368, 2),
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665566368, 2),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 }
}

We configure the users collection in the test database to be sharded, so that documents are distributed automatically across shard1, shard2 and shard3 by user_id. This has to be configured explicitly because not every MongoDB database and collection needs to be sharded.
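
The transcript above only creates the supporting index; the sharding of the collection itself is not shown. Following the syntax given earlier, that step, run against mongos, would be:

sh.shardCollection("test.users", { user_id: 1 })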

Insert data to test the sharding:

mongos> for (var i = 1; i <= 100000; i++) db.users.save({user_id:i,username:"user"+i});
WriteResult({ "nInserted" : 1 })
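
db.users.stats() below includes a large amount of storage-engine detail; once the collection is sharded, db.users.getShardDistribution() gives a much more compact per-shard summary:

db.users.getShardDistribution()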

Check the sharding status as follows (other information is omitted here):

mongos> db.users.stats()
{
 "sharded" : false,
 "primary" : "shard2",
 "capped" : false,
 "wiredTiger" : {
  "metadata" : {
   "formatVersion" : 1
  },
  "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none",
  "type" : "file",
  "uri" : "statistics:table:collection-35-8856345805109314156",
  "LSM" : {
   "bloom filter false positives" : 0,
   "bloom filter hits" : 0,
   "bloom filter misses" : 0,
   "bloom filter pages evicted from cache" : 0,
   "bloom filter pages read into cache" : 0,
   "bloom filters in the LSM tree" : 0,
   "chunks in the LSM tree" : 0,
   "highest merge generation in the LSM tree" : 0,
   "queries that could have benefited from a Bloom filter that did not exist" : 0,
   "sleep for LSM checkpoint throttle" : 0,
   "sleep for LSM merge throttle" : 0,
   "total size of bloom filters" : 0
  },
  "block-manager" : {
   "allocations requiring file extension" : 100,
   "blocks allocated" : 1123,
   "blocks freed" : 882,
   "checkpoint size" : 1646592,
   "file allocation unit size" : 4096,
   "file bytes available for reuse" : 847872,
   "file magic number" : 120897,
   "file major version number" : 1,
   "file size in bytes" : 2510848,
   "minor version number" : 0
  },
  "btree" : {
   "btree checkpoint generation" : 176,
   "btree clean tree checkpoint expiration time" : NumberLong("9223372036854775807"),
   "column-store fixed-size leaf pages" : 0,
   "column-store internal pages" : 0,
   "column-store variable-size RLE encoded values" : 0,
   "column-store variable-size deleted values" : 0,
   "column-store variable-size leaf pages" : 0,
   "fixed-record size" : 0,
   "maximum internal page key size" : 368,
   "maximum internal page size" : 4096,
   "maximum leaf page key size" : 2867,
   "maximum leaf page size" : 32768,
   "maximum leaf page value size" : 67108864,
   "maximum tree depth" : 3,
   "number of key/value pairs" : 0,
   "overflow pages" : 0,
   "pages rewritten by compaction" : 0,
   "row-store empty values" : 0,
   "row-store internal pages" : 0,
   "row-store leaf pages" : 0
  },
  "cache" : {
   "bytes currently in the cache" : 7990950,
   "bytes dirty in the cache cumulative" : 213310601,
   "bytes read into cache" : 0,
   "bytes written from cache" : 108729333,
   "checkpoint blocked page eviction" : 0,
   "checkpoint of history store file blocked non-history store page eviction" : 0,
   "data source pages selected for eviction unable to be evicted" : 0,
   "eviction gave up due to detecting an out of order on disk value behind the last update on the chain" : 0,
   "eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update" : 0,
   "eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain" : 0,
   "eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update" : 0,
   "eviction walk passes of a file" : 86,
   "eviction walk target pages histogram - 0-9" : 1,
   "eviction walk target pages histogram - 10-31" : 1,
   "eviction walk target pages histogram - 128 and higher" : 0,
   "eviction walk target pages histogram - 32-63" : 84,
   "eviction walk target pages histogram - 64-128" : 0,
   "eviction walk target pages reduced due to history store cache pressure" : 0,
   "eviction walks abandoned" : 0,
   "eviction walks gave up because they restarted their walk twice" : 86,
   "eviction walks gave up because they saw too many pages and found no candidates" : 0,
   "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
   "eviction walks reached end of tree" : 172,
   "eviction walks restarted" : 0,
   "eviction walks started from root of tree" : 86,
   "eviction walks started from saved location in tree" : 0,
   "hazard pointer blocked page eviction" : 0,
   "history store table insert calls" : 0,
   "history store table insert calls that returned restart" : 0,
   "history store table out-of-order resolved updates that lose their durable timestamp" : 0,
   "history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp" : 0,
   "history store table reads" : 0,
   "history store table reads missed" : 0,
   "history store table reads requiring squashed modifies" : 0,
   "history store table truncation by rollback to stable to remove an unstable update" : 0,
   "history store table truncation by rollback to stable to remove an update" : 0,
   "history store table truncation to remove an update" : 0,
   "history store table truncation to remove range of updates due to key being removed from the data page during reconciliation" : 0,
   "history store table truncation to remove range of updates due to out-of-order timestamp update on data page" : 0,
   "history store table writes requiring squashed modifies" : 0,
   "in-memory page passed criteria to be split" : 2,
   "in-memory page splits" : 1,
   "internal pages evicted" : 0,
   "internal pages split during eviction" : 0,
   "leaf pages split during eviction" : 1,
   "modified pages evicted" : 1,
   "overflow pages read into cache" : 0,
   "page split during eviction deepened the tree" : 0,
   "page written requiring history store records" : 0,
   "pages read into cache" : 0,
   "pages read into cache after truncate" : 1,
   "pages read into cache after truncate in prepare state" : 0,
   "pages requested from the cache" : 100084,
   "pages seen by eviction walk" : 173,
   "pages written from cache" : 1001,
   "pages written requiring in-memory restoration" : 0,
   "tracked dirty bytes in the cache" : 0,
   "unmodified pages evicted" : 0
  },
  "cache_walk" : {
   "Average difference between current eviction generation when the page was last considered" : 0,
   "Average on-disk page image size seen" : 0,
   "Average time in cache for pages that have been visited by the eviction server" : 0,
   "Average time in cache for pages that have not been visited by the eviction server" : 0,
   "Clean pages currently in cache" : 0,
   "Current eviction generation" : 0,
   "Dirty pages currently in cache" : 0,
   "Entries in the root page" : 0,
   "Internal pages currently in cache" : 0,
   "Leaf pages currently in cache" : 0,
   "Maximum difference between current eviction generation when the page was last considered" : 0,
   "Maximum page size seen" : 0,
   "Minimum on-disk page image size seen" : 0,
   "Number of pages never visited by eviction server" : 0,
   "On-disk page image sizes smaller than a single allocation unit" : 0,
   "Pages created in memory and never written" : 0,
   "Pages currently queued for eviction" : 0,
   "Pages that could not be queued for eviction" : 0,
   "Refs skipped during cache traversal" : 0,
   "Size of the root page" : 0,
   "Total number of pages currently in cache" : 0
  },
  "checkpoint-cleanup" : {
   "pages added for eviction" : 0,
   "pages removed" : 0,
   "pages skipped during tree walk" : 403,
   "pages visited" : 480
  },
  "compression" : {
   "compressed page maximum internal page size prior to compression" : 4096,
   "compressed page maximum leaf page size prior to compression " : 131072,
   "compressed pages read" : 0,
   "compressed pages written" : 940,
   "page written failed to compress" : 0,
   "page written was too small to compress" : 61
  },
  "cursor" : {
   "Total number of entries skipped by cursor next calls" : 0,
   "Total number of entries skipped by cursor prev calls" : 0,
   "Total number of entries skipped to position the history store cursor" : 0,
   "Total number of times a search near has exited due to prefix config" : 0,
   "bulk loaded cursor insert calls" : 0,
   "cache cursors reuse count" : 100000,
   "close calls that result in cache" : 100004,
   "create calls" : 8,
   "cursor next calls that skip due to a globally visible history store tombstone" : 0,
   "cursor next calls that skip greater than or equal to 100 entries" : 0,
   "cursor next calls that skip less than 100 entries" : 1003,
   "cursor prev calls that skip due to a globally visible history store tombstone" : 0,
   "cursor prev calls that skip greater than or equal to 100 entries" : 0,
   "cursor prev calls that skip less than 100 entries" : 1,
   "insert calls" : 100000,
   "insert key and value bytes" : 6606530,
   "modify" : 0,
   "modify key and value bytes affected" : 0,
   "modify value bytes modified" : 0,
   "next calls" : 1003,
   "open cursor count" : 0,
   "operation restarted" : 0,
   "prev calls" : 1,
   "remove calls" : 0,
   "remove key bytes removed" : 0,
   "reserve calls" : 0,
   "reset calls" : 200011,
   "search calls" : 0,
   "search history store calls" : 0,
   "search near calls" : 1,
   "truncate calls" : 0,
   "update calls" : 0,
   "update key and value bytes" : 0,
   "update value size change" : 0
  },
  "reconciliation" : {
   "approximate byte size of timestamps in pages written" : 139184,
   "approximate byte size of transaction IDs in pages written" : 0,
   "dictionary matches" : 0,
   "fast-path pages deleted" : 0,
   "internal page key bytes discarded using suffix compression" : 1797,
   "internal page multi-block writes" : 0,
   "internal-page overflow keys" : 0,
   "leaf page key bytes discarded using prefix compression" : 0,
   "leaf page multi-block writes" : 60,
   "leaf-page overflow keys" : 0,
   "maximum blocks required for a page" : 1,
   "overflow values written" : 0,
   "page checksum matches" : 0,
   "page reconciliation calls" : 123,
   "page reconciliation calls for eviction" : 0,
   "pages deleted" : 0,
   "pages written including an aggregated newest start durable timestamp " : 61,
   "pages written including an aggregated newest stop durable timestamp " : 0,
   "pages written including an aggregated newest stop timestamp " : 0,
   "pages written including an aggregated newest stop transaction ID" : 0,
   "pages written including an aggregated newest transaction ID " : 0,
   "pages written including an aggregated oldest start timestamp " : 0,
   "pages written including an aggregated prepare" : 0,
   "pages written including at least one prepare" : 0,
   "pages written including at least one start durable timestamp" : 74,
   "pages written including at least one start timestamp" : 74,
   "pages written including at least one start transaction ID" : 0,
   "pages written including at least one stop durable timestamp" : 0,
   "pages written including at least one stop timestamp" : 0,
   "pages written including at least one stop transaction ID" : 0,
   "records written including a prepare" : 0,
   "records written including a start durable timestamp" : 8699,
   "records written including a start timestamp" : 8699,
   "records written including a start transaction ID" : 0,
   "records written including a stop durable timestamp" : 0,
   "records written including a stop timestamp" : 0,
   "records written including a stop transaction ID" : 0
  },
  "session" : {
   "object compaction" : 0,
   "tiered operations dequeued and processed" : 0,
   "tiered operations scheduled" : 0,
   "tiered storage local retention time (secs)" : 0,
   "tiered storage object size" : 0
  },
  "transaction" : {
   "race to read prepared update retry" : 0,
   "rollback to stable history store records with stop timestamps older than newer records" : 0,
   "rollback to stable inconsistent checkpoint" : 0,
   "rollback to stable keys removed" : 0,
   "rollback to stable keys restored" : 0,
   "rollback to stable restored tombstones from history store" : 0,
   "rollback to stable restored updates from history store" : 0,
   "rollback to stable skipping delete rle" : 0,
   "rollback to stable skipping stable rle" : 0,
   "rollback to stable sweeping history store keys" : 0,
   "rollback to stable updates removed from history store" : 0,
   "transaction checkpoints due to obsolete pages" : 0,
   "update conflicts" : 0
  }
 },
 "ns" : "test.users",
 "count" : 100000,
 "size" : 6288895,
 "storageSize" : 2510848,
 "totalIndexSize" : 3731456,
 "totalSize" : 6242304,
 "indexSizes" : {
  "_id_" : 2138112,
  "user_id_1" : 1593344
 },
 "avgObjSize" : 62,
 "maxSize" : NumberLong(0),
 "nindexes" : 2,
 "nchunks" : 1,
 "shards" : {
  "shard2" : {
   "ns" : "test.users",
   "size" : 6288895,
   "count" : 100000,
   "avgObjSize" : 62,
   "storageSize" : 2510848,
   "freeStorageSize" : 847872,
   "capped" : false,
   "wiredTiger" : {
    "metadata" : {
     "formatVersion" : 1
    },
    "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none",
    "type" : "file",
    "uri" : "statistics:table:collection-35-8856345805109314156",
    "LSM" : {
     "bloom filter false positives" : 0,
     "bloom filter hits" : 0,
     "bloom filter misses" : 0,
     "bloom filter pages evicted from cache" : 0,
     "bloom filter pages read into cache" : 0,
     "bloom filters in the LSM tree" : 0,
     "chunks in the LSM tree" : 0,
     "highest merge generation in the LSM tree" : 0,
     "queries that could have benefited from a Bloom filter that did not exist" : 0,
     "sleep for LSM checkpoint throttle" : 0,
     "sleep for LSM merge throttle" : 0,
     "total size of bloom filters" : 0
    },
    "block-manager" : {
     "allocations requiring file extension" : 100,
     "blocks allocated" : 1123,
     "blocks freed" : 882,
     "checkpoint size" : 1646592,
     "file allocation unit size" : 4096,
     "file bytes available for reuse" : 847872,
     "file magic number" : 120897,
     "file major version number" : 1,
     "file size in bytes" : 2510848,
     "minor version number" : 0
    },
    "btree" : {
     "btree checkpoint generation" : 176,
     "btree clean tree checkpoint expiration time" : NumberLong("9223372036854775807"),
     "column-store fixed-size leaf pages" : 0,
     "column-store internal pages" : 0,
     "column-store variable-size RLE encoded values" : 0,
     "column-store variable-size deleted values" : 0,
     "column-store variable-size leaf pages" : 0,
     "fixed-record size" : 0,
     "maximum internal page key size" : 368,
     "maximum internal page size" : 4096,
     "maximum leaf page key size" : 2867,
     "maximum leaf page size" : 32768,
     "maximum leaf page value size" : 67108864,
     "maximum tree depth" : 3,
     "number of key/value pairs" : 0,
     "overflow pages" : 0,
     "pages rewritten by compaction" : 0,
     "row-store empty values" : 0,
     "row-store internal pages" : 0,
     "row-store leaf pages" : 0
    },
    "cache" : {
     "bytes currently in the cache" : 7990950,
     "bytes dirty in the cache cumulative" : 213310601,
     "bytes read into cache" : 0,
     "bytes written from cache" : 108729333,
     "checkpoint blocked page eviction" : 0,
     "checkpoint of history store file blocked non-history store page eviction" : 0,
     "data source pages selected for eviction unable to be evicted" : 0,
     "eviction gave up due to detecting an out of order on disk value behind the last update on the chain" : 0,
     "eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update" : 0,
     "eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain" : 0,
     "eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update" : 0,
     "eviction walk passes of a file" : 86,
     "eviction walk target pages histogram - 0-9" : 1,
     "eviction walk target pages histogram - 10-31" : 1,
     "eviction walk target pages histogram - 128 and higher" : 0,
     "eviction walk target pages histogram - 32-63" : 84,
     "eviction walk target pages histogram - 64-128" : 0,
     "eviction walk target pages reduced due to history store cache pressure" : 0,
     "eviction walks abandoned" : 0,
     "eviction walks gave up because they restarted their walk twice" : 86,
     "eviction walks gave up because they saw too many pages and found no candidates" : 0,
     "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
     "eviction walks reached end of tree" : 172,
     "eviction walks restarted" : 0,
     "eviction walks started from root of tree" : 86,
     "eviction walks started from saved location in tree" : 0,
     "hazard pointer blocked page eviction" : 0,
     "history store table insert calls" : 0,
     "history store table insert calls that returned restart" : 0,
     "history store table out-of-order resolved updates that lose their durable timestamp" : 0,
     "history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp" : 0,
     "history store table reads" : 0,
     "history store table reads missed" : 0,
     "history store table reads requiring squashed modifies" : 0,
     "history store table truncation by rollback to stable to remove an unstable update" : 0,
     "history store table truncation by rollback to stable to remove an update" : 0,
     "history store table truncation to remove an update" : 0,
     "history store table truncation to remove range of updates due to key being removed from the data page during reconciliation" : 0,
     "history store table truncation to remove range of updates due to out-of-order timestamp update on data page" : 0,
     "history store table writes requiring squashed modifies" : 0,
     "in-memory page passed criteria to be split" : 2,
     "in-memory page splits" : 1,
     "internal pages evicted" : 0,
     "internal pages split during eviction" : 0,
     "leaf pages split during eviction" : 1,
     "modified pages evicted" : 1,
     "overflow pages read into cache" : 0,
     "page split during eviction deepened the tree" : 0,
     "page written requiring history store records" : 0,
     "pages read into cache" : 0,
     "pages read into cache after truncate" : 1,
     "pages read into cache after truncate in prepare state" : 0,
     "pages requested from the cache" : 100084,
     "pages seen by eviction walk" : 173,
     "pages written from cache" : 1001,
     "pages written requiring in-memory restoration" : 0,
     "tracked dirty bytes in the cache" : 0,
     "unmodified pages evicted" : 0
    },
    "cache_walk" : {
     "Average difference between current eviction generation when the page was last considered" : 0,
     "Average on-disk page image size seen" : 0,
     "Average time in cache for pages that have been visited by the eviction server" : 0,
     "Average time in cache for pages that have not been visited by the eviction server" : 0,
     "Clean pages currently in cache" : 0,
     "Current eviction generation" : 0,
     "Dirty pages currently in cache" : 0,
     "Entries in the root page" : 0,
     "Internal pages currently in cache" : 0,
     "Leaf pages currently in cache" : 0,
     "Maximum difference between current eviction generation when the page was last considered" : 0,
     "Maximum page size seen" : 0,
     "Minimum on-disk page image size seen" : 0,
     "Number of pages never visited by eviction server" : 0,
     "On-disk page image sizes smaller than a single allocation unit" : 0,
     "Pages created in memory and never written" : 0,
     "Pages currently queued for eviction" : 0,
     "Pages that could not be queued for eviction" : 0,
     "Refs skipped during cache traversal" : 0,
     "Size of the root page" : 0,
     "Total number of pages currently in cache" : 0
    },
    "checkpoint-cleanup" : {
     "pages added for eviction" : 0,
     "pages removed" : 0,
     "pages skipped during tree walk" : 403,
     "pages visited" : 480
    },
    "compression" : {
     "compressed page maximum internal page size prior to compression" : 4096,
     "compressed page maximum leaf page size prior to compression " : 131072,
     "compressed pages read" : 0,
     "compressed pages written" : 940,
     "page written failed to compress" : 0,
     "page written was too small to compress" : 61
    },
    "cursor" : {
     "Total number of entries skipped by cursor next calls" : 0,
     "Total number of entries skipped by cursor prev calls" : 0,
     "Total number of entries skipped to position the history store cursor" : 0,
     "Total number of times a search near has exited due to prefix config" : 0,
     "bulk loaded cursor insert calls" : 0,
     "cache cursors reuse count" : 100000,
     "close calls that result in cache" : 100004,
     "create calls" : 8,
     "cursor next calls that skip due to a globally visible history store tombstone" : 0,
     "cursor next calls that skip greater than or equal to 100 entries" : 0,
     "cursor next calls that skip less than 100 entries" : 1003,
     "cursor prev calls that skip due to a globally visible history store tombstone" : 0,
     "cursor prev calls that skip greater than or equal to 100 entries" : 0,
     "cursor prev calls that skip less than 100 entries" : 1,
     "insert calls" : 100000,
     "insert key and value bytes" : 6606530,
     "modify" : 0,
     "modify key and value bytes affected" : 0,
     "modify value bytes modified" : 0,
     "next calls" : 1003,
     "open cursor count" : 0,
     "operation restarted" : 0,
     "prev calls" : 1,
     "remove calls" : 0,
     "remove key bytes removed" : 0,
     "reserve calls" : 0,
     "reset calls" : 200011,
     "search calls" : 0,
     "search history store calls" : 0,
     "search near calls" : 1,
     "truncate calls" : 0,
     "update calls" : 0,
     "update key and value bytes" : 0,
     "update value size change" : 0
    },
    "reconciliation" : {
     "approximate byte size of timestamps in pages written" : 139184,
     "approximate byte size of transaction IDs in pages written" : 0,
     "dictionary matches" : 0,
     "fast-path pages deleted" : 0,
     "internal page key bytes discarded using suffix compression" : 1797,
     "internal page multi-block writes" : 0,
     "internal-page overflow keys" : 0,
     "leaf page key bytes discarded using prefix compression" : 0,
     "leaf page multi-block writes" : 60,
     "leaf-page overflow keys" : 0,
     "maximum blocks required for a page" : 1,
     "overflow values written" : 0,
     "page checksum matches" : 0,
     "page reconciliation calls" : 123,
     "page reconciliation calls for eviction" : 0,
     "pages deleted" : 0,
     "pages written including an aggregated newest start durable timestamp " : 61,
     "pages written including an aggregated newest stop durable timestamp " : 0,
     "pages written including an aggregated newest stop timestamp " : 0,
     "pages written including an aggregated newest stop transaction ID" : 0,
     "pages written including an aggregated newest transaction ID " : 0,
     "pages written including an aggregated oldest start timestamp " : 0,
     "pages written including an aggregated prepare" : 0,
     "pages written including at least one prepare" : 0,
     "pages written including at least one start durable timestamp" : 74,
     "pages written including at least one start timestamp" : 74,
     "pages written including at least one start transaction ID" : 0,
     "pages written including at least one stop durable timestamp" : 0,
     "pages written including at least one stop timestamp" : 0,
     "pages written including at least one stop transaction ID" : 0,
     "records written including a prepare" : 0,
     "records written including a start durable timestamp" : 8699,
     "records written including a start timestamp" : 8699,
     "records written including a start transaction ID" : 0,
     "records written including a stop durable timestamp" : 0,
     "records written including a stop timestamp" : 0,
     "records written including a stop transaction ID" : 0
    },
    "session" : {
     "object compaction" : 0,
     "tiered operations dequeued and processed" : 0,
     "tiered operations scheduled" : 0,
     "tiered storage local retention time (secs)" : 0,
     "tiered storage object size" : 0
    },
    "transaction" : {
     "race to read prepared update retry" : 0,
     "rollback to stable history store records with stop timestamps older than newer records" : 0,
     "rollback to stable inconsistent checkpoint" : 0,
     "rollback to stable keys removed" : 0,
     "rollback to stable keys restored" : 0,
     "rollback to stable restored tombstones from history store" : 0,
     "rollback to stable restored updates from history store" : 0,
     "rollback to stable skipping delete rle" : 0,
     "rollback to stable skipping stable rle" : 0,
     "rollback to stable sweeping history store keys" : 0,
     "rollback to stable updates removed from history store" : 0,
     "transaction checkpoints due to obsolete pages" : 0,
     "update conflicts" : 0
    }
   },
   "nindexes" : 2,
   "indexDetails" : {
    "_id_" : {
     "metadata" : {
      "formatVersion" : 8
     },
     "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=8),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16k,key_format=u,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=16k,leaf_value_max=0,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none",
     "type" : "file",
     "uri" : "statistics:table:index-36-8856345805109314156",
     "LSM" : {
      "bloom filter false positives" : 0,
      "bloom filter hits" : 0,
      "bloom filter misses" : 0,
      "bloom filter pages evicted from cache" : 0,
      "bloom filter pages read into cache" : 0,
      "bloom filters in the LSM tree" : 0,
      "chunks in the LSM tree" : 0,
      "highest merge generation in the LSM tree" : 0,
      "queries that could have benefited from a Bloom filter that did not exist" : 0,
      "sleep for LSM checkpoint throttle" : 0,
      "sleep for LSM merge throttle" : 0,
      "total size of bloom filters" : 0
     },
     "block-manager" : {
      "allocations requiring file extension" : 149,
      "blocks allocated" : 1464,
      "blocks freed" : 1173,
      "checkpoint size" : 1753088,
      "file allocation unit size" : 4096,
      "file bytes available for reuse" : 368640,
      "file magic number" : 120897,
      "file major version number" : 1,
      "file size in bytes" : 2138112,
      "minor version number" : 0
     },
     "btree" : {
      "btree checkpoint generation" : 176,
      "btree clean tree checkpoint expiration time" : NumberLong("9223372036854775807"),
      "column-store fixed-size leaf pages" : 0,
      "column-store internal pages" : 0,
      "column-store variable-size RLE encoded values" : 0,
      "column-store variable-size deleted values" : 0,
      "column-store variable-size leaf pages" : 0,
      "fixed-record size" : 0,
      "maximum internal page key size" : 1474,
      "maximum internal page size" : 16384,
      "maximum leaf page key size" : 1474,
      "maximum leaf page size" : 16384,
      "maximum leaf page value size" : 7372,
      "maximum tree depth" : 3,
      "number of key/value pairs" : 0,
      "overflow pages" : 0,
      "pages rewritten by compaction" : 0,
      "row-store empty values" : 0,
      "row-store internal pages" : 0,
      "row-store leaf pages" : 0
     },
     "cache" : {
      "bytes currently in the cache" : 6777893,
      "bytes dirty in the cache cumulative" : 114698206,
      "bytes read into cache" : 0,
      "bytes written from cache" : 18626770,
      "checkpoint blocked page eviction" : 0,
      "checkpoint of history store file blocked non-history store page eviction" : 0,
      "data source pages selected for eviction unable to be evicted" : 0,
      "eviction gave up due to detecting an out of order on disk value behind the last update on the chain" : 0,
      "eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update" : 0,
      "eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain" : 0,
      "eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update" : 0,
      "eviction walk passes of a file" : 86,
      "eviction walk target pages histogram - 0-9" : 1,
      "eviction walk target pages histogram - 10-31" : 1,
      "eviction walk target pages histogram - 128 and higher" : 0,
      "eviction walk target pages histogram - 32-63" : 84,
      "eviction walk target pages histogram - 64-128" : 0,
      "eviction walk target pages reduced due to history store cache pressure" : 0,
      "eviction walks abandoned" : 0,
      "eviction walks gave up because they restarted their walk twice" : 86,
      "eviction walks gave up because they saw too many pages and found no candidates" : 0,
      "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
      "eviction walks reached end of tree" : 172,
      "eviction walks restarted" : 0,
      "eviction walks started from root of tree" : 86,
      "eviction walks started from saved location in tree" : 0,
      "hazard pointer blocked page eviction" : 0,
      "history store table insert calls" : 0,
      "history store table insert calls that returned restart" : 0,
      "history store table out-of-order resolved updates that lose their durable timestamp" : 0,
      "history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp" : 0,
      "history store table reads" : 0,
      "history store table reads missed" : 0,
      "history store table reads requiring squashed modifies" : 0,
      "history store table truncation by rollback to stable to remove an unstable update" : 0,
      "history store table truncation by rollback to stable to remove an update" : 0,
      "history store table truncation to remove an update" : 0,
      "history store table truncation to remove range of updates due to key being removed from the data page during reconciliation" : 0,
      "history store table truncation to remove range of updates due to out-of-order timestamp update on data page" : 0,
      "history store table writes requiring squashed modifies" : 0,
      "in-memory page passed criteria to be split" : 4,
      "in-memory page splits" : 2,
      "internal pages evicted" : 0,
      "internal pages split during eviction" : 0,
      "leaf pages split during eviction" : 1,
      "modified pages evicted" : 1,
      "overflow pages read into cache" : 0,
      "page split during eviction deepened the tree" : 0,
      "page written requiring history store records" : 0,
      "pages read into cache" : 0,
      "pages read into cache after truncate" : 1,
      "pages read into cache after truncate in prepare state" : 0,
      "pages requested from the cache" : 100083,
      "pages seen by eviction walk" : 257,
      "pages written from cache" : 1342,
      "pages written requiring in-memory restoration" : 0,
      "tracked dirty bytes in the cache" : 0,
      "unmodified pages evicted" : 0
     },
     "cache_walk" : {
      "Average difference between current eviction generation when the page was last considered" : 0,
      "Average on-disk page image size seen" : 0,
      "Average time in cache for pages that have been visited by the eviction server" : 0,
      "Average time in cache for pages that have not been visited by the eviction server" : 0,
      "Clean pages currently in cache" : 0,
      "Current eviction generation" : 0,
      "Dirty pages currently in cache" : 0,
      "Entries in the root page" : 0,
      "Internal pages currently in cache" : 0,
      "Leaf pages currently in cache" : 0,
      "Maximum difference between current eviction generation when the page was last considered" : 0,
      "Maximum page size seen" : 0,
      "Minimum on-disk page image size seen" : 0,
      "Number of pages never visited by eviction server" : 0,
      "On-disk page image sizes smaller than a single allocation unit" : 0,
      "Pages created in memory and never written" : 0,
      "Pages currently queued for eviction" : 0,
      "Pages that could not be queued for eviction" : 0,
      "Refs skipped during cache traversal" : 0,
      "Size of the root page" : 0,
      "Total number of pages currently in cache" : 0
     },
     "checkpoint-cleanup" : {
      "pages added for eviction" : 0,
      "pages removed" : 0,
      "pages skipped during tree walk" : 1290,
      "pages visited" : 1371
     },
     "compression" : {
      "compressed page maximum internal page size prior to compression" : 16384,
      "compressed page maximum leaf page size prior to compression " : 16384,
      "compressed pages read" : 0,
      "compressed pages written" : 0,
      "page written failed to compress" : 0,
      "page written was too small to compress" : 0
     },
     "cursor" : {
      "Total number of entries skipped by cursor next calls" : 0,
      "Total number of entries skipped by cursor prev calls" : 0,
      "Total number of entries skipped to position the history store cursor" : 0,
      "Total number of times a search near has exited due to prefix config" : 0,
      "bulk loaded cursor insert calls" : 0,
      "cache cursors reuse count" : 99996,
      "close calls that result in cache" : 100000,
      "create calls" : 7,
      "cursor next calls that skip due to a globally visible history store tombstone" : 0,
      "cursor next calls that skip greater than or equal to 100 entries" : 0,
      "cursor next calls that skip less than 100 entries" : 0,
      "cursor prev calls that skip due to a globally visible history store tombstone" : 0,
      "cursor prev calls that skip greater than or equal to 100 entries" : 0,
      "cursor prev calls that skip less than 100 entries" : 0,
      "insert calls" : 100000,
      "insert key and value bytes" : 1698977,
      "modify" : 0,
      "modify key and value bytes affected" : 0,
      "modify value bytes modified" : 0,
      "next calls" : 0,
      "open cursor count" : 0,
      "operation restarted" : 0,
      "prev calls" : 0,
      "remove calls" : 0,
      "remove key bytes removed" : 0,
      "reserve calls" : 0,
      "reset calls" : 200000,
      "search calls" : 0,
      "search history store calls" : 0,
      "search near calls" : 0,
      "truncate calls" : 0,
      "update calls" : 0,
      "update key and value bytes" : 0,
      "update value size change" : 0
     },
     "reconciliation" : {
      "approximate byte size of timestamps in pages written" : 139248,
      "approximate byte size of transaction IDs in pages written" : 0,
      "dictionary matches" : 0,
      "fast-path pages deleted" : 0,
      "internal page key bytes discarded using suffix compression" : 5792,
      "internal page multi-block writes" : 0,
      "internal-page overflow keys" : 0,
      "leaf page key bytes discarded using prefix compression" : 4983883,
      "leaf page multi-block writes" : 61,
      "leaf-page overflow keys" : 0,
      "maximum blocks required for a page" : 1,
      "overflow values written" : 0,
      "page checksum matches" : 0,
      "page reconciliation calls" : 124,
      "page reconciliation calls for eviction" : 0,
      "pages deleted" : 0,
      "pages written including an aggregated newest start durable timestamp " : 61,
      "pages written including an aggregated newest stop durable timestamp " : 0,
      "pages written including an aggregated newest stop timestamp " : 0,
      "pages written including an aggregated newest stop transaction ID" : 0,
      "pages written including an aggregated newest transaction ID " : 0,
      "pages written including an aggregated oldest start timestamp " : 0,
      "pages written including an aggregated prepare" : 0,
      "pages written including at least one prepare" : 0,
      "pages written including at least one start durable timestamp" : 95,
      "pages written including at least one start timestamp" : 95,
      "pages written including at least one start transaction ID" : 0,
      "pages written including at least one stop durable timestamp" : 0,
      "pages written including at least one stop timestamp" : 0,
      "pages written including at least one stop transaction ID" : 0,
      "records written including a prepare" : 0,
      "records written including a start durable timestamp" : 8703,
      "records written including a start timestamp" : 8703,
      "records written including a start transaction ID" : 0,
      "records written including a stop durable timestamp" : 0,
      "records written including a stop timestamp" : 0,
      "records written including a stop transaction ID" : 0
     },
     "session" : {
      "object compaction" : 0,
      "tiered operations dequeued and processed" : 0,
      "tiered operations scheduled" : 0,
      "tiered storage local retention time (secs)" : 0,
      "tiered storage object size" : 0
     },
     "transaction" : {
      "race to read prepared update retry" : 0,
      "rollback to stable history store records with stop timestamps older than newer records" : 0,
      "rollback to stable inconsistent checkpoint" : 0,
      "rollback to stable keys removed" : 0,
      "rollback to stable keys restored" : 0,
      "rollback to stable restored tombstones from history store" : 0,
      "rollback to stable restored updates from history store" : 0,
      "rollback to stable skipping delete rle" : 0,
      "rollback to stable skipping stable rle" : 0,
      "rollback to stable sweeping history store keys" : 0,
      "rollback to stable updates removed from history store" : 0,
      "transaction checkpoints due to obsolete pages" : 0,
      "update conflicts" : 0
     }
    },
    "user_id_1" : {
     "metadata" : {
      "formatVersion" : 8
     },
     "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=8),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(enabled=false,file_metadata=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=16k,key_format=u,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=16k,leaf_value_max=0,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=10M),type=file,value_format=u,verbose=[],write_timestamp_usage=none",
     "type" : "file",
     "uri" : "statistics:table:index-37-8856345805109314156",
     "LSM" : {
      "bloom filter false positives" : 0,
      "bloom filter hits" : 0,
      "bloom filter misses" : 0,
      "bloom filter pages evicted from cache" : 0,
      "bloom filter pages read into cache" : 0,
      "bloom filters in the LSM tree" : 0,
      "chunks in the LSM tree" : 0,
      "highest merge generation in the LSM tree" : 0,
      "queries that could have benefited from a Bloom filter that did not exist" : 0,
      "sleep for LSM checkpoint throttle" : 0,
      "sleep for LSM merge throttle" : 0,
      "total size of bloom filters" : 0
     },
     "block-manager" : {
      "allocations requiring file extension" : 111,
      "blocks allocated" : 1143,
      "blocks freed" : 887,
      "checkpoint size" : 1187840,
      "file allocation unit size" : 4096,
      "file bytes available for reuse" : 389120,
      "file magic number" : 120897,
      "file major version number" : 1,
      "file size in bytes" : 1593344,
      "minor version number" : 0
     },
     "btree" : {
      "btree checkpoint generation" : 176,
      "btree clean tree checkpoint expiration time" : NumberLong("9223372036854775807"),
      "column-store fixed-size leaf pages" : 0,
      "column-store internal pages" : 0,
      "column-store variable-size RLE encoded values" : 0,
      "column-store variable-size deleted values" : 0,
      "column-store variable-size leaf pages" : 0,
      "fixed-record size" : 0,
      "maximum internal page key size" : 1474,
      "maximum internal page size" : 16384,
      "maximum leaf page key size" : 1474,
      "maximum leaf page size" : 16384,
      "maximum leaf page value size" : 7372,
      "maximum tree depth" : 3,
      "number of key/value pairs" : 0,
      "overflow pages" : 0,
      "pages rewritten by compaction" : 0,
      "row-store empty values" : 0,
      "row-store internal pages" : 0,
      "row-store leaf pages" : 0
     },
     "cache" : {
      "bytes currently in the cache" : 6092356,
      "bytes dirty in the cache cumulative" : 126763753,
      "bytes read into cache" : 0,
      "bytes written from cache" : 13834392,
      "checkpoint blocked page eviction" : 0,
      "checkpoint of history store file blocked non-history store page eviction" : 0,
      "data source pages selected for eviction unable to be evicted" : 0,
      "eviction gave up due to detecting an out of order on disk value behind the last update on the chain" : 0,
      "eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update" : 0,
      "eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain" : 0,
      "eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update" : 0,
      "eviction walk passes of a file" : 86,
      "eviction walk target pages histogram - 0-9" : 1,
      "eviction walk target pages histogram - 10-31" : 1,
      "eviction walk target pages histogram - 128 and higher" : 0,
      "eviction walk target pages histogram - 32-63" : 84,
      "eviction walk target pages histogram - 64-128" : 0,
      "eviction walk target pages reduced due to history store cache pressure" : 0,
      "eviction walks abandoned" : 0,
      "eviction walks gave up because they restarted their walk twice" : 86,
      "eviction walks gave up because they saw too many pages and found no candidates" : 0,
      "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
      "eviction walks reached end of tree" : 172,
      "eviction walks restarted" : 0,
      "eviction walks started from root of tree" : 86,
      "eviction walks started from saved location in tree" : 0,
      "hazard pointer blocked page eviction" : 0,
      "history store table insert calls" : 0,
      "history store table insert calls that returned restart" : 0,
      "history store table out-of-order resolved updates that lose their durable timestamp" : 0,
      "history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp" : 0,
      "history store table reads" : 0,
      "history store table reads missed" : 0,
      "history store table reads requiring squashed modifies" : 0,
      "history store table truncation by rollback to stable to remove an unstable update" : 0,
      "history store table truncation by rollback to stable to remove an update" : 0,
      "history store table truncation to remove an update" : 0,
      "history store table truncation to remove range of updates due to key being removed from the data page during reconciliation" : 0,
      "history store table truncation to remove range of updates due to out-of-order timestamp update on data page" : 0,
      "history store table writes requiring squashed modifies" : 0,
      "in-memory page passed criteria to be split" : 4,
      "in-memory page splits" : 2,
      "internal pages evicted" : 0,
      "internal pages split during eviction" : 0,
      "leaf pages split during eviction" : 1,
      "modified pages evicted" : 1,
      "overflow pages read into cache" : 0,
      "page split during eviction deepened the tree" : 0,
      "page written requiring history store records" : 0,
      "pages read into cache" : 0,
      "pages read into cache after truncate" : 1,
      "pages read into cache after truncate in prepare state" : 0,
      "pages requested from the cache" : 100077,
      "pages seen by eviction walk" : 257,
      "pages written from cache" : 1021,
      "pages written requiring in-memory restoration" : 0,
      "tracked dirty bytes in the cache" : 0,
      "unmodified pages evicted" : 0
     },
     "cache_walk" : {
      "Average difference between current eviction generation when the page was last considered" : 0,
      "Average on-disk page image size seen" : 0,
      "Average time in cache for pages that have been visited by the eviction server" : 0,
      "Average time in cache for pages that have not been visited by the eviction server" : 0,
      "Clean pages currently in cache" : 0,
      "Current eviction generation" : 0,
      "Dirty pages currently in cache" : 0,
      "Entries in the root page" : 0,
      "Internal pages currently in cache" : 0,
      "Leaf pages currently in cache" : 0,
      "Maximum difference between current eviction generation when the page was last considered" : 0,
      "Maximum page size seen" : 0,
      "Minimum on-disk page image size seen" : 0,
      "Number of pages never visited by eviction server" : 0,
      "On-disk page image sizes smaller than a single allocation unit" : 0,
      "Pages created in memory and never written" : 0,
      "Pages currently queued for eviction" : 0,
      "Pages that could not be queued for eviction" : 0,
      "Refs skipped during cache traversal" : 0,
      "Size of the root page" : 0,
      "Total number of pages currently in cache" : 0
     },
     "checkpoint-cleanup" : {
      "pages added for eviction" : 0,
      "pages removed" : 0,
      "pages skipped during tree walk" : 900,
      "pages visited" : 975
     },
     "compression" : {
      "compressed page maximum internal page size prior to compression" : 16384,
      "compressed page maximum leaf page size prior to compression " : 16384,
      "compressed pages read" : 0,
      "compressed pages written" : 0,
      "page written failed to compress" : 0,
      "page written was too small to compress" : 0
     },
     "cursor" : {
      "Total number of entries skipped by cursor next calls" : 0,
      "Total number of entries skipped by cursor prev calls" : 0,
      "Total number of entries skipped to position the history store cursor" : 0,
      "Total number of times a search near has exited due to prefix config" : 0,
      "bulk loaded cursor insert calls" : 0,
      "cache cursors reuse count" : 99996,
      "close calls that result in cache" : 100000,
      "create calls" : 7,
      "cursor next calls that skip due to a globally visible history store tombstone" : 0,
      "cursor next calls that skip greater than or equal to 100 entries" : 0,
      "cursor next calls that skip less than 100 entries" : 0,
      "cursor prev calls that skip due to a globally visible history store tombstone" : 0,
      "cursor prev calls that skip greater than or equal to 100 entries" : 0,
      "cursor prev calls that skip less than 100 entries" : 0,
      "insert calls" : 100000,
      "insert key and value bytes" : 866083,
      "modify" : 0,
      "modify key and value bytes affected" : 0,
      "modify value bytes modified" : 0,
      "next calls" : 0,
      "open cursor count" : 0,
      "operation restarted" : 0,
      "prev calls" : 0,
      "remove calls" : 0,
      "remove key bytes removed" : 0,
      "reserve calls" : 0,
      "reset calls" : 200000,
      "search calls" : 0,
      "search history store calls" : 0,
      "search near calls" : 0,
      "truncate calls" : 0,
      "update calls" : 0,
      "update key and value bytes" : 0,
      "update value size change" : 0
     },
     "reconciliation" : {
      "approximate byte size of timestamps in pages written" : 139328,
      "approximate byte size of transaction IDs in pages written" : 0,
      "dictionary matches" : 0,
      "fast-path pages deleted" : 0,
      "internal page key bytes discarded using suffix compression" : 9178,
      "internal page multi-block writes" : 0,
      "internal-page overflow keys" : 0,
      "leaf page key bytes discarded using prefix compression" : 0,
      "leaf page multi-block writes" : 61,
      "leaf-page overflow keys" : 0,
      "maximum blocks required for a page" : 1,
      "overflow values written" : 0,
      "page checksum matches" : 0,
      "page reconciliation calls" : 124,
      "page reconciliation calls for eviction" : 0,
      "pages deleted" : 0,
      "pages written including an aggregated newest start durable timestamp " : 0,
      "pages written including an aggregated newest stop durable timestamp " : 0,
      "pages written including an aggregated newest stop timestamp " : 0,
      "pages written including an aggregated newest stop transaction ID" : 0,
      "pages written including an aggregated newest transaction ID " : 0,
      "pages written including an aggregated oldest start timestamp " : 0,
      "pages written including an aggregated prepare" : 0,
      "pages written including at least one prepare" : 0,
      "pages written including at least one start durable timestamp" : 92,
      "pages written including at least one start timestamp" : 92,
      "pages written including at least one start transaction ID" : 0,
      "pages written including at least one stop durable timestamp" : 0,
      "pages written including at least one stop timestamp" : 0,
      "pages written including at least one stop transaction ID" : 0,
      "records written including a prepare" : 0,
      "records written including a start durable timestamp" : 8708,
      "records written including a start timestamp" : 8708,
      "records written including a start transaction ID" : 0,
      "records written including a stop durable timestamp" : 0,
      "records written including a stop timestamp" : 0,
      "records written including a stop transaction ID" : 0
     },
     "session" : {
      "object compaction" : 0,
      "tiered operations dequeued and processed" : 0,
      "tiered operations scheduled" : 0,
      "tiered storage local retention time (secs)" : 0,
      "tiered storage object size" : 0
     },
     "transaction" : {
      "race to read prepared update retry" : 0,
      "rollback to stable history store records with stop timestamps older than newer records" : 0,
      "rollback to stable inconsistent checkpoint" : 0,
      "rollback to stable keys removed" : 0,
      "rollback to stable keys restored" : 0,
      "rollback to stable restored tombstones from history store" : 0,
      "rollback to stable restored updates from history store" : 0,
      "rollback to stable skipping delete rle" : 0,
      "rollback to stable skipping stable rle" : 0,
      "rollback to stable sweeping history store keys" : 0,
      "rollback to stable updates removed from history store" : 0,
      "transaction checkpoints due to obsolete pages" : 0,
      "update conflicts" : 0
     }
    }
   },
   "indexBuilds" : [ ],
   "totalIndexSize" : 3731456,
   "totalSize" : 6242304,
   "indexSizes" : {
    "_id_" : 2138112,
    "user_id_1" : 1593344
   },
   "scaleFactor" : 1,
   "ok" : 1,
   "$gleStats" : {
    "lastOpTime" : Timestamp(0, 0),
    "electionId" : ObjectId("7fffffff0000000000000001")
   },
   "lastCommittedOpTime" : Timestamp(1665572580, 1),
   "$configServerState" : {
    "opTime" : {
     "ts" : Timestamp(1665572580, 2),
     "t" : NumberLong(1)
    }
   },
   "$clusterTime" : {
    "clusterTime" : Timestamp(1665572580, 2),
    "signature" : {
     "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
     "keyId" : NumberLong(0)
    }
   },
   "operationTime" : Timestamp(1665572580, 1)
  }
 },
 "ok" : 1,
 "operationTime" : Timestamp(1665572580, 1),
 "$clusterTime" : {
  "clusterTime" : Timestamp(1665572580, 2),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 }
}
mongos> 

View the sharding information

mongos> db.printShardingStatus()
or
mongos> sh.status()

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
   "_id" : 1,
   "minCompatibleVersion" : 5,
   "currentVersion" : 6,
   "clusterId" : ObjectId("63466fe028ab153ff68f35e2")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/39.103.204.27:27001,49.232.197.39:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/39.103.204.27:27002,49.232.197.39:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/39.103.204.27:27003,49.232.197.39:27003",  "state" : 1 }
  active mongoses:
        "4.4.10" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                682 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	342
                                shard2	341
                                shard3	341
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "test",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("388af51a-fa32-4741-aca3-95d711691406"),  "lastMod" : 1 } }

At this point the MongoDB sharded cluster has been deployed and tested. To see more detailed information, run one of the following:

mongos> sh.status({"verbose":1})
or
mongos> db.printShardingStatus("vvvv")
or
mongos> printShardingStatus(db.getSisterDB("config"),1)
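
Besides the verbose status output, you can also read the chunk distribution straight from the config database. A minimal sketch (test.mycol is only a placeholder namespace, replace it with your actual sharded collection; on 4.4 each config.chunks document still carries an ns field):

mongos> use config
mongos> db.chunks.aggregate([
...   { $match: { ns: "test.mycol" } },                      // placeholder namespace
...   { $group: { _id: "$shard", chunkCount: { $sum: 1 } } }  // chunks per shard
... ])

This returns one document per shard with the number of chunks it currently owns, which should match the counts printed by sh.status().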

Check whether the balancer is enabled on the MongoDB cluster

mongos> sh.getBalancerState()
true

You can also check the balancer state by running sh.status() on a mongos router. If the balancer is enabled, check whether a data migration is currently running:

mongos> sh.isBalancerRunning()
false

If the balancer is not enabled, enable it with:

sh.setBalancerState(true)
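
Once the balancer has been running for a while, you can verify from mongos how the documents of a sharded collection are actually distributed with getShardDistribution(). A minimal sketch (again using the placeholder collection test.mycol):

mongos> use test
mongos> db.mycol.getShardDistribution()

For each shard this helper prints the data size, document count, and estimated percentage of the collection, which is a quick sanity check that the chunks reported by sh.status() really contain data.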


                