Basic MongoDB Operations and Configuring Replica Sets and Sharding on CentOS 7

MongoDB

Author: D

Version: v1.0

Date: July 10, 2020

Environment: CentOS 7

MongoDB version: 4.2.8

I. Configuration on Windows

Note: the connection string below uses my MongoDB login credentials on Windows (change them to match your own settings):
mongodb://qyd:wasd2671jkluio@localhost

1. Basic MongoDB setup on Windows
https://www.runoob.com/mongodb/mongodb-window-install.html

2. MongoDB roles and permission settings
https://www.jianshu.com/p/62736bff7e2e
https://www.cnblogs.com/swordfall/p/10841418.html

3. Fixing errors when calling MongoDB from PHP (leave a like if you drop by)
https://blog.csdn.net/FalconKkv/article/details/107001402

4. How to use MongoDB from PHP
https://www.cnblogs.com/wujuntian/p/8352586.html

5. Database export command (run it directly from an administrator cmd prompt; myFirstMongoDB is your database name and f:\tt is the export path)
mongodump -h 127.0.0.1 -d myFirstMongoDB -o f:\tt

6. Data import command
mongorestore.exe -d newDB f:/tt/myFirstMongoDB
Reference: https://www.cnblogs.com/softwarefang/p/9823062.html

II. Configuration on Linux

Check listening ports: netstat -ntlp

Location of my MongoDB bin directory:
/home/qyd/Documents/mongodb-linux-x86_64-rhel70-4.2.8/bin

1. Configuring MongoDB on CentOS 7.0
https://www.cnblogs.com/zhunong/p/12736471.html

2. Following the blog above, you may hit an error that is easy to miss because it is buried in the rest of the output: exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
mongod could not find its data directory. Fix: create the path: sudo mkdir -p /data/db

3. Unlike Windows, there is no service shortcut such as net start MongoDB to start the service on Linux.
Run ./mongod from the bin directory, then open a new terminal window, go back into that bin directory, and run ./mongo to open the shell (note: this is the bin directory of the extracted MongoDB archive, not the system /bin; do not confuse the two).

4. This step supersedes the previous one (which I have left in place). Assuming you can already run mongo from the bin directory, edit /etc/profile and add the line below, where the path after $PATH: is your MongoDB bin directory:
export PATH=$PATH:/home/qyd/Documents/mongodb-linux-x86_64-rhel70-4.2.8/bin
After updating the file, run the following in the terminal so the environment variable takes effect immediately:
source /etc/profile

5. Symlink the MongoDB binaries into /usr/bin so the mongo commands can be run from anywhere. In the terminal run (replace the path with your own bin directory):
ln -s /home/qyd/Documents/mongodb-linux-x86_64-rhel70-4.2.8/bin/* /usr/bin/

6. From any directory, run mongo to connect to the mongod service. In the same terminal, check that the mongo/mongod commands can be executed; if they can, the setting will survive a reboot, and once you have rebooted you are done.

7. Log in with a role account:
db.auth('root','999999')

8. If the server cannot be reached from other IPs:
First find the mongod process and kill it: kill -9 xxxx
Then run mongod in repair mode:
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongod.conf --repair
If that succeeds, start it again:
/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongod.conf
Then connect with mongo --host=192.168.62.158 (use your own IP)
and check the result. If it still fails, reconfigure the mongodb.conf file.

Note: when shutting the service down, always use the command below rather than kill:
db.shutdownServer()
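A minimal sketch of the full shutdown sequence in the mongo shell (shutdownServer() must be run against the admin database, as the failover test later in this document also shows):

use admin
db.shutdownServer()   //stops the mongod this shell is connected to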

III. Logging in with accounts and a description of the roles/permissions

Example of starting the mongod service and logging in with the corresponding user:

mongod --config /home/…/bin/mongodb.config
mongo testDB -u test -p testpsw
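Alternatively, you can connect without credentials and authenticate from inside the shell. A minimal sketch mirroring the command above (same test user):

use testDB
db.auth("test", "testpsw")   //returns 1 on success, 0 on failure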

Creating users

use admin

db.createUser( {user: "admin",pwd: "123456",roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]})

//administrator ↑

//regular user ↓

db.createUser(
  {
    user: "test",
    pwd: "testpsw",
    roles: [ { role: "readWrite", db: "mydb" } ]
  }
)

Listing users:

show users

Role descriptions:

read: allows the user to read the given database
readWrite: allows the user to read and write the given database
dbAdmin: allows the user to perform administrative functions on the given database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
userAdmin: allows the user to write to the system.users collection, i.e. create, delete, and manage users in the given database
clusterAdmin: only available in the admin database; grants administrative rights over all sharding and replica set related functions
readAnyDatabase: only available in the admin database; grants read access to every database
readWriteAnyDatabase: only available in the admin database; grants read/write access to every database
userAdminAnyDatabase: only available in the admin database; grants userAdmin privileges on every database
dbAdminAnyDatabase: only available in the admin database; grants dbAdmin privileges on every database
root: only available in the admin database; the superuser account with full privileges
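Roles can also be adjusted after a user exists. A minimal sketch (assuming the test user above was created in mydb; run it as an administrator):

use mydb
db.grantRolesToUser("test", [ { role: "dbAdmin", db: "mydb" } ])    //add a role
db.revokeRolesFromUser("test", [ { role: "dbAdmin", db: "mydb" } ]) //take it away again
db.getUser("test")                                                  //inspect the user's current roles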

Deleting users:

Example:
You must log in with an administrator account and switch into the user's database to delete it.
db.dropUser("admin") //drop the user named admin
db.dropAllUsers() //drop all users in the current database

Changing a user's password:
Method 1: db.changeUserPassword("usertest","changepass");
Method 2: db.updateUser("usertest",{pwd:"changepass1"});

IV. Advanced query techniques

(JavaScript statements can be used directly in the mongo shell.)

1. Cursor queries

For example:

> use admin 						//switch to admin
> switched to db admin
> show dbs						//list all databases
> admin       0.000GB
> config      0.000GB
> local       0.000GB
> qydtestdb1  0.000GB
> use testDB						//switch to testDB (it is created automatically if it does not exist)
> switched to db testDB
> db.createCollection('testtable1');				//create collection testtable1 in testDB
> { "ok" : 1 }
> for(var i=0;i<100;i++){					//insert 100 documents with a for loop, using JS syntax
> ... db.testtable1.insert({_id:i+1,name:"qyd"+i,num:i})
> ... }
> WriteResult({ "nInserted" : 1 })
> db.testtable1.find().count();					//count the documents in testtable1
> 100
> db.testtable1.find()					//view the data in testtable1
> { "_id" : 1, "name" : "qyd0", "num" : 0 }
> { "_id" : 2, "name" : "qyd1", "num" : 1 }
> { "_id" : 3, "name" : "qyd2", "num" : 2 }
> { "_id" : 4, "name" : "qyd3", "num" : 3 }
> { "_id" : 5, "name" : "qyd4", "num" : 4 }
> { "_id" : 6, "name" : "qyd5", "num" : 5 }
> { "_id" : 7, "name" : "qyd6", "num" : 6 }
> { "_id" : 8, "name" : "qyd7", "num" : 7 }
> { "_id" : 9, "name" : "qyd8", "num" : 8 }
> { "_id" : 10, "name" : "qyd9", "num" : 9 }
> { "_id" : 11, "name" : "qyd10", "num" : 10 }
> { "_id" : 12, "name" : "qyd11", "num" : 11 }
> { "_id" : 13, "name" : "qyd12", "num" : 12 }
> { "_id" : 14, "name" : "qyd13", "num" : 13 }
> { "_id" : 15, "name" : "qyd14", "num" : 14 }
> { "_id" : 16, "name" : "qyd15", "num" : 15 }
> { "_id" : 17, "name" : "qyd16", "num" : 16 }
> { "_id" : 18, "name" : "qyd17", "num" : 17 }
> { "_id" : 19, "name" : "qyd18", "num" : 18 }
> { "_id" : 20, "name" : "qyd19", "num" : 19 }
> Type "it" for more
> it							//type it to see the next 20 documents
> var mycurser = db.testtable1.find({_id:{$lte:5}})			//create a cursor, limited here to _id <= 5; the cursor starts before the first document, so the next document is the first result
> while(mycurser.hasNext()){ printjson(mycurser.next()); }		//loop over the cursor: hasNext() checks whether another document exists, next() fetches it, printjson prints it
> { "_id" : 1, "name" : "qyd0", "num" : 0 }
> { "_id" : 2, "name" : "qyd1", "num" : 1 }
> { "_id" : 3, "name" : "qyd2", "num" : 2 }
> { "_id" : 4, "name" : "qyd3", "num" : 3 }
> { "_id" : 5, "name" : "qyd4", "num" : 4 }
> for(var mycursor=db.testtable1.find({_id:{$lte:5}});mycursor.hasNext();){	//a for loop works just the same
> ... printjson(mycursor.next())
> ... }
> { "_id" : 1, "name" : "qyd0", "num" : 0 }
> { "_id" : 2, "name" : "qyd1", "num" : 1 }
> { "_id" : 3, "name" : "qyd2", "num" : 2 }
> { "_id" : 4, "name" : "qyd3", "num" : 3 }
> { "_id" : 5, "name" : "qyd4", "num" : 4 }
> var mycursor = db.testtable1.find({_id:{$lte:5}})			//using forEach; obj is the document the cursor is currently positioned on
> mycursor.forEach(function(obj){printjson(obj)})
> { "_id" : 1, "name" : "qyd0", "num" : 0 }
> { "_id" : 2, "name" : "qyd1", "num" : 1 }
> { "_id" : 3, "name" : "qyd2", "num" : 2 }
> { "_id" : 4, "name" : "qyd3", "num" : 3 }
> { "_id" : 5, "name" : "qyd4", "num" : 4 }
> var mycursor = db.testtable1.find({_id:{$lte:5}})			//another example in case the previous one is unclear
> mycursor.forEach(function(obj){print('your id is'+obj._id)});
> your id is1
> your id is2
> your id is3
> your id is4
> your id is5
> var mycursor = db.testtable1.find().skip(95)			//skip a number of rows
> mycursor.forEach(function(obj){printjson(obj)})
> { "_id" : 96, "name" : "qyd95", "num" : 95 }
> { "_id" : 97, "name" : "qyd96", "num" : 96 }
> { "_id" : 98, "name" : "qyd97", "num" : 97 }
> { "_id" : 99, "name" : "qyd98", "num" : 98 }
> { "_id" : 100, "name" : "qyd99", "num" : 99 }

> var mycursor = db.testtable1.find().skip(50).limit(10)		//skip 50 rows and take 10
> mycursor.forEach(function(obj){printjson(obj)})
> { "_id" : 51, "name" : "qyd50", "num" : 50 }
> { "_id" : 52, "name" : "qyd51", "num" : 51 }
> { "_id" : 53, "name" : "qyd52", "num" : 52 }
> { "_id" : 54, "name" : "qyd53", "num" : 53 }
> { "_id" : 55, "name" : "qyd54", "num" : 54 }
> { "_id" : 56, "name" : "qyd55", "num" : 55 }
> { "_id" : 57, "name" : "qyd56", "num" : 56 }
> { "_id" : 58, "name" : "qyd57", "num" : 57 }
> { "_id" : 59, "name" : "qyd58", "num" : 58 }
> { "_id" : 60, "name" : "qyd59", "num" : 59 }

> var mycursor = db.testtable1.find().skip(50).limit(5)			//return the results as an array instead of printing them
> mycursor.toArray()
> [
> {
> 	"_id" : 51,
> 	"name" : "qyd50",
> 	"num" : 50
> },
> {
> 	"_id" : 52,
> 	"name" : "qyd51",
> 	"num" : 51
> },
> {
> 	"_id" : 53,
> 	"name" : "qyd52",
> 	"num" : 52
> },
> {
> 	"_id" : 54,
> 	"name" : "qyd53",
> 	"num" : 53
> },
> {
> 	"_id" : 55,
> 	"name" : "qyd54",
> 	"num" : 54
> }
> ]
>
> var mycursor = db.testtable1.find().skip(50).limit(5)			//use toArray to access a specific element
> printjson(mycursor.toArray()[4])
> { "_id" : 55, "name" : "qyd54", "num" : 54 }

2. Indexes:

Common operations:
Create an index --> db.collection.ensureIndex({name:1})
List indexes --> db.collection.getIndexes()
Drop an index --> db.collection.dropIndex({name:1})
Query a sub-field --> db.collection.find({'spc.area':'taiwan'})
Index a sub-field --> db.collection.ensureIndex({'spc.area':1})
Create a compound index --> db.collection.ensureIndex({sn:1,name:-1})
Create a unique index --> db.collection.ensureIndex({email:1},{unique:true})
Create a sparse index --> db.collection.ensureIndex({email:1},{sparse:true});
Create a hashed index --> db.collection.ensureIndex({email:'hashed'})
Rebuild indexes --> db.collection.reIndex()

Usage:
First, .explain() can be appended to any query to inspect the details.
Suppose we create 999 documents as follows:

for (var i=1;i<1000;i++){
... db.people.insert({sn:i,name:"student"+i})}
WriteResult({ "nInserted" : 1 })
db.people.find().count()
999
db.people.find()
{ "_id" : ObjectId("5f044036291a298d7675b2eb"), "sn" : 1, "name" : "student1" }
{ "_id" : ObjectId("5f044036291a298d7675b2ec"), "sn" : 2, "name" : "student2" }
{ "_id" : ObjectId("5f044036291a298d7675b2ed"), "sn" : 3, "name" : "student3" }
{ "_id" : ObjectId("5f044036291a298d7675b2ee"), "sn" : 4, "name" : "student4" }
{ "_id" : ObjectId("5f044036291a298d7675b2ef"), "sn" : 5, "name" : "student5" }
{ "_id" : ObjectId("5f044036291a298d7675b2f0"), "sn" : 6, "name" : "student6" }
{ "_id" : ObjectId("5f044036291a298d7675b2f1"), "sn" : 7, "name" : "student7" }
{ "_id" : ObjectId("5f044036291a298d7675b2f2"), "sn" : 8, "name" : "student8" }
{ "_id" : ObjectId("5f044036291a298d7675b2f3"), "sn" : 9, "name" : "student9" }
{ "_id" : ObjectId("5f044036291a298d7675b2f4"), "sn" : 10, "name" : "student10" }
{ "_id" : ObjectId("5f044036291a298d7675b2f5"), "sn" : 11, "name" : "student11" }
{ "_id" : ObjectId("5f044036291a298d7675b2f6"), "sn" : 12, "name" : "student12" }
{ "_id" : ObjectId("5f044036291a298d7675b2f7"), "sn" : 13, "name" : "student13" }
{ "_id" : ObjectId("5f044036291a298d7675b2f8"), "sn" : 14, "name" : "student14" }
{ "_id" : ObjectId("5f044036291a298d7675b2f9"), "sn" : 15, "name" : "student15" }
{ "_id" : ObjectId("5f044036291a298d7675b2fa"), "sn" : 16, "name" : "student16" }
{ "_id" : ObjectId("5f044036291a298d7675b2fb"), "sn" : 17, "name" : "student17" }
{ "_id" : ObjectId("5f044036291a298d7675b2fc"), "sn" : 18, "name" : "student18" }
{ "_id" : ObjectId("5f044036291a298d7675b2fd"), "sn" : 19, "name" : "student19" }
{ "_id" : ObjectId("5f044036291a298d7675b2fe"), "sn" : 20, "name" : "student20" }
Type "it" for more
Create an index; 1 means ascending, -1 means descending.

> db.people.ensureIndex({sn:1})				//create the index
> {
> "createdCollectionAutomatically" : false,
> "numIndexesBefore" : 1,
> "numIndexesAfter" : 2,
> "ok" : 1
> }

> db.people.getIndexes()					//list indexes: _id is built-in, sn is the one we just created
> [
> {
> 	"v" : 2,
> 	"key" : {
> 		"_id" : 1
> 	},
> 	"name" : "_id_",
> 	"ns" : "testDB.people"
> },
> {
> 	"v" : 2,
> 	"key" : {
> 		"sn" : 1
> 	},
> 	"name" : "sn_1",
> 	"ns" : "testDB.people"
> }
> ]

> db.people.ensureIndex({name:-1})				//create another index on name, descending
> {
> "createdCollectionAutomatically" : false,
> "numIndexesBefore" : 2,
> "numIndexesAfter" : 3,
> "ok" : 1
> }

> db.people.dropIndex({name:-1})				//drop an index; here we drop the descending index on name
> { "nIndexesWas" : 3, "ok" : 1 }

> db.people.dropIndexes()					//drop every index except _id
> {
> "nIndexesWas" : 2,
> "msg" : "non-_id indexes dropped for collection",
> "ok" : 1
> }
> db.people.getIndexes()					//list the indexes again; only _id remains
> [
> {
> 	"v" : 2,
> 	"key" : {
> 		"_id" : 1
> 	},
> 	"name" : "_id_",
> 	"ns" : "testDB.people"
> }
> ]
>
> db.people.ensureIndex({sn:1,name:-1})				//create a compound index

{
	"createdCollectionAutomatically" : false,
	"numIndexesBefore" : 1,
	"numIndexesAfter" : 2,
	"ok" : 1
}
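To confirm that a query actually uses an index, append .explain("executionStats") and look at the winning plan. A minimal sketch on the people collection above:

db.people.find({sn:500}).explain("executionStats")
// With an index whose first key is sn, the winning plan contains an IXSCAN stage and
// executionStats.totalDocsExamined stays small; without one, the plan is a COLLSCAN over all 999 documents.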

> Querying a sub-field: db.shop.find({'spc.area':'taiwan'})
> db.shop.insert({name:'Nokia',spc:{weight:120,area:'taiwan'}});		//insert two documents whose spc.area sub-field is taiwan and hanguo respectively
> WriteResult({ "nInserted" : 1 })
> db.shop.insert({name:'sumsong',spc:{weight:100,area:'hanguo'}});	
> WriteResult({ "nInserted" : 1 })
> db.shop.find({name:'Nokia'})					//find the document whose name is Nokia
> { "_id" : ObjectId("5f0457192314195032b33168"), "name" : "Nokia", "spc" : { "weight" : 120, "area" : "taiwan" } }
>
> db.shop.find({'spc.area':'taiwan'})				//query on spc.area; remember the quotes around the key to the left of the colon
> { "_id" : ObjectId("5f0457192314195032b33168"), "name" : "Nokia", "spc" : { "weight" : 120, "area" : "taiwan" } }

>  db.shop.ensureIndex({'spc.area':1})				//create an index on the sub-field
>  {
>  "createdCollectionAutomatically" : false,
>  "numIndexesBefore" : 1,
>  "numIndexesAfter" : 2,
>  "ok" : 1
>  }

Those were ordinary indexes. Next:
Unique index: db.teacher.ensureIndex({email:1},{unique:true})

> db.teacher.ensureIndex({email:1},{unique:true})			//unique:true makes email a unique index
> {
> "createdCollectionAutomatically" : false,
> "numIndexesBefore" : 1,
> "numIndexesAfter" : 2,
> "ok" : 1
> }
>
> Once email is a unique key, documents with a duplicate email can no longer be inserted. For example:
> db.teacher.insert({email:"c@163.com"})				//insert a document with email:"c@163.com"
> WriteResult({ "nInserted" : 1 })
> db.teacher.getIndexes()					//list the current indexes
> [
> {
> 	"v" : 2,
> 	"key" : {
> 		"_id" : 1
> 	},
> 	"name" : "_id_",
> 	"ns" : "testDB.teacher"
> },
> {
> 	"v" : 2,
> 	"unique" : true,
> 	"key" : {
> 		"email" : 1
> 	},
> 	"name" : "email_1",
> 	"ns" : "testDB.teacher"
> }
> ]
> db.teacher.insert({email:"c@163.com"})				//inserting the same email:"c@163.com" again fails
> WriteResult({
> "nInserted" : 0,
> "writeError" : {
> 	"code" : 11000,
> 	"errmsg" : "E11000 duplicate key error collection: testDB.teacher index: email_1 dup key: { email: \"c@163.com\" }"
> }
> })
Sparse index: db.teacher.ensureIndex({email:1},{sparse:true});

For example:

db.teacher.insert({})					//insert an empty document
WriteResult({ "nInserted" : 1 })
db.teacher.find()						//query the collection
{ "_id" : ObjectId("5f0461dceb04e9c035bf7a9d"), "email" : "a@163.com" }
{ "_id" : ObjectId("5f0461dfeb04e9c035bf7a9e"), "email" : "b@163.com" }
{ "_id" : ObjectId("5f053bb8caddd86cd63d7505"), "email" : "c@163.com" }
{ "_id" : ObjectId("5f053d84caddd86cd63d7507") }
db.teacher.ensureIndex({email:1})				//create an ordinary ascending index on email
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
db.teacher.find({email:null})					//querying for email:null finds the document that has no email field
{ "_id" : ObjectId("5f053d84caddd86cd63d7507") }

Sparse index concept: when indexing a key (email), documents that do not contain email get no index entry at all.
By contrast, an ordinary index treats the missing email as null and still indexes the document.
Best suited for: collections where only a small fraction of documents contain the field.
db.teacher.ensureIndex({email:1},{sparse:true});			//create a sparse index with {sparse:true}
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
db.teacher.find({email:null})					//still returns the document without email, because the query falls back to a collection scan

When querying on email:null, an ordinary index finds the document (missing values are indexed as null); the sparse index simply has no entry for it, so a query forced onto the sparse index misses it, as the sketch below shows.
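A minimal sketch of the difference (hint() forces the planner onto the named index, which it otherwise avoids here precisely because the result would be incomplete):

db.teacher.find({email:null}).hint({email:1})	//sparse index: the document lacking email is not returned
db.teacher.find({email:null})			//without the hint the planner avoids the sparse index, so the document is still found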
Hashed index: db.teacher.ensureIndex({email:'hashed'})

Characteristics: low time complexity for equality lookups; recommended when the data volume is large.

db.teacher.dropIndexes()					//first drop all existing indexes
{
"nIndexesWas" : 2,
"msg" : "non-_id indexes dropped for collection",
"ok" : 1
}
db.teacher.ensureIndex({email:'hashed'})			//create a hashed index on email
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
db.teacher.find({email:'a@163.com'})				//look up a@163.com
{ "_id" : ObjectId("5f0461dceb04e9c035bf7a9d"), "email" : "a@163.com" }
db.teacher.find({email:'a@163.com'}).explain()			//inspect the details
{
"queryPlanner" : {
	"plannerVersion" : 1,
	"namespace" : "testDB.teacher",
	"indexFilterSet" : false,
	"parsedQuery" : {
		"email" : {
			"$eq" : "a@163.com"
		}
	},
	"queryHash" : "F4C3CE3A",
	"planCacheKey" : "214EB173",
	"winningPlan" : {
		"stage" : "FETCH",
		"filter" : {
			"email" : {
				"$eq" : "a@163.com"
			}
		},
		"inputStage" : {
			"stage" : "IXSCAN",
			"keyPattern" : {
				"email" : "hashed"
			},
			"indexName" : "email_hashed",
			"isMultiKey" : false,
			"isUnique" : false,
			"isSparse" : false,
			"isPartial" : false,
			"indexVersion" : 2,
			"direction" : "forward",
			"indexBounds" : {
				"email" : [
					"[2069420833715827975, 2069420833715827975]"
				]
			}
		}
	},
	"rejectedPlans" : [ ]
},
"serverInfo" : {
	"host" : "www.test.com",
	"port" : 27017,
	"version" : "4.2.8",
	"gitVersion" : "43d25964249164d76d5e04dd6cf38f6111e21f5f"
},
"ok" : 1
}
Rebuilding indexes: db.collection.reIndex()

After a collection has been modified many times, holes develop in its data files, and the index files suffer the same way.
Rebuilding the indexes restores their efficiency, similar to OPTIMIZE TABLE in MySQL.
Effect: rebuilds all indexes on the collection, including _id.
Purpose: reduces index file fragmentation and improves index efficiency.
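For example, on the people collection used above (a sketch; the exact response fields can vary by version):

db.people.reIndex()		//rebuilds every index on people, including _id; the response lists the rebuilt indexes and ends with "ok" : 1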

V. Import and export

Note: for all of the operations below, if the database has no authentication requirement you can simply set auth to false in the mongodb.config under ./mongo/bin/ and omit -u and -p from the commands.

1. Import and export can target either a local MongoDB server or a remote one.

They therefore share the following common options:

-h host — host

--port port — port

-u username — username

-p passwd — password

2. mongoexport exports data as JSON files

Question: which database, which collection, which fields, which rows?

-d — database name

-c — collection name

-f field1,field2… — field names

-q — query condition

-o — output file name

--csv — export in CSV format

Exporting:

Export JSON

First go into the bin directory under the MongoDB directory and run ls to confirm that mongoexport is there. Then run in the terminal:

[root@www bin]# mongoexport -h 127.0.0.1 --port 27017 -u 'test' -p 'testpassword' -d testDB -c people -o ./locateInfo.json

This uses the test user to export the people collection of the testDB database to locateInfo.json.

(When I tried to export with the super administrator it reported a connection failure; it seems only a user defined on that database can export from it.)

Below is the same command with a -q query condition added; --query and -q both work.

[root@www bin]# mongoexport -h 127.0.0.1 --port 27017 -u 'test' -p 'testpassword' -d testDB -c people --query '{"sn":{"$lte":100}}' -o ./locateInfo5.json
2020-07-08T01:34:17.100-0700	connected to: mongodb://127.0.0.1:27017/
2020-07-08T01:34:17.105-0700	exported 100 records
Export CSV
[root@www bin]# mongoexport -h 127.0.0.1 --port 27017 -u 'test' -p 'testpassword' -d testDB -c people -f name,sn --query '{"sn":{"$lte":100}}' --type=csv -o ./locateInfo.csv
2020-07-08T01:41:46.973-0700	connected to: mongodb://127.0.0.1:27017/
2020-07-08T01:41:46.976-0700	exported 100 records

Note: CSV export requires -f, i.e. you must specify which fields to export.

The exported file looks like this:

name,sn
student1,1
student2,2
student3,3
student4,4
student5,5
student6,6
student7,7
student8,8
student9,9
student10,10
student11,11

...(100 rows in total)

The point of exporting CSV is to make it easy to exchange data with traditional relational databases.

Importing:

Import JSON
[root@www bin]# mongoimport -h 127.0.0.1 --port 27017 -u 'q' -p '123456' -d qydtestdb1 -c shop  --type json --file ./locateInfo5.json 
2020-07-08T02:01:21.114-0700	connected to: mongodb://127.0.0.1:27017/
2020-07-08T02:01:21.149-0700	100 document(s) imported successfully. 0 document(s) failed to import.

This uses user "q" to import the data from ./locateInfo5.json into the shop collection of the qydtestdb1 database.

[root@www bin]# mongoimport -h 127.0.0.1 --port 27017 -u 'q' -p '123456' -d qydtestdb1 -c test100  --type json --file ./locateInfo5.json 
2020-07-08T02:03:26.661-0700	connected to: mongodb://127.0.0.1:27017/
2020-07-08T02:03:26.761-0700	100 document(s) imported successfully. 0 document(s) failed to import.

If the collection does not exist, it is created; the command above creates a collection named test100.

Import CSV
[root@www bin]# mongoimport -h 127.0.0.1 --port 27017 -u 'q' -p '123456' -d qydtestdb1 -c testcsv  --type=csv  --headerline --file ./locateInfo.csv 
2020-07-08T02:24:47.783-0700	connected to: mongodb://127.0.0.1:27017/
2020-07-08T02:24:47.788-0700	100 document(s) imported successfully. 0 document(s) failed to import.

This uses user "q" to import ./locateInfo.csv into the testcsv collection of qydtestdb1.

--headerline skips the first line.

--type must be changed to csv.

Because the first line of the CSV file is a header row rather than data, --headerline is needed to skip it.

Binary export (mongodump):

[root@www bin]# mongodump -h 127.0.0.1 --port 27017 -u 'q' -p '123456' -d qydtestdb1 -c shop	//binary export of the shop collection in qydtestdb1
2020-07-08T02:33:31.120-0700	writing qydtestdb1.shop to dump/qydtestdb1/shop.bson
2020-07-08T02:33:31.122-0700	done dumping qydtestdb1.shop (102 documents)
[root@www bin]# ls
bin              locateInfo5.json  mongodb.conf  mongoimport   mongostat
bsondump         locateInfo.csv    mongodump     mongoreplay   mongotop
dump             mongo             mongoexport   mongorestore  peopleTest.json
install_compass  mongod            mongofiles    mongos
[root@www bin]# cd dump/

[root@www dump]# cd qydtestdb1/
[root@www qydtestdb1]# ls
shop.bson  shop.metadata.json

The exported files are placed under dump/ in the current bin directory, in a folder named after the database (qydtestdb1); inside it you can see that one .metadata.json file and one .bson file were produced.

[root@www qydtestdb1]# mongodump -h 127.0.0.1 --port 27017 -u 'q' -p '123456' -d qydtestdb1
2020-07-08T02:56:23.982-0700	writing qydtestdb1.shop to dump/qydtestdb1/shop.bson
2020-07-08T02:56:23.983-0700	done dumping qydtestdb1.shop (102 documents)
2020-07-08T02:56:24.005-0700	writing qydtestdb1.qydtesttable1 to dump/qydtestdb1/qydtesttable1.bson
2020-07-08T02:56:24.016-0700	done dumping qydtestdb1.qydtesttable1 (2 documents)
2020-07-08T02:56:24.016-0700	writing qydtestdb1.testcsv2 to dump/qydtestdb1/testcsv2.bson
2020-07-08T02:56:24.017-0700	writing qydtestdb1.testcsv to dump/qydtestdb1/testcsv.bson
2020-07-08T02:56:24.017-0700	writing qydtestdb1.test100 to dump/qydtestdb1/test100.bson
2020-07-08T02:56:24.017-0700	done dumping qydtestdb1.testcsv2 (100 documents)
2020-07-08T02:56:24.018-0700	done dumping qydtestdb1.testcsv (100 documents)
2020-07-08T02:56:24.018-0700	done dumping qydtestdb1.test100 (100 documents)
[root@www qydtestdb1]# ls
dump  shop.bson  shop.metadata.json
[root@www qydtestdb1]# cd dump/qydtestdb1/
[root@www qydtestdb1]# ls
qydtesttable1.bson           test100.bson            testcsv.bson
qydtesttable1.metadata.json  test100.metadata.json   testcsv.metadata.json
shop.bson                    testcsv2.bson
shop.metadata.json           testcsv2.metadata.json
[root@www qydtestdb1]# 

If no collection is specified when exporting, all collections are exported.

Binary import (mongorestore):

I first dropped the qydtestdb1 database in the mongo shell, then restored it from the dump folder created above with the command below:

mongorestore -u q -p 123456 -d qydtestdb1 ./dump/qydtestdb1 //restore the database

2020-07-08T04:16:02.924-0700	the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2020-07-08T04:16:02.924-0700	building a list of collections to restore from dump/qydtestdb1 dir
2020-07-08T04:16:02.924-0700	reading metadata for qydtestdb1.shop from dump/qydtestdb1/shop.metadata.json
2020-07-08T04:16:02.925-0700	reading metadata for qydtestdb1.test100 from dump/qydtestdb1/test100.metadata.json
2020-07-08T04:16:02.926-0700	reading metadata for qydtestdb1.testcsv from dump/qydtestdb1/testcsv.metadata.json
2020-07-08T04:16:02.962-0700	reading metadata for qydtestdb1.testcsv2 from dump/qydtestdb1/testcsv2.metadata.json
2020-07-08T04:16:02.962-0700	restoring qydtestdb1.testcsv from dump/qydtestdb1/testcsv.bson
2020-07-08T04:16:02.967-0700	no indexes to restore
2020-07-08T04:16:02.967-0700	finished restoring qydtestdb1.testcsv (100 documents, 0 failures)
2020-07-08T04:16:02.967-0700	reading metadata for qydtestdb1.qydtesttable1 from dump/qydtestdb1/qydtesttable1.metadata.json
2020-07-08T04:16:02.968-0700	restoring qydtestdb1.shop from dump/qydtestdb1/shop.bson
2020-07-08T04:16:02.972-0700	restoring indexes for collection qydtestdb1.shop from metadata
2020-07-08T04:16:02.974-0700	restoring qydtestdb1.test100 from dump/qydtestdb1/test100.bson
2020-07-08T04:16:03.008-0700	no indexes to restore
2020-07-08T04:16:03.008-0700	finished restoring qydtestdb1.test100 (100 documents, 0 failures)
2020-07-08T04:16:03.010-0700	restoring qydtestdb1.qydtesttable1 from dump/qydtestdb1/qydtesttable1.bson
2020-07-08T04:16:03.012-0700	no indexes to restore
2020-07-08T04:16:03.012-0700	finished restoring qydtestdb1.qydtesttable1 (2 documents, 0 failures)
2020-07-08T04:16:03.015-0700	restoring qydtestdb1.testcsv2 from dump/qydtestdb1/testcsv2.bson
2020-07-08T04:16:03.018-0700	no indexes to restore
2020-07-08T04:16:03.018-0700	finished restoring qydtestdb1.testcsv2 (100 documents, 0 failures)
2020-07-08T04:16:03.022-0700	finished restoring qydtestdb1.shop (102 documents, 0 failures)
2020-07-08T04:16:03.022-0700	404 document(s) restored successfully. 0 document(s) failed to restore.
[root@www bin]# 

VI. Replica sets (replication set)

Definition: multiple servers maintain identical copies of the data, improving availability.

(screenshot: replica set architecture diagram)

Replica set setup process

1. Clone the primary/secondary virtual machines

(screenshot: VMware clone menu)

Right-click your current VM name --> Manage --> Clone.

In the clone wizard click Next, then select the virtual machine's current state.

(screenshot: clone wizard with the current state selected)

Choose to create a full clone.

(screenshot: full clone option)

For the name, I used primary, and I renamed the other two clones secondary1 and secondary2; the target folders can be changed as you like. Then click Finish.

(screenshot: naming the cloned virtual machines)

2. Start the virtual machines

Because they are clones, the accounts and passwords are the same as before. Start all three and change the mongodb.conf under mongo/bin/ to the contents below; the directories can be adjusted to your own setup.

Note: the replSet line is required; its value becomes the replica set name (its _id).

#allow connections from any machine
#bind_ip_all = true
port=27017
bind_ip=0.0.0.0 #default is 127.0.0.1
dbpath=/home/qyd/Documents/data/
logpath=/home/qyd/Documents/logs/mongodb.log #log file
logappend=true
fork=true #run in the background
#auth=true #enable authentication
#smallfiles=true #enable small-file storage
replSet=qyd_repl
shardsvr=true #required for a sharded cluster
3. Create the replica set

First go to the server you want to become the primary, start the service, open the mongo shell, run use admin, and enter the following JS:

> use admin
switched to db admin

>var rsconf = { _id:'qyd_repl', members:[ { _id:0, host:'192.168.62.158:27017' }, { _id:1, host:'192.168.62.159:27017' }, { _id:2, host:'192.168.62.160:27017' } ] }
> printjson(rsconf)				//print the rsconf variable we just defined
{
	"_id" : "qyd_repl",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.62.158:27017"
		},
		{
			"_id" : 1,
			"host" : "192.168.62.159:27017"
		},
		{
			"_id" : 2,
			"host" : "192.168.62.160:27017"
		}
	]
}
> 

Initialization

Initialize with rsconf (make sure the firewall is stopped before this step).

Command to stop the firewall: systemctl stop firewalld

rs.initiate(rsconf)
{
"ok" : 1,
"$clusterTime" : {
	"clusterTime" : Timestamp(1594276538, 1),
	"signature" : {
		"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
		"keyId" : NumberLong(0)
	}
},
"operationTime" : Timestamp(1594276538, 1)
}

Check the status:

//check whether the members were added successfully
qyd_repl:PRIMARY> rs.status()
{
	"set" : "qyd_repl",
	"date" : ISODate("2020-07-09T06:37:33.007Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1594276649, 1),
			"t" : NumberLong(1)
		},
		"lastCommittedWallTime" : ISODate("2020-07-09T06:37:29.585Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1594276649, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityWallTime" : ISODate("2020-07-09T06:37:29.585Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1594276649, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1594276649, 1),
			"t" : NumberLong(1)
		},
		"lastAppliedWallTime" : ISODate("2020-07-09T06:37:29.585Z"),
		"lastDurableWallTime" : ISODate("2020-07-09T06:37:29.585Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1594276609, 1),
	"lastStableCheckpointTimestamp" : Timestamp(1594276609, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2020-07-09T06:35:49.553Z"),
		"electionTerm" : NumberLong(1),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1594276538, 1),
			"t" : NumberLong(-1)
		},
		"numVotesNeeded" : 2,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"numCatchUpOps" : NumberLong(0),
		"newTermStartDate" : ISODate("2020-07-09T06:35:49.576Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2020-07-09T06:35:50.777Z")
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.62.158:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 619,
			"optime" : {
				"ts" : Timestamp(1594276649, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2020-07-09T06:37:29Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1594276549, 1),
			"electionDate" : ISODate("2020-07-09T06:35:49Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "192.168.62.159:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 114,
			"optime" : {
				"ts" : Timestamp(1594276649, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1594276649, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2020-07-09T06:37:29Z"),
			"optimeDurableDate" : ISODate("2020-07-09T06:37:29Z"),
			"lastHeartbeat" : ISODate("2020-07-09T06:37:31.619Z"),
			"lastHeartbeatRecv" : ISODate("2020-07-09T06:37:32.662Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.62.158:27017",
			"syncSourceHost" : "192.168.62.158:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		},
		{
			"_id" : 2,
			"name" : "192.168.62.160:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 114,
			"optime" : {
				"ts" : Timestamp(1594276649, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1594276649, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2020-07-09T06:37:29Z"),
			"optimeDurableDate" : ISODate("2020-07-09T06:37:29Z"),
			"lastHeartbeat" : ISODate("2020-07-09T06:37:31.619Z"),
			"lastHeartbeatRecv" : ISODate("2020-07-09T06:37:32.622Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.62.158:27017",
			"syncSourceHost" : "192.168.62.158:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594276649, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1594276649, 1)
}
qyd_repl:PRIMARY> 

Testing the replica set

Data test
qyd_repl:PRIMARY> use studentdb
switched to db studentdb
qyd_repl:PRIMARY> db.student.insert({_id:1,name:'曲耀达',enname:'qyd',age:23});
WriteResult({ "nInserted" : 1 })
qyd_repl:PRIMARY> db.student.insert({_id:2,name:'乔越鑫',enname:'qyx',age:23});
WriteResult({ "nInserted" : 1 })
qyd_repl:PRIMARY> db.student.find().pretty()
{ "_id" : 1, "name" : "曲耀达", "enname" : "qyd", "age" : 23 }
{ "_id" : 2, "name" : "乔越鑫", "enname" : "qyx", "age" : 23 }

After inserting, first check with a GUI database client connected to the primary (192.168.62.158 in my case); this tool cannot connect to the SECONDARY servers.

(screenshot: GUI client connected to the primary showing the studentdb data)

The data is there. Now log in to the replica set through Xshell and look at a secondary; running show dbs there produces the following error:

qyd_repl:SECONDARY> show db
2020-07-08T23:51:52.084-0700 E  QUERY    [js] uncaught exception: Error: don't know how to show [db] :
shellHelper.show@src/mongo/shell/utils.js:1139:11
shellHelper@src/mongo/shell/utils.js:790:15
@(shellhelp2):1:1
qyd_repl:SECONDARY> show dbs
2020-07-08T23:51:53.462-0700 E  QUERY    [js] uncaught exception: Error: listDatabases failed:{
	"operationTime" : Timestamp(1594277509, 1),
	"ok" : 0,
	"errmsg" : "not master and slaveOk=false",
	"code" : 13435,
	"codeName" : "NotMasterNoSlaveOk",
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594277509, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
} :

At this point we can use rs.slaveOk(), one of the methods listed by rs.help(), to allow reads on the secondary. After running it:

qyd_repl:SECONDARY> rs.slaveOk()
qyd_repl:SECONDARY> show dbs
admin       0.000GB
config      0.000GB
local       0.000GB
newqydtest  0.000GB
qydtestdb1  0.000GB
studentdb   0.000GB
testDB      0.000GB
qyd_repl:SECONDARY> 
qyd_repl:SECONDARY> use studentdb
switched to db studentdb
qyd_repl:SECONDARY> db.student.find().pretty()
{ "_id" : 1, "name" : "曲耀达", "enname" : "qyd", "age" : 23 }
{ "_id" : 2, "name" : "乔越鑫", "enname" : "qyx", "age" : 23 }
qyd_repl:SECONDARY> 

We currently have two secondary nodes. To add or remove a member, run:

rs.add("192.168.xx.xx:2701x")

rs.remove("192.168.xx.xx:2701x")

rs.add() and rs.remove() apply the configuration change immediately; for other changes, fetch the config document, edit it, and apply it with

rs.reconfig(rsconfig) to reload the configuration.

After a change, remember to check the state with rs.status(); a minimal sketch follows below.
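A minimal sketch (the extra host below is a hypothetical placeholder; run these on the current PRIMARY):

rs.add("192.168.62.161:27017")		//hypothetical new member; takes effect immediately
rs.status()				//the new host should now appear under members
rs.remove("192.168.62.161:27017")	//remove it again
var cfg = rs.conf()			//alternatively, edit the configuration document...
cfg.members[1].priority = 2		//...e.g. make member 1 preferred in elections (illustrative)
rs.reconfig(cfg)			//...and apply it with rs.reconfig()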

Failover test

Shut down the primary with db.shutdownServer():

qyd_repl:PRIMARY> db.shutdownServer()
shutdown command only works with the admin database; try 'use admin'
qyd_repl:PRIMARY> use admin
switched to db admin
qyd_repl:PRIMARY> db.shutdownServer()
2020-07-09T00:24:00.115-0700 I  NETWORK  [js] DBClientConnection failed to receive message from 127.0.0.1:27017 - HostUnreachable: Connection closed by peer
server should be down...
2020-07-09T00:24:00.134-0700 I  NETWORK  [js] trying reconnect to 127.0.0.1:27017 failed
2020-07-09T00:24:00.138-0700 I  NETWORK  [js] reconnect 127.0.0.1:27017 failed failed 
2020-07-09T00:24:00.141-0700 I  NETWORK  [js] trying reconnect to 127.0.0.1:27017 failed
2020-07-09T00:24:00.141-0700 I  NETWORK  [js] reconnect 127.0.0.1:27017 failed failed 

Then run rs.status() on a secondary to check the current state:

qyd_repl:SECONDARY> rs.status()
{
	"set" : "qyd_repl",
	"date" : ISODate("2020-07-09T07:24:21.036Z"),
	"myState" : 1,
	"term" : NumberLong(2),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1594279460, 1),
			"t" : NumberLong(2)
		},
		"lastCommittedWallTime" : ISODate("2020-07-09T07:24:20.121Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1594279460, 1),
			"t" : NumberLong(2)
		},
		"readConcernMajorityWallTime" : ISODate("2020-07-09T07:24:20.121Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1594279460, 1),
			"t" : NumberLong(2)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1594279460, 1),
			"t" : NumberLong(2)
		},
		"lastAppliedWallTime" : ISODate("2020-07-09T07:24:20.121Z"),
		"lastDurableWallTime" : ISODate("2020-07-09T07:24:20.121Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1594279429, 1),
	"lastStableCheckpointTimestamp" : Timestamp(1594279429, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "stepUpRequestSkipDryRun",
		"lastElectionDate" : ISODate("2020-07-09T07:23:59.057Z"),
		"electionTerm" : NumberLong(2),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(1594279429, 1),
			"t" : NumberLong(1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1594279429, 1),
			"t" : NumberLong(1)
		},
		"numVotesNeeded" : 2,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"priorPrimaryMemberId" : 0,
		"numCatchUpOps" : NumberLong(0),
		"newTermStartDate" : ISODate("2020-07-09T07:24:00.117Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2020-07-09T07:24:00.800Z")
	},
	"electionParticipantMetrics" : {
		"votedForCandidate" : true,
		"electionTerm" : NumberLong(1),
		"lastVoteDate" : ISODate("2020-07-09T06:35:49.553Z"),
		"electionCandidateMemberId" : 0,
		"voteReason" : "",
		"lastAppliedOpTimeAtElection" : {
			"ts" : Timestamp(1594276538, 1),
			"t" : NumberLong(-1)
		},
		"maxAppliedOpTimeInSet" : {
			"ts" : Timestamp(1594276538, 1),
			"t" : NumberLong(-1)
		},
		"priorityAtElection" : 1
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.62.158:27017",
			"health" : 0,
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",
			"uptime" : 0,
			"optime" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			},
			"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
			"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
			"lastHeartbeat" : ISODate("2020-07-09T07:24:20.123Z"),
			"lastHeartbeatRecv" : ISODate("2020-07-09T07:23:59.024Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "Error connecting to 192.168.62.158:27017 :: caused by :: Connection refused",
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"configVersion" : -1
		},
		{
			"_id" : 1,
			"name" : "192.168.62.159:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 3439,
			"optime" : {
				"ts" : Timestamp(1594279460, 1),
				"t" : NumberLong(2)
			},
			"optimeDate" : ISODate("2020-07-09T07:24:20Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1594279439, 1),
			"electionDate" : ISODate("2020-07-09T07:23:59Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 2,
			"name" : "192.168.62.160:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 2922,
			"optime" : {
				"ts" : Timestamp(1594279450, 1),
				"t" : NumberLong(2)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1594279450, 1),
				"t" : NumberLong(2)
			},
			"optimeDate" : ISODate("2020-07-09T07:24:10Z"),
			"optimeDurableDate" : ISODate("2020-07-09T07:24:10Z"),
			"lastHeartbeat" : ISODate("2020-07-09T07:24:19.106Z"),
			"lastHeartbeatRecv" : ISODate("2020-07-09T07:24:20.804Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncingTo" : "192.168.62.159:27017",
			"syncSourceHost" : "192.168.62.159:27017",
			"syncSourceId" : 1,
			"infoMessage" : "",
			"configVersion" : 1
		}
	],
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594279460, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1594279460, 1)
}
qyd_repl:PRIMARY> 

From the members section above, our original primary 192.168.62.158:27017 now has health 0, meaning it is down.

Meanwhile, the stateStr field of 192.168.62.159:27017 now shows PRIMARY, meaning it has automatically been elected as the new primary.

Now write a document to 192.168.62.159:27017, the current PRIMARY:

qyd_repl:PRIMARY> db.student.insert({_id:3,name:"孟繁松",enname:'mfs',age:26})
WriteResult({ "nInserted" : 1 })

Then read it on the other SECONDARY, 192.168.62.160:27017:

qyd_repl:SECONDARY> db.student.find().pretty()
{ "_id" : 1, "name" : "曲耀达", "enname" : "qyd", "age" : 23 }
{ "_id" : 2, "name" : "乔越鑫", "enname" : "qyx", "age" : 23 }
{ "_id" : 3, "name" : "孟繁松", "enname" : "mfs", "age" : 26 }
qyd_repl:SECONDARY> 

The test passes.

VII. Sharding (shard)

Required cluster environment:

2 shard replica sets

shard (192.168.62.158:27017, 192.168.62.159:27017, 192.168.62.160:27017)

shard (192.168.62.158:27018, 192.168.62.159:27018, 192.168.62.160:27018)

1 config replica set

(192.168.62.158:28018, 192.168.62.159:28018, 192.168.62.160:28018)

1 mongos node

Create a new conf file

On all three virtual machines, go into mongo/bin/ and create a new conf file, mongodb2.conf.

Its contents differ from the previous conf only in:

① the port

② the data and log directories (dbpath and logpath)

③ the replica set name (replSet)

#allow connections from any machine
#bind_ip_all = true
port=27018
bind_ip=0.0.0.0 #default is 127.0.0.1
dbpath=/home/qyd/Documents/data/db2
logpath=/home/qyd/Documents/logs/mongodb2.log #log file
logappend=true
fork=true #run in the background
#auth=true #enable authentication
smallfiles=true #enable small-file storage
replSet=qyd_repl2
shardsvr=true #required for a sharded cluster

Start the second replica set

Start it on all three virtual machines:

mongod --config mongodb2.conf

Run netstat -ntlp to check that both mongod instances are up (check all three machines).

(screenshot: netstat -ntlp showing both mongod instances listening)

(Optional, for tidiness only: the failover test killed the service on 158, so 159 is currently the primary of the first replica set. You can go into the mongo shell, shut 159 down with db.shutdownServer(), then check with rs.status() whether 158 has become the primary again. The steps are simple and not repeated here; a gentler alternative is sketched below.)
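A minimal sketch of that gentler alternative (assuming 158 is back in the set; run it on the current PRIMARY instead of shutting it down):

rs.stepDown()		//the current primary steps down (for 60 seconds by default) and an election is held
rs.status()		//after the election, check which member is PRIMARY now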

Now connect to the instance on port 27018:

mongo -port 27018

First configure the replica set for 27018:

> use admin
switched to db admin
> var rsconf2 = { _id:'qyd_repl2', members:[ { _id:0, host:'192.168.62.158:27018' }, { _id:1, host:'192.168.62.159:27018' }, { _id:2, host:'192.168.62.160:27018' } ] }
> printjson(rsconf2)
{
	"_id" : "qyd_repl2",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.62.158:27018"
		},
		{
			"_id" : 1,
			"host" : "192.168.62.159:27018"
		},
		{
			"_id" : 2,
			"host" : "192.168.62.160:27018"
		}
	]
}
> 

Then check the firewall, and check it again. In my case the firewall had somehow re-enabled itself at some point; while troubleshooting, the configuration kept reporting one SECONDARY and two members stuck in STARTUP until I stopped it.

Command to stop the firewall: systemctl stop firewalld

Initialize with rs.initiate(rsconf2), exactly as in the earlier steps, then check the state with rs.status().

Set up the config server replica set

Create the config server configuration file mongodb-cfg.conf; change the IP below to your own:

systemLog:
   destination: file        
   path: "/home/qyd/Documents/mongo-cfg/logs/mongodb.log"
   logAppend: true
storage:
   journal:
      enabled: true
   dbPath: "/home/qyd/Documents/mongo-cfg/data"
processManagement:
   fork: true
net:            
   bindIp: 192.168.62.158
   port: 28018   
replication:
 oplogSizeMB: 2048
 replSetName: "configReplSet"
sharding:
 clusterRole: configsvr

Start the config replica set on all three machines:

./mongod --config ./mongodb-cfg.conf

Then connect to the service (any of the three members will do) and run the initialization below:

mongo -host 192.168.62.158 -port 28018
mongo -host 192.168.62.159 -port 28018
mongo -host 192.168.62.160 -port 28018
rs.initiate({
	"_id" : "configReplSet",
	"configsvr": true,
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.62.158:28018"
		},
		{
			"_id" : 1,
			"host" : "192.168.62.159:28018"
		},
		{
			"_id" : 2,
			"host" : "192.168.62.160:28018"
		}
	]
})

"ok" : 1 means it succeeded.

You can check the state with rs.status(), or just press Enter a couple of times and wait for the election to finish.

The mongos node (on 158)

mongos configuration file:

systemLog:
   destination: file
   path: "/home/qyd/Documents/mongos/log/mongos.log"
   logAppend: true
processManagement:
   fork: true
net:
   bindIp: 192.168.62.158
   port: 28017
sharding:
 configDB: configReplSet/192.168.62.158:28018,192.168.62.159:28018,192.168.62.160:28018                                                                  

Start mongos:

[root@www bin]# vim mongos.conf
[root@www bin]# ./mongos --config ./mongos.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 19583
child process started successfully, parent exiting
[root@www bin]# 

Log in to mongos:

 mongo 192.168.62.158:28017
[root@www bin]# mongo 192.168.62.158:28017
MongoDB shell version v4.2.8
connecting to: mongodb://192.168.62.158:28017/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a3d99e8c-dd19-4dd1-9c0b-1c7a851fa328") }
MongoDB server version: 4.2.8
Server has startup warnings: 
2020-07-10T02:48:51.618-0700 I  CONTROL  [main] 
2020-07-10T02:48:51.618-0700 I  CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2020-07-10T02:48:51.618-0700 I  CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2020-07-10T02:48:51.618-0700 I  CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2020-07-10T02:48:51.618-0700 I  CONTROL  [main] 
mongos> 

Add the shard nodes to the cluster

use admin

Add the shard1 replica set:

db.runCommand({addshard:"qyd_repl/192.168.62.158:27017,192.168.62.159:27017,192.168.62.160:27017",name:"shard1"})

Add the shard2 replica set:

db.runCommand({addshard:"qyd_repl2/192.168.62.158:27018,192.168.62.159:27018,192.168.62.160:27018",name:"shard2"})
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard:"qyd_repl/192.168.62.158:27017,192.168.62.159:27017,192.168.62.160:27017",name:"shard1"})
{
	"shardAdded" : "shard1",
	"ok" : 1,
	"operationTime" : Timestamp(1594374949, 13),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594374949, 13),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

mongos> db.runCommand({addshard:"qyd_repl2/192.168.62.158:27018,192.168.62.159:27018,192.168.62.160:27018",name:"shard2"})
{
	"shardAdded" : "shard2",
	"ok" : 1,
	"operationTime" : Timestamp(1594375027, 6),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594375027, 6),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> 

View the shards

Run on mongos to list the shards:

db.runCommand({listshards:1})
mongos> db.runCommand({listshards:1})
{
	"shards" : [
		{
			"_id" : "shard1",
			"host" : "qyd_repl/192.168.62.158:27017,192.168.62.159:27017,192.168.62.160:27017",
			"state" : 1
		},
		{
			"_id" : "shard2",
			"host" : "qyd_repl2/192.168.62.158:27018,192.168.62.159:27018,192.168.62.160:27018",
			"state" : 1
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1594375164, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594375164, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> 

The cluster now contains shards shard1 and shard2, each with state 1 (available).

Run on mongos to view the sharding status:

sh.status()
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("5f0835abc00f35f52d31f67a")
  }
  shards:		//if no shards have been added, this field is absent
        {  "_id" : "shard1",  "host" : "qyd_repl/192.168.62.158:27017,192.168.62.159:27017,192.168.62.160:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "qyd_repl2/192.168.62.158:27018,192.168.62.159:27018,192.168.62.160:27018",  "state" : 1 }
  active mongoses:
        "4.2.8" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                96 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	928
                                shard2	96
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "newqydtest",  "primary" : "shard1",  "partitioned" : false,  "version" : {  "uuid" : UUID("95fdf301-28c7-4598-bf34-1dd757c7d9d8"),  "lastMod" : 1 } }
        {  "_id" : "qydtestdb1",  "primary" : "shard1",  "partitioned" : false,  "version" : {  "uuid" : UUID("74de6677-e10d-4e0a-9b2a-8cd57ec0b490"),  "lastMod" : 1 } }
        {  "_id" : "studentdb",  "primary" : "shard1",  "partitioned" : false,  "version" : {  "uuid" : UUID("4be7bec8-9d8b-4e1b-88b9-d890bda92445"),  "lastMod" : 1 } }
        {  "_id" : "testDB",  "primary" : "shard1",  "partitioned" : false,  "version" : {  "uuid" : UUID("9377ae06-ccc7-47d8-ac47-8fd13e980e39"),  "lastMod" : 1 } }

mongos> 

If no shards had been added, the shards field above would be missing.

Testing the sharded cluster

Enable sharding for a database:
db.runCommand({enablesharding:"testshard"})

The command above enables sharding on the testshard database.

mongos> db.runCommand({enablesharding:"testshard"})
{
	"ok" : 1,
	"operationTime" : Timestamp(1594375579, 6),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594375579, 7),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

"ok" : 1 means success.

Create the shard key; below it is configured on the id field:
db.runCommand({shardcollection:"testshard.users",key:{id:1}})

Note: if the collection is not empty, an index on the shard key must already exist; ours is empty, so the command can be used directly.

mongos> db.runCommand({shardcollection:"testshard.users",key:{id:1}})
{
	"collectionsharded" : "testshard.users",
	"collectionUUID" : UUID("990c759d-b577-4bf1-8761-9b91aa5c2ea6"),
	"ok" : 1,
	"operationTime" : Timestamp(1594375845, 6),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594375845, 6),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
Create the index (only needed if the collection is not empty, i.e. this is not the first time):
use testshard

db.users.ensureIndex({id:1})
#create an index on id
mongos> use testshard
switched to db testshard
mongos> db.users.ensureIndex({id:1})
{
	"raw" : {
		"qyd_repl2/192.168.62.158:27018,192.168.62.159:27018,192.168.62.160:27018" : {
			"numIndexesBefore" : 2,
			"numIndexesAfter" : 2,
			"note" : "all indexes already exist",
			"ok" : 1
		}
	},
	"ok" : 1,
	"operationTime" : Timestamp(1594376135, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594376136, 4),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> 
Insert 500,000 test documents:
var arr=[];
for(var i=0;i<500000;i++){
	var uid = i;
	var name = "name"+i;
	arr.push({"id":uid,"name":name});
}
db.users.insertMany(arr);

Run sh.status() to check: no visible effect yet.

That means the data volume is still too small.

So we run the JS script above again, this time raising 500,000 to 1,500,000:

var arr=[];
for(var i=0;i<500000;i++){
	var uid = i;
	var name = "name"+i;
	arr.push({"id":uid,"name":name});
}
db.users.insertMany(arr);

After waiting a moment for the insert to finish:

(screenshot: sh.status() output showing the chunks split across shard1 and shard2)

You can see that some chunks went to shard1 and some to shard2:

id ranges 0–166667 and 166667–250000 are on shard1;

id ranges 250000–333334, 333334–457502, and 457502–max are on shard2.
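To see the per-shard document counts directly, a minimal sketch run from the mongos shell:

use testshard
db.users.getShardDistribution()	//prints the number of documents and data size per shard, plus totals
sh.status()			//shows the chunk ranges per shard, as in the screenshot above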

Other sharded-cluster commands

#add a shard
db.runCommand({addshard:"qyd_repl/192.168.62.158:27017,192.168.62.159:27017,192.168.62.160:27017",name:"shard1"})
#remove a shard
db.runCommand({removeShard:"shard2"})
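Removing a shard first drains its chunks onto the remaining shards, so the command has to be repeated until it reports completion. A minimal sketch (run on mongos against the admin database):

use admin
db.runCommand({removeShard:"shard2"})	//first call: "state" : "started", draining begins
db.runCommand({removeShard:"shard2"})	//later calls: "state" : "ongoing" while chunks migrate away
db.runCommand({removeShard:"shard2"})	//final call: "state" : "completed" once the shard is gone
sh.status()				//confirm shard2 no longer appears under shards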

rded" : “testshard.users”,
“collectionUUID” : UUID(“990c759d-b577-4bf1-8761-9b91aa5c2ea6”),
“ok” : 1,
“operationTime” : Timestamp(1594375845, 6),
“$clusterTime” : {
“clusterTime” : Timestamp(1594375845, 6),
“signature” : {
“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),
“keyId” : NumberLong(0)
}
}
}


#### 创建索引(如果使用的不是空表,不是第一次操作)

use testshard

db.users.ensureIndex({id:1})


```shell
#给id创建一个索引
mongos> use testshard
switched to db testshard
mongos> db.users.ensureIndex({id:1})
{
	"raw" : {
		"qyd_repl2/192.168.62.158:27018,192.168.62.159:27018,192.168.62.160:27018" : {
			"numIndexesBefore" : 2,
			"numIndexesAfter" : 2,
			"note" : "all indexes already exist",
			"ok" : 1
		}
	},
	"ok" : 1,
	"operationTime" : Timestamp(1594376135, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1594376136, 4),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> 
添加50w条测试数据
var arr=[];
for(var i=0;i<500000;i++){
	var uid = i;
	var name = "name"+1;
	arr.push({"id":uid,"name":name});
}
db.users.insertMany(arr);

运行sh.status()查看。没啥效果

说明数据量太少

我们再次运行上面的js脚本。这次将50w改为150w

var arr=[];
for(var i=0;i<500000;i++){
	var uid = i;
	var name = "name"+1;
	arr.push({"id":uid,"name":name});
}
db.users.insertMany(arr);

等待一会添加完成以后

[外链图片转存中…(img-kmaVTS7o-1594379012750)]

可以看出一部分放到了分片1一部分放到了分片2

0-----166667、166667----250000在分片1上

250000----333334、333334----457502、457502------最后在分片2上

其他分片集群的命令

#添加分片
db.runCommand({addshard:"qyd_repl/192.168.62.158:27017,192.168.62.159:27017,192.168.62.160:27017",name:"shard1"})
#删除分片
db.runCommand({removeShard:"shard2"})