MongoDB Study Notes


Reference for this article:

1. Quick Start

Installing MongoDB


I chose the Windows installation of MongoDB, which is simple: just download the MongoDB build that matches your operating system. (The ZIP package is recommended.)

How do we verify that our MongoDB is ready to use?


1. From the command line, switch to the MongoDB installation directory; mine is G:\JavaData\mongoDB\bin
2. Run mongod --dbpath G:\JavaData\mongoDBDATA
G:\JavaData\mongoDB\bin>mongod --dbpath G:\JavaData\mongoDBDATA
3. Once it prints waiting for connections on port 27017, MongoDB has started successfully

Common MongoDB Operations


Once the MongoDB service has started, we can connect to it with a client. Here we use the mongo shell client that ships with MongoDB, found under the bin directory of the installation we just set up.

Again, go into the MongoDB bin directory and run the mongo command

G:\JavaData\mongoDB\bin>mongo
MongoDB shell version: 2.6.3
connecting to: test
>

Of course, if you have already added G:\JavaData\mongoDB\bin to your PATH, you can skip changing directories and simply run mongo.

As the connection output above shows, the client connects to the test database by default. To see which databases MongoDB currently has, use the command

> show dbs
admin   (empty)
foobar  0.203GB
local   0.078GB
piedra  0.078GB
>

To find out which database you are currently on, use the db command

> db
test

To switch to another database, use the command use <your-dbname>

> use piedra
switched to db piedra

To list the collections in the current database, run show collections

> show collections
system.indexes
users

If you run show collections for the first time, the result is empty; but once you insert data into a collection, both the collection and its database get created.

> show dbs
admin   (empty)
foobar  0.203GB
local   0.078GB
piedra  0.078GB
> use demo
switched to db demo
> show collections
> db.users.insert({username:"linwenbin",pwd:"1234"})
WriteResult({ "nInserted" : 1 })
> db.users.find()
{ "_id" : ObjectId("55717e7ae25992bae59cca65"), "username" : "linwenbin", "pwd" : "1234" }
>

In the operations above, we switched to the demo database, created the users collection, and inserted one document into it.

A collection in MongoDB is the same concept as a table in MySQL or Oracle.

Inserting Data


In MongoDB, data is inserted with db.collectionName.insert({data}) or db.collectionName.save({data}).
Both methods work. insert was shown above, so here we demonstrate save.

> db
demo
> show collections
system.indexes
users
>
> db.users.save({username:"saveMethod",pwd:"123"}) 
WriteResult({ "nInserted" : 1 })

Note that when no _id is specified, save behaves exactly like insert; but if an _id is given and the collection already contains a document with that _id, the old document is overwritten by the new one.
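The difference is easy to see outside the shell. Below is a minimal in-memory sketch of the insert/save semantics in Python; the dict-backed collection is only a stand-in for illustration, not MongoDB code:

```python
# A minimal in-memory model of insert vs. save semantics (illustration only,
# not MongoDB driver code). Documents live in a dict keyed by _id.
import uuid

collection = {}

def insert(doc):
    # insert always creates a new document; reusing an existing _id is an error
    _id = doc.setdefault("_id", uuid.uuid4().hex)
    if _id in collection:
        raise KeyError("duplicate _id")
    collection[_id] = doc

def save(doc):
    # without an _id, save behaves like insert; with an existing _id,
    # it overwrites the old document entirely
    if "_id" not in doc:
        insert(doc)
    else:
        collection[doc["_id"]] = doc

insert({"username": "no-id"})              # gets a generated _id
save({"_id": 1, "username": "a", "pwd": "x"})
save({"_id": 1, "username": "b"})          # replaces the whole document
print(collection[1])                        # {'_id': 1, 'username': 'b'}
```

Note that the second save wiped out the pwd field: the whole document was replaced, not merged.
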
Updating Data


There are two ways to modify data: replace the whole document, or modify part of it. First, replacing the whole document. Let's query all the documents in the users collection so we can compare before and after the update

> db.users.find()
{ "_id" : ObjectId("55717e7ae25992bae59cca65"), "username" : "linwenbin", "pwd" : "1234" }
{ "_id" : ObjectId("55717fd5e25992bae59cca66"), "username" : "saveMethod", "pwd" : "123" }
>

The first way: db.collectionName.update({criteria},{data}); this replaces the entire document.

> db.users.update({username:"linwenbin"},{age:22})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.users.find({username:'linwenbin'})
>

Notice that after the update, searching for username 'linwenbin' no longer finds anything.

The second way: db.collectionName.update({criteria},{$set:{newData}}); let's apply a $set update to the other document

> db.users.update({username:"saveMethod"},{$set:{pwd:"9999"}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.users.find()
{ "_id" : ObjectId("55717e7ae25992bae59cca65"), "age" : 22 }
{ "_id" : ObjectId("55717fd5e25992bae59cca66"), "username" : "saveMethod", "pwd" : "9999" }
>

As you can see, the pwd of the document whose username is saveMethod has been changed to '9999', and none of its other fields were overwritten.
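In short: the first form swaps the document body out, the second merges fields in. A small Python sketch of the two behaviors (illustration only, not driver code):

```python
# Illustration of the two update styles: replacing the whole document
# vs. merging fields with $set (plain Python, not MongoDB code).
doc = {"_id": 1, "username": "saveMethod", "pwd": "123"}

def replace_update(old, new):
    # update({...}, {data}) keeps only _id plus the new fields
    return {"_id": old["_id"], **new}

def set_update(old, changes):
    # update({...}, {$set: {...}}) merges the changes into the document
    return {**old, **changes}

print(replace_update(doc, {"age": 22}))   # {'_id': 1, 'age': 22} - username, pwd gone
print(set_update(doc, {"pwd": "9999"}))   # only pwd changes, other fields survive
```
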

Querying Data


MongoDB's query method is find, already used several times above: db.collectionName.find({criteria})

> db.users.find({age:22})
{ "_id" : ObjectId("55717e7ae25992bae59cca65"), "age" : 22 }
>

If no criteria are given to find, it returns all documents.
You can also choose which fields appear in the results; see the MongoDB manual for details.
Deleting Data


To drop an entire collection, use db.collectionName.drop();
To delete documents within a collection, use db.collectionName.remove({criteria});
Building on the data above, let's delete the document {age:22}

> db.users.find()
{ "_id" : ObjectId("55717e7ae25992bae59cca65"), "age" : 22 }
{ "_id" : ObjectId("55717fd5e25992bae59cca66"), "username" : "saveMethod", "pwd" : "9999" }
> db.users.remove({age:22})
WriteResult({ "nRemoved" : 1 })
> db.users.find()
{ "_id" : ObjectId("55717fd5e25992bae59cca66"), "username" : "saveMethod", "pwd" : "9999" }
>

MongoDB Indexes


What is an index?


An index is built on top of a table and is invisible to users. Indexes speed up lookups but take extra space, so create them carefully; MongoDB also limits the number of indexes per collection.

Put simply: an index provides a faster way to locate data.
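The speed difference can be sketched in a few lines of Python. Here a dict stands in for the index; real MongoDB indexes are B-trees, so this is only an analogy:

```python
# Why an index speeds up lookups: a linear scan touches every document,
# while an index (here a plain dict keyed on username) jumps straight to it.
# Illustration only; real MongoDB indexes are B-trees, not hash maps.
docs = [{"username": f"user{i}", "i": i} for i in range(10000)]

def scan(username):
    # without an index: O(n), examines documents one by one
    return next(d for d in docs if d["username"] == username)

index = {d["username"]: d for d in docs}   # extra space, built once

def indexed(username):
    # with the index: a direct lookup
    return index[username]

assert scan("user9999") == indexed("user9999")
```
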

How to view indexes in MongoDB


To see which indexes the current collection has, use the command db.collectionName.getIndexes();

> db.users.getIndexes()
[
        {
                "v" : 1,
                "key" : {
                        "_id" : 1
                },
                "name" : "_id_",
                "ns" : "demo.users"
        }
]
>

The index whose key is _id is the default primary-key index, created automatically by MongoDB when the collection is created.

How to create an index in MongoDB


Now, if we want to index the username field of the users collection, how do we do it?

> db.users.ensureIndex({username:1})
{
        "createdCollectionAutomatically" : false,
        "numIndexesBefore" : 1,
        "numIndexesAfter" : 2,
        "ok" : 1
}

That means the index was created successfully. Let's look at the users collection's indexes again.

> db.users.getIndexes()
[
        {
                "v" : 1,
                "key" : {
                        "_id" : 1
                },
                "name" : "_id_",
                "ns" : "demo.users"
        },
        {
                "v" : 1,
                "key" : {
                        "username" : 1
                },
                "name" : "username_1",
                "ns" : "demo.users"
        }
]
>

We can see there is now an additional index whose key is username. In db.users.ensureIndex({username:1}), the 1 means an ascending index; -1 would mean descending.

How to drop an index in MongoDB


To drop an index, use db.collectionName.dropIndex({key:1})

> db.users.dropIndex({username:1})
{ "nIndexesWas" : 2, "ok" : 1 }
> db.users.getIndexes()
[
        {
                "v" : 1,
                "key" : {
                        "_id" : 1
                },
                "name" : "_id_",
                "ns" : "demo.users"
        }
]
>

That completes these notes on basic index operations.



2. MongoDB Study Notes Part 2: Security

To enable MongoDB's authentication, start the mongod service with the --auth flag

mongod --auth

Once enabled, a client can still connect, but cannot perform any operations

G:\JavaData\mongoDB\bin>mongo
MongoDB shell version: 2.6.3
connecting to: test
> show collections
2015-06-05T20:10:51.608+0800 error: {
        "$err" : "not authorized for query on test.system.namespaces",
        "code" : 13
} at src/mongo/shell/query.js:131
>

The error message tells us the cause: we are not authenticated.
Switch to the admin database and add a user. The syntax for creating a user is as follows

db.createUser({
    user:"username",
    pwd:"password",
    customData:{any info},
    roles:[{role:"<role>",db:"<db>"},{role:"<role>",db:"<db>"}]
})

MongoDB's built-in roles include: read, readWrite, dbAdmin, dbOwner, userAdmin, dbAdminAnyDatabase, userAdminAnyDatabase, readWriteAnyDatabase, readAnyDatabase, clusterAdmin.
In our example, the statement below creates the user lwb for the demo database with read-only permission

> db.createUser({user:"lwb",pwd:"lwb",roles:[{role:"read",db:"demo"}]})
Successfully added user: {
        "user" : "lwb",
        "roles" : [
                {
                        "role" : "read",
                        "db" : "demo"
                }
        ]
}
> db
demo

Authenticate with the command db.auth("lwb","lwb")

> use demo
switched to db demo
> show collections
2015-06-05T20:31:15.034+0800 error: {
        "$err" : "not authorized for query on demo.system.namespaces",
        "code" : 13
} at src/mongo/shell/query.js:131
> db.auth("lwb","lwb")
1
> show collections
system.indexes
users
> db.users.find()
{ "_id" : ObjectId("55717fd5e25992bae59cca66"), "username" : "saveMethod", "pwd" : "9999" }

Authenticated reads now work, but since we granted only the read role, we should test whether inserting data is allowed

> db.users.insert({username:'abc'})
WriteResult({
        "writeError" : {
                "code" : 13,
                "errmsg" : "not authorized on demo to execute command { insert: \"users\", documents: [ { _id: ObjectId('557196b2e661d1419e528fbb'), username: \
"abc\" } ], ordered: true }"
        }
})
>

As expected, the write is not authorized: the lwb user cannot insert data into the users collection of the demo database.
For comparison, let's create another user, rwu (read write user), and give it the readWrite role.

> use admin
switched to db admin
> db.auth("admin","admin")
1
>  db.createUser({user:"rwu",pwd:"rwu",roles:[{role:"readWrite",db:"demo"}]})
Successfully added user: {
        "user" : "rwu",
        "roles" : [
                {
                        "role" : "readWrite",      //具备 读写 权限
                        "db" : "demo"                 //针对 demo这个数据库的 读写 权限
                }
        ]
}
> use demo
switched to db demo
> show collections
system.indexes
users
> db.users.find()
{ "_id" : ObjectId("55717fd5e25992bae59cca66"), "username" : "saveMethod", "pwd" : "9999" }
> db.users.save({username:"rwu",pwd:"rwu"}) //insert a document; success means the grant works
WriteResult({ "nInserted" : 1 })
> db.users.find()
{ "_id" : ObjectId("55717fd5e25992bae59cca66"), "username" : "saveMethod", "pwd" : "9999" }
{ "_id" : ObjectId("557198f5e661d1419e528fbc"), "username" : "rwu", "pwd" : "rwu" }
>

From the above, we can see that once the new user rwu holds the readWrite role, it can insert data into the users collection of the demo database.
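Conceptually, the server checks each operation against the actions granted by the user's roles on that database. A toy Python sketch of that check; the role-to-action mapping below is simplified and hypothetical, not MongoDB's actual privilege model:

```python
# Toy model of role-based authorization (illustration only; the action
# names and mapping are simplified, not MongoDB's real privilege system).
ROLE_ACTIONS = {
    "read": {"find"},
    "readWrite": {"find", "insert", "update", "remove"},
}

def authorized(user_roles, db, action):
    # an action is allowed if any role the user holds on this db grants it
    return any(action in ROLE_ACTIONS[r["role"]]
               for r in user_roles if r["db"] == db)

lwb = [{"role": "read", "db": "demo"}]
rwu = [{"role": "readWrite", "db": "demo"}]

print(authorized(lwb, "demo", "find"))    # True  - read allows queries
print(authorized(lwb, "demo", "insert"))  # False - read does not allow writes
print(authorized(rwu, "demo", "insert"))  # True  - readWrite allows writes
```
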

That concludes permissions. For more detail, consult the MongoDB manual.









3. MongoDB Study Notes Part 3: The Java Driver

Environment:
1. mongod is up and running; see the first article.
2. If a demo database already exists, drop it so we start with clean data.

use demo
switched to db demo
db.dropDatabase()
{ "dropped" : "demo", "ok" : 1 }

This article does not enable authentication; we operate directly. The project is built with Gradle. (See my article on Gradle for how to use it.)
Create a project named mongo and put a build.gradle file in its root

apply plugin:"java"
apply plugin:"eclipse"

repositories{
    mavenCentral()
}

dependencies{
    compile 'org.mongodb:mongo-java-driver:3.0.2'
}

We also create the Maven-style source layout; a test directory is not needed yet

└─src
   └─main
       ├─java
       └─resources

Our source code will live under the java directory and resource files under the resources directory.

Now let's begin our mongo-java tour.

From the mongo project root, run gradle cleanEclipse eclipse to generate an Eclipse Java project, then bring it in via Eclipse's Import… feature, as shown in the figure

D:\workspace_myeclipse\mongo>gradle cleanEclipse eclipse


The source of HelloMongo.java:

package com.piedra.mongo;

import static com.mongodb.client.model.Filters.and;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Filters.gt;
import static com.mongodb.client.model.Filters.lte;

import java.util.ArrayList;
import java.util.List;

import org.bson.Document;
import org.bson.conversions.Bson;

import com.mongodb.Block;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.MongoDatabase;

/**
 * HelloMongo
 * Reference: http://mongodb.github.io/mongo-java-driver/3.0/driver/getting-started/quick-tour/
 * @author LINWENBIN
 * @since 2015-6-6
 */
public class HelloMongo {

    /**
     * Insert a single document into the collection
     * @param coll
     * @since 2015-6-6
     * @author LINWENBIN
     */
    public void insertOne(MongoCollection<Document> coll){
        /*
        {
           "name" : "MongoDB",
           "type" : "database",
           "count" : 1,
           "info" : {
                       x : 203,
                       y : 102
                     }
        }
        */
        System.out.println("insertOne: users count before insert: " + coll.count());

        Document doc = new Document("name","MongoDB")
        .append("type","database").append("count", 1).append("info", new Document("x","203").append("y","102"));

        coll.insertOne(doc);

        System.out.println("insertOne: users count after insert: " + coll.count());
    }

    /**
     * Insert multiple documents
     * @param coll
     * @since 2015-6-6
     * @author LINWENBIN
     */
    public void insertMany(MongoCollection<Document> coll){
        /*
         * Insert documents of the form {"i" : i} in a loop
         */
        List<Document> docs = new ArrayList<Document>();
        for(int i=0; i<10; i++){
            docs.add(new Document("i",i));
        }

        coll.insertMany(docs);

        System.out.println("insertMany: users count after inserting 10 {i:i} documents: " + coll.count());
    }

    /**
     * Query all documents in the collection coll
     * @param coll
     * @since 2015-6-6
     * @author LINWENBIN
     */
    public void findAll(MongoCollection<Document> coll){
        MongoCursor<Document> cursor = coll.find().iterator();
        try {
            System.out.println("findAll results:");
            while(cursor.hasNext()){
                System.out.println(cursor.next().toJson());
            }
        } finally {
            cursor.close();
        }
    }

    /**
     * Find the first document matching the filter
     * @param coll
     * @param filter
     * @since 2015-6-6
     * @author LINWENBIN
     */
    public void findSpecifyDoc(MongoCollection<Document> coll, Bson filter){
        System.out.println("findSpecifyDoc result:");
        System.out.println(coll.find(filter).first().toJson());
    }

    /**
     * Find all documents matching the filter
     * @param coll
     * @param filter
     * @since 2015-6-6
     * @author LINWENBIN
     */
    public void findDocs(MongoCollection<Document> coll, Bson filter){
        Block<Document> printBlock = new Block<Document>() {
             @Override
             public void apply(final Document document) {
                 System.out.println(document.toJson());
             }
        };
        System.out.println("findDocs results:");
        coll.find(filter).forEach(printBlock);
    }

    /**
     * Update document fields
     * @param coll        the collection to operate on
     * @param criteria    filter selecting the documents to update
     * @param newDoc      the new field values
     * @since 2015-6-6
     * @author LINWENBIN
     */
    public void update(MongoCollection<Document> coll, Bson criteria, Document newDoc){
        coll.updateMany(criteria, new Document("$set",newDoc));
    }

    /**
     * Delete documents
     * @param coll
     * @param criteria
     * @since 2015-6-6
     * @author LINWENBIN
     */
    public void delete(MongoCollection<Document> coll, Bson criteria){
        coll.deleteMany(criteria);
    }

    public static void main(String[] args) {
        HelloMongo helloMongo = new HelloMongo();

        //a MongoClient instance represents a pool of connections to the database
        MongoClient mongoClient = new MongoClient("127.0.0.1", 27017);
        /**
         * Calling the getDatabase() on MongoClient does not create a database. 
         * Only when a database is written to will a database be created
         */
        MongoDatabase db = mongoClient.getDatabase("demo");
        MongoCollection<Document> users = db.getCollection("users");


        helloMongo.insertOne(users);
        helloMongo.insertMany(users);
        helloMongo.findAll(users);
        helloMongo.findSpecifyDoc(users, eq("i",5));
        helloMongo.findDocs(users, and(gt("i",6),lte("i",8)));

        helloMongo.update(users, and(gt("i",6),lte("i",8)), new Document("ii",99));
        helloMongo.findDocs(users, and(gt("i",6),lte("i",8)));

        helloMongo.delete(users, and(gt("i",6),lte("i",8)));
        helloMongo.findDocs(users, and(gt("i",6),lte("i",8)));

        //clean up: drop the demo database
        mongoClient.dropDatabase("demo");
        //close the database connection
        mongoClient.close();
    }
}
Output:
insertOne: users count before insert: 0
insertOne: users count after insert: 1
insertMany: users count after inserting 10 {i:i} documents: 11
findAll results:
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42bd8" }, "name" : "MongoDB", "type" : "database", "count" : 1, "info" : { "x" : "203", "y" : "102" } }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42bd9" }, "i" : 0 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42bda" }, "i" : 1 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42bdb" }, "i" : 2 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42bdc" }, "i" : 3 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42bdd" }, "i" : 4 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42bde" }, "i" : 5 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42bdf" }, "i" : 6 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42be0" }, "i" : 7 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42be1" }, "i" : 8 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42be2" }, "i" : 9 }
findSpecifyDoc result:
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42bde" }, "i" : 5 }
findDocs results:
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42be0" }, "i" : 7 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42be1" }, "i" : 8 }
findDocs results:
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42be0" }, "i" : 7, "ii" : 99 }
{ "_id" : { "$oid" : "5572841e0ef45c0bf0e42be1" }, "i" : 8, "ii" : 99 }
findDocs results:



4. MongoDB Study Notes Part 4: Queries



Query Conditions

First, insert a few documents into the collection.
Test data:

> db.users.insert({username:"mongo", url:"webinglin.github.io", tags:["mongodb", database","nosql"],likes:999, author:"linwenbin"})
> db.users.insert({username:"redis", url:"webinglin.github.io", tags:["redis","database","nosql"],likes:888, author:"linwenbin"})
> db.users.insert({username:"spring", url:"webinglin.github.io", tags:["spring","framework"],likes:777, author:"linwenbin"})
> db.users.find().pretty()
{
        "_id" : ObjectId("5574bdabc705777157a515aa"),
        "username" : "mongo",
        "url" : "webinglin.github.io",
        "tags" : [
                "mongodb",
                "database",
                "nosql"
        ],
        "likes" : 999,
        "author" : "linwenbin"
}
{
        "_id" : ObjectId("5574bdd2c705777157a515ab"),
        "username" : "redis",
        "url" : "webinglin.github.io",
        "tags" : [
                "redis",
                "database",
                "nosql"
        ],
        "likes" : 888,
        "author" : "linwenbin"
}
{
        "_id" : ObjectId("5574bdf3c705777157a515ac"),
        "username" : "spring",
        "url" : "webinglin.github.io",
        "tags" : [
                "spring",
                "framework"
        ],
        "likes" : 777,
        "author" : "linwenbin"
}

The pretty() method formats the query results.
Queries can take conditions; so how exactly are the conditions written?
Equality
For equality, just use a {key:value} document

> db.users.find({username:"mongo"})
{ "_id" : ObjectId("5574bdabc705777157a515aa"), "username" : "mongo", "url" : "webinglin.github.io", "tags" : [ "mongodb", "database", "nosql" ], "likes" : 999, "author" : "linwenbin" }
>

Greater than
Syntax: {key : {$gt:value} }

> db.users.find({likes:{$gt:888}})
{ "_id" : ObjectId("5574bdabc705777157a515aa"), "username" : "mongo", "url" : "webinglin.github.io", "tags" : [ "mongodb", "database", "nosql" ], "likes" : 999, "author" : "linwenbin" }
>

Greater than or equal
Syntax: {key : {$gte:value} }

> db.users.find({likes:{$gte:888}})
{ "_id" : ObjectId("5574bdabc705777157a515aa"), "username" : "mongo", "url" : "webinglin.github.io", "tags" : [ "mongodb", "database", "nosql" ], "likes" : 999, "author" : "linwenbin" }
{ "_id" : ObjectId("5574bdd2c705777157a515ab"), "username" : "redis", "url" : "webinglin.github.io", "tags" : [ "redis", "database", "nosql" ], "likes" : 888, "author" : "linwenbin" }

Less than
Syntax: {key : {$lt:value} }

> db.users.find({likes:{$lt:888}})
{ "_id" : ObjectId("5574bdf3c705777157a515ac"), "username" : "spring", "url" : "webinglin.github.io", "tags" : [ "spring", "framework" ], "likes" : 777, "author" : "linwenbin" }

Less than or equal

Syntax: {key : {$lte:value}}

> db.users.find({likes:{$lte:888}})
{ "_id" : ObjectId("5574bdd2c705777157a515ab"), "username" : "redis", "url" : "webinglin.github.io", "tags" : [ "redis", "database", "nosql" ], "likes" : 888, "author" : "linwenbin" }
{ "_id" : ObjectId("5574bdf3c705777157a515ac"), "username" : "spring", "url" : "webinglin.github.io", "tags" : [ "spring", "framework" ], "likes" : 777, "author" : "linwenbin" }

Not equal
Syntax: {key : {$ne:value} }

> db.users.find({likes:{$ne:888}})
{ "_id" : ObjectId("5574bdabc705777157a515aa"), "username" : "mongo", "url" : "webinglin.github.io", "tags" : [ "mongodb", "database", "nosql" ], "likes" : 999, "author" : "linwenbin" }
{ "_id" : ObjectId("5574bdf3c705777157a515ac"), "username" : "spring", "url" : "webinglin.github.io", "tags" : [ "spring", "framework" ], "likes" : 777, "author" : "linwenbin" }

AND
Syntax: {key1:value1, key2:value2, key3:value3 …}

> db.users.find({likes:{$gt:777},username:"mongo"})
{ "_id" : ObjectId("5574bdabc705777157a515aa"), "username" : "mongo", "url" : "webinglin.github.io", "tags" : [ "mongodb", "database", "nosql" ], "likes" : 999, "author" : "linwenbin" }

> db.users.find({likes:{$gt:777}})
{ "_id" : ObjectId("5574bdabc705777157a515aa"), "username" : "mongo", "url" : "webinglin.github.io", "tags" : [ "mongodb", "database", "nosql" ], "likes" : 999, "author" : "linwenbin" }
{ "_id" : ObjectId("5574bdd2c705777157a515ab"), "username" : "redis", "url" : "webinglin.github.io", "tags" : [ "redis", "database", "nosql" ], "likes" : 888, "author" : "linwenbin" }

OR

Syntax: { $or: [ {key1: value1}, {key2:value2} ] }. Put all of the {key:value} conditions into the array that is the value of $or.

> db.users.find({$or:[{username:"mongo"},{username:"redis"}]})
{ "_id" : ObjectId("5574bdabc705777157a515aa"), "username" : "mongo", "url" : "webinglin.github.io", "tags" : [ "mongodb", "database", "nosql" ], "likes" : 999, "author" : "linwenbin" }
{ "_id" : ObjectId("5574bdd2c705777157a515ab"), "username" : "redis", "url" : "webinglin.github.io", "tags" : [ "redis", "database", "nosql" ], "likes" : 888, "author" : "linwenbin" }

Complex Queries

How do we chain all of these conditions together?

Suppose we want: likes >= 888 && (username == "mongo" or username == "spring")
With only three documents, we know likes >= 888 matches only mongo and redis, while username == "mongo" or username == "spring" matches mongo and spring. ANDing the two leaves only mongo, so the query should return a single mongo document.

> db.users.find({likes:{$gte:888},$or:[{username:"mongo"},{username:"spring"}]})
{ "_id" : ObjectId("5574bdabc705777157a515aa"), "username" : "mongo", "url" : "webinglin.github.io", "tags" : [ "mongodb", "database", "nosql" ], "likes" : 999, "author" : "linwenbin" }
>
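How the shell evaluates such a filter can be sketched in Python. This little matcher mimics the comparison and $or semantics shown above; it is an illustration, not the server's real matching code:

```python
# A toy matcher mimicking mongo query semantics (illustration only).
OPS = {
    "$gt":  lambda a, b: a > b,
    "$gte": lambda a, b: a >= b,
    "$lt":  lambda a, b: a < b,
    "$lte": lambda a, b: a <= b,
    "$ne":  lambda a, b: a != b,
}

def matches(doc, criteria):
    for key, cond in criteria.items():
        if key == "$or":
            # $or: at least one sub-condition must match
            if not any(matches(doc, c) for c in cond):
                return False
        elif isinstance(cond, dict):
            # operator form, e.g. {likes: {$gt: 888}}
            if not all(OPS[op](doc.get(key), v) for op, v in cond.items()):
                return False
        elif doc.get(key) != cond:
            # plain {key: value} means equality
            return False
    return True  # top-level keys are implicitly ANDed

users = [
    {"username": "mongo",  "likes": 999},
    {"username": "redis",  "likes": 888},
    {"username": "spring", "likes": 777},
]
q = {"likes": {"$gte": 888},
     "$or": [{"username": "mongo"}, {"username": "spring"}]}
print([u["username"] for u in users if matches(u, q)])   # ['mongo']
```
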

Other find() usage
Projection

In MongoDB, projection means displaying only the fields you want to see rather than all of them. What does that mean?

For example, our test data has many fields: username, likes, tags, author, url. If we usually only need username and likes, we can display just those two and leave the rest hidden.

find({},{KEY:1/0}): in find's second argument, KEY is the field to show or hide, with a value of 1 to show it and 0 to hide it. Simple enough; let's try it

> db.users.find({},{_id:0,url:0,tags:0,author:0})
{ "username" : "mongo", "likes" : 999 }
{ "username" : "redis", "likes" : 888 }
{ "username" : "spring", "likes" : 777 }
>
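A Python sketch of the projection semantics (illustration only; note that real MongoDB does not allow mixing inclusion and exclusion in one projection, except for _id):

```python
# Toy projection: {field: 1} keeps only listed fields, {field: 0} drops them.
# Illustration of the semantics above, not MongoDB code.
def project(doc, projection):
    if all(v == 0 for v in projection.values()):
        # exclusion form: keep everything except the listed fields
        return {k: v for k, v in doc.items() if k not in projection}
    # inclusion form: keep only the listed fields (plus _id unless _id: 0)
    keep = {k for k, v in projection.items() if v == 1}
    if projection.get("_id", 1) == 1:
        keep.add("_id")
    return {k: v for k, v in doc.items() if k in keep}

doc = {"_id": 1, "username": "mongo", "likes": 999, "url": "webinglin.github.io"}
print(project(doc, {"_id": 0, "url": 0}))        # {'username': 'mongo', 'likes': 999}
print(project(doc, {"username": 1, "likes": 1})) # _id kept by default
```
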

limit, skip, sort

To better demonstrate paging, create a new collection and insert 30 documents

> for(var i=0; i<30; i++){
... db.pages.insert({"val":i});
... }
WriteResult({ "nInserted" : 1 })
> db.pages.find()
{ "_id" : ObjectId("5574ca7b192e9dda0925e37f"), "val" : 0 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e380"), "val" : 1 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e381"), "val" : 2 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e382"), "val" : 3 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e383"), "val" : 4 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e384"), "val" : 5 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e385"), "val" : 6 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e386"), "val" : 7 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e387"), "val" : 8 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e388"), "val" : 9 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e389"), "val" : 10 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38a"), "val" : 11 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38b"), "val" : 12 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38c"), "val" : 13 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38d"), "val" : 14 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38e"), "val" : 15 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38f"), "val" : 16 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e390"), "val" : 17 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e391"), "val" : 18 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e392"), "val" : 19 }
Type "it" for more
> db.pages.find().limit(5)
{ "_id" : ObjectId("5574ca7b192e9dda0925e37f"), "val" : 0 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e380"), "val" : 1 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e381"), "val" : 2 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e382"), "val" : 3 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e383"), "val" : 4 }

Without limit, find() would show every document in the collection; with limit, only the given number of documents is shown. Above, limit(5) displays 5 documents.

Besides limit, there is also a skip method. skip also takes an integer argument: the number of documents to skip over in the result.

For example, to show the documents with val 18 through 22 out of the 30 inserted above, use
db.pages.find().skip(18).limit(5)

> db.pages.find().skip(18).limit(5)
{ "_id" : ObjectId("5574ca7b192e9dda0925e391"), "val" : 18 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e392"), "val" : 19 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e393"), "val" : 20 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e394"), "val" : 21 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e395"), "val" : 22 }

Combining skip and limit gives you paging. But with very large data sets, paging in principle gets slow: with a hundred million documents, fetching the last page means skipping an enormous number of them. Then again, who ever reads the last few pages? Users normally look at the first few, so paging with skip and limit is acceptable.
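The paging arithmetic can be sketched over a plain Python list: page n of size s skips (n - 1) * s documents, which is exactly the offset that makes deep pages expensive (illustration only):

```python
# skip/limit paging sketched over a plain list (illustration only).
# Page n of size s skips (n - 1) * s documents, then takes s of them.
pages = [{"val": i} for i in range(30)]

def paginate(docs, page, size):
    start = (page - 1) * size          # what .skip() does
    return docs[start:start + size]    # what .limit() does

print([d["val"] for d in paginate(pages, 4, 5)])   # [15, 16, 17, 18, 19]
```

The cost of skip is proportional to the offset, which is why fetching page 20,000,000 of a huge collection is slow even though each page is small.
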

To sort query results in MongoDB, use the sort method. sort takes a document argument of the form {key:value}, where key is the field to sort by and value is 1 or -1: 1 means ascending (asc), -1 means descending (desc). Straight to the example:

> db.pages.find().sort({val:-1})
{ "_id" : ObjectId("5574ca7b192e9dda0925e39c"), "val" : 29 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e39b"), "val" : 28 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e39a"), "val" : 27 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e399"), "val" : 26 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e398"), "val" : 25 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e397"), "val" : 24 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e396"), "val" : 23 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e395"), "val" : 22 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e394"), "val" : 21 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e393"), "val" : 20 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e392"), "val" : 19 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e391"), "val" : 18 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e390"), "val" : 17 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38f"), "val" : 16 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38e"), "val" : 15 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38d"), "val" : 14 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38c"), "val" : 13 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38b"), "val" : 12 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38a"), "val" : 11 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e389"), "val" : 10 }
Type "it" for more

This sorts on the key val in descending order, hence the value -1. With a value of 1, the order becomes ascending.

> db.pages.find().sort({val:1})
{ "_id" : ObjectId("5574ca7b192e9dda0925e37f"), "val" : 0 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e380"), "val" : 1 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e381"), "val" : 2 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e382"), "val" : 3 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e383"), "val" : 4 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e384"), "val" : 5 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e385"), "val" : 6 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e386"), "val" : 7 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e387"), "val" : 8 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e388"), "val" : 9 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e389"), "val" : 10 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38a"), "val" : 11 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38b"), "val" : 12 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38c"), "val" : 13 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38d"), "val" : 14 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38e"), "val" : 15 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e38f"), "val" : 16 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e390"), "val" : 17 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e391"), "val" : 18 }
{ "_id" : ObjectId("5574ca7b192e9dda0925e392"), "val" : 19 }
Type "it" for more
>

What about sorting on multiple keys? Take our original users collection: sort by likes ascending and by username descending. To show the effect, insert two more documents into users

> db.users.insert({username:"mongodb",likes:999})
> db.users.insert({username:"springMVC",likes:888})

The result is below. Note the two documents with likes 888: their usernames come out in descending order, so our sort experiment worked.

> db.users.find().sort({likes:1,username:-1}).pretty()
{
        "_id" : ObjectId("5574bdf3c705777157a515ac"),
        "username" : "spring",
        "url" : "webinglin.github.io",
        "tags" : [
                "spring",
                "framework"
        ],
        "likes" : 777,
        "author" : "linwenbin"
}
{
        "_id" : ObjectId("5574cefa192e9dda0925e39e"),
        "username" : "springMVC",
        "likes" : 888
}
{
        "_id" : ObjectId("5574bdd2c705777157a515ab"),
        "username" : "redis",
        "url" : "webinglin.github.io",
        "tags" : [
                "redis",
                "database",
                "nosql"
        ],
        "likes" : 888,
        "author" : "linwenbin"
}
{
        "_id" : ObjectId("5574cef5192e9dda0925e39d"),
        "username" : "mongodb",
        "likes" : 999
}
{
        "_id" : ObjectId("5574bdabc705777157a515aa"),
        "username" : "mongo",
        "url" : "webinglin.github.io",
        "tags" : [
                "mongodb",
                "database",
                "nosql"
        ],
        "likes" : 999,
        "author" : "linwenbin"
}
>
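The compound sort can be reproduced in Python with two stable sorts, applied from the last sort key to the first (an illustration of the ordering, not MongoDB code):

```python
# Compound sort sketch: sort({likes: 1, username: -1}) means likes ascending,
# username descending among equal likes. Because Python's sort is stable,
# sorting by the secondary key first and the primary key last gives the
# same ordering (illustration only).
users = [
    {"username": "spring",    "likes": 777},
    {"username": "springMVC", "likes": 888},
    {"username": "redis",     "likes": 888},
    {"username": "mongodb",   "likes": 999},
    {"username": "mongo",     "likes": 999},
]
users.sort(key=lambda u: u["username"], reverse=True)  # secondary key first
users.sort(key=lambda u: u["likes"])                   # primary key last
print([u["username"] for u in users])
# ['spring', 'springMVC', 'redis', 'mongodb', 'mongo']
```

This matches the shell output above: likes rises from 777 to 999, and within likes 888 springMVC precedes redis because usernames are descending.
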





5. MongoDB Study Notes Part 5: Monitoring


Monitoring the database is important work for database administrators (and for developers troubleshooting problems).

MongoDB offers three monitoring strategies:

  • MongoDB's bundled utilities, which watch database activity in real time.
  • Database commands that return the current database state.
  • MongoDB Management Service (MMS), which presents monitoring results visually.

MongoDB Utilities
mongostat
mongostat shows per-second statistics for inserts, queries, updates, deletes, and the number of connections.

root@ubuntu:~# mongostat
insert query update delete getmore command flushes mapped  vsize   res faults qr|qw ar|aw netIn netOut conn     time
    *0    *0     *0     *0       0     1|0       0 160.0M 527.0M 69.0M      0   0|0   0|0   79b    10k    2 17:37:15
    *0    *0     *0     *0       0     1|0       0 160.0M 527.0M 69.0M      0   0|0   0|0   79b    10k    2 17:37:16
    *0    *0     *0     *0       0     1|0       0 160.0M 527.0M 69.0M      0   0|0   0|0   79b    10k    2 17:37:17
    *0    *0     *0     *0       0     1|0       0 160.0M 527.0M 69.0M      0   0|0   0|0   79b    10k    2 17:37:18
    *0    *0     *0     *0       0     2|0       1 160.0M 527.0M 69.0M      0   0|0   0|0  133b    10k    2 17:37:19
    *0    *0     *0     *0       0     1|0       0 160.0M 527.0M 69.0M      0   0|0   0|0   79b    10k    2 17:37:20
    *0    *0     *0     *0       0     1|0       0 160.0M 527.0M 69.0M      0   0|0   0|0   79b    10k    2 17:37:21
^Croot@ubuntu:~#

mongotop
mongotop reports per-collection read/write time for the running mongod instance; it can be used to verify that the instance is still alive and that operation times meet our requirements.

root@ubuntu:~# mongotop
2015-06-07T17:48:58.772-0700    connected to: 127.0.0.1
                     ns    total    read    write    2015-06-07T17:52:03-07:00
             test.pages     65ms    65ms      0ms
     admin.system.roles      0ms     0ms      0ms
   admin.system.version      0ms     0ms      0ms
      local.startup_log      0ms     0ms      0ms
   local.system.indexes      0ms     0ms      0ms
local.system.namespaces      0ms     0ms      0ms
   local.system.replset      0ms     0ms      0ms
    test.system.indexes      0ms     0ms      0ms
 test.system.namespaces      0ms     0ms      0ms
              test.user      0ms     0ms      0ms

HTTP Console

I am using MongoDB 3.0.3, which does not open port 28017 by default, so http://yourhost:28017 is unreachable. To use the application on port 28017, start mongod with the --rest flag

./mongod --dbpath ../data/db/ --rest

MongoDB Commands

db.serverStatus()

Returns the database's status information, including disk and memory usage, connection counts, and index access statistics. db.serverStatus() returns very quickly and does not impact MongoDB's performance.

> db.serverStatus()
{
        "host" : "ubuntu",
        "version" : "3.0.3",
        "process" : "mongod",
        "pid" : NumberLong(2354),
        "uptime" : 13191,

        ... ...

        "ok" : 1
}

db.stats()

Returns the current database's storage size, number of collections, index memory usage, and so on.

> db.stats()
{
        "db" : "test",
        "collections" : 4,
        "objects" : 42,
        "avgObjSize" : 67.80952380952381,
        "dataSize" : 2848,
        "storageSize" : 28672,
        "numExtents" : 4,
        "indexes" : 2,
        "indexSize" : 16352,

        ... ...

        "ok" : 1
}

db.collection.stats()

Whereas db.stats() covers the whole database, db.collection.stats() returns statistics for a single collection.



> db.pages.stats()
{
        "ns" : "test.pages",
        "count" : 30,
        "size" : 1440,
        "avgObjSize" : 48,
        "numExtents" : 1,
        "storageSize" : 8192,
        "lastExtentSize" : 8192,
        "paddingFactor" : 1,

        ... ...

        "ok" : 1
}
>

Other Tools

See the official manual.

Sincerely!
References

http://docs.mongodb.org/manual/administration/monitoring/





6. MongoDB Study Notes Part 6: Master-Slave Replication



Environment:

ubuntu12.0.4
mongodb3.0.3

Master-slave replication is the most common form of replication in MongoDB. It is very flexible and can be used for backups, failure recovery, read scaling, and more.
In this experiment we use one master node and one slave node.
First, create the master and slave data directories

lwb@ubuntu:~$ mkdir -p ~/mongoData/master
lwb@ubuntu:~$ mkdir -p ~/mongoData/slave

After that, start the master

lwb@ubuntu:~$ mongod --master --dbpath ~/mongoData/master/ --port 10000

Then start the slave

lwb@ubuntu:~$ mongod --dbpath  ~/mongoData/slave/ --port 10001 --slave --source localhost:10000

Next, connect to the master

lwb@ubuntu:~$ mongo --host localhost --port 10000

Insert two documents into the users collection of the test database:

> db.users.find()
{ "_id" : ObjectId("55763d98db85929bb8addedf"), "username" : "lwb" }
{ "_id" : ObjectId("55764a694b24187a7a3c6693"), "username" : "mongodb master-slave" }

After finishing on the master, connect to the slave's mongod

lwb@ubuntu:~$ mongo --host localhost --port 10001
MongoDB shell version: 3.0.3
connecting to: localhost:10001/test
Server has startup warnings:
2015-06-08T19:02:31.866-0700 I CONTROL  [initandlisten]
2015-06-08T19:02:31.866-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/   mm/transparent_hugepage/defrag is 'always'.
2015-06-08T19:02:31.866-0700 I CONTROL  [initandlisten] **        We suggest set   ting it to 'never'
2015-06-08T19:02:31.866-0700 I CONTROL  [initandlisten]
>
> show dbs
2015-06-08T19:09:17.770-0700 E QUERY    Error: listDatabases failed:{ "note" : "   from execCommand", "ok" : 0, "errmsg" : "not master" }
    at Error (<anonymous>)
    at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
    at shellHelper.show (src/mongo/shell/utils.js:630:33)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
>
> rs.slaveOk()
> 
> show dbs
local  0.078GB
test   0.078GB
>
> use test
switched to db test
> show collections
system.indexes
users
>
> db.users.find()
{ "_id" : ObjectId("55763d98db85929bb8addedf"), "username" : "lwb" }
{ "_id" : ObjectId("55764a694b24187a7a3c6693"), "username" : "mongodb master-slave" }

Problems I ran into and how I solved them
Problem 1:

I ran the master-slave experiment in two sessions. Initially I configured the master on port 10000 and the slave on port 10001. Then the machine's memory usage spiked above 90%, so I rebooted it. That is where the problem appeared: after the restart I started the master on port 27000 and the slave on port 27001, which produced the following error: terminating mongod after 30 seconds

2015-06-08T18:11:37.981-0700 I NETWORK  [initandlisten] waiting for connections on port 27001
2015-06-08T18:11:38.975-0700 I REPL     [replslave] repl: --source localhost:27000 != localhost:10000 from local.sources collection
2015-06-08T18:11:38.976-0700 I REPL     [replslave] repl: for instructions on changing this slave's source, see: 2015-06-08T18:11:38.976-0700 I REPL     [replslave] http://dochub.mongodb.org/core/masterslave
2015-06-08T18:11:38.976-0700 I REPL     [replslave] repl: terminating mongod after 30 seconds
2015-06-08T18:12:08.976-0700 I CONTROL  [replslave] dbexit:  rc: 3

Solution:
Anyone who reads the log carefully will notice this line:

2015-06-08T18:11:38.975-0700 I REPL     [replslave] repl: --source localhost:27000 != localhost:10000 from local.sources collection

When we first started the slave we specified the master's host and port, and that pair was persisted in the local.sources collection. Changing the master back to port 10000 fixes the problem.

Problem 2

The master and slave both start, and connecting to the slave succeeds, but running show dbs on the slave fails with:

QUERY    Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }

Solution:

Run rs.slaveOk() on the slave that reports the error:

> rs.slaveOk()
> show dbs
local  0.078GB
test   0.078GB
> use test
switched to db test
> show collections
system.indexes
users
> db.users.find()
{ "_id" : ObjectId("55763d98db85929bb8addedf"), "username" : "lwb" }
{ "_id" : ObjectId("55764a694b24187a7a3c6693"), "username" : "mongodb master-slave" }

What exactly does the slaveOk method do? From the manual:
rs.slaveOk()

Provides a shorthand for the following operation:

db.getMongo().setSlaveOk()
This allows the current connection to allow read operations to run on secondary members. See the readPref() method for more fine-grained control over read preference in the mongo shell.

Master-Slave security
The MongoDB docs explain master-slave security clearly. It does not work like plain mongod authentication: in addition to --auth you also need --keyFile.

First we generate the keyfile. According to the official docs, its content can be anything, as long as every machine in the cluster holds an identical copy. On Linux we generate it with:

openssl rand -base64 741 > /usr/local/mongodb/mongo-keyfile

Once generated, the keyfile can be passed to mongod at startup.

First, start the master:

root@ubuntu:/usr/local/mongodb# mongod --master --dbpath ~/mongoData/master/ --port 10000 --auth --keyFile /usr/local/mongodb/mongo-keyfile

You may hit some permission problems at this point. I am on Ubuntu, where I would otherwise have to sudo constantly, which is tedious, so it is worth giving the current user root privileges.

Run vi /etc/passwd.
My username is lwb, so I changed the lwb line to
lwb:x:0:0:Ubuntu12.04,,,:/home/lwb:/bin/bash
The original line was (just change both 1000s to 0): lwb:x:1000:1000:Ubuntu12.04,,,:/home/lwb:/bin/bash
After saving, log out and back in, and the current user has root privileges.

Back to the topic: after generating mongo-keyfile and starting mongod with the keyFile parameter, you may hit another problem:

root@ubuntu:~# mongod --master --dbpath ~/mongoData/master/ --port 10000 --auth --keyFile /usr/local/mongodb/mongo-keyfile
2015-06-08T21:34:43.864-0700 I ACCESS   permissions on /usr/local/mongodb/mongo-keyfile are too open

This error means the permissions on mongo-keyfile are too open; tighten them:

root@ubuntu:/usr/local/mongodb# chmod 400 mongo-keyfile
root@ubuntu:/usr/local/mongodb# ll
total 84
drwxr-xr-x  4 root root  4096 Jun  8 21:34 ./
drwxr-xr-x 11 root root  4096 Jun  8 16:49 ../
-rw-r--r--  1 root root 34520 Jun  6 07:24 GNU-AGPL-3.0
-rw-r--r--  1 root root  1359 Jun  6 07:24 README
-rw-r--r--  1 root root 22660 Jun  6 07:24 THIRD-PARTY-NOTICES
drwxr-xr-x  2 root root  4096 Jun  6 07:24 bin/
drwxr-xr-x  3 root root  4096 Jun  7 13:02 data/
-r--------  1 root root  1004 Jun  8 21:34 mongo-keyfile

Restart mongod and it runs normally.

Next, start the slave:

mongod --slave --dbpath ~/mongoData/slave/ --port 10001 --source localhost:10000 --auth --keyFile /usr/local/mongodb/mongo-keyfile

Everything proceeds smoothly. Operating on the master's databases and collections with the user we created works fine, but the same user against the slave fails with the familiar message:

QUERY    Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }

This error was covered above, and the fix is the same: run rs.slaveOk(), after which everything works normally.

As you can see, starting mongod with a keyfile is no different from a normal startup; the only difference is the extra --keyFile <keyfile> startup parameter.

For details on creating users, see MongoDB Study Notes Part 2: MongoDB Security.

Sincerely!
References

MongoDB: The Definitive Guide

The official MongoDB manual






7, MongoDB Study Notes Part 7: Replica Set Core Concepts


A replica set is a group of mongod processes that provides data redundancy and high availability.

Replica set members

Replica Set Primary

The primary is the only member of a replica set that accepts write operations.

Replica Set Secondary Members

Secondary members replicate the primary’s data set and accept read operations. If the set has no primary, a secondary can become primary.

Priority 0 Replica Set Members

Priority 0 members are secondaries that cannot become the primary.

Hidden Replica Set Members

Hidden members are secondaries that are invisible to applications. These members support dedicated workloads, such as reporting or backup.

Replica Set Arbiter

An arbiter does not maintain a copy of the data set but participates in elections.

Looking at the member types above, there are really only two kinds, primary and secondary; secondaries are simply subdivided by purpose.

Most deployments, however, will keep three members that store data: A primary and two secondary members.

In other words, a typical deployment has one primary and two secondaries.

Since MongoDB 3.0.0, a replica set can have up to 50 members, of which only 7 may vote:

Changed in version 3.0.0: A replica set can have up to 50 members but only 7 voting members

If you need more than 50 nodes you would have to fall back to master-slave mode, which, however, cannot recover from failures automatically (it cannot elect a new primary the way a replica set does).

Primary node

The primary is the only member of the replica set that accepts writes. MongoDB applies each write on the primary and records the operation in the primary's oplog; the secondaries replicate this oplog and apply its operations to their own data sets (much like replaying Redis's AOF log).

In the figure above, a three-member replica set's primary accepts all writes, while the secondaries copy the oplog from the primary and apply its operations to their data sets.

All replica set members can accept read requests, but by default an application directs its reads to the primary; this is configurable.

A replica set has at most one primary. Once that primary becomes unavailable, the set elects a secondary to become the new primary.

Secondary nodes

As mentioned above, secondaries maintain copies of the primary's data. A replica set can have one or more secondaries.

Although clients cannot write to a secondary, they can read from one.

A secondary can also become the primary: when the primary becomes unavailable, the replica set elects a new one (whether this works like ZooKeeper's master election, I don't know yet).

A secondary can be configured for different purposes:

  • Priority 0 Replica Set Members
  • Hidden Replica Set Members.
  • Delayed Replica Set Members.

Priority 0 Replica Set Members

Prevents the secondary from ever becoming primary; useful for keeping a node permanently in a read-only secondary role, or for cold backups.

If the machines in a replica set have uneven specs, configure the weaker machines as priority 0 secondaries so that only the high-performance machines can become primary. If the node exists purely for backup, also consider making it hidden.

Hidden Replica Set Members

A node that applications cannot access at all; useful for backups.

A hidden member must be a priority 0 member so that it can never become primary (a primary that clients cannot see would be dangerous and very confusing).

Although a hidden node is invisible to clients and cannot become primary, it still votes when the primary goes down.

Delayed Replica Set Members

A delayed member keeps a lagging mirror of the data set, which makes it possible to recover from catastrophic mistakes such as accidentally dropping a database or collection.

For example, if the current time is 9:50 and the delayed member is configured with a one-hour delay, its data reflects the state as of 8:50.

A delayed member must be a priority 0 member, so that it cannot become primary, and it should also be hidden, so that applications cannot read from it.
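A hidden, delayed member is set up through the replica set configuration document in the mongo shell. A sketch, assuming the delayed node sits at index 2 of the members array and a one-hour delay:

```javascript
// Run against the primary; member index 2 and the delay are assumptions.
cfg = rs.conf()
cfg.members[2].priority = 0      // may never become primary
cfg.members[2].hidden = true     // invisible to applications
cfg.members[2].slaveDelay = 3600 // stay one hour behind the primary
rs.reconfig(cfg)
```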

Arbiter

An arbiter holds no copy of the data and cannot become primary; it only participates in primary elections, casting exactly one vote.

If you have an even number of data-bearing machines, add an arbiter (this is really the only time you need one); an arbiter needs very few resources, so any spare machine will do.

IMPORTANT

Do not run an arbiter on systems that also host the primary or the secondary members of the replica set.

That is, do not run an arbiter on the same machine as a primary or secondary member.

Sincerely!
References

http://docs.mongodb.org/manual/core/replication/

MongoDB: The Definitive Guide








8, MongoDB Study Notes Part 8: Replica Sets in Practice



Environment

  • Ubuntu 12.04
  • MongoDB 3.0.3
  • Three machines: 192.168.236.131, 192.168.236.133, 192.168.236.134

If you are unsure how to install MongoDB, see my earlier study notes.

Step 1:

Run the following on each of the three machines (all of them):

root@ubuntu:/usr/local/mongodb#    mongod --dbpath /usr/local/mongodb/data --replSet rs0

Note that the --replSet parameter specifies the replica set's name; every replica set has a unique name.

After startup you will see output like this:

2015-06-09T17:54:20.845-0700 I JOURNAL  [initandlisten] journal dir=/usr/local/mongodb/data/journal
2015-06-09T17:54:20.846-0700 I JOURNAL  [initandlisten] recover : no journal files present, no recovery needed
2015-06-09T17:54:20.925-0700 I JOURNAL  [durability] Durability thread started
2015-06-09T17:54:20.926-0700 I JOURNAL  [journal writer] Journal writer thread started
2015-06-09T17:54:20.931-0700 I CONTROL  [initandlisten] MongoDB starting : pid=2539 port=27017 dbpath=/usr/local/mongodb/data/ 64-bit host=ubuntu
2015-06-09T17:54:20.931-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:54:20.931-0700 I CONTROL  [initandlisten]
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten]
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten]
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten] db version v3.0.3
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] git version: b40106b36eecd1b4407eb1ad1af6bc60593c6105
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1 14 Mar 2012
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] build info: Linux ip-10-216-207-166 3.2.0-36-virtual #57-Ubuntu SMP Tue Jan 8 22:04:49 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] allocator: tcmalloc
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] options: { replication: { replSet: "rs0" }, storage: { dbPath: "/usr/local/mongodb/data/" } }
2015-06-09T17:54:20.954-0700 I NETWORK  [initandlisten] waiting for connections on port 27017
2015-06-09T17:54:20.973-0700 W NETWORK  [ReplicationExecutor] Failed to connect to 192.168.236.134:27017, reason: errno:111 Connection refused
2015-06-09T17:54:20.974-0700 W NETWORK  [ReplicationExecutor] Failed to connect to 192.168.236.131:27017, reason: errno:111 Connection refused
2015-06-09T17:54:20.975-0700 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 3, members: [ { _id: 1, host: "192.168.236.133:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "192.168.236.134:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 3, host: "192.168.236.131:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2015-06-09T17:54:20.975-0700 I REPL     [ReplicationExecutor] This node is 192.168.236.133:27017 in the config
2015-06-09T17:54:20.975-0700 I REPL     [ReplicationExecutor] transition to STARTUP2
2015-06-09T17:54:20.975-0700 I REPL     [ReplicationExecutor] Starting replication applier threads
2015-06-09T17:54:20.977-0700 I REPL     [ReplicationExecutor] transition to RECOVERING

Once all three machines are up, use the mongo client to log in to one of the mongod servers; here I log in to 192.168.236.131:

root@ubuntu:~# mongo

After logging in, switch to the admin database so we can configure the replica set. The configuration looks like this:

> use admin
switched to db admin
> config = {_id:"rs0",members:[
... {_id:0,host:"192.168.236.131:27017"},
... {_id:1,host:"192.168.236.133:27017"},
... {_id:2,host:"192.168.236.134:27017"}]}
{
        "_id" : "rs0",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.236.131:27017"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.236.133:27017"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.236.134:27017"
                }
        ]
}
> rs.initiate(config);
{ "ok" : 1 }

First define the config document, then initialize it with rs.initiate(config). Once these two steps complete, the replica set's configuration is initialized; the rs0 set defines three hosts. (Note: the _id in the config document must match the value passed to the --replSet parameter when starting mongod.)

After a moment, MongoDB elects the primary and secondary nodes. In the mongo client we can check the replica set's state with rs.status():

rs0:OTHER>
rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:10:06.941Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.236.131:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 468,
                        "optime" : Timestamp(1433894773, 1),
                        "optimeDate" : ISODate("2015-06-10T00:06:13Z"),
                        "electionTime" : Timestamp(1433894777, 1),
                        "electionDate" : ISODate("2015-06-10T00:06:17Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 233,
                        "optime" : Timestamp(1433894773, 1),
                        "optimeDate" : ISODate("2015-06-10T00:06:13Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:10:06.278Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:10:06.245Z"),
                        "pingMs" : 1,
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 233,
                        "optime" : Timestamp(1433894773, 1),
                        "optimeDate" : ISODate("2015-06-10T00:06:13Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:10:05.943Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:10:05.890Z"),
                        "pingMs" : 1,
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}

Here name identifies the host, health shows whether the host is healthy (0/1), and state/stateStr tell you whether the node is the primary, a secondary, or unreachable.

If this information displays correctly, the whole replica set cluster is up. Next, let's verify that it really replicates data automatically, recovers from failures, and elects a new primary.

The experiment goes like this:

  1. Insert data on the primary node (the 131 machine).
  2. Query the data on the two secondaries (133 and 134) to verify that it replicates correctly.

rs0:PRIMARY> use test
switched to db test
rs0:PRIMARY> show collections
rs0:PRIMARY> db.guids.insert({"name":"replica set","author":"webinglin"})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> exit
bye
root@ubuntu:~# mongo --host 192.168.236.134
MongoDB shell version: 3.0.3
connecting to: 192.168.236.134:27017/test
Server has startup warnings:
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> show dbs
2015-06-09T17:13:49.138-0700 E QUERY    Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
    at Error (<anonymous>)
    at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
    at shellHelper.show (src/mongo/shell/utils.js:630:33)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
rs0:SECONDARY> use test
switched to db test
rs0:SECONDARY> db.guids.find()
Error: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:SECONDARY> show collections()
2015-06-09T17:14:24.219-0700 E QUERY    Error: don't know how to show [collections()]
    at Error (<anonymous>)
    at shellHelper.show (src/mongo/shell/utils.js:733:11)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/utils.js:733
rs0:SECONDARY> show collections
guids
system.indexes
rs0:SECONDARY> exit
bye
root@ubuntu:~# mongo --host 192.168.236.133
MongoDB shell version: 3.0.3
connecting to: 192.168.236.133:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
local  1.078GB
test   0.078GB
rs0:SECONDARY> use test
switched to db test
rs0:SECONDARY> show collections
guids
system.indexes
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:SECONDARY> exit
bye

This verifies that the cluster deployment succeeded and that data replicates correctly. Next we verify another scenario: if the primary (131) terminates unexpectedly, will the two remaining secondaries automatically elect a new primary? For this experiment, stop the mongod service on 131, then connect to 133 or 134 and inspect the cluster state with rs.status().

Use ps -e | grep mongod to check whether the mongod service is running, then kill the process with killall mongod or kill -15 <pid>:

root@ubuntu:~# ps -e | grep mongod
 3279 pts/0    00:00:19 mongod
root@ubuntu:~# killall mongod
root@ubuntu:~# mongo --host 192.168.236.133
MongoDB shell version: 3.0.3
connecting to: 192.168.236.133:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:22:40.283Z"),
        "myState" : 2,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.236.131:27017",
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : Timestamp(0, 0),
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:22:39.642Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:18:22.292Z"),
                        "pingMs" : 3,
                        "lastHeartbeatMessage" : "Failed attempt to connect to 192.168.236.131:27017; couldn't connect to server 192.168.236.131:27017 (192.168.236.131), connection attempt failed",
                        "configVersion" : -1
                },
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1169,
                        "optime" : Timestamp(1433895342, 1),
                        "optimeDate" : ISODate("2015-06-10T00:15:42Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 986,
                        "optime" : Timestamp(1433895342, 1),
                        "optimeDate" : ISODate("2015-06-10T00:15:42Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:22:38.952Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:22:38.951Z"),
                        "pingMs" : 6,
                        "electionTime" : Timestamp(1433895503, 1),
                        "electionDate" : ISODate("2015-06-10T00:18:23Z"),
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
rs0:SECONDARY> exit
bye

The output above shows that after the original primary (131) was stopped, the replica set elected a new primary: the 134 machine.

To check that the newly elected primary works, let's verify replication once more: connect to the new primary on 134, delete some data, then confirm on 133 that the deletion replicated.

root@ubuntu:~# mongo --192.168.236.134
Error parsing command line: unknown option 192.168.236.134
try 'mongo --help' for more information
root@ubuntu:~# mongo --host 192.168.236.134
MongoDB shell version: 3.0.3
connecting to: 192.168.236.134:27017/test
Server has startup warnings:
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
rs0:PRIMARY> use test
switched to db test
rs0:PRIMARY> show collections
guids
system.indexes
rs0:PRIMARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("557781aed5ed7ed61c16abfd"), "name" : "mongodb" }
rs0:PRIMARY> db.guids.remove({name:"mongodb"})
WriteResult({ "nRemoved" : 1 })
rs0:PRIMARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:PRIMARY> exit
bye
root@ubuntu:~# mongo --host 192.168.236.133
MongoDB shell version: 3.0.3
connecting to: 192.168.236.133:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:SECONDARY> exit
bye

This shows that the newly elected primary also works correctly; the replica set cluster test is complete.
Dynamically adding and removing nodes

Before starting this experiment, restart the mongod on 131, then connect to 131 with the mongo client and verify that its data has caught up.

After logging in to 131, we find that the data has indeed synchronized, and 131 has become a secondary.

root@ubuntu:~# mongo
MongoDB shell version: 3.0.3
connecting to: test
Server has startup warnings:
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:25:02.631Z"),
        "myState" : 2,
        "syncingTo" : "192.168.236.133:27017",
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.236.131:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 14,
                        "optime" : Timestamp(1433895834, 1),
                        "optimeDate" : ISODate("2015-06-10T00:23:54Z"),
                        "syncingTo" : "192.168.236.133:27017",
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 13,
                        "optime" : Timestamp(1433895834, 1),
                        "optimeDate" : ISODate("2015-06-10T00:23:54Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:25:01.196Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:25:02.228Z"),
                        "pingMs" : 1,
                        "syncingTo" : "192.168.236.134:27017",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 13,
                        "optime" : Timestamp(1433895834, 1),
                        "optimeDate" : ISODate("2015-06-10T00:23:54Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:25:01.235Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:25:02.446Z"),
                        "pingMs" : 10,
                        "electionTime" : Timestamp(1433895503, 1),
                        "electionDate" : ISODate("2015-06-10T00:18:23Z"),
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
rs0:SECONDARY> exit
bye

Log in to the primary on 134 and remove a node from the replica set with rs.remove(); here we again remove 131. After removing it, we insert more data on the 134 primary:

rs0:PRIMARY> rs.remove("192.168.236.131:27017")
{ "ok" : 1 }
rs0:PRIMARY> rs.status
function () { return db._adminCommand("replSetGetStatus"); }
rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:32:15.795Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1562,
                        "optime" : Timestamp(1433896329, 1),
                        "optimeDate" : ISODate("2015-06-10T00:32:09Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:32:13.909Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:32:15.633Z"),
                        "pingMs" : 1,
                        "syncingTo" : "192.168.236.134:27017",
                        "configVersion" : 2
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1729,
                        "optime" : Timestamp(1433896329, 1),
                        "optimeDate" : ISODate("2015-06-10T00:32:09Z"),
                        "electionTime" : Timestamp(1433895503, 1),
                        "electionDate" : ISODate("2015-06-10T00:18:23Z"),
                        "configVersion" : 2,
                        "self" : true
                }
        ],
        "ok" : 1
}
rs0:PRIMARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:PRIMARY> db.guids.insert({"name":"remove one node dync"})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("557785bcbb56172c8e069341"), "name" : "remove one node dync" }
rs0:PRIMARY> exit
bye

After removing node 131 we inserted new data on the primary. Without stopping 131's mongod, connect to it with mongo and inspect its data:

root@ubuntu:~# mongo --host 192.168.236.131
MongoDB shell version: 3.0.3
connecting to: 192.168.236.131:27017/test
Server has startup warnings:
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten]
> db.guids.find()
Error: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
> db.slaveOk()
2015-06-09T17:33:40.243-0700 E QUERY    TypeError: Property 'slaveOk' of object test is not a function
    at (shell):1:4
> rs.slaveOk()
> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
> exit
bye

The result shows that the document {name:"remove one node dync"} newly inserted on 134 was not replicated to 131 (which had been removed from the replica set).

To make the result more conclusive, check whether 133 did receive the data:

root@ubuntu:~# mongo --host 192.168.236.133
MongoDB shell version: 3.0.3
connecting to: 192.168.236.133:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("557785bcbb56172c8e069341"), "name" : "remove one node dync" }
rs0:SECONDARY> exit
bye

The output shows that 133 replicated the document {"name":"remove one node dync"} inserted on the 134 primary, which proves that dynamically removing a node from the replica set works. So how do we dynamically add a node?

The principle is the same; the method is simply rs.add("192.168.236.131:27017"):

root@ubuntu:~# mongo --host 192.168.236.134
MongoDB shell version: 3.0.3
connecting to: 192.168.236.134:27017/test
Server has startup warnings:
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
rs0:PRIMARY> rs.add("192.168.236.131:27017");
{ "ok" : 1 }
rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:34:45.974Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1712,
                        "optime" : Timestamp(1433896482, 1),
                        "optimeDate" : ISODate("2015-06-10T00:34:42Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:34:44.207Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:34:45.901Z"),
                        "pingMs" : 2,
                        "syncingTo" : "192.168.236.134:27017",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1879,
                        "optime" : Timestamp(1433896482, 1),
                        "optimeDate" : ISODate("2015-06-10T00:34:42Z"),
                        "electionTime" : Timestamp(1433895503, 1),
                        "electionDate" : ISODate("2015-06-10T00:18:23Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 3,
                        "name" : "192.168.236.131:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1,
                        "optime" : Timestamp(1433896329, 1),
                        "optimeDate" : ISODate("2015-06-10T00:32:09Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:34:44.217Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:34:44.234Z"),
                        "pingMs" : 1,
                        "syncingTo" : "192.168.236.134:27017",
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}
rs0:PRIMARY> exit
bye

In the output of rs.status() we can see that node 131 has successfully joined the replica set again. In theory it should now sync the data that was inserted on the primary (134) while it was removed: nothing was synced after the removal, but rejoining the replica set should trigger a sync. Here is the result:

root@ubuntu:~# mongo
MongoDB shell version: 3.0.3
connecting to: test
Server has startup warnings:
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("557785bcbb56172c8e069341"), "name" : "remove one node dync" }
rs0:SECONDARY> exit
bye

The result shows that dynamic addition also works correctly: after node 131 is dynamically added back to the replica set, its data is synced successfully.

Note

rs.add("host:port") and rs.remove("host:port") must be executed on the Primary node.

The add method can also take a document, which lets you set additional options for the new Secondary member, such as priority: 0, or priority: 0, hidden: true, or priority: 0, hidden: true, arbiterOnly: true. The full member configuration document looks like this:

{
  _id: <int>,
  host: <string>,
  arbiterOnly: <boolean>,
  buildIndexes: <boolean>,
  hidden: <boolean>,
  priority: <number>,
  tags: <document>,
  slaveDelay: <int>,
  votes: <number>
}
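As a sketch of the document form (the _id, host, and option values here are illustrative assumptions, not from the experiment above), adding a hidden member that can never become primary might look like:

```javascript
rs.add({
  _id: 3,
  host: "192.168.236.131:27017",
  priority: 0,   // priority 0: never eligible to become primary
  hidden: true,  // hidden: invisible to client applications
  votes: 0       // does not vote in elections
})
```

Note that a hidden member must also have priority 0, which is why the two options are set together here.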

To add authentication to a replica set, refer to the security section of the master-slave replication chapter: likewise, generate a keyfile with openssl, then pass it via the keyFile option when starting mongod.
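As a minimal sketch of that keyfile approach (the /tmp/mongo-keyfile path and the mongod options in the comment are assumptions for illustration):

```shell
# Generate a random keyfile (contents must be 6-1024 base64 characters)
openssl rand -base64 756 > /tmp/mongo-keyfile
# The keyfile must not be group- or world-readable
chmod 600 /tmp/mongo-keyfile
ls -l /tmp/mongo-keyfile

# Every member of the replica set is then started with the same keyfile, e.g.:
# mongod --replSet rs0 --keyFile /tmp/mongo-keyfile --dbpath <dbpath> --fork --logpath <logfile>
```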
Sincerely!
References

The official MongoDB manual
MongoDB: The Definitive Guide







9, MongoDB Study Notes Part 9: Core Sharding Concepts


Components of a Sharded Cluster
Shards

A shard is a MongoDB instance that holds a subset of a collection’s data. Each shard is either a single mongod instance or a replica set. In production, all shards are replica sets.

Config Servers

Each config server is a mongod instance that holds metadata about the cluster. The metadata maps chunks to shards.

Routing Instances

Each router is a mongos instance that routes the reads and writes from applications to the shards. Applications do not access the shards directly.

Why Use Sharding

  • When local disk space is insufficient
  • When the request volume is large enough to exhaust memory
  • When a single mongod process can no longer keep up with the write load

Important

Deploying a sharded cluster takes considerable time and resources. If your system has already reached or exceeded its capacity, it will be difficult to deploy sharding at that point without affecting your running application.

So if you expect your database to need sharding in the near future, do not wait until the system exceeds its capacity before sharding.

Consider sharding requirements when you design your data model.

Architecture: Production vs. Test Environments

Production

  • Config servers: three config servers, each on a different machine, for safety. The three config servers are not in the form of a replica set; they can consist of three standalone mongod processes.

  • Shards: in production, each shard is a replica set. Use at least two shards.

  • Mongos instances: at least one mongos process.

Test or development

  • One config server (a single mongod process)
  • At least one shard (a shard can be a standalone mongod process or a replica set, i.e. a group of mongod processes)
  • One mongos instance (ideally one mongos per application container, e.g. deploy one mongos alongside each servlet container)



10, MongoDB Study Notes Part 10: Building a Sharded Cluster




Experiment environment:

config server

192.168.236.131:27000

mongos

192.168.236.131:28000

shards

192.168.236.131:29001

192.168.236.131:29002

192.168.236.131:29003

Step 1: Create the directories needed for the sharding experiment

root@ubuntu:~# mkdir -p ~/mongoData/shard/s1
root@ubuntu:~# mkdir -p ~/mongoData/shard/s2
root@ubuntu:~# mkdir -p ~/mongoData/shard/s3
root@ubuntu:~# mkdir -p ~/mongoData/shard/log
root@ubuntu:~# mkdir -p ~/mongoData/shard/config

Step 2: Start the config server

root@ubuntu:~# mongod --configsvr --dbpath ~/mongoData/shard/config/ --fork --logpath ~/mongoData/shard/log/configsvr.log --logappend --port 27000

Step 3: Start mongos

root@ubuntu:~# mongos --configdb 192.168.236.131:27000 --port 28000 --fork --logpath ~/mongoData/shard/log/mongs.log

Step 4: Start all the shard servers

root@ubuntu:~/mongoData# mongod --dbpath ~/mongoData/shard/s1/ --port 29001 --fork --logpath ~/mongoData/shard/log/s1.log --shardsvr --logappend    
root@ubuntu:~/mongoData# mongod --dbpath ~/mongoData/shard/s2/ --port 29002 --fork --logpath ~/mongoData/shard/log/s2.log --shardsvr --logappend
root@ubuntu:~/mongoData# mongod --dbpath ~/mongoData/shard/s3/ --port 29003 --fork --logpath ~/mongoData/shard/log/s3.log --shardsvr --logappend

Step 5: Add the shards through mongos and configure sharding

root@ubuntu:~/mongoData# mongo --port 28000
MongoDB shell version: 3.0.3
connecting to: 127.0.0.1:28000/test
... ...
mongos> use admin
switched to db admin
mongos> sh.addShard("192.168.236.131:29001")
{ "shardAdded" : "shard0000", "ok" : 1 }
mongos> sh.addShard("192.168.236.131:29002")
{ "shardAdded" : "shard0001", "ok" : 1 }
mongos> sh.addShard("192.168.236.131:29003")
{ "shardAdded" : "shard0002", "ok" : 1 }
mongos>

Use the following two commands to configure which database and collection to shard, and the corresponding shard key:

sh.enableSharding("<database>")

sh.shardCollection("<database>.<collection>", shard-key-pattern)

mongos> sh.enableSharding("test")
{ "ok" : 1 }
mongos> sh.shardCollection("test.users",{"username":1,"_id":1})
{ "collectionsharded" : "test.users", "ok" : 1 }
mongos>

What if we want to add a replica set as a shard?

addShard

The hostname and port of the mongod instance to be added as a shard. To add a replica set as a shard, specify the name of the replica set and the hostname and port of a member of the replica set.

The passage above is quoted from the official docs. In other words, we only need to specify the name of the replica set plus the address of one of its members.

For example:

sh.addShard("replicaSet0/<one host of the replica set>:<port>");

Verifying the sharded cluster deployment

First insert 100 documents through mongos, then check the collection with db.users.stats(). The collection has been split across the three shards, although the first shard holds most of the data and the other two hold relatively little. (This is related to the shard key setup; I have not studied the official docs on shard key selection in detail, so the shard key here was chosen rather casually.)

mongos> for(var i=0; i<100; i++) {
... db.users.insert({"username":"" + i,age:i*2 , addr:"ardr"+i})
... }
WriteResult({ "nInserted" : 1 })
mongos> db.users.stats()
{
        "sharded" : true,
        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
        "userFlags" : 1,
        "capped" : false,
        "ns" : "test.users",
        "count" : 100,
        "numExtents" : 4,
        "size" : 11200,
        "storageSize" : 57344,
        "totalIndexSize" : 49056,
        "indexSizes" : {
                "_id_" : 24528,
                "username_1__id_1" : 24528
        },
        "avgObjSize" : 112,
        "nindexes" : 2,
        "nchunks" : 3,
        "shards" : {
                "shard0000" : {
                        "ns" : "test.users",
                        "count" : 88,
                        "size" : 9856,
                        "avgObjSize" : 112,
                        "numExtents" : 2,
                        "storageSize" : 40960,
                        "lastExtentSize" : 32768,
                        "paddingFactor" : 1,
                        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
                        "userFlags" : 1,
                        "capped" : false,
                        "nindexes" : 2,
                        "totalIndexSize" : 16352,
                        "indexSizes" : {
                                "_id_" : 8176,
                                "username_1__id_1" : 8176
                        },
                        "ok" : 1
                },
                "shard0001" : {
                        "ns" : "test.users",
                        "count" : 11,
                        "size" : 1232,
                        "avgObjSize" : 112,
                        "numExtents" : 1,
                        "storageSize" : 8192,
                        "lastExtentSize" : 8192,
                        "paddingFactor" : 1,
                        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
                        "userFlags" : 1,
                        "capped" : false,
                        "nindexes" : 2,
                        "totalIndexSize" : 16352,
                        "indexSizes" : {
                                "_id_" : 8176,
                                "username_1__id_1" : 8176
                        },
                        "ok" : 1
                },
                "shard0002" : {
                        "ns" : "test.users",
                        "count" : 1,
                        "size" : 112,
                        "avgObjSize" : 112,
                        "numExtents" : 1,
                        "storageSize" : 8192,
                        "lastExtentSize" : 8192,
                        "paddingFactor" : 1,
                        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
                        "userFlags" : 1,
                        "capped" : false,
                        "nindexes" : 2,
                        "totalIndexSize" : 16352,
                        "indexSizes" : {
                                "_id_" : 8176,
                                "username_1__id_1" : 8176
                        },
                        "ok" : 1
                }
        },
        "ok" : 1
}

If you now connect a mongo client directly to ports 29001, 29002 and 29003, only part of the collection's data is visible on each shard, which also confirms that the experiment succeeded:

root@ubuntu:~/mongoData# mongo --port 29001
MongoDB shell version: 3.0.3
connecting to: 127.0.0.1:29001/test
... ...
> db.users.find().count()
88
> exit
bye
root@ubuntu:~/mongoData# mongo --port 29002
MongoDB shell version: 3.0.3
connecting to: 127.0.0.1:29002/test
... ...
> db.users.find().count()
11
> exit
bye
root@ubuntu:~/mongoData# mongo --port 29003
MongoDB shell version: 3.0.3
connecting to: 127.0.0.1:29003/test
... ...
> db.users.find().count()
1
> exit
bye
root@ubuntu:~/mongoData#

At this point the sharding experiment has basically been verified.

If a collection is not sharded, its data is stored on the primary shard.

mongos> db.sites.insert({"site":"webinglin.github.io","author":"linwenbin"})
WriteResult({ "nInserted" : 1 })
... ...
mongos> db.sites.insert({"site":"webinglin.github.io","author":"linwenbin"})
WriteResult({ "nInserted" : 1 })
mongos> db.sites.stats()
{
        "sharded" : false,
        "primary" : "shard0000",
        "ns" : "test.sites",
        "count" : 7,
        "size" : 784,
        "avgObjSize" : 112,
        "numExtents" : 1,
        "storageSize" : 8192,
        "lastExtentSize" : 8192,
        "paddingFactor" : 1,
        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
        "userFlags" : 1,
        "capped" : false,
        "nindexes" : 1,
        "totalIndexSize" : 8176,
        "indexSizes" : {
                "_id_" : 8176
        },
        "ok" : 1
}
mongos>

This article covered the basic steps of building a sharded cluster. For more on how to choose a shard key (Shard Key), see here.
