MongoDB Incremental Backup Options

MongoDB has no built-in incremental backup, so this post covers a few approaches I have found, plus one I implemented myself in code.
My environment: a replica set. For a sharded cluster, the shard replica sets and the config server replica set have to be backed up separately.
Because these approaches rely on the oplog, they do not work on a standalone server.

一、 Delay server + Oplog replay (delayed node + oplog replay)

The mongooplog-based incremental backup method described in MongoDB: The Definitive Guide, 2nd Edition, p. 363 (excerpted at the end of this post) also belongs to this category.


二、 The mongosync synchronization tool. Its features include:

    1.    full sync
    2.    incremental sync
    3.    full plus incremental, real-time sync
    4.    batch sync of selected databases/collections
    Advantage:
        fast switchover (if incremental sync is used, the backup machine must be a replica set, because the oplog is required)
    (Article: http://www.tuicool.com/articles/iei6n2)
    Download link: http://pan.baidu.com/s/1qXrpbDa  password: yybn


三、 The open-source Wordnik MongoDB Admin Tools for incremental backup.

    For usage details, see the README on GitHub: https://github.com/reverb/wordnik-oss/blob/master/modules/mongo-admin-utils/README.md
    A pre-built package is available here:
        Link: http://pan.baidu.com/s/1bogjYVH  password: i82e
    In my tests the incremental backup speed was underwhelming, though that may have been down to the test environment.


Here is my own Python implementation of oplog-based incremental backup and restore:

1. Delay server + Oplog replay (delayed node + oplog replay)
The advantage of oplog replay is that, at restore time, you can recover the data to a specific point in time.

The steps are as follows:
    1. Add a delayed node to the replica set, e.g. with a 10-hour delay (a minimal reconfiguration sketch follows these steps);
    2. On a separate MongoDB instance (either a replica set or a standalone), use the script below to sync the local.oplog.rs collection into a backup database. Do not pick the local database as the target, because collections in local cannot have their structure or indexes modified; I created a new database named oplog_bak instead.
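
For step 1, here is a minimal sketch of reconfiguring an existing member into a hidden, 10-hour delayed node via pymongo. The host name primary_host and the member index 2 are placeholders for your topology; the delay field is slaveDelay on the MongoDB 3.x-era servers this post targets (renamed secondaryDelaySecs in 5.0+), and the command must be run against the primary:

	import pymongo

	conn = pymongo.MongoClient('mongodb://primary_host:27017')
	cfg = conn.admin.command('replSetGetConfig')['config']
	member = cfg['members'][2]          # the member to turn into a delayed node
	member['priority'] = 0              # a delayed node must never become primary
	member['hidden'] = True             # hide it from client reads
	member['slaveDelay'] = 10 * 3600    # 10-hour delay
	cfg['version'] += 1                 # a reconfig requires a bumped config version
	conn.admin.command('replSetReconfig', cfg)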
    The backup/replay code (Python) is as follows:

	'''
	Author: tang
	Date: 2016/03/07
	Incrementally back up oplog.rs from one MongoDB server to a database on another.
	Uses the legacy pymongo collection API (insert/update/remove; requires pymongo < 4).
	'''

	import time
	import json
	import pymongo
	import datetime
	import os
	import sys
	import bson


	def init_back_database(p_db_name):
		'''create and index the backup collection on the destination, then seed it with a zero timestamp'''
		dest_conn.get_database(p_db_name).create_collection('oplog.rs', autoIndexId=False)
		dest_conn.get_database(p_db_name).get_collection('oplog.rs').create_index([("ts", pymongo.ASCENDING)], unique=True)
		dest_conn.get_database(p_db_name).get_collection('oplog.rs').create_index([("ns", pymongo.ASCENDING)])
		dest_conn.get_database(p_db_name).get_collection('oplog.rs').insert({"ts": bson.timestamp.Timestamp(0, 1)})

	def inc_oplog(p_db_name):
		"""copy new oplog entries from the source server to the backup db"""
		#find the timestamp of the last entry already copied
		row_count = dest_conn.get_database(p_db_name).get_collection('oplog.rs').count()
		if row_count == 0:
			init_back_database(p_db_name)
			#first run: start from 24 hours ago
			last_timestamp = bson.timestamp.Timestamp(int(time.time()) - 24*3600, 1)
		else:	
			cur_oplog_rs = dest_conn.get_database(p_db_name).get_collection('oplog.rs').find({},{"ts":1}).sort([("ts",-1)]).limit(1)
			for row in cur_oplog_rs:
				last_timestamp = row["ts"]

		
		#copy oplog entries newer than the last saved timestamp
		#(insert/update/delete ops only, in batches of at most 100000)
		cur_oplog = source_conn.get_database('local').get_collection('oplog.rs').find({"ts":{"$gt":last_timestamp},"op":{"$in":['i','d','u']}}).limit(100000)
		for row in cur_oplog:
			#stringify the 'o'/'o2' sub-documents before saving, to bypass the
			#restriction that stored field names must not start with '$'
			row_data = row
			if 'o' in row_data:
				row_data['o'] = str(row_data['o'])
			if 'o2' in row_data:
				row_data['o2'] = str(row_data['o2'])
			dest_conn.get_database(p_db_name).get_collection('oplog.rs').insert(row_data)
			
		#end copy oplog	
	#end inc_oplog	
	def replay_oplog(p_db_name, p_start_ts, p_end_ts=None):
		'''replay saved oplog rows against the target server, optionally only up to a point in time'''
		ts_filter = {"$gt": p_start_ts}
		if p_end_ts is not None:
			ts_filter["$lte"] = p_end_ts
		#read the saved oplog rows in timestamp order
		cur_oplog = source_conn.get_database(p_db_name).get_collection('oplog.rs').find({"ts": ts_filter}).sort([("ts", pymongo.ASCENDING)])
		for row in cur_oplog:
			db_name = row["ns"].split('.')[0]
			tbl_name = row["ns"].split('.')[1]
			#multi flag: 'b' marks a multi-document update/delete
			if 'b' in row:
				multi_flg = row['b']
			else:
				multi_flg = False
			#insert: 'o' holds the full document (saved as a string, so eval it back)
			if row['op'] == 'i':
				document_dist = eval(row['o'])
				dest_conn.get_database(db_name).get_collection(tbl_name).insert(document_dist)
			#update: 'o2' holds the match criteria, 'o' the modifications
			if row['op'] == 'u':
				document_dist = eval(row['o'])
				if 'o2' in row:
					document2_dist = eval(row['o2'])
					dest_conn.get_database(db_name).get_collection(tbl_name).update(document2_dist, document_dist, multi=multi_flg)
				else:
					dest_conn.get_database(db_name).get_collection(tbl_name).update({}, document_dist, multi=multi_flg)
			#delete: 'o' holds the match criteria
			if row['op'] == 'd':
				document_dist = eval(row['o'])
				dest_conn.get_database(db_name).get_collection(tbl_name).remove(document_dist, multi=multi_flg)
					
	#end def replay_oplog				
						
	if __name__ == '__main__':
		#usage: <b|r> <source_host> <dest_host> <backup_db_name> <start_ts_seconds>
		btype = sys.argv[1]
		source_host = sys.argv[2]
		desc_host = sys.argv[3]
		desc_dbname = sys.argv[4]
		last_ts = sys.argv[5]
		source_conn = pymongo.MongoClient('mongodb://%s' % source_host)
		dest_conn = pymongo.MongoClient('mongodb://%s' % desc_host)
		if btype in ['b', 'back', 'bak']:
			inc_oplog(desc_dbname)
		if btype in ['r', 'rest', 'restore']:
			#the start time arrives as seconds since the epoch; convert to a BSON timestamp
			replay_oplog(desc_dbname, bson.timestamp.Timestamp(int(last_ts), 0))
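
For reference, invoking the script might look like this (the file name mongo_oplog_backup.py and the hosts are hypothetical; the last argument, used only for restore, is the replay start time in seconds since the epoch):

	# incremental backup: copy new oplog entries into the oplog_bak database
	# (run periodically, e.g. from cron)
	python mongo_oplog_backup.py b 192.168.1.10:27017 192.168.1.20:27017 oplog_bak 0

	# restore: replay saved entries newer than the given timestamp
	python mongo_oplog_backup.py r 192.168.1.20:27017 192.168.1.30:27017 oplog_bak 1457308800

Note that for a restore, the "source" host is the backup server holding oplog_bak, and the destination is the server being restored to.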



The relevant excerpt from MongoDB: The Definitive Guide, 2nd Edition, p. 363:


Creating Incremental Backups with mongooplog

All of the backup methods outlined must make a full copy of the data, even if very little
of it has changed since the last backup. If you have data that is very large relative to the
amount that is being written, you may want to look into incremental backups.
Instead of making full copies of the data every day or week, you take one backup and
then use the oplog to back up all operations that have happened since the backup. This
technique is much more complex than the ones described above, so prefer them unless
incremental backups are absolutely necessary.
This technique requires two machines, A and B, running mongod. A is your main
machine (probably a secondary) and B is your backup machine:
1. Make a note of the latest optime in A’s oplog:
> op = db.oplog.rs.find().sort({$natural: -1}).limit(1).next();
> start = op['ts']['t']/1000
Keep this somewhere safe—you’ll need it for a later step.
2. Take a backup of your data, using one of the techniques above to get a point-in-time backup. Restore this backup to the data directory on B.
3. Periodically add any operations that have happened on A to B’s copy of the data.
There is a special tool that comes with MongoDB distributions that makes this easy:
mongooplog (pronounced mon-goop-log) which copies data from the oplog of one
server and applies it to the data set on another. On B, run:
$ mongooplog --from A --seconds 1234567
--seconds should be passed the number of seconds between the start variable
calculated in step 1 and the current time, then add a bit (better to replay operations
a second time than miss them).
This keeps your backup relatively up-to-date with your data. This technique is sort of like keeping a secondary up-to-date manually, so you may just want to use a slave-delayed secondary instead.
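
Step 1 and the --seconds arithmetic translate to Python roughly as follows (a sketch, assuming pymongo and direct access to A's oplog; the 10-second padding is arbitrary):

	import time
	import pymongo

	conn = pymongo.MongoClient('mongodb://A:27017')
	# step 1: grab the latest optime in A's oplog
	op = conn.local['oplog.rs'].find().sort([('$natural', -1)]).limit(1).next()
	start = op['ts'].time   # seconds since the epoch

	# --seconds: time elapsed since start, plus a little padding, because it is
	# better to replay an operation a second time than to miss it
	seconds = int(time.time() - start) + 10
	print('mongooplog --from A --seconds %d' % seconds)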

