You can back up all databases, a single database, or a single collection.
1. Commands (here I back up a single database)
Backup and restore options differ somewhat between MongoDB versions, so check the help output of the tools you actually have.
Backup (mongoexport and mongodump)
./mongoexport -d userPortrait -c Portraits -o /bak.json (succeeded)
./mongodump -h 10.101.200.120 --port 27017 -d userPortrait -o /tmp/wolf.bak (succeeded)
If authentication is enabled, add a username and password:
mongodump -d myTest -u superuser -p 123456 --authenticationDatabase admin -o /backup/mongodb/myTest_bak_201507021653.bak
Restore (mongoimport and mongorestore)
mongoimport --host=127.0.0.1 --port=27101 --drop -d userPortrait -c Portraits --file /tmp/bak.json (succeeded)
mongoimport -h 127.0.0.1:27101 --drop -d HRM -c Portraits --file /tmp/bak.json
mongoimport -h 127.0.0.1:27101 --drop -d HRM --file /tmp/bak.json
mongorestore --host 10.249.100.249:27101 --drop -d userPortrait /tmp/userPortrait (succeeded)
mongorestore --host 10.249.100.249:27101 --drop -d HRM /tmp/userPortrait (succeeded; note the target database name differs from the one that was dumped)
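The commands above form a three-step flow: dump on the source, copy, restore on the target. A minimal sketch, with the hosts, ports, and paths from this note as placeholders; the commands are echoed as a dry run, so remove the `echo` to actually execute them:

```shell
# Dry-run sketch of the dump -> copy -> restore flow.
# Hosts, ports and paths are placeholders taken from this note.
SRC_HOST=10.101.200.120   # source mongod, port 27017
DST_HOST=10.249.100.249   # target mongod, port 27101
DB=userPortrait
DUMP=/tmp/wolf.bak

# 1. Dump one database on the source.
DUMP_CMD="mongodump -h $SRC_HOST --port 27017 -d $DB -o $DUMP"
# 2. Copy the per-database dump directory to the target host.
COPY_CMD="scp -r $SRC_HOST:$DUMP/$DB /tmp/$DB"
# 3. Restore on the target; --drop replaces existing collections.
RESTORE_CMD="mongorestore --host $DST_HOST:27101 --drop -d $DB /tmp/$DB"

echo "$DUMP_CMD"
echo "$COPY_CMD"
echo "$RESTORE_CMD"
```

Because `--drop` replaces any existing collection of the same name on the target, double-check which database the restore points at before running it for real.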
==========================================================================================
2. Actual session
On the source server:
[root@tshopapicenter bin]# ./mongo -port 27017
MongoDB shell version: 3.2.1
connecting to: 127.0.0.1:27017/test
Server has startup warnings:
2016-10-14T08:48:22.407+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2016-10-14T08:48:22.407+0800 I CONTROL [initandlisten]
2016-10-14T08:48:22.408+0800 I CONTROL [initandlisten]
2016-10-14T08:48:22.408+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-10-14T08:48:22.408+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-10-14T08:48:22.408+0800 I CONTROL [initandlisten]
2016-10-14T08:48:22.409+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-10-14T08:48:22.409+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-10-14T08:48:22.409+0800 I CONTROL [initandlisten]
> show dbs
BID 0.108GB
BID_DB 0.000GB
DEVLAND 0.054GB
LAND 0.000GB
TBID 0.049GB
TLAND 0.003GB
devplat 0.101GB
handbtest 0.000GB
local 0.000GB
mongoDBFile 0.001GB
userPortrait 0.811GB
wlb 0.000GB
[root@tshopapicenter mongodb]# cd bin
[root@tshopapicenter bin]# ./mongodump -h 10.101.200.120 --port 27017 -d userPortrait -o /tmp/wolf.bak
2016-11-19T12:00:00.712+0800 writing userPortrait.Portraits to
2016-11-19T12:00:05.874+0800 [##############..........] userPortrait.Portraits 19705/32858 (60.0%)
2016-11-19T12:00:06.712+0800 [###############.........] userPortrait.Portraits 21737/32858 (66.2%)
2016-11-19T12:00:09.712+0800 [################........] userPortrait.Portraits 23112/32858 (70.3%)
2016-11-19T12:00:12.712+0800 [#################.......] userPortrait.Portraits 24048/32858 (73.2%)
2016-11-19T12:00:15.712+0800 [##################......] userPortrait.Portraits 25047/32858 (76.2%)
2016-11-19T12:00:18.712+0800 [##################......] userPortrait.Portraits 26006/32858 (79.1%)
2016-11-19T12:00:21.712+0800 [###################.....] userPortrait.Portraits 27314/32858 (83.1%)
2016-11-19T12:00:24.712+0800 [####################....] userPortrait.Portraits 28094/32858 (85.5%)
2016-11-19T12:00:27.712+0800 [#####################...] userPortrait.Portraits 29973/32858 (91.2%)
2016-11-19T12:00:31.451+0800 [######################..] userPortrait.Portraits 31301/32858 (95.3%)
2016-11-19T12:00:33.712+0800 [#######################.] userPortrait.Portraits 32707/32858 (99.5%)
2016-11-19T12:00:33.746+0800 [########################] userPortrait.Portraits 32858/32858 (100.0%)
2016-11-19T12:00:33.746+0800 done dumping userPortrait.Portraits (32858 documents)
Compress the dump, then pull it over to the target host with scp:
[root@localhost tmp]# scp 10.101.200.120:/tmp/userPortrait.tar .
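The pack/unpack step itself is elided above. Here is a sketch of it, using a scratch directory to stand in for the mongodump output tree (`/tmp/wolf.bak/userPortrait` in this session); in real use the tarball travels between hosts via scp as on the line above:

```shell
# Scratch directory imitating the mongodump output tree.
work=$(mktemp -d)
mkdir -p "$work/wolf.bak/userPortrait"
: > "$work/wolf.bak/userPortrait/Portraits.bson"
: > "$work/wolf.bak/userPortrait/Portraits.metadata.json"

# On the source host: pack the per-database dump directory.
tar -C "$work/wolf.bak" -cf "$work/userPortrait.tar" userPortrait

# Copy it (e.g. scp 10.101.200.120:/tmp/userPortrait.tar .), then on the
# target host unpack it where mongorestore will read it:
mkdir -p "$work/target"
tar -C "$work/target" -xf "$work/userPortrait.tar"
ls "$work/target/userPortrait"
```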
Import into the target database:
[root@localhost tmp]# mongorestore --host 10.249.100.249:27101 -drop -d userPortrait /tmp/userPortrait
2016-11-19T12:49:37.259+0800 building a list of collections to restore from /tmp/userPortrait dir
2016-11-19T12:49:37.259+0800 reading metadata for userPortrait.Portraits from /tmp/userPortrait/Portraits.metadata.json
2016-11-19T12:49:37.368+0800 restoring userPortrait.Portraits from /tmp/userPortrait/Portraits.bson
2016-11-19T12:49:40.259+0800 [#####...................] userPortrait.Portraits 170.8 MB/765.2 MB (22.3%)
2016-11-19T12:49:43.259+0800 [#########...............] userPortrait.Portraits 293.4 MB/765.2 MB (38.3%)
2016-11-19T12:49:46.259+0800 [###########.............] userPortrait.Portraits 372.3 MB/765.2 MB (48.7%)
2016-11-19T12:49:49.259+0800 [#############...........] userPortrait.Portraits 427.6 MB/765.2 MB (55.9%)
2016-11-19T12:49:52.259+0800 [#################.......] userPortrait.Portraits 562.7 MB/765.2 MB (73.5%)
2016-11-19T12:49:55.259+0800 [######################..] userPortrait.Portraits 714.6 MB/765.2 MB (93.4%)
2016-11-19T12:49:56.576+0800 [########################] userPortrait.Portraits 765.2 MB/765.2 MB (100.0%)
2016-11-19T12:49:56.576+0800 restoring indexes for collection userPortrait.Portraits from metadata
2016-11-19T12:49:56.606+0800 finished restoring userPortrait.Portraits (32858 documents)
2016-11-19T12:49:56.606+0800 done
[root@localhost db]# mongo -port 27101
MongoDB shell version: 3.2.6
connecting to: 127.0.0.1:27101/test
Server has startup warnings:
2016-11-19T10:34:43.745+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2016-11-19T10:34:43.746+0800 I CONTROL [initandlisten]
2016-11-19T10:34:43.746+0800 I CONTROL [initandlisten]
2016-11-19T10:34:43.746+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-11-19T10:34:43.746+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-11-19T10:34:43.746+0800 I CONTROL [initandlisten]
2016-11-19T10:34:43.746+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-11-19T10:34:43.746+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-11-19T10:34:43.746+0800 I CONTROL [initandlisten]
> show dbs
HRM 0.000GB
local 0.000GB
> show dbs
HRM 0.000GB
local 0.000GB
userPortrait 0.811GB
Restoring under a different database name:
[root@localhost tmp]# mongorestore --host 10.249.100.249:27101 -drop -d HRM /tmp/userPortrait
2016-11-19T13:57:07.943+0800 building a list of collections to restore from /tmp/userPortrait dir
2016-11-19T13:57:07.990+0800 reading metadata for HRM.Portraits from /tmp/userPortrait/Portraits.metadata.json
2016-11-19T13:57:07.990+0800 restoring HRM.Portraits from /tmp/userPortrait/Portraits.bson
2016-11-19T13:57:10.944+0800 [###.....................] HRM.Portraits 96.9 MB/765.2 MB (12.7%)
2016-11-19T13:57:13.944+0800 [#####...................] HRM.Portraits 183.3 MB/765.2 MB (23.9%)
2016-11-19T13:57:16.944+0800 [#######.................] HRM.Portraits 223.8 MB/765.2 MB (29.2%)
2016-11-19T13:57:19.944+0800 [########................] HRM.Portraits 261.8 MB/765.2 MB (34.2%)
2016-11-19T13:57:22.944+0800 [##########..............] HRM.Portraits 319.7 MB/765.2 MB (41.8%)
2016-11-19T13:57:25.944+0800 [#############...........] HRM.Portraits 416.2 MB/765.2 MB (54.4%)
2016-11-19T13:57:28.944+0800 [#############...........] HRM.Portraits 419.0 MB/765.2 MB (54.7%)
2016-11-19T13:57:31.944+0800 [##############..........] HRM.Portraits 448.2 MB/765.2 MB (58.6%)
2016-11-19T13:57:34.944+0800 [##############..........] HRM.Portraits 463.6 MB/765.2 MB (60.6%)
2016-11-19T13:57:37.944+0800 [################........] HRM.Portraits 513.2 MB/765.2 MB (67.1%)
2016-11-19T13:57:40.944+0800 [##################......] HRM.Portraits 592.3 MB/765.2 MB (77.4%)
2016-11-19T13:57:43.944+0800 [####################....] HRM.Portraits 666.9 MB/765.2 MB (87.1%)
2016-11-19T13:57:46.944+0800 [######################..] HRM.Portraits 702.2 MB/765.2 MB (91.8%)
2016-11-19T13:57:49.944+0800 [######################..] HRM.Portraits 720.1 MB/765.2 MB (94.1%)
2016-11-19T13:57:52.944+0800 [######################..] HRM.Portraits 727.3 MB/765.2 MB (95.0%)
2016-11-19T13:57:54.669+0800 [########################] HRM.Portraits 765.2 MB/765.2 MB (100.0%)
2016-11-19T13:57:54.669+0800 restoring indexes for collection HRM.Portraits from metadata
2016-11-19T13:57:54.691+0800 finished restoring HRM.Portraits (32858 documents)
2016-11-19T13:57:54.691+0800 done
[root@localhost tmp]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
26G 16G 8.9G 65% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/sda1 477M 52M 400M 12% /boot
/dev/mapper/datavg-lv_data
20G 2.9G 16G 16% /data
=================================================================================================================================
3. Help output
[root@localhost tmp]# mongoexport --help
Usage:
mongoexport <options>
Export data from MongoDB in CSV or JSON format.
See http://docs.mongodb.org/manual/reference/program/mongoexport/ for more information.
general options:
--help print usage
--version print the tool version and exit
verbosity options:
-v, --verbose=<level> more detailed log output (include multiple times for more verbosity, e.g. -vvvvv,
or specify a numeric value, e.g. --verbose=N)
--quiet hide all log output
connection options:
-h, --host=<hostname> mongodb host to connect to (setname/host1,host2 for replica sets)
--port=<port> server port (can also use --host hostname:port)
authentication options:
-u, --username=<username> username for authentication
-p, --password=<password> password for authentication
--authenticationDatabase=<database-name> database that holds the user's credentials
--authenticationMechanism=<mechanism> authentication mechanism to use
namespace options:
-d, --db=<database-name> database to use
-c, --collection=<collection-name> collection to use
output options:
-f, --fields=<field>[,<field>]* comma separated list of field names (required for exporting CSV) e.g. -f
"name,age"
--fieldFile=<filename> file with field names - 1 per line
--type=<type> the output format, either json or csv (defaults to 'json')
-o, --out=<filename> output file; if not specified, stdout is used
--jsonArray output to a JSON array rather than one object per line
--pretty output JSON formatted to be human-readable
querying options:
-q, --query=<json> query filter, as a JSON string, e.g., '{x:{$gt:1}}'
--queryFile=<filename> path to a file containing a query filter (JSON)
-k, --slaveOk allow secondary reads if available (default true)
--readPreference=<string>|<json> specify either a preference name or a preference json object
--forceTableScan force a table scan (do not use $snapshot)
--skip=<count> number of documents to skip
--limit=<count> limit the number of documents to export
--sort=<json> sort order, as a JSON string, e.g. '{x:1}'
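A couple of hypothetical mongoexport invocations combining the options above; the database, collection, field names, and query filter are placeholders, and the commands are echoed as a dry run:

```shell
# Export only documents matching a query filter, pretty-printed JSON.
# "age" and the threshold are made-up example values.
Q_CMD='mongoexport -d userPortrait -c Portraits -q "{age:{\$gt:30}}" --pretty -o /tmp/old.json'
# Export selected fields as CSV (-f is required for CSV output).
CSV_CMD='mongoexport -d userPortrait -c Portraits --type=csv -f name,age -o /tmp/portraits.csv'
echo "$Q_CMD"
echo "$CSV_CMD"
```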
[root@localhost tmp]# mongoimport --help
Usage:
mongoimport <options> <file>
Import CSV, TSV or JSON data into MongoDB. If no file is provided, mongoimport reads from stdin.
See http://docs.mongodb.org/manual/reference/program/mongoimport/ for more information.
general options:
--help print usage
--version print the tool version and exit
verbosity options:
-v, --verbose=<level> more detailed log output (include multiple times for more verbosity, e.g. -vvvvv,
or specify a numeric value, e.g. --verbose=N)
--quiet hide all log output
connection options:
-h, --host=<hostname> mongodb host to connect to (setname/host1,host2 for replica sets)
--port=<port> server port (can also use --host hostname:port)
authentication options:
-u, --username=<username> username for authentication
-p, --password=<password> password for authentication
--authenticationDatabase=<database-name> database that holds the user's credentials
--authenticationMechanism=<mechanism> authentication mechanism to use
namespace options:
-d, --db=<database-name> database to use
-c, --collection=<collection-name> collection to use
input options:
-f, --fields=<field>[,<field>]* comma separated list of field names, e.g. -f name,age
--fieldFile=<filename> file with field names - 1 per line
--file=<filename> file to import from; if not specified, stdin is used
--headerline use first line in input source as the field list (CSV and TSV only)
--jsonArray treat input source as a JSON array
--type=<type> input format to import: json, csv, or tsv (defaults to 'json')
ingest options:
--drop drop collection before inserting documents
--ignoreBlanks ignore fields with empty values in CSV and TSV
--maintainInsertionOrder insert documents in the order of their appearance in the input source
-j, --numInsertionWorkers=<number> number of insert operations to run concurrently (defaults to 1)
--stopOnError stop importing at first insert/upsert error
--upsert insert or update objects that already exist
--upsertFields=<field>[,<field>]* comma-separated fields for the query part of the upsert
--writeConcern=<write-concern-specifier> write concern options e.g. --writeConcern majority, --writeConcern '{w: 3,
wtimeout: 500, fsync: true, j: true}' (defaults to 'majority')
--bypassDocumentValidation bypass document validation
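Hypothetical CSV imports using the ingest options above; the file path and the upsert key field `name` are placeholders, echoed as a dry run:

```shell
# Replace the collection wholesale from a CSV with a header line.
IMPORT_CMD='mongoimport -d userPortrait -c Portraits --type=csv --headerline --drop --file /tmp/portraits.csv'
# Or merge instead: update documents whose "name" matches, insert the rest.
UPSERT_CMD='mongoimport -d userPortrait -c Portraits --type=csv --headerline --upsert --upsertFields=name --file /tmp/portraits.csv'
echo "$IMPORT_CMD"
echo "$UPSERT_CMD"
```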
[root@localhost tmp]# mongodump --help
Usage:
mongodump <options>
Export the content of a running server into .bson files.
Specify a database with -d and a collection with -c to only dump that database or collection.
See http://docs.mongodb.org/manual/reference/program/mongodump/ for more information.
general options:
--help print usage
--version print the tool version and exit
verbosity options:
-v, --verbose=<level> more detailed log output (include multiple times for more verbosity,
e.g. -vvvvv, or specify a numeric value, e.g. --verbose=N)
--quiet hide all log output
connection options:
-h, --host=<hostname> mongodb host to connect to (setname/host1,host2 for replica sets)
--port=<port> server port (can also use --host hostname:port)
authentication options:
-u, --username=<username> username for authentication
-p, --password=<password> password for authentication
--authenticationDatabase=<database-name> database that holds the user's credentials
--authenticationMechanism=<mechanism> authentication mechanism to use
namespace options:
-d, --db=<database-name> database to use
-c, --collection=<collection-name> collection to use
query options:
-q, --query= query filter, as a JSON string, e.g., '{x:{$gt:1}}'
--queryFile= path to a file containing a query filter (JSON)
--readPreference=<string>|<json> specify either a preference name or a preference json object
--forceTableScan force a table scan
output options:
-o, --out=<directory-path> output directory, or '-' for stdout (defaults to 'dump')
--gzip compress archive or collection output with Gzip
--repair try to recover documents from damaged data files (not supported by all
storage engines)
--oplog use oplog for taking a point-in-time snapshot
--archive=<file-path> dump as an archive to the specified path. If flag is specified without
a value, archive is written to stdout
--dumpDbUsersAndRoles dump user and role definitions for the specified database
--excludeCollection=<collection-name> collection to exclude from the dump (may be specified multiple times to
exclude additional collections)
--excludeCollectionsWithPrefix=<collection-prefix> exclude all collections from the dump that have the given prefix (may
be specified multiple times to exclude additional prefixes)
-j, --numParallelCollections= number of collections to dump in parallel (4 by default)
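With the 3.2-era tools used in this note, `--gzip` and `--archive` can replace the manual tar-and-scp step entirely: dump to a single compressed file, or pipe an archive straight into a restore on another host. Hosts and paths are placeholders, echoed as a dry run:

```shell
# One compressed archive file instead of a dump directory tree.
ARCHIVE_CMD='mongodump -h 10.101.200.120 --port 27017 -d userPortrait --gzip --archive=/tmp/userPortrait.archive.gz'
# --archive with no value writes to stdout, so a dump can be piped over
# ssh directly into mongorestore on the target host.
PIPE_CMD='mongodump -d userPortrait --archive | ssh 10.249.100.249 mongorestore --port 27101 --drop --archive'
echo "$ARCHIVE_CMD"
echo "$PIPE_CMD"
```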
[root@localhost tmp]# mongorestore --help
Usage:
mongorestore <options> <directory or file to restore>
Restore backups generated with mongodump to a running server.
Specify a database with -d to restore a single database from the target directory,
or use -d and -c to restore a single collection from a single .bson file.
See http://docs.mongodb.org/manual/reference/program/mongorestore/ for more information.
general options:
--help print usage
--version print the tool version and exit
verbosity options:
-v, --verbose=<level> more detailed log output (include multiple times for more verbosity, e.g. -vvvvv,
or specify a numeric value, e.g. --verbose=N)
--quiet hide all log output
connection options:
-h, --host=<hostname> mongodb host to connect to (setname/host1,host2 for replica sets)
--port=<port> server port (can also use --host hostname:port)
authentication options:
-u, --username=<username> username for authentication
-p, --password=<password> password for authentication
--authenticationDatabase=<database-name> database that holds the user's credentials
--authenticationMechanism=<mechanism> authentication mechanism to use
namespace options:
-d, --db=<database-name> database to use
-c, --collection=<collection-name> collection to use
input options:
--objcheck validate all objects before inserting
--oplogReplay replay oplog for point-in-time restore
--oplogLimit=<seconds>[:ordinal] only include oplog entries before the provided Timestamp
--archive=<filename> restore dump from the specified archive file. If flag is specified without a
value, archive is read from stdin
--restoreDbUsersAndRoles restore user and role definitions for the given database
--dir=<directory-name> input directory, use '-' for stdin
--gzip decompress gzipped input
restore options:
--drop drop each collection before import
--writeConcern=<write-concern> write concern options e.g. --writeConcern majority, --writeConcern '{w: 3,
wtimeout: 500, fsync: true, j: true}' (defaults to 'majority')
--noIndexRestore don't restore indexes
--noOptionsRestore don't restore collection options
--keepIndexVersion don't update index version
--maintainInsertionOrder preserve order of documents during restoration
-j, --numParallelCollections= number of collections to restore in parallel (4 by default)
--numInsertionWorkersPerCollection= number of insert operations to run concurrently per collection (1 by default)
--stopOnError stop restoring if an error is encountered on insert (off by default)
--bypassDocumentValidation bypass document validation
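The matching restore for a dump taken with `mongodump --gzip --archive=<file>`; host and path are placeholders, echoed as a dry run:

```shell
# Restore from a gzipped archive file; --drop replaces existing
# collections of the same name first.
RESTORE_CMD='mongorestore --host 10.249.100.249:27101 --drop --gzip --archive=/tmp/userPortrait.archive.gz'
echo "$RESTORE_CMD"
```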