Differences

| | mongodump/mongorestore | mongoexport/mongoimport |
| --- | --- | --- |
| Main use | Back up / restore small MongoDB databases | Export / import small-scale, partial, or test-stage data |
| Data format | BSON | JSON / CSV |
| Scope | Can dump all collections of a single db in one run | Cannot export all collections of a single db in one run |
| Caveats | Not suitable for backing up / restoring large MongoDB databases | Do not use for production backups |
Backup - mongodump
Main options
NOTE:
Failed: bad option: --out not allowed when --archive is specified
-o, --out cannot be combined with --archive
-h, --host: MongoDB host to connect to
--port: MongoDB port
-u, --username: username for authentication
-p, --password: password for authentication
-d, --db: database to dump
-c, --collection: collection to dump
-q, --query: query filter as a JSON string, e.g. {x:{$gt:1}}
-o, --out: output directory (defaults to dump; use - to write to stdout)
--archive: dump to a single archive file
--gzip: compress the output with gzip
--excludeCollection: collection to exclude from the dump
--excludeCollectionsWithPrefix: exclude collections whose names match the given prefix
Command examples
Dump the whole database to an archive file
mongodump -d z-blog --archive=mongodump-z-blog-20200204
Dump the whole database to a directory (-o and --out are equivalent)
mongodump -d z-blog -o mongodump-dir-z-blog-20200204
Dump the whole database, excluding one collection, to a gzipped archive (the actual backup data lives inside the archive)
mongodump -d z-blog --excludeCollection=page_view_record --archive=mongodump-z-blog-20200204.gz --gzip
Dump a single collection
mongodump -d z-blog -c search_stats --archive=mongodump-z-blog-search-stat.gz --gzip
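Because --archive with no value writes the dump to stdout, and mongorestore can read an archive from stdin, a dump can be piped straight into a restore without an intermediate file. A sketch, assuming the copy should land in a database named z-blog-copy:

```shell
# Dump z-blog as an archive to stdout and restore it as z-blog-copy on the
# same server; --nsFrom/--nsTo rewrite the namespaces in flight.
mongodump -d z-blog --archive | \
  mongorestore --archive --nsFrom='z-blog.*' --nsTo='z-blog-copy.*'
```

Both commands run against a live server, so this is only a sketch of the pattern, not something to paste blindly.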
Restore - mongorestore
Main options
NOTE:
Failed: stream or file does not appear to be a mongodump archive
The input file may be gzip-compressed; add the --gzip flag
--drop: drop existing data before restoring
--dir: directory containing the dump to restore from
--archive: archive file to restore from
--excludeCollection: collection to skip during restore
--excludeCollectionsWithPrefix: skip collections whose names match the given prefix
Command examples
Restore from a directory
mongorestore -d z-blog --dir=mongodump-dir
Restore from a gzip-compressed archive (--gzip is required for compressed archives)
mongorestore -d z-blog --archive=z-blog-dump-20200204.gz --gzip
Drop existing data while restoring
mongorestore -d z-blog --drop --archive=z-blog-dump-20200204
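When a restore uses --drop, the --dryRun flag (listed in the detailed options) previews what would happen before anything is deleted; the archive name below reuses the one from the examples above:

```shell
# Preview only: nothing is dropped or written; -v makes the summary readable
mongorestore -d z-blog --drop --dryRun -v --archive=z-blog-dump-20200204.gz --gzip
# Rerun without --dryRun once the summary looks right
mongorestore -d z-blog --drop --archive=z-blog-dump-20200204.gz --gzip
```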
Export - mongoexport
Main options
NOTE:
Failed: CSV mode requires a field list
A field list must be specified when exporting CSV
-f, --fields: fields to export (required when exporting CSV)
--type: output format, json or csv (defaults to json)
-o, --out: output file name
-q, --query: query filter as a JSON string, e.g. {x:{$gt:1}}
--skip: number of documents to skip
--limit: maximum number of documents to export
--sort: sort order as a JSON string, e.g. {x:1}
Command examples
Export JSON, all fields
mongoexport -d z-blog -c post --type=json -o mongoexport-post-20200204.json
Export JSON, selected fields only
mongoexport -d z-blog -c post --type=json -f pv,likeCount,commentCount -o mongoexport-post-20200204.json
Export CSV without a header line, selected fields (JSON is recommended when exporting all fields)
mongoexport -d z-blog -c post --type=csv --noHeaderLine -f pv,likeCount,commentCount -o mongoexport-post-20200204.csv
Specify host, port, username, password, a query filter, and fields; export CSV without a header line
mongoexport -h 10.95.196.91 --port 20000 -u testuser -p testpwd -d z-blog -c taskinfo -q "{taskId:'mdQ02Ea3wlpmb8OGE1s'}" --type=csv --noHeaderLine -f taskNum -o mongoexport-20200312.csv
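--sort, --skip and --limit combine to export just a slice of a collection; a sketch (the output file name is illustrative):

```shell
# Export the 100 most-viewed posts as a single JSON array, selected fields only
mongoexport -d z-blog -c post --sort='{pv:-1}' --limit=100 \
  -f pv,likeCount,commentCount --jsonArray -o mongoexport-top100.json
```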
Import - mongoimport
Main options
--file: file to import from
--type: input format, json, csv, or tsv (defaults to json)
--drop: drop existing data before importing
Import JSON data
mongoimport -d z-blog -c post --type=json --file=mongoexport-post-20200204.json
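For CSV input, --headerline takes the field list from the file's first line, and --mode=upsert with --upsertFields updates existing documents instead of inserting duplicates. A sketch, assuming a hypothetical posts.csv whose header row includes a postId column that uniquely identifies each row:

```shell
# Re-import a CSV with a header row, matching existing documents on postId
mongoimport -d z-blog -c post --type=csv --headerline \
  --mode=upsert --upsertFields=postId --file=posts.csv
```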
Detailed options
Common options
general options:
--help print usage
--version print the tool version and exit
verbosity options:
-v, --verbose=<level> more detailed log output (include multiple times for more verbosity, e.g. -vvvvv, or specify a numeric value, e.g. --verbose=N)
--quiet hide all log output
connection options:
-h, --host= mongodb host to connect to (setname/host1,host2 for replica sets)
--port= server port (can also use --host hostname:port)
ssl options:
--ssl connect to a mongod or mongos that has ssl enabled
--sslCAFile= the .pem file containing the root certificate chain from the certificate authority
--sslPEMKeyFile= the .pem file containing the certificate and key
--sslPEMKeyPassword= the password to decrypt the sslPEMKeyFile, if necessary
--sslCRLFile= the .pem file containing the certificate revocation list
--sslAllowInvalidCertificates bypass the validation for server certificates
--sslAllowInvalidHostnames bypass the validation for server name
--sslFIPSMode use FIPS mode of the installed openssl library
authentication options:
-u, --username= username for authentication
-p, --password= password for authentication
--authenticationDatabase= database that holds the user's credentials
--authenticationMechanism= authentication mechanism to use
uri options:
--uri=mongodb-uri mongodb uri connection string
Backup - mongodump
Usage:
mongodump
Export the content of a running server into .bson files.
Specify a database with -d and a collection with -c to only dump that database or collection.
See http://docs.mongodb.org/manual/reference/program/mongodump/ for more information.
namespace options:
-d, --db= database to use
-c, --collection= collection to use
query options:
-q, --query= query filter, as a JSON string, e.g., '{x:{$gt:1}}'
--queryFile= path to a file containing a query filter (JSON)
--readPreference=<string>|<json> specify either a preference name or a preference json object
--forceTableScan force a table scan
output options:
-o, --out= output directory, or '-' for stdout (defaults to 'dump')
--gzip compress archive or collection output with Gzip
--repair try to recover documents from damaged data files (not supported by all storage engines)
--oplog use oplog for taking a point-in-time snapshot
--archive= dump as an archive to the specified path. If flag is specified without a value, archive is written to stdout
--dumpDbUsersAndRoles dump user and role definitions for the specified database
--excludeCollection= collection to exclude from the dump (may be specified multiple times to exclude additional collections)
--excludeCollectionsWithPrefix= exclude all collections from the dump that have the given prefix (may be specified multiple times to exclude additional prefixes)
-j, --numParallelCollections= number of collections to dump in parallel (4 by default) (default: 4)
--viewsAsCollections dump views as normal collections with their produced data, omitting standard collections
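On a replica set member, --oplog captures the oplog entries written while the dump runs, yielding a point-in-time snapshot; it only works for a full-instance dump (no -d or -c). A sketch with a hypothetical replica set name rs0:

```shell
# Point-in-time dump of the whole deployment, oplog included
mongodump --host rs0/10.95.196.91:27017 --oplog -o dump-20200204
```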
Restore - mongorestore
Usage:
mongorestore
Restore backups generated with mongodump to a running server.
Specify a database with -d to restore a single database from the target directory,
or use -d and -c to restore a single collection from a single .bson file.
See http://docs.mongodb.org/manual/reference/program/mongorestore/ for more information.
namespace options:
-d, --db= database to use when restoring from a BSON file
-c, --collection= collection to use when restoring from a BSON file
--excludeCollection= DEPRECATED; collection to skip over during restore (may be specified multiple times to exclude additional collections)
--excludeCollectionsWithPrefix= DEPRECATED; collections to skip over during restore that have the given prefix (may be specified multiple times to exclude additional prefixes)
--nsExclude= exclude matching namespaces
--nsInclude= include matching namespaces
--nsFrom= rename matching namespaces, must have matching nsTo
--nsTo= rename matched namespaces, must have matching nsFrom
input options:
--objcheck validate all objects before inserting
--oplogReplay replay oplog for point-in-time restore
--oplogLimit=<seconds>[:ordinal] only include oplog entries before the provided Timestamp
--oplogFile= oplog file to use for replay of oplog
--archive= restore dump from the specified archive file. If flag is specified without a value, archive is read from stdin
--restoreDbUsersAndRoles restore user and role definitions for the given database
--dir= input directory, use '-' for stdin
--gzip decompress gzipped input
restore options:
--drop drop each collection before import
--dryRun view summary without importing anything. recommended with verbosity
--writeConcern= write concern options e.g. --writeConcern majority, --writeConcern '{w: 3, wtimeout: 500, fsync: true, j: true}'
--noIndexRestore don't restore indexes
--noOptionsRestore don't restore collection options
--keepIndexVersion don't update index version
--maintainInsertionOrder preserve order of documents during restoration
-j, --numParallelCollections= number of collections to restore in parallel (4 by default) (default: 4)
--numInsertionWorkersPerCollection= number of insert operations to run concurrently per collection (1 by default) (default: 1)
--stopOnError stop restoring if an error is encountered on insert (off by default)
--bypassDocumentValidation bypass document validation
--preserveUUID preserve original collection UUIDs (off by default, requires drop)
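A dump taken with mongodump --oplog is restored with --oplogReplay, which replays the captured oplog after loading the data; the directory name is illustrative:

```shell
# Restore a full dump and replay the oplog captured during the dump
mongorestore --oplogReplay --dir=dump-20200204
```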
Export - mongoexport
Usage:
mongoexport
Export data from MongoDB in CSV or JSON format.
See http://docs.mongodb.org/manual/reference/program/mongoexport/ for more information.
namespace options:
-d, --db= database to use
-c, --collection= collection to use
output options:
-f, --fields=<field>[,<field>]* comma separated list of field names (required for exporting CSV) e.g. -f "name,age"
--fieldFile= file with field names - 1 per line
--type= the output format, either json or csv (defaults to 'json') (default: json)
-o, --out= output file; if not specified, stdout is used
--jsonArray output to a JSON array rather than one object per line
--pretty output JSON formatted to be human-readable
--noHeaderLine export CSV data without a list of field names at the first line
querying options:
-q, --query= query filter, as a JSON string, e.g., '{x:{$gt:1}}'
--queryFile= path to a file containing a query filter (JSON)
-k, --slaveOk allow secondary reads if available (default true) (default: false)
--readPreference=<string>|<json> specify either a preference name or a preference json object
--forceTableScan force a table scan (do not use $snapshot)
--skip= number of documents to skip
--limit= limit the number of documents to export
--sort= sort order, as a JSON string, e.g. '{x:1}'
--assertExists if specified, export fails if the collection does not exist (default: false)
Import - mongoimport
Usage:
mongoimport
Import CSV, TSV or JSON data into MongoDB. If no file is provided, mongoimport reads from stdin.
See http://docs.mongodb.org/manual/reference/program/mongoimport/ for more information.
namespace options:
-d, --db= database to use
-c, --collection= collection to use
input options:
-f, --fields=<field>[,<field>]* comma separated list of fields, e.g. -f name,age
--fieldFile= file with field names - 1 per line
--file= file to import from; if not specified, stdin is used
--headerline use first line in input source as the field list (CSV and TSV only)
--jsonArray treat input source as a JSON array
--parseGrace= controls behavior when type coercion fails - one of: autoCast, skipField, skipRow, stop (defaults to 'stop') (default: stop)
--type= input format to import: json, csv, or tsv (defaults to 'json') (default: json)
--columnsHaveTypes indicates that the field list (from --fields, --fieldFile, or --headerline) specifies types; they must be in the form of
'<colName>.<type>(<arg>)'. The type can be one of: auto, binary, bool, date, date_go, date_ms, date_oracle, double, int32, int64, string.
For each of the date types, the argument is a datetime layout string. For the binary type, the argument can be one of: base32, base64, hex.
All other types take an empty argument. Only valid for CSV and TSV imports. e.g. zipcode.string(), thumbnail.binary(base64)
ingest options:
--drop drop collection before inserting documents
--ignoreBlanks ignore fields with empty values in CSV and TSV
--maintainInsertionOrder insert documents in the order of their appearance in the input source
-j, --numInsertionWorkers= number of insert operations to run concurrently (defaults to 1) (default: 1)
--stopOnError stop importing at first insert/upsert error
--mode=[insert|upsert|merge] insert: insert only. upsert: insert or replace existing documents. merge: insert or modify existing documents. defaults to insert
--upsertFields=<field>[,<field>]* comma-separated fields for the query part when --mode is set to upsert or merge
--writeConcern= write concern options e.g. --writeConcern majority, --writeConcern '{w: 3, wtimeout: 500, fsync: true, j: true}'
--bypassDocumentValidation bypass document validation
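--columnsHaveTypes lets the field list carry types, so CSV numbers are stored as numbers rather than strings; a sketch with hypothetical column names and input file:

```shell
# Typed CSV import: pv and likeCount become int32 instead of strings
mongoimport -d z-blog -c post --type=csv --columnsHaveTypes \
  -f 'title.string(),pv.int32(),likeCount.int32()' --file=posts.csv
```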
References