By default, mysqldump writes a series of SQL DDL statements and INSERTs to standard out, which you can then pipe to another database server to recreate a given database.
The problem is that this is all serial. If you have to do this regularly (because you're sharing databases between different development environments, for example), it would be nice if it could be sped up, and with the current 5.1.x versions of MySQL it can.
MySQL recently added two new features that help with this: mysqldump --tab=path and mysqlimport --use-threads=N.
Dumping
Here's how you'd dump multiple databases without --tab (assuming your ~/.my.cnf tells mysqldump how to connect to your database server):
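Something like the following works; db1, db2, and dump.sql.gz are placeholder names here:

```bash
# Dump several databases into a single compressed SQL file.
# ~/.my.cnf supplies the connection credentials.
mysqldump --databases db1 db2 | gzip > dump.sql.gz
```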
There are a bunch of caveats to using the --tab option:
- The mysqldump command must be run on the database server, because mysqldump will invoke SELECT INTO OUTFILE on the server.
- The FILE privilege must be granted to the mysqldump user.
- The target directory needs to be writable by the mysqld effective user ID.
- If you want to dump multiple databases, you'll need to create a separate directory per database so that same-named tables don't clobber each other (see the sketch below).
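With those caveats in mind, a --tab dump of two databases might look roughly like this; db1, db2, and /tmp/dump are placeholder names:

```bash
# Run this on the database server itself, as a user with the FILE privilege.
for db in db1 db2 ; do
  mkdir -p /tmp/dump/$db
  chmod 777 /tmp/dump/$db        # world-writable so mysqld can write here
  mysqldump --tab=/tmp/dump/$db $db
done
```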
In the interests of simplicity, I used globally-read-write permissions here. If untrusted users had access to these directories, these permissions would be unacceptable, of course.
Loading
Loading from a .sql.gz file is trivial — just pipe it to mysql and call it a day:
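Assuming a dump.sql.gz like the one above, something along these lines does it:

```bash
# Recreate everything from the compressed SQL dump.
gzip -dc dump.sql.gz | mysql
```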
Loading from tab files is a bit more work. Note that this NUKES AND PAVES your database with the contents of the dump, including the mysql users, their passwords, and their permissions! You'll also want to play with --use-threads, depending on how many processors your machine has.
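One way to reload the per-database --tab directories from above is sketched here; the loop, paths, and thread count are placeholders, and --local (which reads the files from the client side) assumes local_infile is enabled:

```bash
# For each database: drop and recreate it, replay the per-table schema
# (.sql) files, then bulk-load the data (.txt) files in parallel.
for db in db1 db2 ; do
  mysql -e "DROP DATABASE IF EXISTS $db; CREATE DATABASE $db"
  cat /tmp/dump/$db/*.sql | mysql $db
  mysqlimport --local --use-threads=4 $db /tmp/dump/$db/*.txt
done
```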
Reposted from: https://matthew.mceachen.us/blog/faster-mysql-dumps-and-loads-with-tab-and-use-threads-1047.html