[Translation] Fast logical backup of a single table in MySQL

This article shows how to take and restore a logical backup of a single table with the MySQL Shell 8.0.22 utilities, and why that matters for data migration and table-level recovery. With the new table dump and load utilities, backing up and importing large tables is significantly faster than with traditional tools such as mysqldump.

Logical backups are very useful for data migration across cloud environments and for table-level recovery. MySQL Shell 8.0.22 introduces two new utilities, util.dumpTables() and util.exportTable(), for exporting individual tables from MySQL. Before 8.0.22, it was not possible to back up a single table with MySQL Shell.

With the new table dump utility util.dumpTables(), we can dump specific tables from a schema. It works in the same way as the instance dump utility util.dumpInstance() and the schema dump utility util.dumpSchemas() introduced in 8.0.21, just with a different selection scope. The exported files can then be imported into a MySQL Database Service DB System (MySQL DB System for short) or a MySQL Server instance with MySQL Shell's dump loading utility util.loadDump().
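For orientation, the three dump utilities take broadly the same options and differ only in scope, and util.loadDump() can consume the output of any of them. A minimal sketch (the /backup/... paths are hypothetical):

MySQL localhost JS > util.dumpInstance("/backup/full")                       // whole instance
MySQL localhost JS > util.dumpSchemas(["test1"], "/backup/schemas")          // selected schemas
MySQL localhost JS > util.dumpTables("test1", ["sbtest1"], "/backup/table")  // selected tables
MySQL localhost JS > util.loadDump("/backup/table")                          // load any of the above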

MySQL Shell's new table export utility util.exportTable() exports a table to a relational data file on the local server.
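As a quick sketch of that export path (file locations are hypothetical), util.exportTable() writes a single data file, which can be loaded back with the separate parallel table import utility util.importTable() (not covered further in this article, but part of MySQL Shell since 8.0.17):

MySQL localhost JS > util.exportTable("test1.sbtest1", "/root/sbtest1.tsv")
MySQL localhost JS > util.importTable("/root/sbtest1.tsv", {schema: "test1", table: "sbtest1"})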

Dumping large tables with util.dumpTables() is faster, taking less time than mysqldump, mydumper, or mysqlpump.

Dumping with util.dumpTables()

We need the latest MySQL Shell, version 8.0.22.
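You can confirm which version is installed before starting:

[root@mydbopstest ~]# mysqlsh --version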

For this use case, I used sysbench to insert 100 million rows into a single table; after the insert, the data size is about 20 GB.

Loading 100 million rows with sysbench

[root@mydbopstest ~]# sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-port=3306 --mysql-user=root --mysql-password --mysql-socket=/data/mysql/mysql.sock --mysql-db=test1 --db-driver=mysql --tables=1 --table-size=100000000 prepare

sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Creating table 'sbtest1'...

Inserting 100000000 records into 'sbtest1'

Creating a secondary index on 'sbtest1'...

[root@mydbopstest ~]#

Step 1: Connect to the MySQL server with the Shell utility

In this case my database version is MySQL 5.7.30 (Percona Server). The Shell utility is compatible with any MySQL version.

[root@mydbopstest ~]# mysqlsh root@localhost --socket=/data/mysql/mysql.sock

Please provide the password for 'root@/data%2Fmysql%2Fmysql.sock': **********

MySQL Shell 8.0.22

Copyright (c) 2016, 2020, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

Creating a session to 'root@/data%2Fmysql%2Fmysql.sock'

Fetching schema names for autocompletion... Press ^C to stop.

Your MySQL connection id is 2660844

Server version: 5.7.30-33-log Percona Server (GPL), Release 33, Revision 6517692

No default schema selected; type \use to set one.

MySQL localhost JS >

Step 2: Initiate the single-table backup with the Shell utility

Make sure you connect to the Shell in JS mode. We use the default of 4 threads for the backup.
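If 4 threads are not enough for your hardware, the dump utility accepts a threads option; an illustrative invocation with 8 threads (not what is used below) would be:

MySQL localhost JS > util.dumpTables("test1", [ "sbtest1" ], "/root/dump_table", {threads: 8})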

MySQL localhost JS > util.dumpTables("test1", [ "sbtest1" ], "/root/dump_table");

Acquiring global read lock

Global read lock acquired

All transactions have been started

Locking instance for backup

NOTE: Backup lock is not supported in MySQL 5.7 and DDL changes will not be blocked. The dump may fail with an error or not be completely consistent if schema changes are made while dumping.

Global read lock has been released

Writing global DDL files

Writing DDL for table `test1`.`sbtest1`

Preparing data dump for table `test1`.`sbtest1`

Data dump for table `test1`.`sbtest1` will be chunked using column `id`

Running data dump using 4 threads.

NOTE: Progress information uses estimated values and may not be accurate.

Data dump for table `test1`.`sbtest1` will be written to 788 files

1 thds dumping - 101% (100.00M rows / ~98.57M rows), 278.86K rows/s, 55.22 MB/s uncompressed, 24.63 MB/s compressed

Duration: 00:06:55s

Schemas dumped: 1

Tables dumped: 1

Uncompressed data size: 19.79 GB

Compressed data size: 8.83 GB

Compression ratio: 2.2

Rows written: 100000000

Bytes written: 8.83 GB

Average uncompressed throughput: 47.59 MB/s

Average compressed throughput: 21.23 MB/s

It took about 7 minutes (6:55) to back up a single 20 GB table. The backup is stored as compressed TSV files.
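For reference, the dump directory contains JSON/SQL metadata files alongside the chunked data files; on this run a listing would look roughly like this (abbreviated):

[root@mydbopstest ~]# ls /root/dump_table
@.done.json  @.json  @.post.sql  @.sql  test1.json  test1.sql
test1@sbtest1.json  test1@sbtest1.sql  test1@sbtest1@0.tsv.zst ... test1@sbtest1@@787.tsv.zst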

Step 3: Load the single-table backup via the Shell utility

Now let us load the single-table backup back with the util.loadDump() utility. We use 4 threads to import the data.
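util.loadDump() also defaults to 4 threads; to raise the parallelism, you could pass a threads option (8 here is illustrative; the run below uses the default):

MySQL localhost JS > util.loadDump("/root/dump_table", {schema: "test1", threads: 8})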


MySQL localhost JS > util.loadDump("/root/dump_table",{schema:'test1'})

Loading DDL and Data from '/home/root/dump_table' using 4 threads.

Opening dump...

Target is MySQL 5.7.30-33-log. A dump was produced from MySQL 5.7.30-33-log

Checking for pre-existing objects...

Executing common preamble SQL

[Worker003] Executing DDL script for `test1`.`sbtest1`

[Worker001] test1@sbtest1@0.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker000] test1@sbtest1@1.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker002] test1@sbtest1@2.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker003] test1@sbtest1@3.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker001] test1@sbtest1@4.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker000] test1@sbtest1@5.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker003] test1@sbtest1@7.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker002] test1@sbtest1@6.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker001] test1@sbtest1@8.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker000] test1@sbtest1@9.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker001] test1@sbtest1@12.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker002] test1@sbtest1@11.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker003] test1@sbtest1@10.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

4 thds loading | 2% (399.22 MB / 19.79 GB), 2.60 MB/s, 1 / 1 tables done

[Worker000] test1@sbtest1@13.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker001] test1@sbtest1@15.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker000] test1@sbtest1@17.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker002] test1@sbtest1@14.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

...

[Worker000] test1@sbtest1@784.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker003] test1@sbtest1@785.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker001] test1@sbtest1@786.tsv.zst: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0

[Worker002] test1@sbtest1@@787.tsv.zst: Records: 127339 Deleted: 0 Skipped: 0 Warnings: 0

Executing common postamble SQL

788 chunks (100.00M rows, 19.79 GB) for 1 tables in 1 schemas were loaded in 30 min 20 sec (avg throughput 7.98 MB/s)

0 warnings were reported during the load.

MySQL localhost JS >

Loading the data took about 30 minutes. In MySQL 8, disabling the redo log could further improve the parallel load; still, this is already a huge improvement. Compared with other logical backup tools, MySQL Shell saves considerable time. I repeated this test with other common logical backup tools (mysqldump and mydumper). The results are below:
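For completeness, the redo log optimization mentioned above is available from MySQL 8.0.21 onward (so it does not apply to the 5.7 target used here), and would be wrapped around the load like this:

MySQL localhost SQL > ALTER INSTANCE DISABLE INNODB REDO LOG;
-- run util.loadDump() while the redo log is disabled, then re-enable it
MySQL localhost SQL > ALTER INSTANCE ENABLE INNODB REDO LOG;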

[Figure: backup and import time comparison of MySQL Shell vs mysqldump and mydumper]

MySQL Shell looks faster for both single-table backup and import. This can help database engineers rebuild partitioned tables, migrate across cloud environments, and even replace regular logical backups.
