MySQL 5.7: 8.5.5 Bulk Data Loading for InnoDB Tables

These performance tips supplement the general guidelines for fast inserts in Section 8.2.4.1, “Optimizing INSERT Statements”.


When importing data into InnoDB, turn off autocommit mode, because it performs a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements:

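A minimal sketch of the pattern; the commented line stands in for your own INSERT or LOAD DATA statements:

    SET autocommit=0;
    -- ... SQL import statements ...
    COMMIT;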

 The mysqldump option --opt creates dump files that are fast to import into an InnoDB table, even without wrapping them with the SET autocommit and COMMIT statements.


If you have UNIQUE constraints on secondary keys, you can speed up table imports by temporarily turning off the uniqueness checks during the import session:

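Sketched the same way, wrapping the import statements:

    SET unique_checks=0;
    -- ... SQL import statements ...
    SET unique_checks=1;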

 For big tables, this saves a lot of disk I/O because InnoDB can use its change buffer to write secondary index records in a batch. Be certain that the data contains no duplicate keys.


If you have FOREIGN KEY constraints in your tables, you can speed up table imports by turning off the foreign key checks for the duration of the import session:

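And likewise for foreign key checks:

    SET foreign_key_checks=0;
    -- ... SQL import statements ...
    SET foreign_key_checks=1;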

For big tables, this can save a lot of disk I/O.


Use the multiple-row INSERT syntax to reduce communication overhead between the client and the server if you need to insert many rows:

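A sketch, assuming a hypothetical two-column table named yourtable:

    INSERT INTO yourtable
    VALUES (1, 'first'), (2, 'second'), (3, 'third');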

This tip is valid for inserts into any table, not just InnoDB tables.


When doing bulk inserts into tables with auto-increment columns, set innodb_autoinc_lock_mode to 2 instead of the default value 1.

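innodb_autoinc_lock_mode is not a dynamic variable, so it has to be set at server startup; a minimal option-file sketch:

    [mysqld]
    innodb_autoinc_lock_mode = 2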

See Section 14.6.1.6, “AUTO_INCREMENT Handling in InnoDB” for details.

When performing bulk inserts, it is faster to insert rows in PRIMARY KEY order. InnoDB tables use a clustered index, which makes it relatively fast to use data in the order of the PRIMARY KEY. Performing bulk inserts in PRIMARY KEY order is particularly important for tables that do not fit entirely within the buffer pool.

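For example, when copying rows from one table to another, an ORDER BY on the primary key column keeps the inserts in clustered-index order (the table and column names here are hypothetical):

    INSERT INTO target_table
    SELECT * FROM source_table
    ORDER BY id;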

For optimal performance when loading data into an InnoDB FULLTEXT index, follow this set of steps:


Define a column FTS_DOC_ID at table creation time, of type BIGINT UNSIGNED NOT NULL, with a unique index named FTS_DOC_ID_INDEX. For example:

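A sketch of such a table definition; the table name t1 and the content columns title and body are placeholders, while the FTS_DOC_ID column, its type, and the FTS_DOC_ID_INDEX name follow the requirement above (FTS_DOC_ID is declared AUTO_INCREMENT here so it increases monotonically on inserts):

    CREATE TABLE t1 (
        FTS_DOC_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        title VARCHAR(255) NOT NULL DEFAULT '',
        body MEDIUMTEXT NOT NULL,
        PRIMARY KEY (FTS_DOC_ID)
    ) ENGINE=InnoDB;

    CREATE UNIQUE INDEX FTS_DOC_ID_INDEX ON t1 (FTS_DOC_ID);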

Load the data into the table.

Create the FULLTEXT index after the data is loaded.
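
A sketch of this last step, using the hypothetical t1 table above:

    CREATE FULLTEXT INDEX ft_index ON t1 (title, body);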

Note

When adding the FTS_DOC_ID column at table creation time, ensure that the FTS_DOC_ID column is updated when the FULLTEXT indexed column is updated, as the FTS_DOC_ID must increase monotonically with each INSERT or UPDATE. If you choose not to add the FTS_DOC_ID column at table creation time and have InnoDB manage DOC IDs for you, InnoDB adds FTS_DOC_ID as a hidden column with the next CREATE FULLTEXT INDEX call. This approach, however, requires a table rebuild, which can impact performance.


 

 

 
