A Record of Restoring Data from a mysqldump Backup File
Pre-import tuning
MySQL server and InnoDB settings (sized to the available hardware):
character-set-server = utf8mb4
innodb_buffer_pool_size = 2G
innodb_log_buffer_size = 256M
innodb_log_file_size = 1G
innodb_write_io_threads = 16
innodb_flush_log_at_trx_commit = 0
innodb_doublewrite = 0
net_buffer_length = 1048576
innodb_file_per_table = 1
Note: for the safety of the database, some of these parameters should be restored to durable values once the import is done, e.g. innodb_doublewrite and innodb_flush_log_at_trx_commit.
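For reference, the durable defaults for those two parameters look like this in my.cnf (a sketch; innodb_flush_log_at_trx_commit can also be changed at runtime with SET GLOBAL, while innodb_doublewrite normally requires a server restart to re-enable):

```
[mysqld]
innodb_flush_log_at_trx_commit = 1   # flush and sync the redo log at every commit
innodb_doublewrite = 1               # re-enable the doublewrite buffer
```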
With this configuration in place, the import still ran into two problems: 1) the speed was not ideal; 2) with a single .sql file tens of GB in size, there was no way to import only the desired tables, or to import in parallel.
Splitting the .sql file
Using Python 3, I adapted the many file-splitting snippets found online into the following script:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

inputFile = "../testdb_tabtest.sql"
prefixName = "testdb_tabtest_"

f = open(inputFile, 'r', encoding='utf-8', errors='ignore')

# everything before the first table goes into a "head" file
filename = prefixName + "head.sql"
print(filename)
wf = open(filename, 'w', encoding='utf-8', errors='ignore')

while True:
    try:
        line = f.readline()
        # a new output file starts at every "-- Table structure" comment
        if line.startswith("-- Table structure for table "):
            # the table name is the 6th space-separated token, e.g.
            # -- Table structure for table `tabtest`
            tabname = line.split(" ")[5].rstrip("\n").strip("`")
            # close the previous output file and open one for this table
            wf.close()
            filename = prefixName + tabname + ".sql"
            print(filename)
            wf = open(filename, 'w', encoding='utf-8')
        wf.write(line)
        if not line:  # EOF
            break
    except Exception as e:
        print("read except:" + str(e))
        continue

f.close()
wf.close()
The errors='ignore' argument was added to the code above because an earlier split attempt failed with a batch of errors like the following:
read except:'utf-8' codec can't decode byte 0xf9 in position 0: invalid start byte
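For illustration, errors='ignore' makes the decoder silently drop bytes it cannot interpret instead of raising an exception (a minimal demonstration):

```python
# 0xf9 can never start a valid UTF-8 sequence; with errors='ignore'
# the decoder drops it instead of raising UnicodeDecodeError
raw = b"\xf9abc"
print(raw.decode("utf-8", errors="ignore"))  # → abc
```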
Going one step further, each per-table .sql file can itself be split into a DDL part and an INSERT part, so that the table structure can be adjusted before the import, for example to load the data first and create indexes only afterwards.
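That second split can be sketched as follows, assuming mysqldump's default output where every data row starts with "INSERT INTO" (the function and file names are illustrative):

```python
#!/usr/bin/env python3
# sketch: split one per-table .sql file into a DDL part and a data part,
# assuming mysqldump's default format where data lines start with "INSERT INTO"
def split_ddl_insert(src, ddl_path, ins_path):
    with open(src, encoding="utf-8", errors="ignore") as f, \
         open(ddl_path, "w", encoding="utf-8") as ddl, \
         open(ins_path, "w", encoding="utf-8") as ins:
        for line in f:
            # data rows go to the INSERT file, everything else to the DDL file
            (ins if line.startswith("INSERT INTO") else ddl).write(line)
```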
An import error caused by an insufficient max_allowed_packet
One of the tables failed during import with:
ERROR 2006 (HY000) at line 60 in file: 'testdb_tabtest.sql': MySQL server has gone away
On the advice of a former colleague, I raised max_allowed_packet far enough, after which the import went through:
SET GLOBAL max_allowed_packet = 1024*1024*32;  -- 32 MB; new connections pick up the GLOBAL value
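The split per-table files also make the parallel import wished for earlier possible. A minimal sketch driving the mysql command-line client (hypothetical connection settings; credentials are assumed to come from ~/.my.cnf):

```python
#!/usr/bin/env python3
# sketch: import the split per-table files in parallel through the mysql
# command-line client; user/db names here are placeholders
import glob
import subprocess
from concurrent.futures import ThreadPoolExecutor

def import_file(path, db="testdb", user="root"):
    # feed one split .sql file to a mysql client on stdin
    with open(path, "rb") as f:
        subprocess.run(["mysql", f"-u{user}", db], stdin=f, check=True)
    return path

def import_parallel(pattern="testdb_tabtest_*.sql", workers=4):
    # run up to `workers` mysql clients at once, one per split file
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(import_file, sorted(glob.glob(pattern))))
```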