Beware of TIMESTAMP columns when exporting with mysqldump

Time zone settings

mysql> show global variables like '%zone%';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| system_time_zone | CST    |
| time_zone        | SYSTEM |
+------------------+--------+

[root@localhost recover]# date -R  
Sat, 04 Nov 2017 18:32:16 +0800

mysql> show create table fan3\G
*************************** 1. row ***************************
      Table: fan3
Create Table: CREATE TABLE `fan3` (
  `idx` bigint(20) NOT NULL AUTO_INCREMENT,
  `phone` varchar(50) NOT NULL COMMENT 'phone number / unique identifier',
  `system` varchar(50) DEFAULT NULL,
  `activeid` int(11) DEFAULT NULL COMMENT 'campaign the entry belongs to',
  `prize` varchar(50) DEFAULT NULL COMMENT 'prize won',
  `yycode` varchar(100) DEFAULT '' COMMENT 'gift pack for pre-registering',
  `realname` varchar(50) DEFAULT NULL COMMENT 'user real name',
  `address` varchar(500) DEFAULT NULL COMMENT 'player address',
  `idcard` varchar(30) DEFAULT NULL COMMENT 'user ID card number',
  `email` varchar(110) DEFAULT '' COMMENT 'email address',
  `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'pre-registration time',
  `awardtime` datetime DEFAULT NULL COMMENT 'time of winning',
  `awardcount` int(11) DEFAULT '0' COMMENT 'number of draws',
  `is_share` tinyint(4) DEFAULT '0',
  `qq` varchar(100) DEFAULT '' COMMENT 'QQ',
  `channel` varchar(100) DEFAULT '' COMMENT 'source channel',
  PRIMARY KEY (`idx`),
  KEY `idx_activeid` (`activeid`)
) ENGINE=InnoDB AUTO_INCREMENT=634203 DEFAULT CHARSET=utf8 COMMENT='phone pre-registration table'
1 row in set (0.00 sec)
Note that the date column is of type TIMESTAMP.

A developer accidentally deleted some rows, so I restored the whole table from a logical backup into a recovery environment, intending to extract the needed rows and load them back into production.

mysql> select count(*) from fan3 where date>='2017-10-24' and date <='2017-10-26' and activeid=37;
+----------+
| count(*) |
+----------+
|      153 |
+----------+
1 row in set (0.06 sec)
This is the WHERE condition the developer had used when the rows were deleted by mistake.

The pitfall, demonstrated

First I create a copy of the rows with CTAS, then export it with mysqldump:

mysql> create table fan_select as select * from fan3 where date>='2017-10-24' and date <='2017-10-26' and activeid=37;
Query OK, 153 rows affected (0.09 sec)
Records: 153  Duplicates: 0  Warnings: 0

mysql> select min(date) from fan_select;
+---------------------+
| min(date)          |
+---------------------+
| 2017-10-24 00:03:35 |
+---------------------+
1 row in set (0.01 sec)
The earliest date value in the copied table.

Exporting

mysqldump -pfanboshi -c --skip-add-drop-table --skip-extended-insert --single-transaction test fan_select > fan_select.sql

Watching the general log, mysqldump sets the session time zone to '+00:00', while the system time zone is '+08:00':
2017-11-04T10:10:47.978670Z        77 Query    /*!40100 SET @@SQL_MODE='' */
2017-11-04T10:10:47.978909Z        77 Query    /*!40103 SET TIME_ZONE='+00:00' */
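MySQL stores TIMESTAMP values internally as UTC and converts them to the session time zone on retrieval; the session time zone only changes how a stored instant is rendered. A minimal Python sketch of that rendering difference, using the sample instant from this demo:

```python
from datetime import datetime, timezone, timedelta

# A TIMESTAMP is stored internally as UTC; the session time zone
# only affects how it is rendered on retrieval.
stored_utc = datetime(2017, 10, 23, 16, 3, 35, tzinfo=timezone.utc)

# Rendered in the server's local session (+08:00), as SELECT shows:
local = stored_utc.astimezone(timezone(timedelta(hours=8)))
print(local.strftime("%Y-%m-%d %H:%M:%S"))       # 2017-10-24 00:03:35

# Rendered in mysqldump's session (+00:00), as the dump file shows:
print(stored_utc.strftime("%Y-%m-%d %H:%M:%S"))  # 2017-10-23 16:03:35
```

The same stored instant yields two different strings depending only on the session offset, which is the whole story of this article.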


[root@localhost recover]# cat fan_select.sql | grep INSERT | wc -l
153
Number of exported rows.
[root@localhost recover]# cat fan_select.sql | grep INSERT | grep 2017-10-23 | wc -l
21
Note that the minimum date we just queried was 2017-10-24 00:03:35, yet the dump contains rows dated the 23rd:
[root@localhost recover]# cat fan_select.sql | grep INSERT | grep 2017-10-23 | awk -F')' '{print $2}' | awk -F',' '{print $11}' | sort
'2017-10-23 16:03:35'
'2017-10-23 16:14:17'
'2017-10-23 16:29:23'
'2017-10-23 16:38:22'
'2017-10-23 17:06:42'
'2017-10-23 17:10:28'
'2017-10-23 17:31:56'
'2017-10-23 17:47:26'
'2017-10-23 18:21:33'
'2017-10-23 18:44:25'
'2017-10-23 19:40:15'
'2017-10-23 20:38:30'
'2017-10-23 20:47:39'
'2017-10-23 20:57:22'
'2017-10-23 21:23:07'
'2017-10-23 22:11:36'
'2017-10-23 22:12:34'
'2017-10-23 22:13:36'
'2017-10-23 22:29:14'
'2017-10-23 23:41:34'
'2017-10-23 23:42:16'
The earliest value in the dump is '2017-10-23 16:03:35', exactly 8 hours before '2017-10-24 00:03:35'.
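The 8-hour gap is exactly the difference between the two session offsets. A quick sanity check on the two strings (not part of the original session):

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S"
in_dump  = datetime.strptime("2017-10-23 16:03:35", fmt)  # as written to the dump (+00:00 session)
in_table = datetime.strptime("2017-10-24 00:03:35", fmt)  # as shown by SELECT (+08:00 session)

# The gap equals the difference between the two session offsets.
print((in_table - in_dump).total_seconds() / 3600)  # 8.0
```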

Restoring

sed -i 's/fan_select/fan_load/g' fan_select.sql 

mysql> source fan_select.sql 
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)
...

2017-11-04T10:13:22.989942Z        76 Query    /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */
2017-11-04T10:13:22.990097Z        76 Query    /*!40103 SET TIME_ZONE='+00:00' */
The general log shows the dump file sets the time zone again on import.

mysql> select count(*) from fan_load;
+----------+
| count(*) |
+----------+
|      153 |
+----------+
1 row in set (0.01 sec)

mysql> select min(date) from fan_load;
+---------------------+
| min(date)          |
+---------------------+
| 2017-10-24 00:03:35 |
+---------------------+
1 row in set (0.00 sec)
The restored data is identical to the data before export.

Conclusion

By default mysqldump issues SET TIME_ZONE='+00:00', so TIMESTAMP values in the dump file look 'wrong' compared with what we expect; but because the dump file restores them under that same time zone, the data is identical after import, and this can safely be ignored.
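The reason it is harmless: the dump file itself re-applies SET TIME_ZONE='+00:00' on import, so values written out as UTC text are also parsed back as UTC, and a later SELECT renders them in the local session again. A round-trip sketch in Python, assuming the +08:00 local zone used in this environment:

```python
from datetime import datetime, timezone, timedelta

LOCAL = timezone(timedelta(hours=8))   # the server's session time zone (+08:00)
original = datetime(2017, 10, 24, 0, 3, 35, tzinfo=LOCAL)

# Export: mysqldump's session is +00:00, so the file contains the UTC text.
dumped_text = original.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

# Import: the dump file re-applies SET TIME_ZONE='+00:00', so the text is
# parsed as UTC again; a later SELECT renders it in the +08:00 session.
reloaded = datetime.strptime(dumped_text, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
print(reloaded.astimezone(LOCAL).strftime("%Y-%m-%d %H:%M:%S"))  # 2017-10-24 00:03:35
```

The value survives the round trip unchanged, matching the min(date) comparison above.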

But what if, instead of CTAS, we export directly with --where?

[root@localhost recover]# mysqldump -pfanboshi -c --skip-add-drop-table --skip-extended-insert --single-transaction --skip-tz-utc test fan3 --where="date>='2017-10-24' and date <='2017-10-26' and activeid=37" > fan32.sql
--skip-tz-utc stops mysqldump from switching the session time zone.

[root@localhost recover]# mysqldump -pfanboshi -c --skip-add-drop-table --skip-extended-insert --single-transaction test fan3 --where="date>='2017-10-24' and date <='2017-10-26' and activeid=37" > fan3.sql
2017-11-04T10:23:20.962390Z        80 Query    /*!40100 SET @@SQL_MODE='' */
2017-11-04T10:23:20.962488Z        80 Query    /*!40103 SET TIME_ZONE='+00:00' */
This time without --skip-tz-utc; the general log shows the time zone being set.

[root@localhost recover]# cat fan3.sql | grep INSERT | grep 2017-10-23 
[root@localhost recover]# cat fan32.sql | grep INSERT | grep 2017-10-23 
Neither dump contains any rows dated 2017-10-23.
[root@localhost recover]# cat fan3.sql | grep INSERT | wc -l
152
[root@localhost recover]# cat fan32.sql | grep INSERT | wc -l
153
Yet the row counts differ.

Restoring

[root@localhost recover]# sed -i 's/fan3/fan3_load/g' fan3.sql 
[root@localhost recover]# sed -i 's/fan3/fan32_load/g' fan32.sql 

mysql> select count(*) from fan3_load;
+----------+
| count(*) |
+----------+
|      152 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from fan32_load;
+----------+
| count(*) |
+----------+
|      153 |
+----------+
1 row in set (0.00 sec)

mysql> select min(date) from fan32_load;
+---------------------+
| min(date)          |
+---------------------+
| 2017-10-24 00:03:35 |
+---------------------+
1 row in set (0.00 sec)

mysql> select min(date) from fan3_load;
+---------------------+
| min(date)          |
+---------------------+
| 2017-10-24 08:30:10 |
+---------------------+
1 row in set (0.00 sec)
As you can see, the restored data is wrong as well.

Why? When --skip-tz-utc is omitted and the --where condition references a TIMESTAMP column, mysqldump's SET TIME_ZONE='+00:00' changes how that condition is evaluated: the literals '2017-10-24' and '2017-10-26' are now interpreted as UTC, so the effective window in local time shifts to 2017-10-24 08:00:00 through 2017-10-26 08:00:00. The exported data is no longer what we asked for, which is exactly what the restored min(date) of 2017-10-24 08:30:10 shows.
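The boundary effect can be sketched outside MySQL. Assuming the +08:00 local zone, a row whose local time is 2017-10-24 00:03:35 satisfies date >= '2017-10-24' in a +08:00 session, but in mysqldump's +00:00 session it is compared as 2017-10-23 16:03:35 and filtered out (plain string comparison mirrors MySQL's temporal comparison for this format):

```python
from datetime import datetime, timezone, timedelta

LOCAL = timezone(timedelta(hours=8))
row_local = datetime(2017, 10, 24, 0, 3, 35, tzinfo=LOCAL)

# In the +08:00 session the row satisfies date >= '2017-10-24' ...
print(row_local.strftime("%Y-%m-%d %H:%M:%S") >= "2017-10-24")  # True

# ... but mysqldump's +00:00 session compares the UTC rendering instead.
row_utc = row_local.astimezone(timezone.utc)
print(row_utc.strftime("%Y-%m-%d %H:%M:%S") >= "2017-10-24")    # False
```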
There are two workarounds:
1. Convert the times in --where yourself: instead of date>='2017-10-24' and date <='2017-10-26', write date>='2017-10-23 16:00:00' and date <='2017-10-25 16:00:00', i.e. shift each boundary back 8 hours.

[root@localhost recover]# mysqldump -pfanboshi -c --skip-add-drop-table --skip-extended-insert --single-transaction test fan3 --where="date>='2017-10-23 16:00:00' and date <='2017-10-25 16:00:00' and activeid=37" > fan33.sql
mysqldump: [Warning] Using a password on the command line interface can be insecure.
[root@localhost recover]# 
[root@localhost recover]# cat fan33.sql | grep INSERT | grep 2017-10-23 | awk -F')' '{print $2}' | awk -F',' '{print $11}' | sort
'2017-10-23 16:03:35'
'2017-10-23 16:14:17'
'2017-10-23 16:29:23'
'2017-10-23 16:38:22'
'2017-10-23 17:06:42'
'2017-10-23 17:10:28'
'2017-10-23 17:31:56'
'2017-10-23 17:47:26'
'2017-10-23 18:21:33'
'2017-10-23 18:44:25'
'2017-10-23 19:40:15'
'2017-10-23 20:38:30'
'2017-10-23 20:47:39'
'2017-10-23 20:57:22'
'2017-10-23 21:23:07'
'2017-10-23 22:11:36'
'2017-10-23 22:12:34'
'2017-10-23 22:13:36'
'2017-10-23 22:29:14'
'2017-10-23 23:41:34'
'2017-10-23 23:42:16'

[root@localhost recover]# sed -i 's/fan3/fan33_load/g' fan33.sql 

mysql> source fan33.sql
..

mysql> select count(*) from fan33_load;
+----------+
| count(*) |
+----------+
|      153 |
+----------+
1 row in set (0.00 sec)

mysql> select min(date) from fan33_load;
+---------------------+
| min(date)          |
+---------------------+
| 2017-10-24 00:03:35 |
+---------------------+
1 row in set (0.00 sec)
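Rather than shifting the boundaries by hand, they can be computed. A small hypothetical helper, assuming a fixed +08:00 offset with no DST (which holds for China Standard Time):

```python
from datetime import datetime, timedelta

def to_utc_boundary(local_str, offset_hours=8):
    """Shift a local-time WHERE boundary back to UTC for a tz-utc dump.

    Accepts either 'YYYY-MM-DD' or 'YYYY-MM-DD HH:MM:SS'.
    """
    fmt = "%Y-%m-%d %H:%M:%S" if " " in local_str else "%Y-%m-%d"
    dt = datetime.strptime(local_str, fmt) - timedelta(hours=offset_hours)
    return dt.strftime("%Y-%m-%d %H:%M:%S")

print(to_utc_boundary("2017-10-24"))  # 2017-10-23 16:00:00
print(to_utc_boundary("2017-10-26"))  # 2017-10-25 16:00:00
```

These are exactly the boundaries used in the mysqldump --where above.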

2. Use --skip-tz-utc so mysqldump leaves the session time zone alone.

Reference
http://blog.csdn.net/wbb_1216/article/details/56008947
