A SQL query that takes longer than a specified time threshold is called a slow query.
1. How slow queries show up
Slow queries are, as the name says, slow: by a common rule of thumb, any query that takes more than 1 second to return can be called a slow query.
Slow queries drive CPU and memory consumption up. When a database server's load suddenly spikes, in most cases some slow queries are to blame.
Viewing and setting the time threshold that defines a "slow query"
mysql> show variables like "long%";
+-----------------+----------+
| Variable_name | Value |
+-----------------+----------+
| long_query_time | 0.000100 |
+-----------------+----------+
1 row in set (0.00 sec)
As the output above shows, the "slow query" threshold is currently 0.0001 seconds (set this low for testing; 1-10 seconds is a common production value). Use the following statement to change the threshold:
mysql> set long_query_time=0.0001;
Query OK, 0 rows affected (0.00 sec)
Enabling the slow query log
mysql> show variables like "slow%";
+---------------------+------------------------------------+
| Variable_name | Value |
+---------------------+------------------------------------+
| slow_launch_time | 2 |
| slow_query_log | OFF |
| slow_query_log_file | /opt/mysql/data/localhost-slow.log |
+---------------------+------------------------------------+
3 rows in set (0.00 sec)
The statement above shows the slow-log configuration. You can customize where the log file lives, but the slow_query_log global variable must be set to ON. Execute the following:
mysql> set global slow_query_log=ON;
Query OK, 0 rows affected (0.01 sec)
Result:
mysql> show variables like "slow%";
+---------------------+------------------------------------+
| Variable_name | Value |
+---------------------+------------------------------------+
| slow_launch_time | 2 |
| slow_query_log | ON |
| slow_query_log_file | /opt/mysql/data/localhost-slow.log |
+---------------------+------------------------------------+
3 rows in set (0.00 sec)
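Note that SET GLOBAL changes do not survive a server restart. A minimal sketch of persisting the same settings in my.cnf, reusing the log path shown above; the 1-second threshold is an illustrative value:

```ini
# [mysqld] section of my.cnf -- survives restarts, unlike SET GLOBAL.
# Values mirror the session above; long_query_time=1 is illustrative.
[mysqld]
slow_query_log      = ON
slow_query_log_file = /opt/mysql/data/localhost-slow.log
long_query_time     = 1
```

Once the log accumulates entries, the mysqldumpslow tool shipped with MySQL can summarize it, e.g. mysqldumpslow -s t -t 10 /opt/mysql/data/localhost-slow.log prints the 10 statement patterns with the highest query time.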
So what conditions cause slow queries, and by what criteria do we identify and optimize them? The discussion below covers slow queries from the SQL side only; MySQL server-level settings are not considered here.
2. Too many rows examined
EXPLAIN SELECT `happy_ni_nis`.*
FROM `happy_ni_nis`
ORDER BY n_total DESC LIMIT 10\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: happy_ni_nis
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 66493186 <- examines far too many rows, a bad smell
Extra: Using filesort <- avoid the sort
1 row in set (0.01 sec)
Suggested fix: add an index on n_total
mysql> alter table happy_ni_nis add index `idx_of_n_total` (`n_total`, `id`);
Query OK, 0 rows affected (7 min 48.27 sec)
Records: 0 Duplicates: 0 Warnings: 0
EXPLAIN SELECT `happy_ni_nis`.*
FROM `happy_ni_nis`
ORDER BY n_total DESC LIMIT 10\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: happy_ni_nis
type: index
possible_keys: NULL
key: idx_of_n_total
key_len: 8
ref: NULL
rows: 10
Extra:
1 row in set (0.01 sec)
- In some cases, though, a high row estimate does not necessarily mean the query is slow.
SELECT `cd_happys`.*
FROM `cd_happys`
WHERE `cd_happys`.`p_status` = 4
AND `cd_happys`.`status` IN (0,
1,
2,
3,
4)
AND (c_time <= '2015-05-15 23:30:39'
AND update_time <= '2015-05-15 23:30:39')
ORDER BY `cd_happys`.`id` ASC LIMIT 1000
+----+-------------+-----------------+-------+---------------------------------------+---------+---------+-----+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------------+-------+---------------------------------------+---------+---------+-----+------+-------------+
| 1 | SIMPLE | cd_happys | index | idx_uptime_seneditor,idx_c_time | PRIMARY | 4 | | 2000 | Using where |
+----+-------------+-----------------+-------+---------------------------------------+---------+---------+-----+------+-------------+
Average query time: 6 sec
Add an index idx_of_p_status_status_c_time (p_status, status, c_time, id):
mysql> explain SELECT SQL_NO_CACHE `cd_happys`.* FROM `cd_happys` WHERE `cd_happys`.`p_status` = 4 AND `cd_happys`.`status` IN (0, 1, 2, 3, 4) AND (c_time <= '2015-05-15 23:30:39' and update_time <= '2015-05-15 23:30:39') LIMIT 1000\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: cd_happys
type: range
possible_keys: idx_uptime_seneditor,idx_c_time,idx_of_p_status_sanzu_check_status_signup_status,idx_of_p_status_status_c_time
key: idx_of_p_status_status_c_time
key_len: 16
ref: NULL
rows: 3782
Extra: Using where
1 row in set (0.01 sec)
Average query time: 0.4 sec
3. Pagination with a large OFFSET
EXPLAIN SELECT `happys`.*
FROM `happys`
INNER JOIN `happy_infos` ON `happy_infos`.`happy_id` = `happys`.`id`
WHERE
(happys.end_time > '2015-08-13 14:08:10')
AND (happys.j_tag_id > 0)
AND (happy_infos.my_image_url = '')
ORDER BY happys.updated_at DESC LIMIT 100
OFFSET 28900\G;
+----+-------------+------------+--------+------------------------------------------------------------+---------------+---------+-----------------+--------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+--------+------------------------------------------------------------+---------------+---------+-----------------+--------+-----------------------------+
| 1 | SIMPLE | happys | range | PRIMARY,idx_end_time,idx_bg_tag_pub_beg,idx_bg_tag_pub_end | idx_end_time | 8 | | 219114 | Using where; Using filesort |
+----+-------------+------------+--------+------------------------------------------------------------+---------------+---------+-----------------+--------+-----------------------------+
| 1 | SIMPLE | happy_infos | eq_ref | happy_id_index | happy_id_index | 4 | tao800.happys.id | 1 | Using where |
+----+-------------+------------+--------+------------------------------------------------------------+---------------+---------+-----------------+--------+-----------------------------+
Average query time: Empty set (9.78 sec)
Suggested fixes:
1. MySQL handles LIMIT OFFSET by fetching OFFSET+LIMIT rows and then discarding the first OFFSET, returning only the trailing LIMIT rows. A subquery with a covering index can optimize this.
2. With a high offset this is expensive: e.g. limit 100000,20 makes the server read 100020 rows and throw away the first 100000, which is why such queries are so slow.
3. Use a covering index to find just the ids, then join back to the full rows. The ids come straight from the index without touching the table; only once the needed rows are identified does the join fetch the remaining columns.
EXPLAIN SELECT SQL_NO_CACHE `happys`.*
FROM `happys`
INNER JOIN
(
SELECT happys.id
FROM happys
INNER JOIN `happy_infos` ON `happy_infos`.`happy_id` = `happys`.`id`
WHERE happys.end_time > '2015-08-13 14:08:10'
AND happys.j_tag_id > 0
AND happy_infos.my_image_url = ''
LIMIT 100 OFFSET 28900
) AS lim ON lim.id = happys.id
ORDER BY happys.updated_at DESC \G;
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: NULL
type: NULL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: NULL
Extra: Impossible WHERE noticed after reading const tables
*************************** 2. row ***************************
id: 2
select_type: DERIVED
table: happys
type: range
possible_keys: PRIMARY,idx_end_time,idx_bg_tag_pub_beg,idx_bg_tag_pub_end
key: idx_bg_tag_pub_end
key_len: 4
ref: NULL
rows: 425143
Extra: Using where; Using index
*************************** 3. row ***************************
id: 2
select_type: DERIVED
table: happy_infos
type: eq_ref
possible_keys: happy_id_index
key: happy_id_index
key_len: 4
ref: tao800.happys.id
rows: 1
Extra: Using where
3 rows in set (0.67 sec)
Average query time: 0.67 sec
How can we improve the performance of MySQL LIMIT queries?
When querying MySQL we generally want to avoid full table scans, because they take a long time and most of what they scan is of no interest to the client. The LIMIT keyword can help avoid full scans and thereby improve efficiency.
Suppose a table on MySQL 5.0.x has tens of millions of records and we need to read out a few hundred thousand of them. The usual method is to loop:
select * from mytable where index_col = xxx limit offset, limit;
Tip: if there are no blob/text columns and each row is small, setting limit larger per batch speeds things up.
Problem: the first few tens of thousands of rows read quickly, but then speed degrades linearly while the mysql server CPU sits at 99%; the throughput becomes unacceptable.
Running explain select * from mytable where index_col = xxx limit offset, limit;
shows type = ALL.
The MySQL optimization documentation explains "ALL" as follows:
A full table scan is done for each combination of rows from the previous tables. This is normally not good if the table is the first table not marked const, and usually very bad in all other cases. Normally, you can avoid ALL by adding indexes that allow row retrieval from the table based on constant values or column values from earlier tables.
So with ALL, MySQL falls back to the brute-force approach. Can we switch to a range scan instead? Since id is auto-incrementing, the SQL is easy to rewrite:
select * from mytable where id > offset and id < offset + limit and index_col = xxx
Now explain shows type = range, and the results come back dozens of times faster.
LIMIT syntax:
SELECT * FROM table LIMIT [offset,] rows | rows OFFSET offset
The LIMIT clause forces a SELECT to return at most the specified number of rows. LIMIT takes one or two arguments, which must be integer constants.
With two arguments, the first is the offset of the first row to return and the second is the maximum number of rows to return. The offset of the initial row is 0, not 1.
For compatibility with PostgreSQL, MySQL also supports the LIMIT # OFFSET # form.
mysql> SELECT * FROM table LIMIT 5,10; // returns rows 6-15
// To retrieve every row from a given offset to the end of the result set, use some very large number as the second argument (MySQL does not accept -1 here; the manual suggests 18446744073709551615)
mysql> SELECT * FROM table LIMIT 95,18446744073709551615; // returns rows 96 to last
// With a single argument, it is the maximum number of rows to return; in other words, LIMIT n is equivalent to LIMIT 0,n
mysql> SELECT * FROM table LIMIT 5; // returns the first 5 rows
MySQL's LIMIT makes pagination very convenient, but once the data volume grows, LIMIT performance drops sharply. Both of the following fetch 10 rows, yet they are not remotely in the same league:
select * from table limit 10000,10
select * from table limit 0,10
Instead of using a raw LIMIT offset, the approach above first obtains the id at the offset and only then applies limit size; by its numbers this clearly beats a plain limit.
Here I tested with concrete data, in two scenarios.
1. When the offset is small:
select * from table limit 10,10
// over repeated runs, the time stays between 0.0004 and 0.0005 sec
Select * From table Where vid >=(Select vid From table Order By vid limit 10,1) limit 10
// over repeated runs, the time stays between 0.0005 and 0.0006 sec, mostly 0.0006
Conclusion: with a small offset, the plain limit is better, evidently because of the subquery overhead.
2. When the offset is large:
select * from table limit 10000,10
// over repeated runs, the time stays around 0.0187 sec
Select * From table Where vid >=(Select vid From table Order By vid limit 10000,1) limit 10
// over repeated runs, the time stays around 0.0061 sec, only 1/3 of the former; expect the advantage to grow as the offset grows.
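Taking the subquery idea one step further: if the application can remember the vid of the last row on the previous page, the offset can be eliminated entirely. A sketch of this keyset (seek) pagination, under the same assumed table and vid index as the tests above:

```sql
-- Keyset (seek) pagination sketch: no OFFSET at all.
-- :last_vid is the vid of the final row on the previous page
-- (use 0 or the minimum vid for the first page).
SELECT *
FROM table
WHERE vid > :last_vid
ORDER BY vid
LIMIT 10;
```

Each page is then a short index range scan whose cost stays flat regardless of page depth; the trade-off is that pages can only be stepped through in order, not jumped to by number.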
4. Driving table, temporary table, sorting
mysql> EXPLAIN SELECT `happy_d_sales`.`happy_id`
-> FROM `happy_d_sales`
-> INNER JOIN happys ON happys.id = happy_d_sales.happy_id
-> AND happys.j_tag_id = happy_d_sales.tag_id
-> WHERE (status = 1
-> AND date = '2015-08-12')
-> ORDER BY sales DESC LIMIT 300\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: happys
type: index
possible_keys: PRIMARY,idx_bg_tag_pub_beg,idx_bg_tag_pub_end
key: idx_bg_tag_pub_beg
key_len: 20
ref: NULL
rows: 850279 <- examines too many rows, a bad smell
Extra: Using index; Using temporary; Using filesort
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: happy_d_sales
type: eq_ref
possible_keys: happy_id_date_tag_idx,tag_id_date
key: happy_id_date_tag_idx
key_len: 11
ref: tao800.happys.id,const,tao800.happys.j_tag_id
rows: 1
Extra: Using where
2 rows in set (0.00 sec)
Average query time: Empty set (3.90 sec)
The first table in EXPLAIN's output is the driving table. Sorting on the driving table is very fast, but sorting on a non-driving table is very slow.
By default the records are already in index order and need no extra sort. Yet in the plan above, MySQL pulls rows out of the happys table, builds a temporary table, and then sorts that temporary table,
which is why Using temporary; Using filesort appears in Extra.
Suggested fixes:
1. Reduce the rows examined.
2. Force the correct driving table.
mysql> ALTER table `happy_d_sales` ADD INDEX `idx_of_date_and_status` (`date`, `status`, `sales`, `id`);
Query OK, 0 rows affected (19.45 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> EXPLAIN SELECT `happy_d_sales`.`happy_id`
-> FROM `happy_d_sales`
-> STRAIGHT_JOIN happys ON happys.id = happy_d_sales.happy_id
-> AND happys.j_tag_id = happy_d_sales.tag_id
-> WHERE date = '2015-08-12' AND status = 1
-> ORDER BY `happy_d_sales`.sales DESC LIMIT 300\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: happy_d_sales <-- the correct driving table
type: ref
possible_keys: happy_id_date_tag_idx,tag_id_date,idx_of_date_and_status
key: idx_of_date_and_status
key_len: 7
ref: const,const
rows: 1
Extra: Using where
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: happys
type: eq_ref
possible_keys: PRIMARY,idx_bg_tag_pub_beg,idx_bg_tag_pub_end
key: PRIMARY
key_len: 4
ref: tao800.happy_d_sales.happy_id
rows: 1
Extra: Using where
2 rows in set (0.01 sec)
Average query time: 0.03 sec
5. Unsuitable GROUP BY conditions
GROUP BY on a large table
CREATE TABLE `p_acc_logs` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`account_id` int(11) NOT NULL DEFAULT '0' ,
`b_amount` int(11) NOT NULL DEFAULT '0' ,
`r_amount` int(11) NOT NULL DEFAULT '0' ,
`amount` int(11) NOT NULL DEFAULT '0' ,
`l_type` int(11) NOT NULL DEFAULT '0',
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
PRIMARY KEY (`id`),
KEY `idx_created_at` (`created_at`,`id`),
KEY `idx_account_type` (`account_id`,`l_type`,`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3212639 DEFAULT CHARSET=utf8;
The table holds millions of rows and does have an index covering account_id, yet the GROUP BY query below is still very slow.
SELECT MAX(id) max_id FROM `p_acc_logs` WHERE (created_at <= '2015-08-14 00:00:00') GROUP BY account_id
Average query time: 10 sec
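An index alone does not help much here, since the grouping column (account_id), the range condition (created_at), and the MAX argument (id) are three different columns. One hedged workaround, in line with the summary-table advice later in this article, is to precompute the result; the table name p_acc_log_maxids below is hypothetical:

```sql
-- Hypothetical summary table: one row per account.
CREATE TABLE p_acc_log_maxids (
  account_id int NOT NULL PRIMARY KEY,
  max_id     int NOT NULL
);

-- Refresh off-peak (or incrementally as new log rows arrive):
REPLACE INTO p_acc_log_maxids (account_id, max_id)
SELECT account_id, MAX(id)
FROM p_acc_logs
WHERE created_at <= '2015-08-14 00:00:00'
GROUP BY account_id;

-- Reads become a scan of a small table instead of millions of rows:
SELECT account_id, max_id FROM p_acc_log_maxids;
```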
6. Unsuitable ORDER BY
EXPLAIN SELECT SQL_NO_CACHE id,
u_id,
amount
FROM `t_orders`
WHERE (settlement_time >= '2015-07-01 00:00:00'
AND settlement_time <= '2015-09-30 23:59:59')
ORDER BY `t_orders`.`id` ASC LIMIT 3000\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: t_orders
type: index
possible_keys: idx_settlement_time
key: PRIMARY <-- note this
key_len: 4
ref: NULL
rows: 6705 <-- note this
Extra: Using where
1 row in set (0.09 sec)
Average query time: 40 sec
CREATE TABLE `t_orders` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`u_id` int(11) NOT NULL DEFAULT '0' ,
`order_id` varchar(20) NOT NULL DEFAULT '' ,
`amount` int(11) NOT NULL DEFAULT '0' ,
`settlement_time` datetime NOT NULL DEFAULT '1970-01-01 00:00:00' ,
`d_s_time` datetime DEFAULT '1970-01-01 00:00:00' ,
`serial_number` bigint(20) unsigned DEFAULT NULL ,
`order_type` int(11) DEFAULT NULL ,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
PRIMARY KEY (`id`),
KEY `index_t_orders_on_u_id_and_settlement_time_and_id` (`u_id`,`settlement_time`,`id`),
KEY `index_t_orders_on_order_id_and_order_type_and_id` (`order_id`,`order_type`,`id`),
KEY `index_t_orders_on_serial_number_and_order_type_and_id` (`serial_number`,`order_type`,`id`),
KEY `idx_settlement_time` (`settlement_time`,`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Suggested fixes:
1. Remove the misguided ORDER BY: here the result set already comes back in ascending id order by default.
2. Let the query execute the right way:
EXPLAIN SELECT SQL_NO_CACHE id,
u_id,
amount
FROM `t_orders`
WHERE (settlement_time >= '2015-07-01 00:00:00'
AND settlement_time <= '2015-09-30 23:59:59')
LIMIT 3000\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: t_orders
type: range
possible_keys: idx_settlement_time
key: idx_settlement_time <-- note this
key_len: 8
ref: NULL
rows: 12784434 <-- note this
Extra: Using where
1 row in set (0.00 sec)
Average query time: 0.16 sec
7. Fuzzy (LIKE) queries
EXPLAIN SELECT `p_d_logs`.*
FROM `p_d_logs`
WHERE (details LIKE '%waiting%'
AND created_at < '2015-08-09 03:40:35')
AND (`p_d_logs`.`id` > 297273949)
ORDER BY `p_d_logs`.`id` ASC LIMIT 10000\G
+----+-------------+-------------------+-------+---------------+---------+---------+-----+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------------+-------+---------------+---------+---------+-----+---------+-------------+
| 1 | SIMPLE | p_d_logs | range | PRIMARY | PRIMARY | 4 | | 3340552 | Using where |
+----+-------------+-------------------+-------+---------------+---------+---------+-----+---------+-------------+
Average query time: 7.59 sec
EXPLAIN SELECT `e_logs`.`loggable_id`
FROM `e_logs`
WHERE (user_name LIKE '%王大傻%'
AND action_type_id = 0
AND loggable_type = 'HappyBase') \G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: e_logs
type: ref
possible_keys: by_loggable_type
key: by_loggable_type
key_len: 92
ref: const
rows: 684252
Extra: Using where
1 row in set (10.61 sec)
15738 rows in set (3 min 45.75 sec)
CREATE TABLE `e_logs` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`loggable_type` varchar(30) NOT NULL DEFAULT '' ,
`loggable_id` int(11) NOT NULL DEFAULT '0' ,
`u_id` int(11) NOT NULL DEFAULT '0' ,
`user_name` varchar(24) NOT NULL DEFAULT '' ,
`create_time` datetime NOT NULL DEFAULT '1970-01-01 00:00:00' ,
`execution_type_id` tinyint(4) NOT NULL DEFAULT '0' ,
`action_type_id` tinyint(4) NOT NULL DEFAULT '0' ,
`from_url` varchar(200) NOT NULL DEFAULT '' ,
`updated_changes` longtext ,
`t_s_id` int(11) NOT NULL DEFAULT '0' ,
PRIMARY KEY (`id`),
KEY `index_e_logs_on_loggable_id_and_loggable_type` (`loggable_id`,`loggable_type`),
KEY `idx_user_name` (`user_name`,`action_type_id`,`id`),
KEY `index_e_logs_on_create_time` (`create_time`),
KEY `by_t_s_id` (`t_s_id`,`id`),
KEY `by_loggable_type` (`loggable_type`,`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Suggested fixes:
1. Reorder the query conditions to match an index, and drop the leading % wildcard: '%王大傻%' cannot use a B-tree index, while the prefix match '王大傻%' can. (Note this changes the query's meaning from substring match to prefix match.)
2. Add a suitable index and drop the old one.
alter table e_logs drop index `by_loggable_type`;
Query OK, 0 rows affected (3 min 12.71 sec)
alter table e_logs add index `idx_of_loggable_type_and_type_id` (`loggable_type`, `action_type_id`, `id`);
Query OK, 0 rows affected (6 min 5.70 sec)
EXPLAIN SELECT SQL_NO_CACHE `e_logs`.`loggable_id`
FROM `e_logs`
WHERE (loggable_type = 'HappyBase' AND action_type_id = 0 AND user_name LIKE '王大傻%') \G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: e_logs
type: range
possible_keys: idx_user_name,idx_of_loggable_type_and_type_id
key: idx_user_name
key_len: 75
ref: NULL
rows: 39546
Extra: Using where
1 row in set (0.01 sec)
Average query time: 0.13 sec, roughly a 1700x speedup
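If genuine substring-style search on user_name is required, the prefix rewrite above is not an option, and no B-tree index can serve a leading %. One hedged alternative on later MySQL versions (5.7.6+ adds the ngram parser for CJK text) is a FULLTEXT index; note that it matches tokens, not arbitrary substrings:

```sql
-- Sketch: assumes MySQL >= 5.7.6 (ngram parser);
-- the index name ft_user_name is illustrative.
ALTER TABLE e_logs
  ADD FULLTEXT INDEX ft_user_name (user_name) WITH PARSER ngram;

SELECT loggable_id
FROM e_logs
WHERE MATCH(user_name) AGAINST('王大傻' IN BOOLEAN MODE)
  AND loggable_type = 'HappyBase'
  AND action_type_id = 0;
```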
8. Filter conditions that do not follow the index columns
Whether a query with range conditions and the like can actually use an index depends on how the conditions line up with the indexed columns; the example below demonstrates this.
SELECT happy_id, sum(sales) as all_sales
FROM `happy_d_sales`
WHERE `happy_d_sales`.`status` = 1
AND `happy_d_sales`.`tag_id` IN (xxx,)
GROUP BY happy_id ORDER BY all_sales desc LIMIT 100 OFFSET 0
Average query time: 20 sec
CREATE TABLE `happy_d_sales` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`happy_id` int(11) NOT NULL ,
`tag_id` int(11) NOT NULL ,
`sales` int(11) NOT NULL DEFAULT '0' ,
`status` int(11) NOT NULL DEFAULT '0' ,
`date` date NOT NULL ,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`r_count` int(11) NOT NULL DEFAULT '0' ,
`a_t_s_count` int(11) NOT NULL DEFAULT '0' ,
PRIMARY KEY (`id`),
UNIQUE KEY `happy_id_date_tag_idx` (`happy_id`,`date`,`tag_id`),
KEY `tag_id_date` (`tag_id`,`date`) ,
KEY `idx_of_date_and_status` (`date`,`status`,`sales`,`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;
Reorder the conditions in the SQL and add a suitable index, and the query time drops instantly.
explain SELECT SQL_NO_CACHE happy_id, sum(sales) as all_sales
FROM `happy_d_sales`
WHERE `happy_d_sales`.`tag_id` IN (xxxx,xxxx,xxxx)
AND `happy_d_sales`.`status` = 1
GROUP BY happy_id ORDER BY all_sales desc
LIMIT 100 OFFSET 0\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: happy_d_sales
type: range
possible_keys: tag_id_date,idx_of_tag_id_status_happy_id
key: idx_of_tag_id_status_happy_id
key_len: 8
ref: NULL
rows: 94380
Extra: Using where; Using temporary; Using filesort
1 row in set (0.06 sec)
Average query time: 100 rows in set (0.43 sec)
9. No index at all
explain select t_s_id from guang_happy_outs group by t_s_id;
+----+-------------+-----------------+------+---------------+------+---------+------+----------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------------+------+---------------+------+---------+------+----------+---------------------------------+
| 1 | SIMPLE | guang_happy_outs | ALL | NULL | NULL | NULL | NULL | 69008793 | Using temporary; Using filesort |
+----+-------------+-----------------+------+---------------+------+---------+------+----------+---------------------------------+
1 row in set (0.00 sec)
Average query time: 3 sec
CREATE TABLE `guang_happy_outs` (
`id` int(11) NOT NULL AUTO_INCREMENT ,
`t_s_id` int(11) NOT NULL DEFAULT '0' ,
`happy_id` int(11) NOT NULL DEFAULT '0' ,
`count` int(11) NOT NULL DEFAULT '0' ,
`de_id` int(11) NOT NULL DEFAULT '0' ,
`ad_count` int(11) NOT NULL DEFAULT '0' ,
`st_date` date NOT NULL DEFAULT '1970-01-01' ,
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`amount` int(11) NOT NULL DEFAULT '0' ,
PRIMARY KEY (`id`),
KEY `idx_guang_dlout_dlid` (`happy_id`),
KEY `idx_guang_dlout_cat` (`created_at`),
KEY `idx_guang_dlout_stdate` (`st_date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Suggested fix:
1. Add the necessary index.
mysql> alter table guang_happy_outs add index `idx_of_t_s_id` (`t_s_id`, `id`);
Query OK, 0 rows affected (7 min 41.47 sec)
Records: 0 Duplicates: 0 Warnings: 0
explain select t_s_id from guang_happy_outs group by t_s_id\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: guang_happy_outs
type: range
possible_keys: NULL
key: idx_of_t_s_id
key_len: 4
ref: NULL
rows: 201
Extra: Using index for group-by
1 row in set (0.01 sec)
Average query time: 4818 rows in set (0.03 sec)
For very large tables, though, it can pay to maintain separate small tables for common queries and run those queries against the small tables.
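For the guang_happy_outs query above, that idea could look like the following sketch; the side-table name is hypothetical, and the application (or a trigger) would need to keep it in sync:

```sql
-- Small side table holding only the distinct t_s_id values.
CREATE TABLE guang_happy_out_tsids (
  t_s_id int NOT NULL PRIMARY KEY
);

-- Initial fill; new values must be inserted as they appear.
INSERT IGNORE INTO guang_happy_out_tsids (t_s_id)
SELECT DISTINCT t_s_id FROM guang_happy_outs;

-- The common query becomes a scan of ~4818 rows instead of 69 million.
SELECT t_s_id FROM guang_happy_out_tsids;
```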
10. EXPLAIN type is ALL: full table scan
EXPLAIN SELECT `r_records`.*
FROM `r_records`
WHERE (create_time >= '2015-08-13'
AND create_time < '2015-08-14'
AND device_id !="0")\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: r_records
type: ALL <-- look here
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 10753575 <- look here
Extra: Using where
1 row in set (0.01 sec)
Suggested fixes:
1. Add a composite index on create_time and device_id.
2. Better still, build a separate statistics table with per-day aggregates of the log, and avoid range queries on the big table altogether.
alter table r_records add index `idx_of_create_time` (`create_time`, `device_id`, `id`);
Query OK, 0 rows affected (4 min 14.59 sec)
EXPLAIN SELECT `r_records`.*
FROM `r_records`
WHERE (create_time >= '2015-08-13'
AND create_time < '2015-08-14'
AND device_id !="0")\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: r_records
type: range
possible_keys: idx_of_create_time <-- look here
key: idx_of_create_time
key_len: 777 <-- look here
ref: NULL
rows: 1
Extra: Using where
1 row in set (0.00 sec)
Another example:
EXPLAIN SELECT `happy_ni_nis`.*
FROM `happy_ni_nis`
ORDER BY n_total DESC LIMIT 10\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: happy_ni_nis
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 66493186 <- examines far too many rows, a bad smell
Extra: Using filesort <- avoid the sort
1 row in set (0.01 sec)
Average query time: 13 sec
Suggested fix:
1. Add an index on n_total
mysql> alter table happy_ni_nis add index `idx_of_n_total` (`n_total`, `id`);
Query OK, 0 rows affected (7 min 48.27 sec)
Records: 0 Duplicates: 0 Warnings: 0
EXPLAIN SELECT `happy_ni_nis`.*
FROM `happy_ni_nis`
ORDER BY n_total DESC LIMIT 10\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: happy_ni_nis
type: index
possible_keys: NULL
key: idx_of_n_total
key_len: 8
ref: NULL
rows: 10
Extra:
1 row in set (0.01 sec)
Average query time: 0.01 sec
11. count(id) on large tables
mysql> select SQL_NO_CACHE count(id) from guang_happy_outs;
+-----------+
| count(id) |
+-----------+
| 69008543 |
+-----------+
1 row in set (31.14 sec)
A count(id) on a table with tens of millions of rows: no real solution.
If any reader knows how to solve this, please leave a comment, thanks.
There is a detailed discussion in "select count(*) from table_name — why doesn't it use the primary key index?".
mysql> select SQL_NO_CACHE count(id) from g_d_outs\G;
*************************** 1. row ***************************
count(id): 69008543
1 row in set (10.76 sec)
If an approximate count is good enough, look up the Rows value as shown below; that approach is very fast.
mysql> show table status where name = 'g_d_outs'\G;
*************************** 1. row ***************************
Name: g_d_outs
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 69008796
Avg_row_length: 71
Data_length: 4965007360
Max_data_length: 0
Index_length: 3560964096
Data_free: 5242880
Auto_increment: 138022949
Create_time: 2015-08-14 18:46:13
Update_time: NULL
Check_time: NULL
Collation: utf8_general_ci
Checksum: NULL
Create_options:
Comment: ??????????
1 row in set (0.01 sec)
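The same estimate is also exposed through information_schema, which can be more convenient to read from application code; for InnoDB this TABLE_ROWS value is an approximation, exactly like the Rows column of SHOW TABLE STATUS:

```sql
-- Fast approximate count (for InnoDB: an estimate, not exact).
SELECT table_rows
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name = 'g_d_outs';
```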
Summary:
1. The crudest index optimization is to add a covering index for the query. It works, but it is not the best approach.
2. Split large tables into small summary tables.
3. Practice matters: in many cases the MySQL optimizer does not arrive at the fastest execution plan on its own.
4. Avoid GROUP BY, ORDER BY, and large OFFSETs on big tables, unless you know how to optimize them.
5. Order your WHERE conditions to follow the indexes that already exist.