The MySQL optimizer's execution process (with annotated trace output)

I. Preface

      This article came about almost by accident. Everyone has heard of the optimizer; it is tempting to blame anything you cannot explain on "the MySQL optimizer" and call it a day, haha. But what is the optimizer really, and what does its execution process look like? I first saw someone enable optimizer tracing on a SQL statement in a blog post, and it looked impressive enough that I decided to dig into it myself.

      The SQL analyzed here took 50-odd seconds to finish on my machine, against roughly 100K rows; in other words, the SQL is of very poor quality. While we study the optimizer, we will also tune this SQL along the way.

Original article: https://www.jianshu.com/p/caf5818eca81

II. What is the MySQL optimizer

1. Explanation

      The optimizer's main job is to find the best execution plan for a SQL statement before it runs. That is, for every statement we execute, MySQL automatically picks what it believes is the best plan: following a fixed set of rules, it estimates the total number of rows each candidate index or access path would examine, compares the resulting costs, and chooses the cheapest plan.

The analysis is divided into a logical-optimization phase and a physical-optimization phase; see the annotated trace below for the details.
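The idea can be sketched with a toy cost model (the factors below are made up for illustration; MySQL's real formulas are more involved, but the principle of "estimate rows and cost per access path, keep the cheapest" is the same):

```python
# Toy cost model: the constants are invented, only the decision rule
# ("compare estimated costs, keep the minimum") mirrors the optimizer.

def full_scan_cost(table_rows: int) -> float:
    # A full scan reads every row once.
    return table_rows * 0.2  # made-up per-row cost factor

def index_lookup_cost(matching_rows: float) -> float:
    # An index lookup reads only the matching rows, plus lookup overhead.
    return matching_rows * 1.0 + 1.0

def choose_access_path(table_rows: int, matching_rows: float) -> str:
    paths = {
        "scan": full_scan_cost(table_rows),
        "ref": index_lookup_cost(matching_rows),
    }
    return min(paths, key=paths.get)

# With 82555 rows but only ~16 matches per lookup, the index wins;
# for a tiny 2-row table a full scan is cheaper than an index lookup.
print(choose_access_path(82555, 16.4))  # -> ref
print(choose_access_path(2, 100))       # -> scan
```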

2. optimizer_trace and its overhead

      Starting with MySQL 5.6, optimizer_trace can print out the optimizer's query plan tree. For a DBA digging into an execution plan and its cost estimates this is very useful, and the printed internals are fairly complete. It is off by default and can be toggled dynamically; since it costs around 20% in performance, it is recommended only as a temporary aid while analyzing a problem.

In other words, observing the optimizer directly costs some performance, which means we cannot casually leave this feature on in production. When testing SQL locally, it is fine.

Reference: https://blog.csdn.net/zhang123456456/article/details/73824710

3. Enabling the trace

1. set optimizer_trace='enabled=on';    -- enable the trace

2. set optimizer_trace_max_mem_size=1000000;    -- set the maximum trace buffer size

3. set end_markers_in_json=on;    -- add end-marker comments to the trace JSON

4. select * from information_schema.optimizer_trace\G    -- print the plan tree (\G needs no trailing semicolon)
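The trace column holds one JSON document per traced statement, so it can also be inspected programmatically. A minimal sketch in Python, assuming you have already fetched the trace text into a string with your MySQL client of choice (the embedded JSON below is a trimmed stand-in for the real output):

```python
import json

# A trimmed-down stand-in for the trace text fetched from
# information_schema.optimizer_trace (the real document is far larger).
trace_text = """
{
  "steps": [
    {"join_preparation": {"select#": 1, "steps": []}},
    {"join_optimization": {"select#": 1, "steps": []}},
    {"join_execution": {"select#": 1, "steps": []}}
  ]
}
"""

trace = json.loads(trace_text)
# Each top-level step is an object with a single phase name as its key.
phases = [name for step in trace["steps"] for name in step]
print(phases)  # -> ['join_preparation', 'join_optimization', 'join_execution']
```

Note that `end_markers_in_json=on` adds `/* ... */` markers, which are not valid JSON; strip them first if you parse a real trace this way.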

4. Some good blog posts on the optimizer

(1) optimizer trace: https://www.jianshu.com/p/caf5818eca81
(2) an overview of optimizer_trace (the summary quoted above): https://blog.csdn.net/zhang123456456/article/details/73824710
(3) a very detailed walkthrough: https://blog.csdn.net/melody_mr/article/details/48950601
(4) tracing and parsing optimizer trace: https://yq.aliyun.com/articles/41060

III. The annotated trace (the file is long, but bear with it and it will make sense)

      A word before the wall of text: focus on the comments to get a rough sense of how the optimizer works, in particular the parts where it computes rows and cost. Ctrl+F will help you jump around.

OK, here we go!

mysql> select trace from information_schema.optimizer_trace\G;
*************************** 1. row ****************************
trace:
{
"steps": [
{
"join_preparation": {     //the preparation phase
"select#": 1,
"steps": [
{                         //the query being analyzed
"expanded_query": "/* select#1 */ select `a`.`uin` AS `uin`,`a`.`game_centerid` AS `game_centerid`,`a`.`fbuid` AS `fbuid`,0 AS `app_version_c`,`a`.`reg_ip` AS `player_register_ip`,`b`.`last_login_ip` AS `player_last_login_ip`,`b`.`reg_time` AS `player_create_time`,`a`.`reg_time_stamp` AS `user_create_time`,`a`.`last_login_time` AS `user_last_login_time`,`a`.`fb_email` AS `fb_email`,`a`.`ticket` AS `ticket`,'' AS `androidid`,'' AS `imei`,'' AS `adid`,`a`.`last_login_time_stamp` AS `user_last_login_time_stamp`,`a`.`STATUS` AS `user_STATUS`,`a`.`country` AS `country`,`a`.`is_robot` AS `is_robot`,`a`.`platform` AS `platform`,`a`.`language` AS `language`,`a`.`model` AS `model`,`a`.`osver` AS `osver`,`a`.`session_key` AS `session_key`,`b`.`userid` AS `userid`,`b`.`serverid` AS `serverid`,`b`.`last_login_time` AS `player_last_login_time`,`b`.`STATUS` AS `player_STATUS`,`d`.`server_name` AS `server_name`,`a`.`email_state` AS `email_state` from ((`user` `a` left join `user_server` `b` on((`a`.`uin` = `b`.`uin`))) left join `server_list` `d` on((`d`.`serverid` = `b`.`serverid`))) where 1 order by `a`.`uin` desc limit 10"
},
{
"transformations_to_nested_joins":
{
"transformations":
[
"parenthesis_removal"
] ,
"expanded_query": "/* select#1 */ select `a`.`uin` AS `uin`,`a`.`game_centerid` AS `game_centerid`,`a`.`fbuid` AS `fbuid`,0 AS `app_version_c`,`a`.`reg_ip` AS `player_register_ip`,`b`.`last_login_ip` AS `player_last_login_ip`,`b`.`reg_time` AS `player_create_time`,`a`.`reg_time_stamp` AS `user_create_time`,`a`.`last_login_time` AS `user_last_login_time`,`a`.`fb_email` AS `fb_email`,`a`.`ticket` AS `ticket`,'' AS `androidid`,'' AS `imei`,'' AS `adid`,`a`.`last_login_time_stamp` AS `user_last_login_time_stamp`,`a`.`STATUS` AS `user_STATUS`,`a`.`country` AS `country`,`a`.`is_robot` AS `is_robot`,`a`.`platform` AS `platform`,`a`.`language` AS `language`,`a`.`model` AS `model`,`a`.`osver` AS `osver`,`a`.`session_key` AS `session_key`,`b`.`userid` AS `userid`,`b`.`serverid` AS `serverid`,`b`.`last_login_time` AS `player_last_login_time`,`b`.`STATUS` AS `player_STATUS`,`d`.`server_name` AS `server_name`,`a`.`email_state` AS `email_state` from `user` `a` left join `user_server` `b` on((`a`.`uin` = `b`.`uin`)) left join `server_list` `d` on((`d`.`serverid` = `b`.`serverid`)) where 1 order by `a`.`uin` desc limit 10"
}
}
]
}
},
{
"join_optimization": {      //the main optimization work, consisting of a logical phase and a physical phase
"select#": 1,
"steps": [
{
"condition_processing": {   //logical optimization
"condition": "WHERE",       //the WHERE clause comes first; this query has no real WHERE condition, so there is little to analyze here
"original_condition": "1",
"steps": [
{
"transformation": "equality_propagation",   //logical optimization: condition simplification, equality propagation
"resulting_condition": "1"
},
{
"transformation": "constant_propagation",   //logical optimization: condition simplification, constant propagation
"resulting_condition": "1"
},
{
"transformation": "trivial_condition_removal",  //logical optimization: condition simplification, trivial condition removal
"resulting_condition": null
}
]
}
},         //end of the WHERE-condition part of logical optimization
{
"substitute_generated_columns": {
}
},
{
"table_dependencies": [   //logical optimization: find the dependencies between tables (not a directly usable optimization). Three tables are joined here, so three are listed
{
"table": "`user` `a`",
"row_may_be_null": false,   //whether rows may be missing (my reading: because of the LEFT JOINs, left-table rows cannot be NULL while right-table rows can)
"map_bit": 0,               //roughly a position in the join order, starting from 0
"depends_on_map_bits": [
]
},
{
"table": "`user_server` `b`",
"row_may_be_null": true,
"map_bit": 1,
"depends_on_map_bits": [
0
]
},
{
"table": "`server_list` `d`",
"row_may_be_null": true,
"map_bit": 2,
"depends_on_map_bits": [
0,
1
]
}
]
},
{
"ref_optimizer_key_uses": [     //logical optimization: collect the candidate indexes
{
"table": "`user_server` `b`",
"field": "uin",           //the indexed column; likewise below
"equals": "`a`.`uin`",
"null_rejecting": false
},
{
"table": "`server_list` `d`",
"field": "serverid",
"equals": "`b`.`serverid`",
"null_rejecting": true
}
]
},
{
"rows_estimation": [    //logical optimization: estimate each table's row count, along with the cost of a full scan and of each index scan on the single table
{
"table": "`user` `a`",
"table_scan": { //logical optimization: the estimated cost of a full scan of this table
"rows": 5574,   //estimated rows
"cost": 97      //estimated cost; the larger the value, the more expensive the path
}
},
{
"table": "`user_server` `b`",
"table_scan": {
"rows": 82555,
"cost": 417
}
},
{
"table": "`server_list` `d`",
"table_scan": {
"rows": 2,
"cost": 1
}
}
]
},
{
"considered_execution_plans": [   //physical optimization: cost the multi-table join. This effectively orders the tables for the join; since we use LEFT JOINs, the order here is fixed
{
"plan_prefix": [
] ,
"table": "`user` `a`",      //one entry per table; since three tables are joined, the analyses of the two tables below look much the same
"best_access_path": {       //summary of the best access path
"considered_access_paths": [
{
"rows_to_scan": 5574,
"access_type": "scan",  //a full table scan
"resulting_rows": 5574,
"cost": 1211.8,
"chosen": true      //this access path is viable and chosen (note: the driving table is fully scanned and its index goes unused; this is where the trouble starts!)
}
]
} ,
"condition_filtering_pct": 100,
"rows_for_plan": 5574,
"cost_for_plan": 1211.8,
"rest_of_plan": [
{
"plan_prefix": [
"`user` `a`"
] ,
"table": "`user_server` `b`",   //the user_server table
"best_access_path": {
"considered_access_paths": [
{
"access_type": "ref",       //per the "index" field below, an index lookup is used
"index": "uin",
"rows": 16.416,
"cost": 109802,
"chosen": true            //chosen
},
{
"rows_to_scan": 82555,
"access_type": "scan",    //the full-scan alternative was not chosen
"using_join_cache": true,
"buffers_needed": 35,
"resulting_rows": 82555,
"cost": 9.2e7,
"chosen": false
}
]
} ,
"condition_filtering_pct": 100,   //my guess: the figures below are the rows and cost for the plan joining the first two tables
"rows_for_plan": 91502,
"cost_for_plan": 111014,
"rest_of_plan": [
{
"plan_prefix": [
"`user` `a`",
"`user_server` `b`"
] ,
"table": "`server_list` `d`",
"best_access_path": {
"considered_access_paths": [
{
"access_type": "eq_ref",
"index": "PRIMARY",   //the third table can simply use its primary key
"rows": 1,
"cost": 109802,
"chosen": true,
"cause": "clustered_pk_chosen_by_heuristics"
},
{
"rows_to_scan": 2,
"access_type": "scan",
"using_join_cache": true,
"buffers_needed": 619,
"resulting_rows": 2,
"cost": 37220,
"chosen": true
}
]
},
"condition_filtering_pct": 100,
"rows_for_plan": 183003,   //presumably the final row estimate for the whole plan (just my guess)
"cost_for_plan": 148234,
"chosen": true
}
]
}
]
}
]
},
{
"condition_on_constant_tables": "1",
"condition_value": true
},
{
"attaching_conditions_to_tables": {   //logical optimization: bind each condition to the table it applies to; here the only condition belongs to server_list
"original_condition": "1",
"attached_conditions_computation": [
{
"table": "`server_list` `d`",
"rechecking_index_usage": {
"recheck_reason": "not_first_table",   //re-check because this is not the first table
"range_analysis": {
"table_scan": {
"rows": 2,
"cost": 3.5
} ,
"potential_range_indexes": [    //logical optimization: list the candidate indexes, here those of server_list
{
"index": "PRIMARY",
"usable": true,       //whether the index is usable here
"key_parts": [
"serverid"
]
},
{
"index": "gameid",
"usable": false,
"cause": "not_applicable"
},
{
"index": "server_state",
"usable": false,
"cause": "not_applicable"
}
],
"setup_range_conditions": [
],
"group_index_range": {      //index ranges for grouping; false because this is not a single-table query (note: no index is used here either, which also hurts speed)
"chosen": false,
"cause": "not_single_table"
},
"analyzing_range_alternatives": {   //logical optimization: compute the cost of a range scan over each index (an equality comparison is a special case of a range scan)
"range_scan_alternatives": [
{
"index": "PRIMARY",
"chosen": false,
"cause": "depends_on_unread_values"  //depends on values not yet read
}
],
"analyzing_roworder_intersect": {  //analyze row-order intersection
"usable": false,
"cause": "too_few_roworder_scans"   //too few row-order scans
}
}
}
}
}
],
"attached_conditions_summary": [    //summary of the attached conditions; printed at the end with the final results above
{
"table": "`user` `a`",
"attached": null
},
{
"table": "`user_server` `b`",
"attached": null
},
{
"table": "`server_list` `d`",
"attached": "<if>(is_not_null_compl(d), (`d`.`serverid` = `b`.`serverid`), true)"   //only the server_list table has one
}
]
}
},
{
"clause_processing": {  //clause processing, typically trying to optimize DISTINCT / GROUP BY / ORDER BY and the like
"clause": "ORDER BY",
"original_clause": "`a`.`uin` desc",
"items": [
{
"item": "`a`.`uin`"
}
],
"resulting_clause_is_simple": true,
"resulting_clause": "`a`.`uin` desc"
}
},
{
"refine_plan": [    //refine the plan
{
"table": "`user` `a`"
},
{
"table": "`user_server` `b`"
},
{
"table": "`server_list` `d`"
}
]
}
]
}
},
{
"join_execution": {       //join execution
"select#": 1,
"steps": [
{
"creating_tmp_table": {       //a temporary table is created here; the temporary table and the filesort below correspond to "Using temporary; Using filesort" in the EXPLAIN Extra column
"tmp_table_info": {
"table": "intermediate_tmp_table",
"row_length": 1915,
"key_length": 0,
"unique_constraint": false,
"location": "memory (heap)",      //an in-memory table
"row_limit_estimate": 8760      //estimated row limit
}
}
},
{
"converting_tmp_table_to_ondisk": {
"cause": "memory_table_size_exceeded",
"tmp_table_info": {
"table": "intermediate_tmp_table",
"row_length": 1915,
"key_length": 0,
"unique_constraint": false,
"location": "disk (InnoDB)",      //an on-disk table
"record_format": "packed"
}
}
},
{
"filesort_information": [     //filesort details
{
"direction": "desc",
"table": "intermediate_tmp_table",
"field": "uin"            //the sort column
}
],
"filesort_priority_queue_optimization": {
"limit": 10,
"rows_estimate": 352,
"row_size": 14,
"memory_available": 262144,
"chosen": true
},
"filesort_execution": [
],
"filesort_summary": {     //filesort summary
"rows": 11,
"examined_rows": 82400,
"number_of_tmp_files": 0,
"sort_buffer_size": 248,
"sort_mode": "<sort_key, rowid>"
}
}
]
}
}
]
}
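The per-prefix numbers in the trace above can be sanity-checked by hand: rows_for_plan for a join prefix is roughly the previous prefix's row estimate multiplied by the rows fetched per lookup. A back-of-the-envelope check (approximate, since MySQL's own rounding differs slightly):

```python
# Row estimates taken from the trace above.
rows_user = 5574               # full scan of `user` a
rows_per_uin_lookup = 16.416   # ref lookups into `user_server` b, per row of a

# Prefix (a, b): previous rows times rows per lookup.
rows_for_plan_ab = rows_user * rows_per_uin_lookup
print(round(rows_for_plan_ab))  # -> 91503; the trace reports 91502

# Joining `server_list` d (2 rows, no usable filter at this point)
# roughly doubles the estimate.
print(round(rows_for_plan_ab * 2))  # -> 183006; the trace reports 183003
```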

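One entry near the end deserves a note: filesort_priority_queue_optimization with "limit": 10 means MySQL keeps only the current top 10 rows in a small priority queue instead of fully sorting all 82400 examined rows. The same idea in Python (heapq standing in for MySQL's priority queue; this illustrates the technique, it is not MySQL's implementation):

```python
import heapq
import random

def top_n_desc(keys, n):
    # Keep a min-heap of size n; any key not larger than the heap's
    # minimum can never make it into the final top n, so it is skipped.
    heap = []
    for key in keys:
        if len(heap) < n:
            heapq.heappush(heap, key)
        elif key > heap[0]:
            heapq.heapreplace(heap, key)
    return sorted(heap, reverse=True)

random.seed(42)
uins = [random.randrange(1_000_000) for _ in range(82_400)]  # rows examined in the trace

# Same result as a full descending sort, with only 10 keys held in memory.
assert top_n_desc(uins, 10) == sorted(uins, reverse=True)[:10]
```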
IV. Optimizing our SQL based on the optimizer analysis above

1. The original execution plan

+----+-------------+-------+------------+------+---------------+------+---------+----------------+------+----------+----------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref            | rows | filtered | Extra                                              |
+----+-------------+-------+------------+------+---------------+------+---------+----------------+------+----------+----------------------------------------------------+
|  1 | SIMPLE      | a     | NULL       | ALL  | NULL          | NULL | NULL    | NULL           | 5574 |   100.00 | Using temporary; Using filesort                    |
|  1 | SIMPLE      | b     | NULL       | ref  | uin           | uin  | 8       | evonymob.a.uin |   16 |   100.00 | NULL                                               |
|  1 | SIMPLE      | d     | NULL       | ALL  | PRIMARY       | NULL | NULL    | NULL           |    2 |   100.00 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+-------+------------+------+---------------+------+---------+----------------+------+----------+----------------------------------------------------+

      One look at this execution plan tells you the SQL is terrible. Oddly enough, the join columns are all indexed, yet it is still very slow.

2. What the optimizer analysis shows:

1. The driving table uses no index, leading to a full table scan.
2. Because the driving table uses no index, the subsequent ORDER BY cannot use one either, so the query falls back to a temporary table, which hurts speed.
3. The trace shows a filesort at the end, and the original sin behind the filesort is that the ORDER BY column is not resolved through an index.
Per this analysis, the root of the evil is the unused index on the driving table, so let's try forcing an index on it.
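For reference, MySQL's index hint goes immediately after the table reference it applies to. A sketch (the column list is abbreviated and the table and alias names are taken from the trace above; FORCE INDEX placement is standard MySQL syntax):

```python
# The hint sits right after the table name/alias of the driving table,
# before any join clauses. Column list abbreviated for readability.
sql = """
select a.uin, b.serverid, d.server_name  -- ...remaining columns as before
from user a force index (primary)
left join user_server b on a.uin = b.uin
left join server_list d on d.serverid = b.serverid
order by a.uin desc
limit 10
"""

# Quick structural check: the hint precedes the joins.
assert sql.index("force index") < sql.index("left join")
```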

3. The execution plan after forcing an index on the driving table

+----+-------------+-------+------------+-------+---------------+---------+---------+----------------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref            | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+---------------+---------+---------+----------------+------+----------+-------------+
|  1 | SIMPLE      | a     | NULL       | index | NULL          | PRIMARY | 8       | NULL           |    1 |   100.00 | NULL        |
|  1 | SIMPLE      | b     | NULL       | ref   | uin           | uin     | 8       | evonymob.a.uin |   16 |   100.00 | NULL        |
|  1 | SIMPLE      | d     | NULL       | ALL   | PRIMARY       | NULL    | NULL    | NULL           |    2 |   100.00 | Using where |
+----+-------------+-------+------------+-------+---------------+---------+---------+----------------+------+----------+-------------+

      As you can see, the same SQL now does far better than the plan above, scanning far fewer rows. A query that used to take 50-odd seconds now finishes in about 0.3 seconds. The world is quiet again.

end
