What does `explain` mean? Hive `explain` in detail

[TOC]

# Syntax

~~~
explain [extended | dependency | authorization] query
~~~

Adding `extended` makes the output more detailed.
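For instance, the variants can be tried against the query from Example 1 below (the exact extra output of `extended` varies with the Hive version):

~~~
-- Basic plan
explain select count(1) from dw.fact_ord_arranged where dt = '20160101';

-- More detail (e.g. input paths and extra operator properties)
explain extended select count(1) from dw.fact_ord_arranged where dt = '20160101';

-- List the tables/partitions the query reads
explain dependency select count(1) from dw.fact_ord_arranged where dt = '20160101';

-- Show which authorization checks the query requires
explain authorization select count(1) from dw.fact_ord_arranged where dt = '20160101';
~~~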

# Statement execution order

## MySQL statement execution order

The order the code is written in:

~~~
select ... from ... where ... group by ... having ... order by ...

-- or, in Hive:
from ... select ...
~~~

The order the code is executed in:

~~~
from ... where ... group by ... having ... select ... order by ...
~~~

## Hive statement execution order

The rough order:

~~~
from ... where ... group by ... having ... select ... order by ...

from ... on ... join ... where ... group by ... having ... select ... distinct ... order by ... limit
~~~
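One practical consequence of this order: a column alias defined in `select` is not visible to `where` (which runs earlier), but is visible to `order by` (which runs later). A minimal sketch, assuming a hypothetical table `t` with a numeric column `price`:

~~~
-- Fails: `where` runs before `select`, so the alias is unknown there
-- select price * 2 as double_price from t where double_price > 10;

-- Works: filter on the expression itself; the alias is used for output and ordering
select price * 2 as double_price
from t
where price * 2 > 10
order by double_price;
~~~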

# Viewing the execution plan with explain

## Example 1

~~~
select count(1) from dw.fact_ord_arranged where dt = '20160101'
~~~

~~~
Explain
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:                          --------------- Map phase
          TableScan
            alias: fact_ord_arranged              --------------- table being scanned
            Statistics: Num rows: 0 Data size: 1379094784 Basic stats: PARTIAL Column stats: COMPLETE
            Select Operator
              Statistics: Num rows: 0 Data size: 1379094784 Basic stats: PARTIAL Column stats: COMPLETE
              Group By Operator
                aggregations: count(1)            --------------- aggregate function
                mode: hash
                outputColumnNames: _col0          --------------- temporary column name
                Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
                Reduce Output Operator
                  sort order:
                  Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
                  value expressions: _col0 (type: bigint)
      Reduce Operator Tree:                       --------------- Reduce phase
        Group By Operator
          aggregations: count(VALUE._col0)
          mode: mergepartial
          outputColumnNames: _col0
          Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
          Select Operator
            expressions: _col0 (type: bigint)
            outputColumnNames: _col0
            Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
            File Output Operator
              compressed: false
              Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat  --------------- output file format
                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1                                   --------------- the job has no limit, so nothing to do here
~~~

## Example 2

~~~
explain
select city, ad_type, device, sum(cnt) as cnt
from tb_pmp_raw_log_basic_analysis
where day = '2016-05-28' and type = 0 and media = 'sohu' and (deal_id = '' or deal_id = '-' or deal_id is NULL)
group by city, ad_type, device
~~~

~~~
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: tb_pmp_raw_log_basic_analysis
            Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: (((deal_id = '') or (deal_id = '-')) or deal_id is null) (type: boolean)
              Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
              Select Operator
                expressions: city (type: string), ad_type (type: string), device (type: string), cnt (type: bigint)
                outputColumnNames: city, ad_type, device, cnt
                Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
                Group By Operator
                  aggregations: sum(cnt)
                  keys: city (type: string), ad_type (type: string), device (type: string)
                  mode: hash
                  outputColumnNames: _col0, _col1, _col2, _col3
                  Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
                  Reduce Output Operator
                    key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)
                    sort order: +++
                    Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string)
                    Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE
                    value expressions: _col3 (type: bigint)
      Reduce Operator Tree:
        Group By Operator
          aggregations: sum(VALUE._col0)
          keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string)
          mode: mergepartial
          outputColumnNames: _col0, _col1, _col2, _col3
          Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE
          Select Operator
            expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: bigint)
            outputColumnNames: _col0, _col1, _col2, _col3
            Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE
            File Output Operator
              compressed: false
              Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1
~~~

The details are as follows.

**Stage-1, Map phase**

TableScan: loads the table for `from`; the description includes row count, data size, etc.

Filter Operator: filters rows using the `where` condition; the description includes the concrete predicate as well as row count, size, etc.

Select Operator: selects the columns; the description includes column names and types, plus the output type and size.

Group By Operator: the grouping step; it describes the functions to compute after grouping, `keys` lists the grouping columns, and `outputColumnNames` gives the output column names (note that columns get fixed default aliases such as `_col0`), along with other information.

Reduce Output Operator: the map-side local reduce; it performs the local partial computation and then routes rows to the corresponding reducers by key.

**Stage-1, Reduce phase (Reduce Operator Tree)**

Group By Operator: the overall grouping and aggregation, i.e. merging the map-side results on the reduce side. The description is similar; `mode: mergepartial` means the partial map results are merged (the map side groups by hash).

Select Operator: the final column projection used for the output.

File Output Operator: writes the result to a temporary file; the description covers the compression setting and the output file format.

Stage-0 has nothing to do in this query; it is where an operation such as `limit 100` would be applied.

Summary

1. Each stage is an independent MR job. A complex HQL statement can produce multiple stages, and the plan description shows what the concrete steps are.
2. The row counts and sizes in the plan are estimates, not the result of an actual run, so they may be inaccurate.
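As an illustration of point 1, adding an `order by ... limit` to a grouped query typically adds a second MR stage for the global sort. A sketch reusing the table and columns from Example 2 (the exact stage layout depends on the Hive version and execution engine):

~~~
explain
select city, sum(cnt) as cnt
from tb_pmp_raw_log_basic_analysis
where day = '2016-05-28'
group by city
order by cnt desc
limit 100;

-- Expected shape of the plan (abridged):
--   Stage-1: MR job for the group by (partial + merge aggregation)
--   Stage-2: MR job for the global order by (single reducer)
--   Stage-0: Fetch Operator with limit: 100
~~~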

# The MR behind group by

It is usually best to write HQL as nested subqueries, so that data is loaded in stages and the volume shrinks step by step; this may cost extra time, so it has to be designed with care.

`group by` is itself a form of data filtering: it can cut the data volume substantially, and it is especially effective for deduplication. But the MR that `group by` generates is not always controllable, and it is not obvious at which stage it works best. In particular, the map-side local reduce goes a long way toward shrinking the data early.

Above all, Hadoop MR "suffers not from scarcity but from imbalance": data skew is the biggest bottleneck in MR computation. In Hive, you can use partitions, buckets, `distribute by`, and so on to control how data is assigned to reducers.

So, can the MR generated by `group by` be optimized? Compare the two pieces of code below.

Code 1

~~~
explain
select advertiser_id,crt_id,ad_place_id,channel,ad_type,rtb_type,media,count(1) as cnt
from (
    select
        split(all,'\\\\|~\\\\|')[41] as advertiser_id,
        split(all,'\\\\|~\\\\|')[11] as crt_id,
        split(all,'\\\\|~\\\\|')[8] as ad_place_id,
        split(all,'\\\\|~\\\\|')[34] as channel,
        split(all,'\\\\|~\\\\|')[42] as ad_type,
        split(all,'\\\\|~\\\\|')[43] as rtb_type,
        split(split(all,'\\\\|~\\\\|')[5],'/')[1] as media
    from tb_pmp_raw_log_bid_tmp tb
) a
group by advertiser_id,crt_id,ad_place_id,channel,ad_type,rtb_type,media;
~~~

Code 2

~~~
explain
select
    split(all,'\\\\|~\\\\|')[41] as advertiser_id,
    split(all,'\\\\|~\\\\|')[11] as crt_id,
    split(all,'\\\\|~\\\\|')[8] as ad_place_id,
    split(all,'\\\\|~\\\\|')[34] as channel,
    split(all,'\\\\|~\\\\|')[42] as ad_type,
    split(all,'\\\\|~\\\\|')[43] as rtb_type,
    split(split(all,'\\\\|~\\\\|')[5],'/')[1] as media
from tb_pmp_raw_log_bid_tmp tb
group by split(all,'\\\\|~\\\\|')[41],split(all,'\\\\|~\\\\|')[11],split(all,'\\\\|~\\\\|')[8],split(all,'\\\\|~\\\\|')[34],split(all,'\\\\|~\\\\|')[42],split(all,'\\\\|~\\\\|')[43],split(split(all,'\\\\|~\\\\|')[5],'/')[1]
~~~

Which is better: running the subquery first and then `group by`, or grouping directly? From my own tests, the first form is slightly better for small data, while the second may win for large data. As for what counts as large: anything below the TB level is small data here.

Comparing the two execution plans, the amount of data analyzed at each step is roughly the same. `group by` has to be used either way, and whether it runs inside or outside the subquery makes little practical difference.
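Related to the map-side local reduce and the skew concerns above, two commonly used Hive settings influence how `group by` is executed (a sketch; defaults differ across Hive versions):

~~~
-- Enable map-side partial aggregation (the "local reduce" on the map side)
set hive.map.aggr=true;

-- Split a skewed group by into two MR jobs: the first distributes keys
-- randomly across reducers for partial aggregation, the second merges
-- the partial results by key
set hive.groupby.skewindata=true;
~~~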
