Hive ANALYZE Command Explained

About the Hive ANALYZE command
1. Command usage:
Statistics on table and partition state:
ANALYZE TABLE tablename
[PARTITION(partcol1[=val1], partcol2[=val2], ...)]
COMPUTE STATISTICS [noscan];
Column statistics:
ANALYZE TABLE tablename
[PARTITION(partcol1[=val1], partcol2[=val2], ...)]
COMPUTE STATISTICS FOR COLUMNS (column_name1, column_name2, ...) [noscan];
If the table is partitioned, the partition must be specified in the command, otherwise an error is raised.
Column and table aliases are not supported.
2. Result of running analyze on a partitioned table:
Partition default.test{dt=a} stats: [num_files: 1, num_rows: 0, total_size: 41, raw_data_size: 0]
Table default.test stats: [num_partitions: 1, num_files: 1, num_rows: 0, total_size: 41, raw_data_size: 0]
3. Source-code analysis of the execution
Execution steps of the command:
Once the command has been converted into an AST, control enters the ColumnStatsSemanticAnalyzer class.
1) Check the command type (e.g. noscan, partial scan).
2) Rewrite the query. For example, given:
analyze table pokes compute statistics for columns foo,bar;
the analyzer walks the AST to obtain the table name; the column names, count, and types; and the partition names and count, then generates a new query:
select compute_stats(foo , 16 ) , compute_stats(bar , 16 ) from pokes
3) Generate a new AST for the rewritten query.
4) Return to the Driver to complete semantic analysis and generate the query plan. Semantic analysis runs on the new AST together with the original ctx.
5) Generate a column-statistics task to replace the fetch task; this task writes the statistics to the metastore. The column-statistics task is produced by the genColumnsStatsTask method of the MapReduceCompiler class, and each task has a corresponding work.

For the query in 2), the generated rootTasks are as follows (figure omitted):

The MapRedTask in that figure performs one ReduceSink (RS) for the aggregation.
6) Generate the plan. For the query in 2), the following plan is produced:
{"queryId":"zhangyun_20140403102424_07e3332f-12b9-4c54-b30f-f5fc912bb032","queryType":null,"queryAttributes":{"queryString":"analyze table pokes compute statistics for columns foo,bar"},"queryCounters":"null","stageGraph":{"nodeType":"STAGE","roots":"null","adjacencyList":"]"},"stageList":[{"stageId":"Stage-0","stageType":"MAPRED","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-0_MAP","taskType":"MAP","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"TS_0","children":["SEL_1"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_1","children":["GBY_2"],"adjacencyType":"CONJUNCTIVE"},{"node":"GBY_2","children":["RS_3"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"TS_0","operatorType":"TABLESCAN","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_1","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"GBY_2","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"RS_3","operatorType":"REDUCESINK","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"},{"taskId":"Stage-0_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{"node":"GBY_4","children":["SEL_5"],"adjacencyType":"CONJUNCTIVE"},{"node":"SEL_5","children":["FS_6"],"adjacencyType":"CONJUNCTIVE"}]},"operatorList":[{"operatorId":"GBY_4","operatorType":"GROUPBY","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"SEL_5","operatorType":"SELECT","operatorAttributes":"null","operatorCounters":"null","done":"false","started":"false"},{"operatorId":"FS_6","operatorType":"FILESINK","operatorAttributes":"
null","operatorCounters":"null","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"false"},{"stageId":"Stage-1","stageType":"COLUMNSTATS","stageAttributes":"null","stageCounters":"}","taskList":[{"taskId":"Stage-1_OTHER","taskType":"OTHER","taskAttributes":"null","taskCounters":"null","operatorGraph":"null","operatorList":"]","done":"false","started":"false"}],"done":"false","started":"false"}],"done":"false","started":"false"}
7) Output schema of the column statistics
Schema(
fieldSchemas:[FieldSchema(name:_c0,type:struct<columntype:string,min:bigint,max:bigint,countnulls:bigint,numdistinctvalues:bigint>,comment:null),
FieldSchema(name:_c1,type:struct<columntype:string,maxlength:bigint,avglength:double,countnulls:bigint,numdistinctvalues:bigint>, comment:null)],
properties:null)
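The query rewrite in step 2) can be sketched as a plain string transformation. This is a hypothetical illustration only: Hive actually performs the rewrite on the AST inside ColumnStatsSemanticAnalyzer, and the class and method names below are made up for this sketch.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the step-2 rewrite: ANALYZE ... FOR COLUMNS becomes a
// SELECT over compute_stats(). The 16 is the number of bit vectors passed to
// the NDV (number-of-distinct-values) estimator.
class AnalyzeRewriteSketch {
    static String rewrite(String table, List<String> columns, int numBitVectors) {
        String calls = columns.stream()
                .map(c -> "compute_stats(" + c + " , " + numBitVectors + " )")
                .collect(Collectors.joining(" , "));
        return "select " + calls + " from " + table;
    }

    public static void main(String[] args) {
        // Mirrors the example above: analyze table pokes ... for columns foo,bar
        System.out.println(rewrite("pokes", List.of("foo", "bar"), 16));
    }
}
```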
Some summarized notes follow:
1: Column statistics
Statistics such as the number of columns in a table, or histograms of the data in specific columns, can be implemented in several ways. As one approach to query optimization, the statistics are fed into the optimizer's cost functions; the optimizer then compares alternative plans and picks the better one.
Sometimes the statistics can answer a user's query by themselves, letting the user get a result quickly (only the stored statistics need to be read, with no long-running execution plan).
Note: the above comes from the Hive wiki, but feeding statistics into the optimizer has not been implemented yet.
2: Scope
Hive currently supports table- and partition-level statistics, not statistics on the data within columns. Table and partition statistics alone are not enough for the cost-model-based plan selection described in 1.
Statistics support starts with tables and partitions, and these statistics are stored in the MetaStore.
For a partition, for example:
• Number of Rows
• Number of Files
• Size in Bytes
For a table, this additionally includes the Number of Partitions.
On top of the partition-level statistics, column-level Top-K statistics (the most frequent K values) can be implemented.

3: Implementation
For newly created tables: when a job creates a table through a MapReduce job, each mapper collects statistics on the rows as it copies them; at the end of the job the partial statistics are aggregated and stored in the MetaStore.
For existing tables, statistics are collected during the table-scan operation and stored with the results.
A database is needed to hold the temporary statistics: MySQL or HBase.
There are two interfaces:
IStatsPublisher
IStatsAggregator
4. The interfaces in detail:
public interface IStatsPublisher {
    public boolean init(Configuration hconf);

    public boolean publishStat(String rowID, String key, String value);

    public boolean terminate();
}

public interface IStatsAggregator {
    public boolean init(Configuration hconf);

    public String aggregateStats(String rowID, String key);

    public boolean terminate();
}
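A minimal in-memory stand-in can illustrate how the two interfaces cooperate: each task publishes partial values, and a client later aggregates them. The class below is my own simplified sketch; the real implementations persist to Derby/MySQL over JDBC or to HBase, and init/terminate are omitted here.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified in-memory stand-in for the IStatsPublisher / IStatsAggregator pair
// (hypothetical; real Hive implementations are JDBC- or HBase-backed).
class InMemoryStats {
    private final Map<String, Long> store = new HashMap<>();

    // publishStat: a map/reduce task publishes a partial numeric value
    // for (rowID, key); repeated publishes for the same pair are summed.
    boolean publishStat(String rowID, String key, String value) {
        store.merge(rowID + "/" + key, Long.parseLong(value), Long::sum);
        return true;
    }

    // aggregateStats: return the total published for (rowID, key) as a string,
    // as the interface's String return type requires.
    String aggregateStats(String rowID, String key) {
        return String.valueOf(store.getOrDefault(rowID + "/" + key, 0L));
    }

    public static void main(String[] args) {
        InMemoryStats s = new InMemoryStats();
        s.publishStat("part=dt_a", "numRows", "100");
        s.publishStat("part=dt_a", "numRows", "50");
        System.out.println(s.aggregateStats("part=dt_a", "numRows"));
    }
}
```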
Scenarios:
1: Newly created tables
For tables and partitions produced via INSERT OVERWRITE, statistics are computed automatically; this can be switched off through configuration:
set hive.stats.autogather=false;
disables statistics gathering.
Users can supply their own statistics implementation to choose where the temporary statistics are stored:
hive.stats.dbclass=hbase
stores them in HBase.
The default is jdbc:derby.
For JDBC-based temporary storage (Derby or MySQL), the user can specify the connection variables:
set hive.stats.dbclass=jdbc:derby;
set hive.stats.dbconnectionstring="jdbc:derby:;databaseName=TempStatsStore;create=true";
set hive.stats.jdbcdriver="org.apache.derby.jdbc.EmbeddedDriver";
For some queries the statistics may not be collected accurately.
hive.stats.reliable can be set so that a query fails when statistics cannot be collected reliably; it defaults to false.
2: Existing tables
For tables that already exist, run
ANALYZE to collect statistics and write them to the MetaStore:
ANALYZE TABLE tablename [PARTITION(partcol1[=val1], partcol2[=val2], ...)] COMPUTE STATISTICS [noscan];
If the user runs this command without specifying any partition, statistics are collected for the table and all of its partitions; if a partition is specified, only that partition's statistics are collected.
When statistics span partitions, the partition columns still need to be specified.
If the noscan option is given, the command does not scan the files, so it is faster, but the statistics are then limited to:
• Number of files
• Physical size in bytes

Example:
Assume a table with four partitions:
• Partition1: (ds='2013-03-08', hr=11)
• Partition2: (ds='2013-03-08', hr=12)
• Partition3: (ds='2013-03-09', hr=11)
• Partition4: (ds='2013-03-09', hr=12)

Running
ANALYZE TABLE Table1 PARTITION(ds='2013-03-09', hr=11) COMPUTE STATISTICS;
gathers statistics for Partition3 only.
Running
ANALYZE TABLE Table1 PARTITION(ds='2013-03-09', hr) COMPUTE STATISTICS;
gathers statistics for Partition3 and Partition4.

Running
ANALYZE TABLE Table1 PARTITION(ds, hr) COMPUTE STATISTICS;
gathers statistics for all partitions.

For a non-partitioned table, use:
ANALYZE TABLE Table1 COMPUTE STATISTICS;

To view the stored statistics:
DESCRIBE EXTENDED TABLE1;
The output contains something like:
... , parameters:{numPartitions=4, numFiles=16, numRows=2000, totalSize=16384, ...}, ....
For a single partition:
DESCRIBE EXTENDED TABLE1 PARTITION(ds='2013-03-09', hr=11);
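If you need the numbers from that DESCRIBE EXTENDED output programmatically, the parameters:{...} section can be pulled apart with a small helper. This parser is a hypothetical convenience, not part of Hive:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: extract the key=value pairs from the parameters:{...}
// section of DESCRIBE EXTENDED output. Numeric stats such as numRows and
// totalSize live in this map.
class TableParams {
    static Map<String, String> parse(String describeOutput) {
        Map<String, String> params = new HashMap<>();
        Matcher block = Pattern.compile("parameters:\\{([^}]*)\\}").matcher(describeOutput);
        if (block.find()) {
            for (String kv : block.group(1).split(",\\s*")) {
                String[] parts = kv.split("=", 2);
                if (parts.length == 2) {
                    params.put(parts[0].trim(), parts[1].trim());
                }
            }
        }
        return params;
    }
}
```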
 
