Hive Optimization Guide, Scenario 7: Data Skew

 

References:

1. The execution flow of hive.map.aggr and hive.groupby.skewindata

https://blog.csdn.net/chybin500/article/details/80988089

2. A very solid end-to-end treatment of Hive skew (a classic), which borrows some ideas from Spark skew optimization

"hive数据倾斜优化"

https://blog.csdn.net/jin6872115/article/details/79878391

3. Also decent, but does not cover optimizing large table JOIN large table

"Hive学习之路 (十九) Hive的数据倾斜"

https://www.cnblogs.com/qingyunzong/p/8847597.html

 

 

==================================

 

 

 

 

The outline for this Hive optimization series is here:

https://blog.csdn.net/u010003835/article/details/105334641

 

Test data

 

Scenario 7: Data Skew

Data skew under GROUP BY

 

 

Data skew under JOIN

 

1) Skew caused by NULL values

2) Skew caused by inconsistent data types and the implicit conversion they trigger

3) Skew caused by unevenly distributed business data, which breaks down into the four sub-scenarios below:

i. Large table JOIN small table (Map JOIN)

ii. Large table JOIN large table: one table evenly distributed, the other skewed on a few specific keys

iii. Large table JOIN large table: one table evenly distributed, the other skewed on a large number of keys

iv. Large table JOIN large table: bucketed tables, splitting the table

 

 

================================================================

                                Now, on to the main content

 

 

Test data

use data_warehouse_test;
 
 
CREATE TABLE IF NOT EXISTS datacube_salary_org (
 company_name STRING COMMENT '公司名称'
 ,dep_name STRING COMMENT '部门名称'
 ,user_id BIGINT COMMENT '用户id'
 ,user_name STRING COMMENT '用户姓名'
 ,salary DECIMAL(10,2) COMMENT '薪水'
 ,create_time DATE COMMENT '创建时间'
 ,update_time DATE COMMENT '修改时间'
) 
PARTITIONED BY(
 pt STRING COMMENT '数据分区'
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
;
 
 
CREATE TABLE IF NOT EXISTS datacube_salary_basic_aggr(
 company_name STRING COMMENT '公司名称'
 ,dep_name STRING COMMENT '部门名称'
 ,user_id BIGINT COMMENT '用户id'
 ,salary DECIMAL(10,2) COMMENT '薪水'
)
STORED AS ORC
;
 
 
CREATE TABLE IF NOT EXISTS datacube_salary_dep_aggr(
 company_name STRING COMMENT '公司名称'
 ,dep_name STRING COMMENT '部门名称'
 ,total_salary DECIMAL(10,2) COMMENT '薪水'
)
STORED AS ORC
;
 
 
CREATE TABLE IF NOT EXISTS datacube_salary_company_aggr(
 company_name STRING COMMENT '公司名称'
 ,total_salary DECIMAL(10,2) COMMENT '薪水'
)
STORED AS ORC
;
 
 
CREATE TABLE IF NOT EXISTS datacube_salary_total_aggr(
 total_salary DECIMAL(10,2) COMMENT '薪水'
)
STORED AS ORC
;

 

Data in the tables we use:

Data in datacube_salary_org:

+-----------------------------------+-------------------------------+------------------------------+--------------------------------+-----------------------------+----------------------------------+----------------------------------+-------------------------+
| datacube_salary_org.company_name  | datacube_salary_org.dep_name  | datacube_salary_org.user_id  | datacube_salary_org.user_name  | datacube_salary_org.salary  | datacube_salary_org.create_time  | datacube_salary_org.update_time  | datacube_salary_org.pt  |
+-----------------------------------+-------------------------------+------------------------------+--------------------------------+-----------------------------+----------------------------------+----------------------------------+-------------------------+
| s.zh                              | engineer                      | 1                            | szh                            | 28000.00                    | 2020-04-07                       | 2020-04-07                       | 20200405                |
| s.zh                              | engineer                      | 2                            | zyq                            | 26000.00                    | 2020-04-03                       | 2020-04-03                       | 20200405                |
| s.zh                              | tester                        | 3                            | gkm                            | 20000.00                    | 2020-04-07                       | 2020-04-07                       | 20200405                |
| x.qx                              | finance                       | 4                            | pip                            | 13400.00                    | 2020-04-07                       | 2020-04-07                       | 20200405                |
| x.qx                              | finance                       | 5                            | kip                            | 24500.00                    | 2020-04-07                       | 2020-04-07                       | 20200405                |
| x.qx                              | finance                       | 6                            | zxxc                           | 13000.00                    | 2020-04-07                       | 2020-04-07                       | 20200405                |
| x.qx                              | kiccp                         | 7                            | xsz                            | 8600.00                     | 2020-04-07                       | 2020-04-07                       | 20200405                |
| s.zh                              | engineer                      | 1                            | szh                            | 28000.00                    | 2020-04-07                       | 2020-04-07                       | 20200406                |
| s.zh                              | engineer                      | 2                            | zyq                            | 26000.00                    | 2020-04-03                       | 2020-04-03                       | 20200406                |
| s.zh                              | tester                        | 3                            | gkm                            | 20000.00                    | 2020-04-07                       | 2020-04-07                       | 20200406                |
| x.qx                              | finance                       | 4                            | pip                            | 13400.00                    | 2020-04-07                       | 2020-04-07                       | 20200406                |
| x.qx                              | finance                       | 5                            | kip                            | 24500.00                    | 2020-04-07                       | 2020-04-07                       | 20200406                |
| x.qx                              | finance                       | 6                            | zxxc                           | 13000.00                    | 2020-04-07                       | 2020-04-07                       | 20200406                |
| x.qx                              | kiccp                         | 7                            | xsz                            | 8600.00                     | 2020-04-07                       | 2020-04-07                       | 20200406                |
| s.zh                              | enginer                       | 1                            | szh                            | 28000.00                    | 2020-04-07                       | 2020-04-07                       | 20200407                |
| s.zh                              | enginer                       | 2                            | zyq                            | 26000.00                    | 2020-04-03                       | 2020-04-03                       | 20200407                |
| s.zh                              | tester                        | 3                            | gkm                            | 20000.00                    | 2020-04-07                       | 2020-04-07                       | 20200407                |
| x.qx                              | finance                       | 4                            | pip                            | 13400.00                    | 2020-04-07                       | 2020-04-07                       | 20200407                |
| x.qx                              | finance                       | 5                            | kip                            | 24500.00                    | 2020-04-07                       | 2020-04-07                       | 20200407                |
| x.qx                              | finance                       | 6                            | zxxc                           | 13000.00                    | 2020-04-07                       | 2020-04-07                       | 20200407                |
| x.qx                              | kiccp                         | 7                            | xsz                            | 8600.00                     | 2020-04-07                       | 2020-04-07                       | 20200407                |
+-----------------------------------+-------------------------------+------------------------------+--------------------------------+-----------------------------+----------------------------------+----------------------------------+-------------------------+
21 rows selected (0.153 seconds)

Data in datacube_salary_dep_aggr:

+----------------------------------------+------------------------------------+----------------------------------------+
| datacube_salary_dep_aggr.company_name  | datacube_salary_dep_aggr.dep_name  | datacube_salary_dep_aggr.total_salary  |
+----------------------------------------+------------------------------------+----------------------------------------+
| s.zh                                   | enginer                            | 54000.00                               |
| s.zh                                   | tester                             | 20000.00                               |
| x.qx                                   | finance                            | 50900.00                               |
| x.qx                                   | kiccp                              | 8600.00                                |
+----------------------------------------+------------------------------------+----------------------------------------+
4 rows selected (0.262 seconds)

 

 

 

Data skew under GROUP BY

 

The GROUP BY case is actually fairly simple; setting the following two parameters in Hive is usually enough:

set hive.map.aggr=true;
set hive.groupby.skewindata=true;

Let's look at why these two parameters mitigate GROUP BY data skew.

set hive.map.aggr=true; (default: true)   The first parameter enables map-side partial aggregation. Because less data is shuffled to the reducers, the job runs faster and the skew is relieved.
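
A few related knobs control when map-side aggregation kicks in and when Hive abandons it. The values below are the commonly documented defaults, shown only for orientation; verify them on your Hive version:

set hive.groupby.mapaggr.checkinterval=100000;  -- rows processed before checking whether the in-map hash table is actually reducing the data
set hive.map.aggr.hash.min.reduction=0.5;       -- give up on map-side aggregation if the hash table holds more than this fraction of the input rows
set hive.map.aggr.hash.percentmemory=0.5;       -- fraction of map task memory the aggregation hash table may use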

 

 

The parameter that really matters, though, is set hive.groupby.skewindata=true;

What does it do? Normally a GROUP BY produces a single MR job. With this parameter set to true, that single job is split into two.

The flow is as follows:

 

JOB 1: the first job pre-aggregates the data and distributes it to the reducers at random.

      Map phase: each map reads its split evenly and partially aggregates the rows in that split by the GROUP BY key, which is effectively a Combiner.
      Shuffle phase: the map output is partitioned randomly, so it is spread evenly across the reducers.
      Reduce phase: each reducer groups by the GROUP BY key and aggregates, giving another round of partial aggregation.


JOB 2: the second job reads the output of the first job as its map input, partially aggregates again, partitions by the GROUP BY key, and sends the data to the reducers for the final aggregation.

      Map phase: because the first job partitioned randomly, the keys arriving at the second job's maps are also random, so there is no skew at this point; the map performs one more partial aggregation.
      Shuffle phase: the second job partitions on the GROUP BY key, which guarantees correctness: identical keys end up in the same reducer.
      Reduce phase: after the earlier rounds of partial aggregation the data volume is already much smaller, and the reducers produce the final, global aggregation.
 

 

 

In practice

Let's compare the execution plans with and without the two settings enabled.

SQL

SELECT 
 pt
 ,COUNT(1)
FROM datacube_salary_org
GROUP BY pt
;

Before enabling:

INFO  : Starting task [Stage-2:EXPLAIN] in serial mode
INFO  : Completed executing command(queryId=hive_20200415090112_c91e5b83-756d-4c96-99ad-4f5a89b8099f); Time taken: 0.006 seconds
INFO  : OK
+----------------------------------------------------+
|                      Explain                       |
+----------------------------------------------------+
| STAGE DEPENDENCIES:                                |
|   Stage-1 is a root stage                          |
|   Stage-0 depends on stages: Stage-1               |
|                                                    |
| STAGE PLANS:                                       |
|   Stage: Stage-1                                   |
|     Map Reduce                                     |
|       Map Operator Tree:                           |
|           TableScan                                |
|             alias: datacube_salary_org             |
|             Statistics: Num rows: 7 Data size: 1628 Basic stats: COMPLETE Column stats: COMPLETE |
|             Select Operator                        |
|               expressions: pt (type: string)       |
|               outputColumnNames: pt                |
|               Statistics: Num rows: 7 Data size: 1628 Basic stats: COMPLETE Column stats: COMPLETE |
|               Group By Operator                    |
|                 aggregations: count(1)             |
|                 keys: pt (type: string)            |
|                 mode: hash                         |
|                 outputColumnNames: _col0, _col1    |
|                 Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|                 Reduce Output Operator             |
|                   key expressions: _col0 (type: string) |
|                   sort order: +                    |
|                   Map-reduce partition columns: _col0 (type: string) |
|                   Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|                   value expressions: _col1 (type: bigint) |
|       Reduce Operator Tree:                        |
|         Group By Operator                          |
|           aggregations: count(VALUE._col0)         |
|           keys: KEY._col0 (type: string)           |
|           mode: mergepartial                       |
|           outputColumnNames: _col0, _col1          |
|           Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|           File Output Operator                     |
|             compressed: false                      |
|             Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|             table:                                 |
|                 input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
|                 output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
|                 serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
|                                                    |
|   Stage: Stage-0                                   |
|     Fetch Operator                                 |
|       limit: -1                                    |
|       Processor Tree:                              |
|         ListSink                                   |
|                                                    |
+----------------------------------------------------+
48 rows selected (0.115 seconds)

 

After enabling:

INFO  : Starting task [Stage-3:EXPLAIN] in serial mode
INFO  : Completed executing command(queryId=hive_20200415090242_0e4480dd-06dd-4dd0-bc28-7e3136820b5c); Time taken: 0.003 seconds
INFO  : OK
+----------------------------------------------------+
|                      Explain                       |
+----------------------------------------------------+
| STAGE DEPENDENCIES:                                |
|   Stage-1 is a root stage                          |
|   Stage-2 depends on stages: Stage-1               |
|   Stage-0 depends on stages: Stage-2               |
|                                                    |
| STAGE PLANS:                                       |
|   Stage: Stage-1                                   |
|     Map Reduce                                     |
|       Map Operator Tree:                           |
|           TableScan                                |
|             alias: datacube_salary_org             |
|             Statistics: Num rows: 7 Data size: 1628 Basic stats: COMPLETE Column stats: COMPLETE |
|             Select Operator                        |
|               expressions: pt (type: string)       |
|               outputColumnNames: pt                |
|               Statistics: Num rows: 7 Data size: 1628 Basic stats: COMPLETE Column stats: COMPLETE |
|               Group By Operator                    |
|                 aggregations: count(1)             |
|                 keys: pt (type: string)            |
|                 mode: hash                         |
|                 outputColumnNames: _col0, _col1    |
|                 Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|                 Reduce Output Operator             |
|                   key expressions: _col0 (type: string) |
|                   sort order: +                    |
|                   Map-reduce partition columns: rand() (type: double) |
|                   Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|                   value expressions: _col1 (type: bigint) |
|       Reduce Operator Tree:                        |
|         Group By Operator                          |
|           aggregations: count(VALUE._col0)         |
|           keys: KEY._col0 (type: string)           |
|           mode: partials                           |
|           outputColumnNames: _col0, _col1          |
|           Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|           File Output Operator                     |
|             compressed: false                      |
|             table:                                 |
|                 input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
|                 output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
|                 serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe |
|                                                    |
|   Stage: Stage-2                                   |
|     Map Reduce                                     |
|       Map Operator Tree:                           |
|           TableScan                                |
|             Reduce Output Operator                 |
|               key expressions: _col0 (type: string) |
|               sort order: +                        |
|               Map-reduce partition columns: _col0 (type: string) |
|               Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|               value expressions: _col1 (type: bigint) |
|       Reduce Operator Tree:                        |
|         Group By Operator                          |
|           aggregations: count(VALUE._col0)         |
|           keys: KEY._col0 (type: string)           |
|           mode: final                              |
|           outputColumnNames: _col0, _col1          |
|           Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|           File Output Operator                     |
|             compressed: false                      |
|             Statistics: Num rows: 3 Data size: 576 Basic stats: COMPLETE Column stats: COMPLETE |
|             table:                                 |
|                 input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
|                 output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
|                 serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
|                                                    |
|   Stage: Stage-0                                   |
|     Fetch Operator                                 |
|       limit: -1                                    |
|       Processor Tree:                              |
|         ListSink                                   |
|                                                    |
+----------------------------------------------------+
73 rows selected (0.122 seconds)

You can clearly see that with the optimization enabled, an extra MR job (Stage-2) has been added.

 

 

 

 

 

 

 

 

Data skew under JOIN

   JOIN is the more common case. Under a JOIN, skew typically shows up in the following situations.

1) Skew caused by NULL values

2) Skew caused by inconsistent data types and the implicit conversion they trigger

3) Skew caused by unevenly distributed business data, with four sub-scenarios:

i. Large table JOIN small table (Map JOIN)

ii. Large table JOIN large table: one table evenly distributed, the other skewed on a few specific keys

iii. Large table JOIN large table: one table evenly distributed, the other skewed on a large number of keys

iv. Large table JOIN large table: bucketed tables, splitting the table

 

 

1) Skew caused by NULL values

First, let's build some test data:

CREATE TABLE IF NOT EXISTS skew_null_action_info(
 user_id BIGINT
 ,user_action STRING
);

INSERT INTO TABLE skew_null_action_info
VALUES 
(NULL, 'cc')
,(NULL, 'pp')
,(NULL, 'kk')
,(NULL, 'kk')
,(NULL, 'kk')
,(NULL, 'kk')
,(NULL, 'kk')
,(1, 'cc')
,(2, 'zz')
,(3, 'pp')
;

CREATE TABLE IF NOT EXISTS skew_user_info(
 user_id BIGINT
 ,user_name STRING
);

INSERT INTO TABLE skew_user_info
VALUES
(1, 'puzheli')
,(2, 'sjz')
,(3, 'xxx')
;

The SQL that triggers the problem:

SELECT
 a.*
 ,b.user_name 
FROM skew_null_action_info AS a
LEFT JOIN skew_user_info AS b
 ON a.user_id = b.user_id
;

 

Result:

+------------+----------------+--------------+
| a.user_id  | a.user_action  | b.user_name  |
+------------+----------------+--------------+
| NULL       | kk             | NULL         |
| NULL       | kk             | NULL         |
| NULL       | kk             | NULL         |
| NULL       | kk             | NULL         |
| NULL       | kk             | NULL         |
| NULL       | pp             | NULL         |
| NULL       | cc             | NULL         |
| 1          | cc             | puzheli      |
| 2          | zz             | sjz          |
| 3          | pp             | xxx          |
+------------+----------------+--------------+

 

Why the problem occurs:

   All rows with a NULL join key are sent to the same reducer.

   Since NULL keys never match anything, they do not affect the join result, so there are two ways to fix the skew:



Method 1: exclude NULL keys from the join, then UNION ALL the NULL rows back in.

Method 2: scatter the NULL keys with a CASE WHEN ... ELSE ... END expression in the join condition (preferred).

 

Method 1: exclude NULL keys from the join, then UNION ALL the NULL rows back in.

SQL

SELECT
 a.*
 ,b.user_name 
FROM skew_null_action_info AS a
JOIN skew_user_info AS b
 ON a.user_id = b.user_id
 AND a.user_id IS NOT NULL
UNION ALL
SELECT
 c.*
 ,NULL AS user_name 
FROM skew_null_action_info AS c
WHERE c.user_id IS NULL
;

Result:

+--------------+------------------+----------------+
| _u1.user_id  | _u1.user_action  | _u1.user_name  |
+--------------+------------------+----------------+
| 1            | cc               | puzheli        |
| 2            | zz               | sjz            |
| 3            | pp               | xxx            |
| NULL         | cc               | NULL           |
| NULL         | pp               | NULL           |
| NULL         | kk               | NULL           |
| NULL         | kk               | NULL           |
| NULL         | kk               | NULL           |
| NULL         | kk               | NULL           |
| NULL         | kk               | NULL           |
+--------------+------------------+----------------+

Execution plan:

INFO  : Starting task [Stage-5:EXPLAIN] in serial mode
INFO  : Completed executing command(queryId=hive_20200415103139_35cf476f-3473-4ef5-ae08-84e0b85ed18a); Time taken: 0.006 seconds
INFO  : OK
+----------------------------------------------------+
|                      Explain                       |
+----------------------------------------------------+
| STAGE DEPENDENCIES:                                |
|   Stage-1 is a root stage                          |
|   Stage-2 depends on stages: Stage-1               |
|   Stage-0 depends on stages: Stage-2               |
|                                                    |
| STAGE PLANS:                                       |
|   Stage: Stage-1                                   |
|     Map Reduce                                     |
|       Map Operator Tree:                           |
|           TableScan                                |
|             alias: a                               |
|             Statistics: Num rows: 10 Data size: 47 Basic stats: COMPLETE Column stats: NONE |
|             Filter Operator                        |
|               predicate: user_id is not null (type: boolean) |
|               Statistics: Num rows: 10 Data size: 47 Basic stats: COMPLETE Column stats: NONE |
|               Reduce Output Operator               |
|                 key expressions: user_id (type: bigint) |
|                 sort order: +                      |
|                 Map-reduce partition columns: user_id (type: bigint) |
|                 Statistics: Num rows: 10 Data size: 47 Basic stats: COMPLETE Column stats: NONE |
|                 value expressions: user_action (type: string) |
|           TableScan                                |
|             alias: b                               |
|             Statistics: Num rows: 3 Data size: 19 Basic stats: COMPLETE Column stats: NONE |
|             Filter Operator                        |
|               predicate: user_id is not null (type: boolean) |
|               Statistics: Num rows: 3 Data size: 19 Basic stats: COMPLETE Column stats: NONE |
|               Reduce Output Operator               |
|                 key expressions: user_id (type: bigint) |
|                 sort order: +                      |
|                 Map-reduce partition columns: user_id (type: bigint) |
|                 Statistics: Num rows: 3 Data size: 19 Basic stats: COMPLETE Column stats: NONE |
|                 value expressions: user_name (type: string) |
|       Reduce Operator Tree:                        |
|         Join Operator                              |
|           condition map:                           |
|                Inner Join 0 to 1                   |
|           keys:                                    |
|             0 user_id (type: bigint)               |
|             1 user_id (type: bigint)               |
|           outputColumnNames: _col0, _col1, _col6   |
|           Statistics: Num rows: 11 Data size: 51 Basic stats: COMPLETE Column stats: NONE |
|           Select Operator                          |
|             expressions: _col0 (type: bigint), _col1 (type: string), _col6 (type: string) |
|             outputColumnNames: _col0, _col1, _col2 |
|             Statistics: Num rows: 11 Data size: 51 Basic stats: COMPLETE Column stats: NONE |
|             File Output Operator                   |
|               compressed: false                    |
|               table:                               |
|                   input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
|                   output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
|                   serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe |
|                                                    |
|   Stage: Stage-2                                   |
|     Map Reduce                                     |
|       Map Operator Tree:                           |
|           TableScan                                |
|             Union                                  |
|               Statistics: Num rows: 16 Data size: 74 Basic stats: COMPLETE Column stats: NONE |
|               File Output Operator                 |
|                 compressed: false                  |
|                 Statistics: Num rows: 16 Data size: 74 Basic stats: COMPLETE Column stats: NONE |
|                 table:                             |
|                     input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
|                     output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
|                     serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
|           TableScan                                |
|             alias: c                               |
|             Statistics: Num rows: 10 Data size: 47 Basic stats: COMPLETE Column stats: NONE |
|             Filter Operator                        |
|               predicate: user_id is null (type: boolean) |
|               Statistics: Num rows: 5 Data size: 23 Basic stats: COMPLETE Column stats: NONE |
|               Select Operator                      |
|                 expressions: null (type: bigint), user_action (type: string), null (type: string) |
|                 outputColumnNames: _col0, _col1, _col2 |
|                 Statistics: Num rows: 5 Data size: 23 Basic stats: COMPLETE Column stats: NONE |
|                 Union                              |
|                   Statistics: Num rows: 16 Data size: 74 Basic stats: COMPLETE Column stats: NONE |
|                   File Output Operator             |
|                     compressed: false              |
|                     Statistics: Num rows: 16 Data size: 74 Basic stats: COMPLETE Column stats: NONE |
|                     table:                         |
|                         input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
|                         output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
|                         serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
|                                                    |
|   Stage: Stage-0                                   |
|     Fetch Operator                                 |
|       limit: -1                                    |
|       Processor Tree:                              |
|         ListSink                                   |
|                                                    |
+----------------------------------------------------+
92 rows selected (0.178 seconds)

 

 

Method 2: scatter the NULL keys with a CASE WHEN ... ELSE ... END expression in the join condition (preferred)

Since this uses the RAND() function, let's look at its description first:

DESCRIBE FUNCTION RAND;
+----------------------------------------------------+
|                      tab_name                      |
+----------------------------------------------------+
| RAND([seed]) - Returns a pseudorandom number between 0 and 1 |
+----------------------------------------------------+
1 row selected (0.033 seconds)

SQL

SELECT
 a.*
 ,b.user_name 
FROM skew_null_action_info AS a
LEFT JOIN skew_user_info AS b
 ON 
 (
 CASE  
   WHEN a.user_id IS NULL THEN CONCAT('hive',RAND()) 
   ELSE a.user_id  
 END   
 ) = b.user_id
;

Result:

+------------+----------------+--------------+
| a.user_id  | a.user_action  | b.user_name  |
+------------+----------------+--------------+
| NULL       | kk             | NULL         |
| NULL       | kk             | NULL         |
| NULL       | kk             | NULL         |
| NULL       | kk             | NULL         |
| NULL       | kk             | NULL         |
| NULL       | pp             | NULL         |
| NULL       | cc             | NULL         |
| 1          | cc             | puzheli      |
| 2          | zz             | sjz          |
| 3          | pp             | xxx          |
+------------+----------------+--------------+
10 rows selected (29.615 seconds)

Execution plan:

INFO  : Starting task [Stage-3:EXPLAIN] in serial mode
INFO  : Completed executing command(queryId=hive_20200415104500_1f173154-f42d-4f4c-9fcc-2a1c6e297285); Time taken: 0.015 seconds
INFO  : OK
+----------------------------------------------------+
|                      Explain                       |
+----------------------------------------------------+
| STAGE DEPENDENCIES:                                |
|   Stage-1 is a root stage                          |
|   Stage-0 depends on stages: Stage-1               |
|                                                    |
| STAGE PLANS:                                       |
|   Stage: Stage-1                                   |
|     Map Reduce                                     |
|       Map Operator Tree:                           |
|           TableScan                                |
|             alias: a                               |
|             Statistics: Num rows: 10 Data size: 47 Basic stats: COMPLETE Column stats: NONE |
|             Reduce Output Operator                 |
|               key expressions: UDFToDouble(CASE WHEN (user_id is null) THEN (concat('hive', rand())) ELSE (user_id) END) (type: double) |
|               sort order: +                        |
|               Map-reduce partition columns: UDFToDouble(CASE WHEN (user_id is null) THEN (concat('hive', rand())) ELSE (user_id) END) (type: double) |
|               Statistics: Num rows: 10 Data size: 47 Basic stats: COMPLETE Column stats: NONE |
|               value expressions: user_id (type: bigint), user_action (type: string) |
|           TableScan                                |
|             alias: b                               |
|             Statistics: Num rows: 3 Data size: 19 Basic stats: COMPLETE Column stats: NONE |
|             Reduce Output Operator                 |
|               key expressions: UDFToDouble(user_id) (type: double) |
|               sort order: +                        |
|               Map-reduce partition columns: UDFToDouble(user_id) (type: double) |
|               Statistics: Num rows: 3 Data size: 19 Basic stats: COMPLETE Column stats: NONE |
|               value expressions: user_name (type: string) |
|       Reduce Operator Tree:                        |
|         Join Operator                              |
|           condition map:                           |
|                Left Outer Join0 to 1               |
|           keys:                                    |
|             0 UDFToDouble(CASE WHEN (user_id is null) THEN (concat('hive', rand())) ELSE (user_id) END) (type: double) |
|             1 UDFToDouble(user_id) (type: double)  |
|           outputColumnNames: _col0, _col1, _col6   |
|           Statistics: Num rows: 11 Data size: 51 Basic stats: COMPLETE Column stats: NONE |
|           Select Operator                          |
|             expressions: _col0 (type: bigint), _col1 (type: string), _col6 (type: string) |
|             outputColumnNames: _col0, _col1, _col2 |
|             Statistics: Num rows: 11 Data size: 51 Basic stats: COMPLETE Column stats: NONE |
|             File Output Operator                   |
|               compressed: false                    |
|               Statistics: Num rows: 11 Data size: 51 Basic stats: COMPLETE Column stats: NONE |
|               table:                               |
|                   input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
|                   output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
|                   serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
|                                                    |
|   Stage: Stage-0                                   |
|     Fetch Operator                                 |
|       limit: -1                                    |
|       Processor Tree:                              |
|         ListSink                                   |
|                                                    |
+----------------------------------------------------+
53 rows selected (0.156 seconds)

 

Comparison:

  The second approach, scattering NULL keys with CASE WHEN in the join condition, is better:

1. Fewer jobs, so less time is spent acquiring resources.

2. The table whose NULL keys cause the uneven distribution is scanned only once.

 

 

 

2) Skew caused by inconsistent data types and the implicit conversion they trigger

Scenario:

   The user table's user_id column is INT, while the log table's user_id contains both STRING and INT values.

   When the two tables are joined on user_id, the keys are implicitly converted to a numeric type for the comparison and hashed on that value, so the STRING ids that cannot be converted all hash to the same value and land in a single reducer.



Solution

Cast the INT ids to STRING so both sides are hashed as strings:

SELECT *
FROM user AS a
LEFT OUTER JOIN log AS b
 ON b.user_id = CAST (a.user_id AS STRING) 
;

 

 

 

 

 

3) Skew caused by unevenly distributed business data



i. Large table JOIN small table (Map JOIN)

ii. Large table JOIN large table: one table evenly distributed, the other skewed on a few specific keys

iii. Large table JOIN large table: one table evenly distributed, the other skewed on a large number of keys

iv. Large table JOIN large table: bucketed tables, splitting the table

 

Build the test data:

CREATE TABLE IF NOT EXISTS skew_multi_action_info(
 user_id BIGINT
 ,user_action STRING
);

INSERT INTO TABLE skew_multi_action_info
VALUES 
(1, 'cc')
,(1, 'pp')
,(1, 'kk')
,(1, 'kk')
,(1, 'kk')
,(1, 'kk')
,(1, 'kk')
,(1, 'cc')
,(2, 'zz')
,(2, 'zz')
,(2, 'zz')
,(2, 'zS')
,(2, 'zS')
,(2, 'zS')
,(3, 'pp')
,(4, 'lp')
,(5, 'op')
;


CREATE TABLE IF NOT EXISTS skew_user_info_2(
 user_id BIGINT
 ,user_name STRING
);

INSERT INTO TABLE skew_user_info_2
VALUES
(1, 'puzheli')
,(2, 'sjz')
,(3, 'xxx')
,(4, 'ccc')
,(5, 'xdi')
;

 

 

i. Large table JOIN small table (Map JOIN)

This one is easy to handle: in general we just tune the Map JOIN parameters that control how large the "small" table may be.

Reference: https://blog.csdn.net/u010003835/article/details/105495067

Map JOIN distributes the small table to every map task of the job, so the join runs on the map side and the shuffle is skipped entirely, which prevents large numbers of identical keys from piling up in a single reducer.

When Map JOIN is used, a local task (a MapReduce local task) first converts the small table into a hash table, serializes it to a file and compresses it; the hash table files are then uploaded to the Hadoop distributed cache and handed to each map task that performs the map join.
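
As a point of reference, these are the parameters usually involved in automatic Map JOIN conversion (the thresholds shown are common defaults; treat them as illustrative and check your Hive version), plus the explicit hint form:

set hive.auto.convert.join=true;                             -- let Hive convert qualifying joins to map joins automatically
set hive.mapjoin.smalltable.filesize=25000000;               -- max size in bytes of the small table for automatic conversion
set hive.auto.convert.join.noconditionaltask=true;           -- convert directly, without a conditional task, when table sizes are known
set hive.auto.convert.join.noconditionaltask.size=10000000;  -- combined size threshold for the small tables in that case

-- Alternatively, force a map join with a hint (b is the small table);
-- note that recent Hive versions ignore the hint unless hive.ignore.mapjoin.hint=false.
SELECT /*+ MAPJOIN(b) */
 a.*
 ,b.user_name
FROM skew_multi_action_info AS a
LEFT JOIN skew_user_info_2 AS b
 ON a.user_id = b.user_id
;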

 

For the data above, we enable Hive's default Map JOIN optimization:

set hive.auto.convert.join = true;

Let's look at the execution plan:

INFO  : Starting task [Stage-5:EXPLAIN] in serial mode
INFO  : Completed executing command(queryId=hive_20200415120237_62f7cffb-d70a-4c39-860f-038b5b677373); Time taken: 0.006 seconds
INFO  : OK
+----------------------------------------------------+
|                      Explain                       |
+----------------------------------------------------+
| STAGE DEPENDENCIES:                                |
|   Stage-4 is a root stage                          |
|   Stage-3 depends on stages: Stage-4               |
|   Stage-0 depends on stages: Stage-3               |
|                                                    |
| STAGE PLANS:                                       |
|   Stage: Stage-4                                   |
|     Map Reduce Local Work                          |
|       Alias -> Map Local Tables:                   |
|         b                                          |
|           Fetch Operator                           |
|             limit: -1                              |
|       Alias -> Map Local Operator Tree:            |
|         b                                          |
|           TableScan                                |
|             alias: b                               |
|             Statistics: Num rows: 5 Data size: 29 Basic stats: COMPLETE Column stats: NONE |
|             HashTable Sink Operator                |
|               keys:                                |
|                 0 user_id (type: bigint)           |
|                 1 user_id (type: bigint)           |
|                                                    |
|   Stage: Stage-3                                   |
|     Map Reduce                                     |
|       Map Operator Tree:                           |
|           TableScan                                |
|             alias: a                               |
|             Statistics: Num rows: 17 Data size: 68 Basic stats: COMPLETE Column stats: NONE |
|             Map Join Operator                      |
|               condition map:                       |
|                    Left Outer Join0 to 1           |
|               keys:                                |
|                 0 user_id (type: bigint)           |
|                 1 user_id (type: bigint)           |
|               outputColumnNames: _col0, _col1, _col6 |
|               Statistics: Num rows: 18 Data size: 74 Basic stats: COMPLETE Column stats: NONE |
|               Select Operator                      |
|                 expressions: _col0 (type: bigint), _col1 (type: string), _col6 (type: string) |
|                 outputColumnNames: _col0, _col1, _col2 |
|                 Statistics: Num rows: 18 Data size: 74 Basic stats: COMPLETE Column stats: NONE |
|                 File Output Operator               |
|                   compressed: false                |
|                   Statistics: Num rows: 18 Data size: 74 Basic stats: COMPLETE Column stats: NONE |
|                   table:                           |
|                       input format: org.apache.hadoop.mapred.SequenceFileInputFormat |
|                       output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
|                       serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
|       Local Work:                                  |
|         Map Reduce Local Work                      |
|                                                    |
|   Stage: Stage-0                                   |
|     Fetch Operator                                 |
|       limit: -1                                    |
|       Processor Tree:                              |
|         ListSink                                   |
|                                                    |
+----------------------------------------------------+
56 rows selected (0.205 seconds)

 

 

 

 

 

ii. Large table JOIN large table: one table evenly distributed, the other skewed on a few specific keys



When to use:

    When joining two large Hive tables, look at the key distribution in both. If the skew is caused by a handful of keys in one table carrying a very large number of rows, while the keys in the other table are evenly distributed, this approach is a good fit.



How it works:

  • From the table containing the few oversized keys, take a sample, count the rows per key, and work out which keys carry the most data.

  • Split the rows for those keys out of the original table into a separate table, and prefix each key with a random number in [0, n); the remaining, non-skewed keys form another table.

  • From the other table in the join, also filter the rows for those skewed keys into a separate table, and expand each row into n rows, each prefixed in turn with 0 through n-1; its non-skewed keys likewise form another table.

  • Join the random-prefixed table with the n-times-expanded table. The formerly identical keys are now scattered into n parts and processed by multiple tasks.

  • Join the two ordinary (non-skewed) tables as usual.

  • Finally, UNION the two join results together to get the final output.
     

 

 

 

 

The SQL that skews:

SELECT
 a.*
 ,b.user_name 
FROM skew_multi_action_info AS a
JOIN skew_user_info_2 AS b
 ON a.user_id = b.user_id
;

 

For the sample data above:

you can see that skew_multi_action_info is heavily skewed on user_id IN (1,2), while the data in skew_user_info_2 is evenly distributed.

So we apply the scheme above: split, join separately, and merge.

Join for the skewed keys 1 and 2:

SELECT 
 tmp_a.uid
 ,tmp_a.user_action
 ,tmp_b.user_name
FROM
(
SELECT 
 CONCAT(a.user_id, '_', CAST ((RAND()*2%2 + 1) AS INT)) AS uid
 ,a.user_action
FROM skew_multi_action_info AS a
WHERE a.user_id IN (1,2)
) AS tmp_a
JOIN 
(
SELECT 
 CONCAT(user_id,'_1') AS uid
 ,user_name
FROM skew_user_info_2 
WHERE user_id IN (1,2)
UNION ALL
SELECT 
 CONCAT(user_id,'_2') AS uid
 ,user_name
FROM skew_user_info_2 
WHERE user_id IN (1,2)
) AS tmp_b
ON tmp_a.uid = tmp_b.uid
;

For the expansion step, the tmp_b subquery below stands in for the "expand n times" operation; in a real job this would normally be done with a custom function (UDF/UDTF) that turns each row into n rows:

(
SELECT 
 CONCAT(user_id,'_1') AS uid
 ,user_name
FROM skew_user_info_2 
WHERE user_id IN (1,2)
UNION ALL
SELECT 
 CONCAT(user_id,'_2') AS uid
 ,user_name
FROM skew_user_info_2 
WHERE user_id IN (1,2)
) AS tmp_b
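
For illustration, a built-in way to get the same n-fold expansion without a custom UDF is LATERAL VIEW with explode over an array of prefix values; a sketch for n = 2, reusing the table above (the alias prefix_tbl and column n are made up for the example):

SELECT 
 CONCAT(user_id, '_', n) AS uid
 ,user_name
FROM skew_user_info_2 
LATERAL VIEW explode(array(1, 2)) prefix_tbl AS n
WHERE user_id IN (1,2)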

 

 

The complete query:

SELECT 
 CAST(a.user_id AS STRING) AS uid
 ,a.user_action
 ,b.user_name
FROM skew_multi_action_info AS a
JOIN skew_user_info_2 AS b
 ON a.user_id = b.user_id
 AND a.user_id NOT IN (1,2)
 AND b.user_id NOT IN (1,2)


UNION ALL

SELECT 
 tmp_a.uid
 ,tmp_a.user_action
 ,tmp_b.user_name
FROM
(
SELECT 
 CONCAT(a.user_id, '_', CAST ((RAND()*2%2 + 1) AS INT)) AS uid
 ,a.user_action
FROM skew_multi_action_info AS a
WHERE a.user_id IN (1,2)
) AS tmp_a
JOIN 
(
SELECT 
 CONCAT(user_id,'_1') AS uid
 ,user_name
FROM skew_user_info_2 
WHERE user_id IN (1,2)
UNION ALL
SELECT 
 CONCAT(user_id,'_2') AS uid
 ,user_name
FROM skew_user_info_2 
WHERE user_id IN (1,2)
) AS tmp_b
ON tmp_a.uid = tmp_b.uid
;

 

Result:

+----------+------------------+----------------+
| _u1.uid  | _u1.user_action  | _u1.user_name  |
+----------+------------------+----------------+
| 1_1      | cc               | puzheli        |
| 1_2      | pp               | puzheli        |
| 1_2      | kk               | puzheli        |
| 1_1      | kk               | puzheli        |
| 1_1      | kk               | puzheli        |
| 1_2      | kk               | puzheli        |
| 1_2      | kk               | puzheli        |
| 1_1      | cc               | puzheli        |
| 2_1      | zz               | sjz            |
| 2_1      | zz               | sjz            |
| 2_1      | zz               | sjz            |
| 2_1      | zS               | sjz            |
| 2_1      | zS               | sjz            |
| 2_1      | zS               | sjz            |
| 3        | pp               | xxx            |
| 4        | lp               | ccc            |
| 5        | op               | xdi            |
+----------+------------------+----------------+
17 rows selected (27.653 seconds)

 

 

 

Pros:

    For join-induced skew where only a few keys are responsible, this is the most targeted way to scatter those keys. Only the data for the few skewed keys has to be expanded n times, not the whole table, so it does not use excessive memory.

Cons:

   If a very large number of keys cause the skew, say thousands or tens of thousands of them, this approach no longer fits.
 

 

 

 

iii. Large table JOIN large table: one table evenly distributed, the other skewed on a large number of keys



Idea: join using random prefixes plus expansion of the whole table.



When to use:

    If a large number of keys in the table cause the skew, splitting out individual keys is no longer meaningful, and this last approach is the only real option.



How it works:

  • The idea is basically the same as in the previous scheme: inspect the key distribution and find the Hive table causing the skew, e.g. one where many keys each carry more than 10,000 rows.
  • Prefix every row of that table with a random number in [0, n).
  • At the same time expand the other, evenly distributed table: turn each row into n rows, each prefixed in turn with 0 through n-1.
  • Finally join the two processed tables (a sketch follows after the pros and cons below).


Pros:

     It can handle essentially any join-induced skew, and the improvement is usually quite noticeable.



Cons:

    It mitigates the skew rather than eliminating it, and because the whole table has to be expanded it is demanding on memory and other resources.
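
A minimal sketch of the random-prefix-plus-expansion join, reusing the test tables above with n = 2; in a real job n would be larger (roughly the number of ways you want each hot key spread), and the array(1, 2) expansion would be generated accordingly:

SELECT 
 tmp_a.user_action
 ,tmp_b.user_name
FROM
(
SELECT 
 CONCAT(CAST(RAND() * 2 AS INT) + 1, '_', user_id) AS uid   -- random prefix 1 or 2 on every row of the skewed table
 ,user_action
FROM skew_multi_action_info
) AS tmp_a
JOIN 
(
SELECT 
 CONCAT(n, '_', user_id) AS uid                             -- expand every row of the evenly distributed table into 2 rows
 ,user_name
FROM skew_user_info_2
LATERAL VIEW explode(array(1, 2)) prefix_tbl AS n
) AS tmp_b
 ON tmp_a.uid = tmp_b.uid
;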

 

 

 

iv. Large table JOIN large table: bucketed tables, splitting the table

Goal: increase parallelism.



Scenario:

    Two large tables whose data is evenly distributed. To improve efficiency we want to use a map join, so we take the approach of splitting the large table:

split the large table into smaller slices and join each slice separately.



Original test table:

+----------+------------+
| test.id  | test.name  |
+----------+------------+
| 1        | aa         |
| 2        | bb         |
| 3        | cc         |
| 4        | dd         |
+----------+------------+


Split it into two slices:

 select * from test tablesample(bucket 1 out of 2 on id);

+----------+------------+
| test.id  | test.name  |
+----------+------------+
| 2        | bb         |
| 4        | dd         |
+----------+------------+

 

 select * from test tablesample(bucket 2 out of 2 on id);

+----------+------------+
| test.id  | test.name  |
+----------+------------+
| 1        | aa         |
| 3        | cc         |
+----------+------------+
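
A sketch of how the slices might then be used: join each slice against the other table and stitch the per-slice results back together with UNION ALL, so the slice joins can run as separate, smaller tasks (and, with hive.auto.convert.join enabled, a slice that fits the small-table threshold can be map-joined). The second table test_b(id, name) is assumed here purely for illustration:

SELECT b.id, t.name
FROM test_b AS b
JOIN (SELECT * FROM test TABLESAMPLE(BUCKET 1 OUT OF 2 ON id)) AS t
 ON b.id = t.id
UNION ALL
SELECT b.id, t.name
FROM test_b AS b
JOIN (SELECT * FROM test TABLESAMPLE(BUCKET 2 OUT OF 2 ON id)) AS t
 ON b.id = t.id
;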


 

 
