Spark SQL Optimization

1. Aggregate the source data: convert the Spark SQL result into an RDD of (key, value) tuples, with the where-condition column as the key and the aggregated result as the value. For example:
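A minimal sketch of that conversion (the query, table name, and column positions are assumptions for illustration):

// Pre-aggregate in SQL, then expose the result as a pair RDD keyed on the filter column.
JavaPairRDD<String, Long> aggregated = sqlContext
        .sql("select area, count(*) cnt from tmp_click_product_basic group by area")
        .javaRDD()
        .mapToPair(new PairFunction<Row, String, Long>() {
            @Override
            public Tuple2<String, Long> call(Row row) {
                return new Tuple2<String, Long>(row.getString(0), row.getLong(1));
            }
        });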
2. Filter out the key(s) that cause the skew with a where clause. For example:
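For instance, if a single garbage value of the grouping column dominates the data and is not needed downstream (the value 'unknown' here is made up):

DataFrame filtered = sqlContext.sql("select * from tmp_click_product_basic where area != 'unknown'");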
3. Increase the shuffle parallelism:
Set the spark.sql.shuffle.partitions parameter. It defaults to 200, i.e. a shuffle runs with only 200 reduce tasks by default.
sqlContext.setConf("spark.sql.shuffle.partitions", "1000");

4. Double group by (two-stage aggregation)

RandomPrefixUDF prepends a random prefix in the range [0, number) to the grouping key:

package util;

import org.apache.spark.sql.api.java.UDF2;

import java.util.Random;

public class RandomPrefixUDF implements UDF2<String, Integer, String> {

    private static final long serialVersionUID = -6580046446266406341L;

    @Override
    public String call(String value, Integer number) throws Exception {
        // Prepend a random prefix in [0, number), e.g. "3_beijing_101".
        Random random = new Random();
        int randNum = random.nextInt(number);
        return randNum + "_" + value;
    }
}

RemoveRandomPrefixUDF strips that prefix again:

package util;

import org.apache.spark.sql.api.java.UDF2;

public class RemoveRandomPrefixUDF implements UDF2<String, Integer, String> {

    private static final long serialVersionUID = -743427068120694601L;

    @Override
    public String call(String value, Integer number) throws Exception {
        // Drop everything up to and including the first "_", i.e. the random prefix.
        // number is unused here; it is kept only so the SQL call signature mirrors random_prefix.
        return value.substring(value.indexOf("_") + 1);
    }
}

Register the custom functions:
sqlContext.udf().register("random_prefix",
        new RandomPrefixUDF(), DataTypes.StringType);
sqlContext.udf().register("remove_random_prefix",
        new RemoveRandomPrefixUDF(), DataTypes.StringType);

The original, skew-prone aggregation:

String sql = "select area, productId, count(*) click_count "
        + "from tmp_click_product_basic "
        + "group by area, productId";

Rewritten with the random prefix, stripping it in the same level as the first aggregation (note the outer sum over the per-prefix counts):

String sql = "select real_key area_productId, "
        + "       sum(click_count) click_count "
        + "from ( "
        + "    select remove_random_prefix(random_area_productId, 10) real_key, "
        + "           random_area_productId random_key, "
        + "           count(*) click_count "
        + "    from ( "
        + "        select random_prefix(concat(area, '_', productId), 10) random_area_productId "
        + "        from tmp_click_product_basic "
        + "    ) t1 "
        + "    group by random_area_productId "
        + ") t2 "
        + "group by real_key";

An equivalent version that strips the prefix in its own sub-query:

String sql = "select real_key area_productId, "
        + "       sum(click_count) click_count "
        + "from ( "
        + "    select remove_random_prefix(random_key, 10) real_key, "
        + "           click_count "
        + "    from ( "
        + "        select random_area_productId random_key, "
        + "               count(*) click_count "
        + "        from ( "
        + "            select random_prefix(concat(area, '_', productId), 10) random_area_productId "
        + "            from tmp_click_product_basic "
        + "        ) t1 "
        + "        group by random_area_productId "
        + "    ) t2 "
        + ") t3 "
        + "group by real_key";
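Either version is then executed like any ordinary query; a trivial usage sketch (the variable name is arbitrary):

// Run the skew-safe aggregation; the result can be shown, cached, or converted to an RDD.
DataFrame areaProductClickCount = sqlContext.sql(sql);
areaProductClickCount.show();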

5. Convert reduce join into map join
(1) You can turn the tables into RDDs and implement the map join by hand: broadcast the small table and look keys up on the map side (a sketch follows below).
(2) Or use Spark SQL's built-in map join: by default, if one side of the join is a small table of at most 10 MB, that table is broadcast and the join runs as a map join. The threshold can be raised to 20 MB, 50 MB, or even 1 GB via spark.sql.autoBroadcastJoinThreshold (default 10485760 bytes):
sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", "20971520");
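A minimal sketch of the hand-rolled map join from (1). It assumes a JavaSparkContext named sc, a large JavaPairRDD<Long, String> named largePairRDD keyed by product id, and the product_info column names used below; all of these are illustrative, not from the original post.

// Assumed imports: org.apache.spark.api.java.JavaPairRDD,
// org.apache.spark.api.java.function.PairFunction, org.apache.spark.broadcast.Broadcast,
// org.apache.spark.sql.Row, scala.Tuple2, java.util.HashMap, java.util.Map.

// Collect the small table to the driver as a Map and broadcast it to every executor.
Map<Long, String> smallTable = new HashMap<Long, String>();
for (Row row : sqlContext.sql("select product_id, product_name from product_info")
        .javaRDD().collect()) {
    smallTable.put(row.getLong(0), row.getString(1));
}
final Broadcast<Map<Long, String>> smallTableBroadcast = sc.broadcast(smallTable);

// "Join" on the map side: each record looks its key up in the broadcast copy,
// so no shuffle (and therefore no skew) is involved.
JavaPairRDD<Long, Tuple2<String, String>> joined = largePairRDD.mapToPair(
        new PairFunction<Tuple2<Long, String>, Long, Tuple2<String, String>>() {
            @Override
            public Tuple2<Long, Tuple2<String, String>> call(Tuple2<Long, String> kv) {
                String productName = smallTableBroadcast.value().get(kv._1);
                return new Tuple2<Long, Tuple2<String, String>>(
                        kv._1, new Tuple2<String, String>(kv._2, productName));
            }
        });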

7. Sample the skewed key(s) and join them separately (use this only when none of the methods above helps; it requires converting the data to RDDs). A sketch of the idea follows.
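A minimal RDD-level sketch, assuming two JavaPairRDD<Long, String> inputs named rdd1 (the skewed side) and rdd2, and a single dominant skewed key; every name here is illustrative.

// Assumed imports: org.apache.spark.api.java.JavaPairRDD,
// org.apache.spark.api.java.function.Function, Function2, PairFunction, scala.Tuple2.

// 1. Sample rdd1 and count occurrences per key to find the most skewed key.
final Long skewedKey = rdd1.sample(false, 0.1)
        .mapToPair(new PairFunction<Tuple2<Long, String>, Long, Long>() {
            @Override
            public Tuple2<Long, Long> call(Tuple2<Long, String> kv) {
                return new Tuple2<Long, Long>(kv._1, 1L);
            }
        })
        .reduceByKey(new Function2<Long, Long, Long>() {
            @Override
            public Long call(Long a, Long b) {
                return a + b;
            }
        })
        .mapToPair(new PairFunction<Tuple2<Long, Long>, Long, Long>() {
            @Override
            public Tuple2<Long, Long> call(Tuple2<Long, Long> kv) {
                return new Tuple2<Long, Long>(kv._2, kv._1); // swap to (count, key)
            }
        })
        .sortByKey(false)
        .first()._2;

// 2. Split rdd1 into the skewed key and the remaining keys, join each part with rdd2
//    separately, then union the two results back together.
JavaPairRDD<Long, String> skewedRDD = rdd1.filter(new Function<Tuple2<Long, String>, Boolean>() {
    @Override
    public Boolean call(Tuple2<Long, String> kv) {
        return kv._1.equals(skewedKey);
    }
});
JavaPairRDD<Long, String> commonRDD = rdd1.filter(new Function<Tuple2<Long, String>, Boolean>() {
    @Override
    public Boolean call(Tuple2<Long, String> kv) {
        return !kv._1.equals(skewedKey);
    }
});
JavaPairRDD<Long, Tuple2<String, String>> joined =
        skewedRDD.join(rdd2).union(commonRDD.join(rdd2));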

8. Random keys plus table expansion: Spark SQL + Spark Core
JavaRDD<Row> rdd = sqlContext.sql("select * from product_info").javaRDD();
JavaRDD<Row> flattedRDD = rdd.flatMap(new FlatMapFunction<Row, Row>() {
    @Override
    public Iterable<Row> call(Row row) throws Exception {
        List<Row> list = new ArrayList<Row>();
        // Expand every product row 10 times, each copy with a different 0-9 prefix
        // on the product id, so it can match the randomly prefixed keys of the big table.
        for (int i = 0; i < 10; i++) {
            long productId = row.getLong(0);
            String _productId = i + "_" + productId;
            Row _row = RowFactory.create(_productId, row.get(1), row.get(2));
            list.add(_row);
        }
        return list;
    }
});
Then register flattedRDD as a temporary table. In the other (skewed) table, prepend the same 0-9 random prefix to its productId join key with random_prefix so that it can match the expanded keys. Join the two tables, and finally strip the random prefix from the result with remove_random_prefix(random_key, 10). A sketch of these steps follows.
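A minimal sketch of those remaining steps. The schema, the temp-table names, and the skewed table tmp_area_product_click (assumed to carry productId and click_count columns) are all illustrative; only random_prefix and remove_random_prefix come from the post itself.

// Assumed imports: java.util.Arrays, org.apache.spark.sql.DataFrame,
// org.apache.spark.sql.types.DataTypes, org.apache.spark.sql.types.StructType.

// Rebuild a DataFrame from the expanded RDD and register it as a temporary table.
StructType schema = DataTypes.createStructType(Arrays.asList(
        DataTypes.createStructField("prefixed_product_id", DataTypes.StringType, true),
        DataTypes.createStructField("product_name", DataTypes.StringType, true),
        DataTypes.createStructField("extend_info", DataTypes.StringType, true)));
DataFrame expandedDF = sqlContext.createDataFrame(flattedRDD, schema);
expandedDF.registerTempTable("tmp_product_info_expanded");

// Prefix the skewed table's join key with the same 0-9 random numbers, join it against
// the expanded table, then strip the prefix again in the final projection.
String joinSql = "select remove_random_prefix(t1.random_product_id, 10) productId, "
        + "       t1.click_count, t2.product_name "
        + "from ( "
        + "    select random_prefix(cast(productId as string), 10) random_product_id, "
        + "           click_count "
        + "    from tmp_area_product_click "
        + ") t1 "
        + "join tmp_product_info_expanded t2 "
        + "on t1.random_product_id = t2.prefixed_product_id";
DataFrame joined = sqlContext.sql(joinSql);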
