The DWM and DWS Layers
1. Design
Design approach
In the DWD layer we previously used stream splitting to break the data out into independent Kafka topics. The next question is how to process that data, which really means deciding which metrics we want to compute in real time.
Real-time computing is different from offline processing: its development and operations costs are both high, so we have to weigh, against the actual situation, whether it is worth building a large, all-encompassing middle layer the way an offline warehouse would.
If such a comprehensive layer is not necessary, we instead roughly plan out the metrics that do need to be computed in real time. Publishing those metrics as subject-area wide tables is what forms our DWS layer.
Requirements breakdown
| Subject | Metric | Output form | Computed from | Source layer |
| --- | --- | --- | --- | --- |
| Visitor | PV | dashboard | directly from page_log | dwd |
| | UV | dashboard | page_log filtered and deduplicated | dwm |
| | bounce rate | dashboard | derived from page_log behavior | dwm |
| | number of entry pages | dashboard | requires identifying the visit-start flag | dwd |
| | continuous visit duration | dashboard | directly from page_log | dwd |
| Product | clicks | multidimensional analysis | directly from page_log | dwd |
| | favorites | multidimensional analysis | favorites table | dwd |
| | add to cart | multidimensional analysis | cart table | dwd |
| | orders | dashboard | order wide table | dwm |
| | payments | multidimensional analysis | payment wide table | dwm |
| | refunds | multidimensional analysis | refund table | dwd |
| | comments | multidimensional analysis | comment table | dwd |
| Region | PV | multidimensional analysis | directly from page_log | dwd |
| | UV | multidimensional analysis | page_log filtered and deduplicated | dwm |
| | orders | dashboard | order wide table | dwm |
| Keyword | search keywords | dashboard | directly from the page access log | dwd |
| | clicked-product keywords | dashboard | re-aggregation of product-subject orders | dws |
| | ordered-product keywords | dashboard | re-aggregation of product-subject orders | dws |
Real projects will have even more requirements; here we focus on the real-time computations needed for the visualization dashboard.
What is the DWM layer for? The DWM layer mainly serves the DWS layer: some requirements involve a fair amount of computation on the way from DWD to DWS, and the results of that computation are likely to be reused by several DWS subjects, so part of the DWD data is shaped into an intermediate DWM layer first. The calculations covered here are:
- visitor UV calculation
- bounce detail calculation
- order wide table
- payment wide table
2. DWM - Visitor - UV Calculation
Requirements analysis and approach
UV stands for Unique Visitor, i.e., a distinct visitor. In real-time computation it is often also called DAU (Daily Active Users), because the UV computed in real time usually means the number of distinct visitors for the current day.
To identify the day's visitors from the user behavior log, two things are needed:
- first, identify the first page a visitor opens, which marks the visitor entering our application;
- second, since a visitor may enter the application several times in one day, deduplicate within the scope of a single day.
Code
package com.atguigu.app.dwm;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.atguigu.utils.MyKafkaUtil;
import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;
import java.text.SimpleDateFormat;
/*
Data flow:
web/app -> nginx -> springboot -> Kafka -> FlinkApp(LogBaseApp) -> Kafka
-> FlinkApp(DauApp) -> Kafka
Services involved: Nginx, Logger, Zookeeper, Kafka, LogBaseApp, DauApp, consumer of dwm_unique_visit, MockLog
*/
public class DauApp {
public static void main(String[] args) throws Exception {
//1. Create the execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
// //1.1 Set the state backend
// env.setStateBackend(new FsStateBackend("hdfs://hadoop102:9000/gmall/dwd_log/ck"));
//1.2 Enable checkpointing
// env.enableCheckpointing(10000L, CheckpointingMode.EXACTLY_ONCE);
// env.getCheckpointConfig().setCheckpointTimeout(60000L);
//2. Read the Kafka dwd_page_log topic to create the source stream
String groupId = "unique_visit_app";
String sourceTopic = "dwd_page_log";
String sinkTopic = "dwm_unique_visit";
FlinkKafkaConsumer<String> kafkaSource = MyKafkaUtil.getKafkaSource(sourceTopic, groupId);
DataStreamSource<String> kafkaDS = env.addSource(kafkaSource);
//3. Convert each line into a JSON object
SingleOutputStreamOperator<JSONObject> jsonObjDS = kafkaDS.process(new ProcessFunction<String, JSONObject>() {
@Override
public void processElement(String s, Context context, Collector<JSONObject> collector) throws Exception {
try {
JSONObject jsonObject = JSON.parseObject(s);
collector.collect(jsonObject);
} catch (Exception e) {
e.printStackTrace();
context.output(new OutputTag<String>("dirty") {
}, s);
}
}
});
/*
kafkaDS.map(new MapFunction<String, JSONObject>() {
@Override
public JSONObject map(String s) throws Exception {
try {
return JSON.parseObject(s);
} catch (Exception e) {
e.printStackTrace();
System.out.println("Dirty data found: "+s);
return null;
}
}
});
*/
//4. Key the stream by mid (device id)
KeyedStream<JSONObject, String> keyedStream = jsonObjDS.keyBy
(jsonObject -> jsonObject.getJSONObject("common").getString("mid"));
//5. Filter out records that are not the first visit of the day
SingleOutputStreamOperator<JSONObject> filterDS = keyedStream.filter(new UvRichFilterFunction());
//6. Write the result to the DWM-layer Kafka topic
filterDS.map(json -> json.toString()).addSink(MyKafkaUtil.getKafkaSink(sinkTopic));
//7. Start the job
env.execute();
}
public static class UvRichFilterFunction extends RichFilterFunction<JSONObject>{
private ValueState<String> firstVisitState ;
private SimpleDateFormat simpleDateFormat;
@Override
public void open(Configuration parameters) throws Exception {
simpleDateFormat=new SimpleDateFormat("yyyy-MM-dd");
ValueStateDescriptor<String> stringValueStateDescriptor = new ValueStateDescriptor<>("visit-state", String.class);
//Create the state TTL configuration
StateTtlConfig stateTtlConfig = StateTtlConfig.newBuilder(Time.days(1))
.setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
.build();
//Keep the state for 24 hours (one day)
stringValueStateDescriptor.enableTimeToLive(stateTtlConfig);
firstVisitState = getRuntimeContext().getState(stringValueStateDescriptor);
}
@Override
public boolean filter(JSONObject jsonObject) throws Exception {
//Get the id of the previously visited page
String lastPageId = jsonObject.getJSONObject("page").getString("last_page_id");
//Check whether a previous page exists (i.e., whether the user navigated here from another page rather than opening the app fresh)
if (lastPageId == null || lastPageId.length()<=0){
//Read the saved state (the date on which this mid was last counted)
String firstVisitData = firstVisitState.value();
//Extract the event time and format it as a date
Long ts = jsonObject.getLong("ts");
String curDate = simpleDateFormat.format(ts);
//Keep the record only if this mid has not been counted yet today (state empty or from an earlier day)
if (firstVisitData == null || !firstVisitData.equals(curDate)) {
firstVisitState.update(curDate);
return true;
}else {
return false;
}
}else {
return false;
}
}
}
}
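Every application in this section depends on a MyKafkaUtil helper whose source is not included here. As a reference only, here is a minimal sketch of what such a helper could look like, assuming Flink's FlinkKafkaConsumer/FlinkKafkaProducer with a SimpleStringSchema; the broker list (hadoop102/103/104:9092) is an assumption, not taken from the original code.

```java
package com.atguigu.utils;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.kafka.clients.consumer.ConsumerConfig;

import java.util.Properties;

public class MyKafkaUtil {

    //Kafka broker list is an assumption; adjust to the cluster actually in use
    private static final String KAFKA_SERVER = "hadoop102:9092,hadoop103:9092,hadoop104:9092";

    //Build a string-deserializing consumer for the given topic and consumer group
    public static FlinkKafkaConsumer<String> getKafkaSource(String topic, String groupId) {
        Properties properties = new Properties();
        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_SERVER);
        properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        return new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), properties);
    }

    //Build a string-serializing producer for the given topic
    public static FlinkKafkaProducer<String> getKafkaSink(String topic) {
        return new FlinkKafkaProducer<>(KAFKA_SERVER, topic, new SimpleStringSchema());
    }
}
```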
3. DWM - Visitor - Bounce Detail Calculation
Requirements analysis and approach
Bounce
A bounce is when a user visits one page of the site and then leaves without going on to any other page. The bounce rate is the number of bounces divided by the number of visits; for example, if 400 of 1,000 visits view only a single page, the bounce rate is 40%.
Watching the bounce rate shows whether the visitors a channel brings in are quickly engaged, lets us compare the quality of users from different acquisition channels, and, by comparing the rate before and after an application change, shows the effect of an optimization.
Approach to detecting bounce behavior
First we have to recognize which visits are bounces, that is, pick out the last page such a visitor accessed. Two characteristics identify it:
- The page is the first page of the user's recent visit.
This can be judged from whether the page has a previous page (last_page_id); if that field is empty, the page is the first one of this visit.
- For a fairly long period after that first access (a duration we choose ourselves), the user makes no further page visits.
The first characteristic is easy to detect: keep the records whose last_page_id is empty. The second is trickier. It cannot be concluded from a single record; it requires combining a record that exists with one that does not, in other words deriving an existing fact from the absence of another record. Worse, that other record is not absent forever, only absent within a certain time window. So how do we recognize a combination of behaviors with a time bound?
The simplest way is Flink's built-in CEP library, which is very well suited to recognizing an event from a combination of several records.
A user bounce event is essentially the combination of a condition event and a timeout event, for example:
{"common":{"mid":"101"},"page":{"page_id":"home"},"ts":1000000000}
{"common":{"mid":"102"},"page":{"page_id":"home"},"ts":1000000001}
{"common":{"mid":"102"},"page":{"page_id":"home","last_page_id":"aa"},"ts":1000000020}
{"common":{"mid":"102"},"page":{"page_id":"home","last_page_id":"aa"},"ts":1000000030}
Code
package com.atguigu.app.dwm;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.atguigu.utils.MyKafkaUtil;
import org.apache.flink.api.common.eventtime.SerializableTimestampAssigner;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternFlatSelectFunction;
import org.apache.flink.cep.PatternFlatTimeoutFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;
import java.util.List;
import java.util.Map;
public class UserJumpDetailApp {
public static void main(String[] args) throws Exception {
//1. Create the execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
// //1.1 Set the state backend
// env.setStateBackend(new FsStateBackend("hdfs://hadoop102:9000/gmall/dwd_log/ck"));
//1.2 Enable checkpointing
// env.enableCheckpointing(10000L, CheckpointingMode.EXACTLY_ONCE);
// env.getCheckpointConfig().setCheckpointTimeout(60000L);
//2. Read the Kafka dwd_page_log topic to create the source stream
String sourceTopic = "dwd_page_log";
String groupId = "userJumpDetailApp";
String sinkTopic = "dwm_user_jump_detail";
FlinkKafkaConsumer<String> kafkaSource = MyKafkaUtil.getKafkaSource(sourceTopic, groupId);
DataStreamSource<String> kafkaDS = env.addSource(kafkaSource);
// DataStream<String> kafkaDS = env
// .fromElements(
// "{\"common\":{\"mid\":\"101\"},\"page\":{\"page_id\":\"home\"},\"ts\":1000000000} ",
// "{\"common\":{\"mid\":\"102\"},\"page\":{\"page_id\":\"home\"},\"ts\":1000000001}",
// "{\"common\":{\"mid\":\"102\"},\"page\":{\"page_id\":\"good_list\",\"last_page_id\":" +
// "\"home\"},\"ts\":1000000020} ",
// "{\"common\":{\"mid\":\"102\"},\"page\":{\"page_id\":\"good_list\",\"last_page_id\":" +
// "\"detail\"},\"ts\":1000000025} "
// );
//
//DataStream<String> kafkaDS = env.socketTextStream("hadoop103",9999);
//Extract the event-time timestamp from the data and generate watermarks
WatermarkStrategy<JSONObject> watermarkStrategy = WatermarkStrategy.<JSONObject>forMonotonousTimestamps().withTimestampAssigner(new SerializableTimestampAssigner<JSONObject>() {
@Override
public long extractTimestamp(JSONObject jsonObject, long l) {
return jsonObject.getLong("ts");
}
});
//3. Convert each record into a JSON object
SingleOutputStreamOperator<JSONObject> jsonObjDS = kafkaDS.process(new ProcessFunction<String, JSONObject>() {
@Override
public void processElement(String s, Context context, Collector<JSONObject> collector) throws Exception {
try {
JSONObject jsonObject = JSON.parseObject(s);
collector.collect(jsonObject);
} catch (Exception e) {
context.output(new OutputTag<String>("dirty"){},s);
}
}
}).assignTimestampsAndWatermarks(watermarkStrategy);
//4. Key the stream by mid
KeyedStream<JSONObject, String> keyedStream = jsonObjDS.keyBy(jsonObject -> jsonObject.getJSONObject("common").getString("mid"));
//5. Define the CEP pattern sequence
Pattern<JSONObject, JSONObject> pattern = Pattern.<JSONObject>begin("start").where(new SimpleCondition<JSONObject>() {
@Override
public boolean filter(JSONObject jsonObject) throws Exception {
//Get the previous page id
String lastPageId = jsonObject.getJSONObject("page").getString("last_page_id");
return (lastPageId == null || lastPageId.length() <= 0);
}
}).followedBy("follow").where(new SimpleCondition<JSONObject>() {
@Override
public boolean filter(JSONObject value) throws Exception {
String lastPageId = value.getJSONObject("page").getString("last_page_id");
return lastPageId != null && lastPageId.length() > 0;
}
}).within(Time.seconds(10));
//6. Apply the pattern sequence to the keyed stream
PatternStream<JSONObject> patternStream = CEP.pattern(keyedStream, pattern);
//7. Select matched events and timed-out events; the timed-out ones are the bounce records
OutputTag<String> timeOutTag = new OutputTag<String>("timeOut") {};
SingleOutputStreamOperator<Object> selectDS = patternStream.flatSelect(timeOutTag, new PatternFlatTimeoutFunction<JSONObject, String>() {
@Override
public void timeout(Map<String, List<JSONObject>> map, long l, Collector<String> collector) throws Exception {
collector.collect(map.get("start").get(0).toString());
}
}, new PatternFlatSelectFunction<JSONObject, Object>() {
@Override
public void flatSelect(Map<String, List<JSONObject>> map, Collector<Object> collector) throws Exception {
//Intentionally left empty: a pattern that fully matches (a follow-up page arrived within the window)
//is not a bounce, so nothing needs to be emitted here.
}
});
//8. Write the bounce records to Kafka
FlinkKafkaProducer<String> kafkaSink = MyKafkaUtil.getKafkaSink(sinkTopic);
selectDS.getSideOutput(timeOutTag).addSink(kafkaSink);
//9. Execute the job
env.execute();
}
}
4. DWM - Product - Order Wide Table
Requirements analysis and approach
Orders are a central object of statistical analysis, and many dimensional statistics are built around them: by user, region, product, category, brand, and so on.
To make later statistics easier and reduce joins between large tables, the real-time pipeline consolidates the data surrounding an order into a single order wide table.
So which data exactly needs to be combined with the order?
(Figure: the fact and dimension data to be combined around an order; the original image 图片13.png is not available.)
As the figure shows, earlier steps already split the data into fact data and dimension data: fact data (green) goes into the Kafka streams of the DWD layer, while dimension data (blue) goes into HBase for long-term storage. In the DWM layer we join the fact and dimension data back together to form the wide table, which means handling two kinds of association: fact data with fact data, and fact data with dimension data.
- Joining fact data with fact data is a stream-to-stream join.
- Joining fact data with dimension data means querying an external data source from inside the streaming job.
Code implementation
Receive the order and order-detail data from the Kafka DWD layer
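The OrderWideApp below parses each record into OrderInfo and OrderDetail JavaBeans and merges them into an OrderWide; none of these beans are listed in this section. A rough sketch of the two source beans follows, limited to the fields the code actually touches; the real classes would mirror every column of the order_info and order_detail tables, and OrderWide would additionally carry the dimension fields, much like the PaymentWide bean shown later.

```java
package com.atguigu.bean;

import lombok.Data;
import java.math.BigDecimal;

//Sketch only: the real bean carries every column of the order_info table
@Data
public class OrderInfo {
    Long id;
    Long province_id;
    String order_status;
    Long user_id;
    BigDecimal total_amount;
    String create_time;   // "yyyy-MM-dd HH:mm:ss"
    String create_date;   // derived from create_time in OrderWideApp
    String create_hour;   // derived from create_time in OrderWideApp
    Long create_ts;       // epoch millis, used for watermarks
}
```

```java
package com.atguigu.bean;

import lombok.Data;
import java.math.BigDecimal;

//Sketch only: the real bean carries every column of the order_detail table
@Data
public class OrderDetail {
    Long id;
    Long order_id;
    Long sku_id;
    BigDecimal order_price;
    Long sku_num;
    String sku_name;
    String create_time;   // "yyyy-MM-dd HH:mm:ss"
    Long create_ts;       // epoch millis, used for watermarks
}
```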
package com.atguigu.app.dwm;
import com.alibaba.fastjson.JSON;
import com.atguigu.bean.OrderDetail;
import com.atguigu.bean.OrderInfo;
import com.atguigu.bean.OrderWide;
import com.atguigu.utils.MyKafkaUtil;
import org.apache.flink.api.common.eventtime.SerializableTimestampAssigner;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;
import java.text.SimpleDateFormat;
/**
* Mock -> MySQL(binlog) -> MaxWell -> Kafka(ods_base_db_m)
* -> DbBaseApp(updated config, Phoenix)
* -> Kafka(dwd_order_info,dwd_order_detail) -> OrderWideApp
*/
public class OrderWideApp {
public static void main(String[] args) throws Exception {
//1. Create the execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
// //1.1 Set the state backend
// env.setStateBackend(new FsStateBackend("hdfs://hadoop102:9000/gmall/dwd_log/ck"));
//1.2 Enable checkpointing
// env.enableCheckpointing(10000L, CheckpointingMode.EXACTLY_ONCE);
// env.getCheckpointConfig().setCheckpointTimeout(60000L);
//2. Read the order and order-detail topics from Kafka: dwd_order_info, dwd_order_detail
String orderInfoSourceTopic = "dwd_order_info";
String orderDetailSourceTopic = "dwd_order_detail";
String orderWideSinkTopic = "dwm_order_wide";
String groupId = "order_wide_group";
FlinkKafkaConsumer<String> orderInfoKafkaSource = MyKafkaUtil.getKafkaSource(orderInfoSourceTopic, groupId);
DataStreamSource<String> orderInfoKafkaDS = env.addSource(orderInfoKafkaSource);
FlinkKafkaConsumer<String> orderDetailKafkaSource = MyKafkaUtil.getKafkaSource(orderDetailSourceTopic, groupId);
DataStreamSource<String> orderDetailKafkaDS = env.addSource(orderDetailKafkaSource);
//3. Convert each record into a JavaBean, extract the timestamp, and generate watermarks
WatermarkStrategy<OrderInfo> orderInfoWatermarkStrategy = WatermarkStrategy.<OrderInfo>forMonotonousTimestamps()
.withTimestampAssigner(new SerializableTimestampAssigner<OrderInfo>() {
@Override
public long extractTimestamp(OrderInfo orderInfo, long l) {
return orderInfo.getCreate_ts();
}
});
WatermarkStrategy<OrderDetail> orderDetailWatermarkStrategy = WatermarkStrategy.<OrderDetail>forMonotonousTimestamps()
.withTimestampAssigner(new SerializableTimestampAssigner<OrderDetail>() {
@Override
public long extractTimestamp(OrderDetail orderDetail, long l) {
return orderDetail.getCreate_ts();
}
});
KeyedStream<OrderInfo, Long> orderInfoWithIdKeyedStream = orderInfoKafkaDS.map(jsonStr -> {
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
//Convert the JSON string into a JavaBean
OrderInfo orderInfo = JSON.parseObject(jsonStr, OrderInfo.class);
//Get the creation-time field
String create_time = orderInfo.getCreate_time();
//Split on the space into date and hour
String[] createTimeArr = create_time.split(" ");
orderInfo.setCreate_date(createTimeArr[0]);
orderInfo.setCreate_hour(createTimeArr[1]);
orderInfo.setCreate_ts(sdf
.parse(create_time).getTime());
return orderInfo;
}).assignTimestampsAndWatermarks(orderInfoWatermarkStrategy)
.keyBy(OrderInfo::getId);
KeyedStream<OrderDetail, Long> orderDetailWithOrderKeyedStream = orderDetailKafkaDS.map(jsonStr -> {
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
OrderDetail orderDetail = JSON.parseObject(jsonStr, OrderDetail.class);
orderDetail.setCreate_ts(sdf.parse(orderDetail.getCreate_time()).getTime());
return orderDetail;
}).assignTimestampsAndWatermarks(orderDetailWatermarkStrategy)
.keyBy(OrderDetail::getOrder_id);
//4. Join the two streams (interval join on the order id)
SingleOutputStreamOperator<OrderWide> orderWideDS = orderInfoWithIdKeyedStream.intervalJoin(orderDetailWithOrderKeyedStream)
.between(Time.seconds(-5), Time.seconds(5))//in production, set the interval to the maximum expected network delay so that no data is lost
.process(new ProcessJoinFunction<OrderInfo, OrderDetail, OrderWide>() {
@Override
public void processElement(OrderInfo orderInfo, OrderDetail orderDetail, Context context, Collector<OrderWide> collector) throws Exception {
collector.collect(new OrderWide(orderInfo, orderDetail));
}
});
//Print for testing
orderWideDS.print(">>>>>>>>>>>>>>>");
//5. Join the dimension tables (implemented in the optimized version below)
//6. Write the result to the Kafka topic dwm_order_wide
//7. Start the job
env.execute();
}
}
Test result:
Phoenix query utility class
package com.atguigu.utils;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.atguigu.common.GmallConfig;
import org.apache.commons.beanutils.BeanUtils;
import java.sql.*;
import java.util.ArrayList;
import java.util.List;
public class PhoenixUtil {
//Connection shared by all queries
private static Connection connection;
//Initialize the connection
private static void init(){
try {
Class.forName(GmallConfig.PHOENIX_DRIVER);
connection = DriverManager.getConnection(GmallConfig.PHOENIX_SERVER);
//Select the Phoenix schema (database) to use
connection.setSchema(GmallConfig.HBASE_SCHEMA);
} catch (Exception e) {
e.printStackTrace();
throw new RuntimeException("Failed to obtain a Phoenix connection!");
}
}
public static <T>List<T> queryList(String sql,Class<T> cls){
//Initialize the connection
init();
PreparedStatement preparedStatement = null;
ResultSet resultSet = null;
try {
//Prepare the SQL statement
preparedStatement = connection.prepareStatement(sql);
//Execute the query
resultSet = preparedStatement.executeQuery();
//Get the metadata of the result set
ResultSetMetaData metaData = resultSet.getMetaData();
int columnCount = metaData.getColumnCount();
ArrayList<T> list = new ArrayList<>();
while (resultSet.next()){//iterate over the rows
T t = cls.newInstance();
for (int i = 1; i <= columnCount; i++) {//iterate over the columns
BeanUtils.setProperty(t,metaData.getColumnName(i),resultSet.getObject(i));
}
list.add(t);
}
return list;
} catch (Exception throwables) {
throw new RuntimeException("Failed to query dimension data", throwables);
}finally {
if (preparedStatement != null){
try {
preparedStatement.close();
} catch (SQLException throwables) {
throwables.printStackTrace();
}
}
if (resultSet != null){
try {
resultSet.close();
} catch (SQLException throwables) {
throwables.printStackTrace();
}
}
}
}
public static void main(String[] args) {
System.out.println(queryList("select * from GMALL200821_REALTIME.DIM_BASE_TRADEMARK", JSONObject.class));
}
}
Test result
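PhoenixUtil reads its connection settings from a GmallConfig constants class that is not shown in this section. A minimal sketch follows: the driver class is the standard Phoenix JDBC driver, the schema name is taken from the test query above, and the Zookeeper quorum in the JDBC URL is an assumption.

```java
package com.atguigu.common;

public class GmallConfig {
    //HBase schema (namespace) holding the dimension tables, as used in the test query above
    public static final String HBASE_SCHEMA = "GMALL200821_REALTIME";

    //Standard Phoenix JDBC driver class
    public static final String PHOENIX_DRIVER = "org.apache.phoenix.jdbc.PhoenixDriver";

    //Phoenix JDBC URL; the Zookeeper quorum below is an assumption
    public static final String PHOENIX_SERVER = "jdbc:phoenix:hadoop102,hadoop103,hadoop104:2181";
}
```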
Dimension query utility class
package com.atguigu.utils;
import com.alibaba.fastjson.JSONObject;
import org.apache.flink.api.java.tuple.Tuple2;
import java.util.List;
//Utility class for querying dimension tables
/*
select * from t where id = '' and name ='';
*/
public class DimUtil {
public static JSONObject getDimInfo(String tableName, Tuple2<String,String>... columnValues){
if (columnValues.length<=0){
throw new RuntimeException("At least one query condition is required when querying dimension data");//throw to stop the job
}
StringBuilder whereSql = new StringBuilder(" where ");
//Iterate over the conditions and build up whereSql
for (int i = 0; i < columnValues.length; i++) {
Tuple2<String, String> columnValue = columnValues[i];
String column = columnValue.f0;
String value = columnValue.f1;
whereSql.append(column).append(" = '").append(value).append("'");
//If this is not the last condition, append " and " (with surrounding spaces, otherwise the generated SQL is invalid)
if (i < columnValues.length -1){
whereSql.append(" and ");
}
}
String querySql= "select * from "+tableName + whereSql.toString();
System.out.println(querySql);
//Query the dimension data from Phoenix
List<JSONObject> queryList = PhoenixUtil.queryList(querySql, JSONObject.class);
return queryList.get(0);
}
public static JSONObject getDimInfo(String tableName,String value){
return getDimInfo(tableName,new Tuple2<>("id",value));
}
public static void main(String[] args) {
System.out.println(getDimInfo("DIM_BASE_TRADEMARK","20"));
}
}
Test result
Optimization 1: cache-aside caching
Querying Phoenix/HBase for every single record is expensive, so the dimension lookup is wrapped in a cache-aside pattern: check Redis first; on a miss, query Phoenix, write the result back into Redis with a 24-hour expiry, and return it.
Code implementation - DimUtil
package com.atguigu.utils;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import org.apache.flink.api.java.tuple.Tuple2;
import redis.clients.jedis.Jedis;
import java.util.List;
//Utility class for querying dimension tables
/*
select * from t where id = '' and name ='';
*/
/**
* <p>
* Redis design choices:
* 1. What to store: the dimension record as a JSON string
* 2. Which data type: String vs Set vs Hash
* 3. Key design: String -> tableName+id, Set -> tableName, Hash -> tableName
*
* <p>
* The Set/Hash options are ruled out because each individual dimension record needs its own expiry time, so String keys of the form tableName:id are used
*/
public class DimUtil {
public static JSONObject getDimInfo(String tableName, Tuple2<String,String>... columnValues){
if (columnValues.length<=0){
throw new RuntimeException("At least one query condition is required when querying dimension data");//throw to stop the job
}
//Build the Phoenix WHERE clause
StringBuilder whereSql = new StringBuilder(" where ");
StringBuilder rediskey = new StringBuilder(tableName).append(":");
//Iterate over the conditions and build up whereSql and the Redis key
for (int i = 0; i < columnValues.length; i++) {
Tuple2<String, String> columnValue = columnValues[i];
String column = columnValue.f0;
String value = columnValue.f1;
whereSql.append(column).append(" = '").append(value).append("'");
rediskey.append(value);
//If this is not the last condition, append " and " to the SQL (with surrounding spaces) and ":" to the Redis key
if (i < columnValues.length -1){
whereSql.append(" and ");
rediskey.append(":");
}
}
//Get a Redis connection
Jedis jedis = RedisUtil.getJedis();
String dimJsonStr = jedis.get(rediskey.toString());
//Check whether the data was found in Redis (cache hit)
if (dimJsonStr!=null&& dimJsonStr.length()>0){
jedis.close();
return JSON.parseObject(dimJsonStr);
}
String querySql= "select * from "+tableName + whereSql.toString();
System.out.println(querySql);
//Cache miss: query the dimension data from Phoenix
List<JSONObject> queryList = PhoenixUtil.queryList(querySql, JSONObject.class);
JSONObject dimJsonObj = queryList.get(0);
//Write the result back into Redis with a 24-hour expiry, then return the connection to the pool
jedis.set(rediskey.toString(),dimJsonObj.toString());
jedis.expire(rediskey.toString(),24*60*60);
jedis.close();
return dimJsonObj;
}
public static JSONObject getDimInfo(String tableName,String value){
return getDimInfo(tableName,new Tuple2<>("id",value));
}
public static void main(String[] args) {
System.out.println(getDimInfo("DIM_BASE_TRADEMARK","20"));
}
}
Code implementation - RedisUtil
package com.atguigu.utils;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
public class RedisUtil {
public static JedisPool jedisPool=null;
public static Jedis getJedis(){
if(jedisPool==null){
JedisPoolConfig jedisPoolConfig =new JedisPoolConfig();
jedisPoolConfig.setMaxTotal(100); //maximum number of connections
jedisPoolConfig.setBlockWhenExhausted(true); //wait when the pool is exhausted
jedisPoolConfig.setMaxWaitMillis(2000); //maximum wait time
jedisPoolConfig.setMaxIdle(5); //maximum idle connections
jedisPoolConfig.setMinIdle(5); //minimum idle connections
jedisPoolConfig.setTestOnBorrow(true); //ping the connection when it is borrowed
jedisPool=new JedisPool( jedisPoolConfig, "hadoop103",6379 ,1000);
System.out.println("Jedis connection pool created");
return jedisPool.getResource();
}else{
// System.out.println("Active connections in pool: "+jedisPool.getNumActive());
return jedisPool.getResource();
}
}
}
After starting Redis:
Optimization 2: asynchronous queries
During Flink stream processing we frequently have to interact with external systems, for example enriching fact records with fields from dimension tables.
In an e-commerce scenario, a product's sku id is used to look up attributes such as the product's industry, its manufacturer, and details about that manufacturer; in a logistics scenario, a parcel id is used to look up the parcel's industry attributes, shipping information, delivery information, and so on.
By default, inside a Flink MapFunction a single parallel subtask can only interact synchronously: send a request to the external store, block on I/O, wait for the response, then send the next request. This synchronous style spends much of its time simply waiting on the network. Processing efficiency can be raised by increasing the MapFunction's parallelism, but more parallelism means more resources, so it is not a great solution.
Flink 1.2 introduced Async I/O. In asynchronous mode the I/O operations no longer block: a single parallel subtask can have several requests in flight and handle whichever response comes back first, so consecutive requests do not wait on each other, which greatly improves stream-processing throughput.
Async I/O was a much-requested feature contributed to the community by Alibaba; it addresses the case where network latency to external systems becomes the system bottleneck.
An asynchronous query essentially hands the dimension-table lookup off to a separate thread pool, so a single slow query does not block processing and one parallel instance can issue many concurrent requests, improving concurrency.
This is especially effective for operations that involve network I/O, because it removes the cost of waiting on each request.
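The OrderWideApp below imports com.atguigu.app.func.DimAsyncFunction, whose source is not included in this section. As a reference only, here is a minimal sketch of how such a class could be written, assuming a RichAsyncFunction that runs the DimUtil lookup on its own thread pool and exposes the getKey/join hooks that the anonymous subclasses below override; the pool size and the timeout handling are illustrative assumptions, not the original implementation.

```java
package com.atguigu.app.func;

import com.alibaba.fastjson.JSONObject;
import com.atguigu.utils.DimUtil;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

import java.util.Collections;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

//Generic async dimension join: subclasses say which key to look up (getKey)
//and how to copy the dimension fields onto the stream element (join).
public abstract class DimAsyncFunction<T> extends RichAsyncFunction<T, T> {

    private final String tableName;               //Phoenix dimension table to query
    private transient ExecutorService executorService;

    public DimAsyncFunction(String tableName) {
        this.tableName = tableName;
    }

    public abstract String getKey(T input);

    public abstract void join(T input, JSONObject dimInfo) throws Exception;

    @Override
    public void open(Configuration parameters) {
        //Dedicated thread pool so dimension lookups do not block the task thread (pool size is an assumption)
        executorService = Executors.newFixedThreadPool(8);
    }

    @Override
    public void asyncInvoke(T input, ResultFuture<T> resultFuture) {
        executorService.submit(() -> {
            try {
                //Query Redis first, then Phoenix (see DimUtil above)
                JSONObject dimInfo = DimUtil.getDimInfo(tableName, getKey(input));
                if (dimInfo != null) {
                    join(input, dimInfo);
                }
                //Hand the enriched element back to Flink
                resultFuture.complete(Collections.singletonList(input));
            } catch (Exception e) {
                resultFuture.completeExceptionally(e);
            }
        });
    }

    @Override
    public void timeout(T input, ResultFuture<T> resultFuture) {
        //Called when a lookup exceeds the timeout passed to AsyncDataStream.unorderedWait
        System.out.println("Dimension lookup timed out for key: " + getKey(input));
        resultFuture.complete(Collections.singletonList(input));
    }

    @Override
    public void close() {
        if (executorService != null) {
            executorService.shutdown();
        }
    }
}
```

Because the lookup still goes through DimUtil, it continues to benefit from the Redis cache-aside layer introduced above.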
Code implementation - OrderWideApp
package com.atguigu.app.dwm;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.atguigu.app.func.DimAsyncFunction;
import com.atguigu.bean.OrderDetail;
import com.atguigu.bean.OrderInfo;
import com.atguigu.bean.OrderWide;
import com.atguigu.utils.MyKafkaUtil;
import org.apache.flink.api.common.eventtime.SerializableTimestampAssigner;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.concurrent.TimeUnit;
/**
* Mock -> Mysql(binLog) -> MaxWell -> Kafka(ods_base_db_m) -> DbBaseApp(updated config, Phoenix)
* -> Kafka(dwd_order_info,dwd_order_detail) -> OrderWideApp(dimension joins, Redis)
*/
public class OrderWideApp {
public static void main(String[] args) throws Exception {
//1. Create the execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
//1.1 Set the state backend
// env.setStateBackend(new FsStateBackend("hdfs://hadoop102:8020/gmall/dwd_log/ck"));
// //1.2 Enable checkpointing
// env.enableCheckpointing(10000L, CheckpointingMode.EXACTLY_ONCE);
// env.getCheckpointConfig().setCheckpointTimeout(60000L);
//2. Read the order and order-detail topics from Kafka: dwd_order_info, dwd_order_detail
String orderInfoSourceTopic = "dwd_order_info";
String orderDetailSourceTopic = "dwd_order_detail";
String orderWideSinkTopic = "dwm_order_wide";
String groupId = "order_wide_group";
FlinkKafkaConsumer<String> orderInfoKafkaSource = MyKafkaUtil.getKafkaSource(orderInfoSourceTopic, groupId);
DataStreamSource<String> orderInfoKafkaDS = env.addSource(orderInfoKafkaSource);
FlinkKafkaConsumer<String> orderDetailKafkaSource = MyKafkaUtil.getKafkaSource(orderDetailSourceTopic, groupId);
DataStreamSource<String> orderDetailKafkaDS = env.addSource(orderDetailKafkaSource);
//3. Convert each record into a JavaBean and extract the timestamp to generate watermarks
WatermarkStrategy<OrderInfo> orderInfoWatermarkStrategy = WatermarkStrategy.<OrderInfo>forMonotonousTimestamps()
.withTimestampAssigner(new SerializableTimestampAssigner<OrderInfo>() {
@Override
public long extractTimestamp(OrderInfo element, long recordTimestamp) {
return element.getCreate_ts();
}
});
WatermarkStrategy<OrderDetail> orderDetailWatermarkStrategy = WatermarkStrategy.<OrderDetail>forMonotonousTimestamps()
.withTimestampAssigner(new SerializableTimestampAssigner<OrderDetail>() {
@Override
public long extractTimestamp(OrderDetail element, long recordTimestamp) {
return element.getCreate_ts();
}
});
KeyedStream<OrderInfo, Long> orderInfoWithIdKeyedStream = orderInfoKafkaDS.map(jsonStr -> {
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
//Convert the JSON string into a JavaBean
OrderInfo orderInfo = JSON.parseObject(jsonStr, OrderInfo.class);
//Get the creation-time field
String create_time = orderInfo.getCreate_time();
//Split on the space into date and hour
String[] createTimeArr = create_time.split(" ");
orderInfo.setCreate_date(createTimeArr[0]);
orderInfo.setCreate_hour(createTimeArr[1]);
orderInfo.setCreate_ts(sdf.parse(create_time).getTime());
return orderInfo;
}).assignTimestampsAndWatermarks(orderInfoWatermarkStrategy)
.keyBy(OrderInfo::getId);
KeyedStream<OrderDetail, Long> orderDetailWithOrderIdKeyedStream = orderDetailKafkaDS.map(jsonStr -> {
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
OrderDetail orderDetail = JSON.parseObject(jsonStr, OrderDetail.class);
orderDetail.setCreate_ts(sdf.parse(orderDetail.getCreate_time()).getTime());
return orderDetail;
}).assignTimestampsAndWatermarks(orderDetailWatermarkStrategy)
.keyBy(OrderDetail::getOrder_id);
//4. Join the two streams (interval join on the order id)
SingleOutputStreamOperator<OrderWide> orderWideDS = orderInfoWithIdKeyedStream.intervalJoin(orderDetailWithOrderIdKeyedStream)
.between(Time.seconds(-5), Time.seconds(5)) //in production, set the interval to the maximum expected network delay so that no data is lost
.process(new ProcessJoinFunction<OrderInfo, OrderDetail, OrderWide>() {
@Override
public void processElement(OrderInfo orderInfo, OrderDetail orderDetail, Context context, Collector<OrderWide> collector) throws Exception {
collector.collect(new OrderWide(orderInfo, orderDetail));
}
});
//Print for testing
// orderWideDS.print(">>>>>>>>>");
//5. Join the dimension tables asynchronously
//5.1 Join the user dimension
SingleOutputStreamOperator<OrderWide> orderWideWithUserDS = AsyncDataStream.unorderedWait(orderWideDS,
new DimAsyncFunction<OrderWide>("DIM_USER_INFO") {
@Override
public String getKey(OrderWide orderWide) {
return orderWide.getUser_id().toString();
}
@Override
public void join(OrderWide orderWide, JSONObject dimInfo) throws ParseException {
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
//Get the birthday from the user dimension
String birthday = dimInfo.getString("BIRTHDAY");
long currentTS = System.currentTimeMillis();
Long ts = sdf.parse(birthday).getTime();
//Convert the birthday into an age
Long ageLong = (currentTS - ts) / 1000L / 60 / 60 / 24 / 365;
orderWide.setUser_age(ageLong.intValue());
//Get the gender from the user dimension
String gender = dimInfo.getString("GENDER");
orderWide.setUser_gender(gender);
}
},
60,
TimeUnit.SECONDS);
//orderWideWithUserDS.print("Users>>>>>>");
//5.2 Join the province (region) dimension
SingleOutputStreamOperator<OrderWide> orderWideWithProvinceDS = AsyncDataStream.unorderedWait(orderWideWithUserDS,
new DimAsyncFunction<OrderWide>("DIM_BASE_PROVINCE") {
@Override
public String getKey(OrderWide orderWide) {
return orderWide.getProvince_id().toString();
}
@Override
public void join(OrderWide orderWide, JSONObject dimInfo) throws Exception {
//Extract the dimension fields and set them on orderWide
orderWide.setProvince_name(dimInfo.getString("NAME"));
orderWide.setProvince_area_code(dimInfo.getString("AREA_CODE"));
orderWide.setProvince_iso_code(dimInfo.getString("ISO_CODE"));
orderWide.setProvince_3166_2_code(dimInfo.getString("ISO_3166_2"));
}
}, 60, TimeUnit.SECONDS);
//orderWideWithProvinceDS.print("Province>>>>>>>>");
//5.3 Join the SKU dimension
SingleOutputStreamOperator<OrderWide> orderWideWithSkuDS = AsyncDataStream.unorderedWait(
orderWideWithProvinceDS, new DimAsyncFunction<OrderWide>("DIM_SKU_INFO") {
@Override
public void join(OrderWide orderWide, JSONObject jsonObject) throws Exception {
orderWide.setSku_name(jsonObject.getString("SKU_NAME"));
orderWide.setCategory3_id(jsonObject.getLong("CATEGORY3_ID"));
orderWide.setSpu_id(jsonObject.getLong("SPU_ID"));
orderWide.setTm_id(jsonObject.getLong("TM_ID"));
}
@Override
public String getKey(OrderWide orderWide) {
return String.valueOf(orderWide.getSku_id());
}
}, 60, TimeUnit.SECONDS);
//orderWideWithSkuDS.print("sku>>>>");
//5.4 Join the SPU dimension
SingleOutputStreamOperator<OrderWide> orderWideWithSpuDS = AsyncDataStream.unorderedWait(
orderWideWithSkuDS, new DimAsyncFunction<OrderWide>("DIM_SPU_INFO") {
@Override
public void join(OrderWide orderWide, JSONObject jsonObject) throws Exception {
orderWide.setSpu_name(jsonObject.getString("SPU_NAME"));
}
@Override
public String getKey(OrderWide orderWide) {
return String.valueOf(orderWide.getSpu_id());
}
}, 60, TimeUnit.SECONDS);
//orderWideWithSpuDS.print("spu");
//5.5 Join the trademark (brand) dimension
SingleOutputStreamOperator<OrderWide> orderWideWithTmDS = AsyncDataStream.unorderedWait(
orderWideWithSpuDS, new DimAsyncFunction<OrderWide>("DIM_BASE_TRADEMARK") {
@Override
public void join(OrderWide orderWide, JSONObject jsonObject) throws Exception {
orderWide.setTm_name(jsonObject.getString("TM_NAME"));
}
@Override
public String getKey(OrderWide orderWide) {
return String.valueOf(orderWide.getTm_id());
}
}, 60, TimeUnit.SECONDS);
//5.6 Join the category3 dimension
SingleOutputStreamOperator<OrderWide> orderWideWithCategory3DS = AsyncDataStream.unorderedWait(
orderWideWithTmDS, new DimAsyncFunction<OrderWide>("DIM_BASE_CATEGORY3") {
@Override
public void join(OrderWide orderWide, JSONObject jsonObject) throws Exception {
orderWide.setCategory3_name(jsonObject.getString("NAME"));
}
@Override
public String getKey(OrderWide orderWide) {
return String.valueOf(orderWide.getCategory3_id());
}
}, 60, TimeUnit.SECONDS);
orderWideWithCategory3DS.print("3DS>>>>");
//6. Write the result to the Kafka topic dwm_order_wide
orderWideWithCategory3DS.map(JSON::toJSONString)
.addSink(MyKafkaUtil.getKafkaSink(orderWideSinkTopic));
//7. Start the job
env.execute();
}
}
5. DWM - Product - Payment Wide Table
Requirements analysis and approach
The main reason for building a payment wide table is that the payment table does not reach down to the order-detail level: the payment amount is not broken down by product, so product-level payment statistics cannot be computed.
The core of this wide table is therefore to associate the payment data with the order details.
Code implementation - payment entity class - PaymentInfo
package com.atguigu.bean;
import lombok.Data;
import java.math.BigDecimal;
@Data
public class PaymentInfo {
Long id;
Long order_id;
Long user_id;
BigDecimal total_amount;
String subject;
String payment_type;
String create_time;
String callback_time;
}
Code implementation - payment wide table entity class - PaymentWide
package com.atguigu.bean;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.commons.beanutils.BeanUtils;
import java.lang.reflect.InvocationTargetException;
import java.math.BigDecimal;
@Data
@AllArgsConstructor
@NoArgsConstructor
public class PaymentWide {
Long payment_id;
String subject;
String payment_type;
String payment_create_time;
String callback_time;
Long detail_id;
Long order_id;
Long sku_id;
BigDecimal order_price;
Long sku_num;
String sku_name;
Long province_id;
String order_status;
Long user_id;
BigDecimal total_amount;
BigDecimal activity_reduce_amount;
BigDecimal coupon_reduce_amount;
BigDecimal original_total_amount;
BigDecimal feight_fee;
BigDecimal split_feight_fee;
BigDecimal split_activity_amount;
BigDecimal split_coupon_amount;
BigDecimal split_total_amount;
String order_create_time;
String province_name;//obtained by querying the dimension table
String province_area_code;
String province_iso_code;
String province_3166_2_code;
Integer user_age; //user information
String user_gender;
Long spu_id; //dimension data joined in
Long tm_id;
Long category3_id;
String spu_name;
String tm_name;
String category3_name;
public PaymentWide(PaymentInfo paymentInfo, OrderWide orderWide) {
mergeOrderWide(orderWide);
mergePaymentInfo(paymentInfo);
}
public void mergePaymentInfo(PaymentInfo paymentInfo) {
if (paymentInfo != null) {
try {
BeanUtils.copyProperties(this, paymentInfo);
payment_create_time = paymentInfo.create_time;
payment_id = paymentInfo.id;
} catch (IllegalAccessException e) {
e.printStackTrace();
} catch (InvocationTargetException e) {
e.printStackTrace();
}
}
}
public void mergeOrderWide(OrderWide orderWide) {
if (orderWide != null) {
try {
BeanUtils.copyProperties(this, orderWide);
order_create_time = orderWide.create_time;
} catch (IllegalAccessException e) {
e.printStackTrace();
} catch (InvocationTargetException e) {
e.printStackTrace();
}
}
}
}
Code implementation - payment wide table main program - PaymentWideApp
package com.atguigu.app.dwm;
import com.alibaba.fastjson.JSON;
import com.atguigu.bean.OrderWide;
import com.atguigu.bean.PaymentInfo;
import com.atguigu.bean.PaymentWide;
import com.atguigu.utils.MyKafkaUtil;
import org.apache.flink.api.common.eventtime.SerializableTimestampAssigner;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;
import java.text.ParseException;
import java.text.SimpleDateFormat;
/**
* Mock -> Mysql(binLog) -> MaxWell -> Kafka(ods_base_db_m) -> DbBaseApp(updated config, Phoenix)
* -> Kafka(dwd_order_info,dwd_order_detail) -> OrderWideApp(dimension joins, Redis) -> Kafka(dwm_order_wide)
* -> PaymentWideApp -> Kafka(dwm_payment_wide)
*/
public class PaymentWideApp {
public static void main(String[] args) throws Exception {
//1. Create the execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
//1.1 Set the state backend
//env.setStateBackend(new FsStateBackend("hdfs://hadoop102:8020/gmall/dwd_log/ck"));
//1.2 Enable checkpointing
//env.enableCheckpointing(10000L, CheckpointingMode.EXACTLY_ONCE);
//env.getCheckpointConfig().setCheckpointTimeout(60000L);
//Set the HDFS user name
//System.setProperty("HADOOP_USER_NAME", "atguigu");
//2. Read the Kafka topics dwd_payment_info and dwm_order_wide
String groupId = "payment_wide_group";
String paymentInfoSourceTopic = "dwd_payment_info";
String orderWideSourceTopic = "dwm_order_wide";
String paymentWideSinkTopic = "dwm_payment_wide";
DataStreamSource<String> paymentKafkaDS = env.addSource(MyKafkaUtil.getKafkaSource(paymentInfoSourceTopic, groupId));
DataStreamSource<String> orderWideKafkaDS = env.addSource(MyKafkaUtil.getKafkaSource(orderWideSourceTopic, groupId));
//3. Convert the records into JavaBeans and extract timestamps to generate watermarks
SingleOutputStreamOperator<PaymentInfo> paymentInfoDS = paymentKafkaDS
.map(jsonStr -> JSON.parseObject(jsonStr, PaymentInfo.class))
.assignTimestampsAndWatermarks(WatermarkStrategy.<PaymentInfo>forMonotonousTimestamps()
.withTimestampAssigner(new SerializableTimestampAssigner<PaymentInfo>() {
@Override
public long extractTimestamp(PaymentInfo element, long recordTimestamp) {
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
try {
return sdf.parse(element.getCreate_time()).getTime();
} catch (ParseException e) {
e.printStackTrace();
throw new RuntimeException("Invalid time format!!");
}
}
}));
SingleOutputStreamOperator<OrderWide> orderWideDS = orderWideKafkaDS
.map(jsonStr -> JSON.parseObject(jsonStr, OrderWide.class))
.assignTimestampsAndWatermarks(WatermarkStrategy.<OrderWide>forMonotonousTimestamps()
.withTimestampAssigner(new SerializableTimestampAssigner<OrderWide>() {
@Override
public long extractTimestamp(OrderWide element, long recordTimestamp) {
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
try {
return sdf.parse(element.getCreate_time()).getTime();
} catch (ParseException e) {
e.printStackTrace();
throw new RuntimeException("Invalid time format!!");
}
}
}));
//4. Key both streams by order id
KeyedStream<PaymentInfo, Long> paymentInfoKeyedStream = paymentInfoDS.keyBy(PaymentInfo::getOrder_id);
KeyedStream<OrderWide, Long> orderWideKeyedStream = orderWideDS.keyBy(OrderWide::getOrder_id);
//5. Join the two streams
SingleOutputStreamOperator<PaymentWide> paymentWideDS = paymentInfoKeyedStream.intervalJoin(orderWideKeyedStream)
.between(Time.minutes(-15), Time.seconds(0))
.process(new ProcessJoinFunction<PaymentInfo, OrderWide, PaymentWide>() {
@Override
public void processElement(PaymentInfo paymentInfo, OrderWide orderWide, Context ctx, Collector<PaymentWide> out) throws Exception {
out.collect(new PaymentWide(paymentInfo, orderWide));
}
});
//Print for testing
paymentWideDS.print(">>>>>>>>>>");
//6. Write the result to the Kafka topic dwm_payment_wide
paymentWideDS.map(JSON::toJSONString).addSink(MyKafkaUtil.getKafkaSink(paymentWideSinkTopic));
//7. Start the job
env.execute();
}
}
Code implementation - date conversion utility class - DateTimeUtil
package com.atguigu.utils;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Date;
public class DateTimeUtil {
private final static DateTimeFormatter formator = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
public static String toYMDhms(Date date) {
LocalDateTime localDateTime = LocalDateTime.ofInstant(date.toInstant(), ZoneId.systemDefault());
return formator.format(localDateTime);
}
public static Long toTs(String YmDHms) {
LocalDateTime localDateTime = LocalDateTime.parse(YmDHms, formator);
return localDateTime.toInstant(ZoneOffset.of("+8")).toEpochMilli();
}
}
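The section does not show where DateTimeUtil is meant to be used. Presumably it replaces the per-record SimpleDateFormat objects created inside the timestamp assigners above, since the DateTimeFormatter it wraps is thread-safe while SimpleDateFormat is not; that motivation is an assumption, not stated in the original. A small self-contained check of the two conversions:

```java
package com.atguigu.utils;

import java.util.Date;

//Quick sanity check of DateTimeUtil: format the current time and parse it back to epoch millis.
//Note that toTs() interprets the string as UTC+8 (ZoneOffset.of("+8") above), while toYMDhms()
//uses the system default zone, so the round trip is only exact on a machine running in UTC+8.
public class DateTimeUtilTest {
    public static void main(String[] args) {
        String formatted = DateTimeUtil.toYMDhms(new Date());
        Long ts = DateTimeUtil.toTs(formatted);
        System.out.println(formatted + " -> " + ts);
    }
}
```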