Preface
Flink: 1.12.4
Kafka: 2.4
HBase: 2.3
When building the DWD detail layer of our enterprise real-time data warehouse, we needed to produce wide business tables. After evaluating Flink-based approaches for real-time wide tables against our business scenarios, we found that window joins carry a risk of dropping data, so we ultimately chose the stable Kafka + HBase architecture. It supports join lookups at high concurrency while guaranteeing data correctness.
Maven dependencies
<!-- Flink core and streaming APIs (Scala 2.11 builds) -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-scala_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<!-- Kafka connector and JSON format -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-json</artifactId>
    <version>${flink.version}</version>
</dependency>
<!-- Table API, planners, and client -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-api-java-bridge_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-common</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-blink_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<!-- HBase and JDBC connectors -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-hbase-2.2_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-jdbc_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
<!-- Kafka client libraries and MySQL driver -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>2.4.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.4.1</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.47</version>
</dependency>
Create the Kafka stream table
CREATE TABLE kafka_do_master (
    name STRING,
    age INT,
    -- processing-time attribute; required because the temporal join below
    -- references kafka_do_master.proctime
    proctime AS PROCTIME()
) WITH (
    'connector' = 'kafka',
    'topic' = 'xxx',
    'properties.bootstrap.servers' = 'xxxx:9092',
    'properties.group.id' = 'test-1',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'json'
)
Testing showed that the column order in the table definition does not need to match the field order in the JSON messages, presumably because the underlying JSON object is keyed like a HashMap and fields are resolved by name rather than by position.
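For context, here is a minimal sketch of the job skeleton that executes these DDL statements. The class name is illustrative rather than from the original post; it enables the blink planner, which the temporal join shown later requires:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class WideTableJob {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Temporal joins in Flink 1.12 require the blink planner.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);

        // Each DDL / DML statement in this article is submitted the same way:
        tableEnv.executeSql("CREATE TABLE kafka_do_master (...)");        // Kafka source, above
        tableEnv.executeSql("CREATE TABLE dim_hbase (...)");              // HBase dimension, below
        tableEnv.executeSql("CREATE TABLE flink_do_so_master (...)");     // sink, below
        tableEnv.executeSql("INSERT INTO flink_do_so_master SELECT ..."); // join, below
    }
}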
Create the HBase dimension table
CREATE TABLE dim_hbase (
    rowkey STRING,
    info ROW<som_sysno STRING>,
    PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
    'connector' = 'hbase-2.2',
    'table-name' = 'default:xxx',
    'zookeeper.quorum' = 'xxx:2181',
    'zookeeper.znode.parent' = '/hbase'
)
Here, too, the declared column order does not need to match the order in which the columns are stored in HBase.
Create the sink table
CREATE TABLE flink_do_so_master (
    name VARCHAR,
    age BIGINT,
    som_sysno VARCHAR,
    PRIMARY KEY (name, som_sysno) NOT ENFORCED
) WITH (
    'connector' = 'adbpg',
    'url' = 'jdbc:postgresql://xxx:5432/test',
    'tablename' = 'xxx',
    'username' = 'xx',
    'password' = 'xx',
    'maxretrytimes' = '2',
    'batchsize' = '100',
    'connectionmaxactive' = '5',
    'conflictmode' = 'upsert',
    'usecopy' = '0',
    'targetschema' = 'test',
    'exceptionmode' = 'ignore',
    'casesensitive' = '0',
    'writemode' = '2',
    'retrywaittime' = '200'
)
I use Alibaba Cloud AnalyticDB (ADB) for PostgreSQL as the target store. Open-source Flink does not ship a connector for it, so I implemented a custom connector.
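To give a feel for what such a custom connector involves, here is a minimal sketch against Flink 1.12's DynamicTableSinkFactory API. This is not the author's actual implementation: the class names are hypothetical, only a few of the options above are wired up, and the writer itself is stubbed out.

import java.util.HashSet;
import java.util.Set;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.ReadableConfig;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.sink.DynamicTableSink;
import org.apache.flink.table.factories.DynamicTableSinkFactory;
import org.apache.flink.table.factories.FactoryUtil;

// Hypothetical factory matching 'connector' = 'adbpg' in the DDL above.
public class AdbpgDynamicTableSinkFactory implements DynamicTableSinkFactory {

    private static final ConfigOption<String> URL =
            ConfigOptions.key("url").stringType().noDefaultValue();
    private static final ConfigOption<String> TABLE_NAME =
            ConfigOptions.key("tablename").stringType().noDefaultValue();
    private static final ConfigOption<String> USERNAME =
            ConfigOptions.key("username").stringType().noDefaultValue();
    private static final ConfigOption<String> PASSWORD =
            ConfigOptions.key("password").stringType().noDefaultValue();

    @Override
    public String factoryIdentifier() {
        return "adbpg"; // the value used in the WITH clause
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        Set<ConfigOption<?>> options = new HashSet<>();
        options.add(URL);
        options.add(TABLE_NAME);
        options.add(USERNAME);
        options.add(PASSWORD);
        return options;
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        // 'batchsize', 'conflictmode', 'retrywaittime', etc. would be declared here.
        return new HashSet<>();
    }

    @Override
    public DynamicTableSink createDynamicTableSink(Context context) {
        FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);
        helper.validate(); // rejects unknown or missing options
        return new AdbpgDynamicTableSink(helper.getOptions());
    }

    /** Stub sink; a real implementation would wrap a JDBC writer with upsert semantics. */
    private static class AdbpgDynamicTableSink implements DynamicTableSink {
        private final ReadableConfig options;

        AdbpgDynamicTableSink(ReadableConfig options) {
            this.options = options;
        }

        @Override
        public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
            return requestedMode; // accept inserts and upserts as requested
        }

        @Override
        public SinkRuntimeProvider getSinkRuntimeProvider(DynamicTableSink.Context context) {
            // A real connector would return e.g. SinkFunctionProvider.of(writer).
            throw new UnsupportedOperationException("writer omitted in this sketch");
        }

        @Override
        public DynamicTableSink copy() {
            return new AdbpgDynamicTableSink(options);
        }

        @Override
        public String asSummaryString() {
            return "ADB for PostgreSQL sink (sketch)";
        }
    }
}

The factory is discovered via Java SPI: its fully qualified class name must be listed in META-INF/services/org.apache.flink.table.factories.Factory on the classpath.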
Join logic
INSERT INTO flink_do_so_master
SELECT kafka_do_master.*, dim_hbase.info.*
FROM kafka_do_master
LEFT JOIN dim_hbase FOR SYSTEM_TIME AS OF kafka_do_master.proctime
ON reverseKey(kafka_do_master.name) = dim_hbase.rowkey
I implemented a UDF, reverseKey, that reverses the rowkey, and used Flink's temporal join syntax to look up HBase. The clause FOR SYSTEM_TIME AS OF kafka_do_master.proctime is the fixed syntax of a temporal join; here it uses the processing-time attribute proctime declared in the Kafka table above. If the job uses event time instead, change it to the rowtime attribute and declare a watermark on the Kafka stream table; in testing I found that the rowtime field must be declared in the Kafka stream table, otherwise the query fails with an error.
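The post does not show the UDF itself, so here is a minimal sketch of what a rowkey-reversal function could look like; reversing the key is a common trick to keep monotonically increasing IDs from hotspotting a single HBase region:

import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical implementation of the reverseKey UDF used in the join above.
public class ReverseKey extends ScalarFunction {
    public String eval(String key) {
        if (key == null) {
            return null;
        }
        // Reverse the key string so writes and reads spread across regions.
        return new StringBuilder(key).reverse().toString();
    }
}

It would be registered before the INSERT statement runs, e.g. with tableEnv.createTemporarySystemFunction("reverseKey", ReverseKey.class).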
Temporal joins currently must run on the blink planner, and any connector used as the lookup side must implement the LookupableTableSource interface (LookupTableSource in the new DynamicTableSource API). Implementing it means the external store can be queried by one or more keys; at the moment the JDBC, HBase, and Hive connectors support this.
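Conceptually, the lookup side boils down to a TableFunction that the planner invokes once per probe row with the join-key value(s), emitting any matching rows. A schematic sketch (the class name and emitted schema are hypothetical):

import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.functions.TableFunction;

// Schematic lookup function; a LookupTableSource hands one of these to the
// planner via TableFunctionProvider.of(...). Keys arrive in Flink's internal
// data format (StringData for STRING columns).
public class DimLookupFunction extends TableFunction<RowData> {
    public void eval(StringData rowkey) {
        // A real implementation would issue a point query (e.g. an HBase Get,
        // usually fronted by a cache) and collect each matching row.
        GenericRowData row = new GenericRowData(2);
        row.setField(0, rowkey);                                    // rowkey column
        row.setField(1, StringData.fromString("som_sysno-value"));  // looked-up column
        collect(row);
    }
}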
Summary
Judging from the current state of real-time data warehouses at major companies, many already use temporal joins for wide-table enrichment. While working with Flink SQL I also found features that remain incomplete for certain scenarios, and I occasionally hit bugs that required changing the source code; I hope to find time to write these up and walk through the relevant source code.