Hudi 0.11.0 + Flink 1.14.4 + Hive + Flink CDC + Kafka Integration
1. Environment Preparation
1.1 Software Versions
Flink 1.14.4
Scala 2.11
CDH 6.1.0
Hadoop 3.0.0
Hive 2.1.1
Hudi 0.11.0
Flink CDC 2.2.0
MySQL 5.7
1.2 Flink Setup
- Download Flink 1.14.4 into $HUDI_HOME:
wget https://archive.apache.org/dist/flink/flink-1.14.4/flink-1.14.4-bin-scala_2.11.tgz
- Extract:
tar zxvf flink-1.14.4-bin-scala_2.11.tgz
- Download the flink-sql-connector jars (MySQL CDC and Kafka), then put them on the SQL client classpath as sketched below:
wget https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.2.0/flink-sql-connector-mysql-cdc-2.2.0.jar
wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka_2.11/1.14.4/flink-sql-connector-kafka_2.11-1.14.4.jar
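The SQL client needs these connector jars on its classpath. A minimal sketch, assuming both jars were downloaded into $HUDI_HOME and are placed in Flink's lib directory (passing them to sql-client.sh with -j is an alternative):
mv flink-sql-connector-mysql-cdc-2.2.0.jar flink-1.14.4/lib/
mv flink-sql-connector-kafka_2.11-1.14.4.jar flink-1.14.4/lib/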
1.3 Hadoop Setup
- Point Flink at the cluster's Hadoop configuration:
export HADOOP_CONF_DIR=/etc/hadoop/conf
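Flink 1.14 no longer bundles Hadoop, so the YARN session started below also needs the Hadoop classes on Flink's classpath. A minimal sketch, assuming the hadoop CLI is available on the node:
export HADOOP_CLASSPATH=`hadoop classpath`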
1.4 Hudi Setup
- Download the Hudi 0.11.0 source release into $HUDI_HOME:
wget --no-check-certificate https://dlcdn.apache.org/hudi/0.11.0/hudi-0.11.0.src.tgz
- Extract:
tar zxvf hudi-0.11.0.src.tgz
- Then change into the packaging/hudi-flink-bundle directory and build the Flink bundle:
mvn clean install -DskipTests -Drat.skip=true -Pflink-bundle-shade-hive2
- Copy packaging/hudi-flink-bundle/target/hudi-flink1.14-bundle_2.11-0.11.0.jar into $HUDI_HOME/flink-1.14.4/lib/
1.5 Hive Setup
- Create an auxlib folder under the Hive root directory (see the sketch after this list).
- Change into the packaging/hudi-hadoop-mr-bundle directory and run:
mvn clean install -DskipTests
- Change into the packaging/hudi-hive-sync-bundle directory and run:
mvn clean install -DskipTests
- Copy the two bundle jars built above into the auxlib directory:
hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.11.0.jar
hudi-hive-sync-bundle/target/hudi-hive-sync-bundle-0.11.0.jar
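A minimal sketch of the copy step, assuming $HIVE_HOME points at the Hive root and the source tree extracted to $HUDI_HOME/hudi-0.11.0:
mkdir -p $HIVE_HOME/auxlib
cp $HUDI_HOME/hudi-0.11.0/packaging/hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.11.0.jar $HIVE_HOME/auxlib/
cp $HUDI_HOME/hudi-0.11.0/packaging/hudi-hive-sync-bundle/target/hudi-hive-sync-bundle-0.11.0.jar $HIVE_HOME/auxlib/
HiveServer2 typically has to be restarted to pick up jars newly added to auxlib.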
1.6 Note
In hudi-flink-bundle's pom.xml, change the Hive version to the one your cluster runs:
<properties>
<hive.version>2.1.1-cdh6.1.0</hive.version>
<flink.bundle.hive.scope>compile</flink.bundle.hive.scope>
</properties>
<!-- If the build fails to resolve the CDH artifacts, add the Cloudera repository: -->
<repository>
<id>cloudera</id>
<url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
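Instead of editing the pom, the Hive version can usually be overridden on the Maven command line, since CLI-defined properties take precedence over pom properties, e.g.:
mvn clean install -DskipTests -Drat.skip=true -Pflink-bundle-shade-hive2 -Dhive.version=2.1.1-cdh6.1.0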
2. Kafka + Flink + Hudi + Hive
2.1 Start the Flink SQL Client
bin/yarn-session.sh -nm kafka2hudi -d -qu root.analysis -jm 2048 -tm 4096
bin/sql-client.sh embedded
-- Hudi commits data on Flink checkpoints, so checkpointing must be enabled:
SET 'execution.checkpointing.interval' = '60000';
2.2 Create a Kafka source table and a Hudi sink table, then start the SQL streaming job (the INSERT that starts the job is sketched after the DDL):
CREATE TABLE user_report_topic(
uid string,
userIp string,
countryName string,
countryCode string,
regionName string,
cityName string,
ispName string,
cVersion string,
deviceId string,
deviceType string,
appType string,
flagLevel Array<string>,
visitType int,
visitTime TIMESTAMP(3),
WATERMARK FOR visitTime AS visitTime - INTERVAL '5' SECOND
) WITH (
'connector' = 'kafka',
'topic' = 'user_report_topic',
'properties.group.id' = 'user_report_topic_group2',
'scan.startup.mode' = 'earliest-offset',
'properties.bootstrap.servers' = 'xx.xx.xx.25:9092,xx.xx.xx.26:9092,xx.xx.xx.27:9092',
'format' = 'json'
);
CREATE TABLE user_report_hudi(
uid string,
userIp string,
countryName string,
countryCode string,
regionName string,
cityName string,
ispName string,
cVersion string,
deviceId string,
deviceType string,
appType string,
PRIMARY KEY(uid) NOT ENFORCED
)
WITH (
'connector' = 'hudi',
'path' = 'hdfs:///hudi/data/user_report_hudi',
'table.type' = 'MERGE_ON_READ',
'write.bucket_assign.tasks' = '1',
'write.tasks' = '1',
'hive_sync.enable' = 'true',  -- enable automatic Hive sync
'hive_sync.mode' = 'hms',  -- Hive sync mode, defaults to jdbc
'hive_sync.metastore.uris' = 'thrift://xx.xx.xx.27:9083',  -- Hive metastore URI
'hive_sync.jdbc_url' = 'jdbc:hive2://xx.xx.xx.27:10000',  -- required, HiveServer2 address
'hive_sync.table' = 'user_report_hudi',  -- Hive table name to create
'hive_sync.db' = 'test',  -- Hive database name to create the table in
'hive_sync.username' = 'admin',  -- HMS username
'hive_sync.password' = 'admin',  -- HMS password
'hive_sync.support_timestamp' = 'true'  -- support the Hive timestamp type
);
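The two DDL statements above only register the tables; the streaming job starts once rows are inserted from the source into the sink. A minimal sketch (the sink omits visitType and visitTime, so only the shared columns are selected):
INSERT INTO user_report_hudi
SELECT uid, userIp, countryName, countryCode, regionName, cityName,
       ispName, cVersion, deviceId, deviceType, appType
FROM user_report_topic;
With hive_sync enabled on a MERGE_ON_READ table, Hudi registers two tables in the Hive database test, user_report_hudi_ro (read optimized) and user_report_hudi_rt (real time), which become queryable from Hive after the first successful checkpoint.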