Note 1: This article uses Flink 1.11.0, Hive 2.3.6, and Hadoop 2.10.0.
Note 2: Place the hive-site.xml file in the resources directory of your Maven project.
Note 3: If you do not launch the job through a script, run export HADOOP_CLASSPATH=`hadoop classpath` first.
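The third note can be folded into a small launcher script so the classpath is always set before the job is submitted. The sketch below is hypothetical (the script name and the commented submit command are assumptions, not part of this article); it only shows the guard around the export:

#!/usr/bin/env bash
# Hypothetical launcher sketch: make the Hadoop jars visible to Flink
# before submitting the job. `hadoop classpath` prints Hadoop's full
# classpath; the guard keeps the script from aborting where hadoop is
# not on the PATH (e.g. on a dev machine).
if command -v hadoop >/dev/null 2>&1; then
  export HADOOP_CLASSPATH="$(hadoop classpath)"
fi
# ./bin/flink run ...   # actual submit command depends on your setup
echo "HADOOP_CLASSPATH=${HADOOP_CLASSPATH:-<unset>}"

With this in place, Note 3's manual export is no longer needed, since every submission goes through the script.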
Step 1: Add the following pom dependencies, as listed in the official documentation:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-hive_2.11</artifactId>
    <version>1.11.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-api-java-bridge_2.11</artifactId>
    <version>1.11.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>2.3.6</version>
    <scope>provided</scope>
</dependency>
Step 2: Write the code as follows:
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import java.sql.Timestamp;
public class StreamMain {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment bsEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoint every 10 seconds; streaming writes to Hive are committed on checkpoints.
        bsEnv.enableCheckpointing(10000);
        bsEnv.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(bsEnv);
        DataStream dataStream = bsEnv.addSource(new MySource())
                .assignTimestampsAndWatermarks(
                        new AssignerWithPunctuatedWatermarks() {
                            long water = 0L;
                            @Override