Flink: Reading Hive Table Data and Writing It to Kafka



Introduction

Scenario

  Data in the Hive data warehouse needs to be read out and written into Kafka, where it is served to downstream data consumers.

Technology selection

  Flink was chosen for reading Hive and writing Kafka because it offers a rich set of ready-made connectors for both systems.

Development

pom dependencies

<dependencies>
	<dependency>
		<groupId>org.apache.flink</groupId>
		<artifactId>flink-table-api-java-bridge_2.11</artifactId>
		<version>1.13.2</version>
	</dependency>
	<dependency>
		<groupId>org.apache.flink</groupId>
		<artifactId>flink-connector-hive_2.11</artifactId>
		<!-- the connector version must match the Flink version -->
		<version>1.13.2</version>
	</dependency>
</dependencies>

<build>
	<plugins>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-source-plugin</artifactId>
			<executions>
				<execution>
					<id>attach-sources</id>
					<goals>
						<goal>jar</goal>
					</goals>
				</execution>
			</executions>
			<configuration>
				<skipSource>true</skipSource>
			</configuration>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-jar-plugin</artifactId>
			<version>3.2.0</version>
			<configuration>
				<archive>
<!-- specify the main class of the jar -->
					<manifest>
						<mainClass>com.test.demo.flinkhive2kafka.job.Hive2Kafka</mainClass>
					</manifest>
				</archive>
			</configuration>
		</plugin>
	</plugins>
</build>
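
Note that the two dependencies above are usually not sufficient on their own: the job below also relies on the Kafka SQL connector, the Blink planner, and the Hive client classes used by HiveCatalog. A minimal sketch of the missing entries, assuming Flink 1.13.2 on Scala 2.11 and a Hive 2.x cluster (adjust versions and scopes to your own environment):

<dependencies>
	<!-- Kafka SQL connector, needed by CREATE TABLE ... WITH ('connector' = 'kafka') -->
	<dependency>
		<groupId>org.apache.flink</groupId>
		<artifactId>flink-connector-kafka_2.11</artifactId>
		<version>1.13.2</version>
	</dependency>
	<!-- Blink planner, needed at runtime by useBlinkPlanner() -->
	<dependency>
		<groupId>org.apache.flink</groupId>
		<artifactId>flink-table-planner-blink_2.11</artifactId>
		<version>1.13.2</version>
	</dependency>
	<!-- Hive client classes; the version should match the cluster (2.2.0 is assumed here) -->
	<dependency>
		<groupId>org.apache.hive</groupId>
		<artifactId>hive-exec</artifactId>
		<version>2.2.0</version>
		<scope>provided</scope>
	</dependency>
</dependencies>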

Job class


import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class Hive2Kafka {
	public static void main(String[] args) {
		// Set up the Flink SQL environment with the Blink planner
		EnvironmentSettings environmentSettings = EnvironmentSettings.newInstance().useBlinkPlanner().build();

		// Create the table environment
		TableEnvironment tableEnvironment = TableEnvironment.create(environmentSettings);

		// Fall back to Hadoop's mapred record reader when reading Hive tables
		tableEnvironment.getConfig().getConfiguration().setString("table.exec.hive.fallback-mapred-reader", "true");

		// Read the external configuration from the command-line arguments
		ParameterTool parameterTool = ParameterTool.fromArgs(args);
		log.info("parameters size: {}", parameterTool.getNumberOfParameters());

		// Fetch all settings
		String hiveCatalogName = parameterTool.get("hive.catalog.name");
		String hiveConfDir = parameterTool.get("hive.conf.dir");
		String hiveDatabaseName = parameterTool.get("hive.db.name");
		String hiveKafkaTable = parameterTool.get("hive.kafka.tb");
		String kafkaBootstrapServer = parameterTool.get("kafka.bootstrap.server");
		String kafkaTopic = parameterTool.get("kafka.topic");
		String kafkaGroupId = parameterTool.get("kafka.group.id");
		String kafkaUsername = parameterTool.get("kafka.username");
		String kafkaPassword = parameterTool.get("kafka.password");
		String insertKafkaTableSql = parameterTool.get("insert.kafka.table.sql");

		// Create the Hive catalog
		HiveCatalog hiveCatalog = new HiveCatalog(hiveCatalogName, hiveDatabaseName, hiveConfDir);
		// Register the catalog
		tableEnvironment.registerCatalog(hiveCatalogName, hiveCatalog);
		// Make it the current catalog
		tableEnvironment.useCatalog(hiveCatalogName);
		
		String createKafkaTableSql = String.format("CREATE TABLE IF NOT EXISTS %s (`field01` STRING) \n" +
		"WITH ('connector' = 'kafka', \n" +
		"'topic' = '%s', \n" +
		"'properties.group.id' = '%s', \n" +
		"'properties.bootstrap.servers' = '%s', \n" +
		"'scan.startup.mode' = 'group-offsets', \n" +
		"'properties.auto.offset.reset' = 'earliest', \n" +
		"'format' = 'raw', \n" +
		"'properties.security.protocol' = 'SASL_PLAINTEXT', \n" +
		"'properties.sasl.mechanism' = 'PLAIN', \n" +
		"'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.plain.PlainLoginModule " +
		"required username=\"%s\" password=\"%s\";'\n" +
		")", hiveKafkaTable, kafkaTopic, kafkaGroupId, kafkaBootstrapServer, kafkaUsername, kafkaPassword);
		// Create the Kafka table
		tableEnvironment.executeSql(createKafkaTableSql).print();
		// Run the insert statement that copies the Hive data into the Kafka table
		tableEnvironment.executeSql(insertKafkaTableSql).print();
	}
}
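
The actual Hive-to-Kafka copy is driven entirely by the statement passed in via --insert.kafka.table.sql. As a hypothetical illustration (the table and column names here are made up), if the Hive source table is my_db.my_hive_table and the Kafka table created above is named my_kafka_tb, the parameter value could look like:

INSERT INTO my_kafka_tb
SELECT CAST(user_id AS STRING)
FROM my_db.my_hive_table

Because the sink uses the raw format, the query must produce exactly one column, which becomes the Kafka record value.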

Execution

Submit in yarn-application mode (the job's main() method then runs inside the YARN application master rather than on the client):

./flink run-application -t yarn-application flink-hive-2-kafka-1.0.jar --hive.catalog.name xxx --hive.conf.dir xxx --hive.db.name xxx --hive.kafka.tb xxx --kafka.bootstrap.server xxx:9092,xxx:9092 --kafka.topic xxx --kafka.group.id xxx --kafka.username xxx --kafka.password 'xxx' --insert.kafka.table.sql 'xxxxxxx'