1. Flink pitfalls: local code, Kafka to Flink to Redis

This post describes how to consume data from Kafka in Flink and write the processed data to Redis. A RedisMapper class handles the data mapping, and the relevant Kafka and Redis parameters are configured: the ZooKeeper and Kafka addresses, the group ID, the topic, and the Redis host IP and password. Checkpointing and the processing mode are set on the Flink environment to make sure the data is delivered correctly.

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.redis.common.mapper.{RedisCommand, RedisCommandDescription, RedisMapper}

object WordCountKafka5Seconds {

  // Create the Redis mapper: it tells the sink which command to issue
  // and how to extract the field and value from each record
  class RedisExampleMapper extends RedisMapper[String] {
    override def getCommandDescription: RedisCommandDescription = {
      // "term_user_group_wxgz_test" is the Redis hash the data is written into
      new RedisCommandDescription(RedisCommand.HSET, "term_user_group_wxgz_test")
    }
    override def getKeyFromData(data: String): String = data
    override def getValueFromData(data: String): String = data
  }

  // ZooKeeper address
  val ZOOKEEPER_HOST = "node1:2181,node2:2181,node3:2181"
  // Kafka broker addresses
  val KAFKA_BROKER = "node4:9092,node5:9092,node6:9092,node7:9092"
  // Kafka consumer group
  val TRANSACTION_GROUP = "flink_group"
  // Kafka topic
  val TOPIC = "topic_test"
  // Redis node IP
  val REDIS_HOST = "172.10.x.xxx"

  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "hadoop")
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // checkpointing, the Kafka source, and the Redis sink are wired up here;
    // see the sketch below
  }
}
```
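The snippet above cuts off right after the execution environment is created. Below is a minimal sketch of how the rest of main could be wired up, following the description above: checkpointing, a Kafka source, a word count over 5-second windows (suggested by the class name), and a password-protected Redis sink. It assumes the FlinkKafkaConsumer011 connector and the Bahir flink-connector-redis API; the checkpoint interval, Redis port, password, and the exact windowing logic are illustrative assumptions, not the original author's code.

```scala
import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.{CheckpointingMode, TimeCharacteristic}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
import org.apache.flink.streaming.connectors.redis.RedisSink
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig

// continuation of main(), after `val env = ...`:

    // checkpoint regularly with exactly-once semantics, processing-time windows
    env.enableCheckpointing(5000) // interval is an assumption
    env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE)
    env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime)

    // Kafka consumer properties, using the constants defined above
    val kafkaProps = new Properties()
    kafkaProps.setProperty("zookeeper.connect", ZOOKEEPER_HOST)
    kafkaProps.setProperty("bootstrap.servers", KAFKA_BROKER)
    kafkaProps.setProperty("group.id", TRANSACTION_GROUP)

    val source = env.addSource(
      new FlinkKafkaConsumer011[String](TOPIC, new SimpleStringSchema(), kafkaProps))

    // word count over 5-second windows, flattened back into "word,count"
    // strings so the records fit the String-typed RedisExampleMapper
    val counts = source
      .flatMap(_.toLowerCase.split("\\s+"))
      .filter(_.nonEmpty)
      .map((_, 1))
      .keyBy(0)
      .timeWindow(Time.seconds(5))
      .sum(1)
      .map(t => s"${t._1},${t._2}")

    // Redis connection; the port and password are hypothetical placeholders
    val redisConf = new FlinkJedisPoolConfig.Builder()
      .setHost(REDIS_HOST)
      .setPort(6379)
      .setPassword("redis_password")
      .build()

    counts.addSink(new RedisSink[String](redisConf, new RedisExampleMapper))

    env.execute("WordCountKafka5Seconds")
```

Because getKeyFromData and getValueFromData both return the whole record, each "word,count" string ends up as both the field and the value of the hash; once the job is running you can inspect the result with `HGETALL term_user_group_wxgz_test` in redis-cli. In practice you would usually return the word as the field and the count as the value instead.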