Flink-05: Flink Java Quick Start: Consuming from Kafka with FlinkKafkaConsumer and Writing the Results to Redis (FlinkJedisPoolConfig)

Code Repository

The code for this series is synced to GitHub:
https://github.com/turbo-duck/flink-demo


Overview

In the previous installment we consumed data from Kafka, ran the computation, and printed the final results to the console.

Kafka In Docker


TestKafkaProducer

Running the producer writes test data into Kafka.

FlinkConsumer

Flink consumes the Kafka topic and performs the computation we defined.

This Installment

This installment again consumes from Kafka with the Flink Kafka connector, but instead of printing the results to the console as last time, it writes Flink's computed results into Redis for storage (any other store would work as well; Redis is just the example here).

pom.xml

Pay particular attention to the flink-connector-redis_2.11 dependency, which provides the Redis connector.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>flink-demo-01</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <flink.version>1.13.2</flink.version>
        <scala.binary.version>2.12</scala.binary.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-redis_2.11</artifactId>
            <version>1.1.0</version>
        </dependency>

    </dependencies>
</project>

TestKafkaProducer.java

A producer that generates data and writes it into Kafka.

package icu.wzk.demo05;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class TestKafkaProducer {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // Kafka broker address; replace with your broker's host:port.
        props.put("bootstrap.servers", "0.0.0.0:9092");
        // Both keys and values are plain strings.
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<>(props);
        // Send 500 records to the "test" topic, one every 200 ms.
        for (int i = 0; i < 500; i++) {
            String key = "key-" + i;
            String value = "value-" + i;
            ProducerRecord<String, String> record = new ProducerRecord<>("test", key, value);
            producer.send(record);
            System.out.println("send: " + key);
            Thread.sleep(200);
        }
        producer.close();
    }

}

StartApp

Flink consumes from Kafka, runs the computation, and writes the result into Redis.

FlinkJedisPoolConfig

Configuration for the Jedis connection pool; it carries the Redis host and port used by the sink.

MyRedisMapper

A custom mapper that implements the RedisMapper interface. It tells the sink which Redis command to issue and how to extract the key and the value from each record.
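The mapper's contract can be illustrated without Flink or Redis. In this stdlib-only sketch (the sample values and the HashMap standing in for Redis are my own), each tuple's f0 becomes the Redis key, f1 the value, and LPUSH prepends the value to the list at that key:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class RedisMapperSketch {
    public static void main(String[] args) {
        // Stand-in for Redis: each key maps to a list.
        Map<String, Deque<String>> fakeRedis = new HashMap<>();

        // Sample records, shaped like the Tuple2<"l_words", value> pairs
        // produced by the map step in StartApp.
        for (String value : Arrays.asList("value-0", "value-1", "value-2")) {
            SimpleEntry<String, String> tuple = new SimpleEntry<>("l_words", value);
            // getKeyFromData -> f0, getValueFromData -> f1, command = LPUSH:
            // prepend the value to the list stored at the key.
            fakeRedis.computeIfAbsent(tuple.getKey(), k -> new ArrayDeque<>())
                     .addFirst(tuple.getValue());
        }
        // LPUSH means the newest value ends up at index 0.
        System.out.println(fakeRedis.get("l_words")); // [value-2, value-1, value-0]
    }
}
```

Because LPUSH prepends, reading the list back (for example with LRANGE l_words 0 -1) returns the most recent value first.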

Full Code

package icu.wzk.demo05;


import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.redis.RedisSink;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommand;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommandDescription;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisMapper;

import java.util.Properties;

public class StartApp {

    private static final String KAFKA_SERVER = "0.0.0.0";

    private static final Integer KAFKA_PORT = 9092;

    private static final String KAFKA_TOPIC = "test";

    private static final String REDIS_SERVER = "0.0.0.0";

    private static final Integer REDIS_PORT = 6379;

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", String.format("%s:%d", KAFKA_SERVER, KAFKA_PORT));
        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(KAFKA_TOPIC, new SimpleStringSchema(), properties);
        DataStreamSource<String> data = env.addSource(consumer);

        // Wrap every incoming record under the fixed Redis key "l_words".
        SingleOutputStreamOperator<Tuple2<String, String>> wordData = data.map(new MapFunction<String, Tuple2<String, String>>() {
            @Override
            public Tuple2<String, String> map(String value) throws Exception {
                return new Tuple2<>("l_words", value);
            }
        });

        // Jedis connection pool pointing at the Redis server.
        FlinkJedisPoolConfig conf = new FlinkJedisPoolConfig
                .Builder()
                .setHost(REDIS_SERVER)
                .setPort(REDIS_PORT)
                .build();
        // The sink uses MyRedisMapper to turn each tuple into a Redis command.
        RedisSink<Tuple2<String, String>> redisSink = new RedisSink<>(conf, new MyRedisMapper());
        wordData.addSink(redisSink);
        env.execute();
    }

    public static class MyRedisMapper implements RedisMapper<Tuple2<String,String>> {

        @Override
        public RedisCommandDescription getCommandDescription() {
            // LPUSH: prepend each value to the list stored at the key.
            return new RedisCommandDescription(RedisCommand.LPUSH);
        }

        @Override
        public String getKeyFromData(Tuple2<String,String> data) {
            return data.f0;
        }

        @Override
        public String getValueFromData(Tuple2<String,String> data) {
            return data.f1;
        }
    }

}
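Once the job is running, you can check what reached Redis from the command line. This assumes Redis listens locally on the default port 6379 (adjust host and port to your setup); no test output is shown since it depends on a live Redis instance:

```shell
# List everything the job pushed under the key "l_words".
# Because the sink uses LPUSH, index 0 holds the newest value.
redis-cli -h 127.0.0.1 -p 6379 LRANGE l_words 0 -1

# Count how many values have been written so far.
redis-cli -h 127.0.0.1 -p 6379 LLEN l_words
```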