Using spring-data-redis + Lettuce as the Redis client in Flink

        As the title says: I've recently been working on a Flink project that uses Redis as an intermediate cache, but not via FlinkRedisSink. Since spring-data-redis already wraps the Redis operations quite thoroughly, I pulled it in. I started with spring-data-redis + Jedis, but a plain read was taking twenty-odd milliseconds, so I decided to swap Jedis for Lettuce.

        I expected this to be trivial, but once the job ran on Flink on YARN it kept failing with a class-cast error: io.netty.channel.epoll.EpollEventLoopGroup cannot be cast to io.netty.channel.EventLoopGroup. Assuming a jar conflict, I excluded the netty jars from lettuce-core, after which startup failed outright because the netty classes were missing. The Flink blog posts I found suggested forcing NIO via flink-conf.yaml, but that didn't help either.

        Out of other options, I went straight to the source and found that EpollEventLoopGroup does not live in lettuce-core at all; it ships in netty-transport-native-epoll. On a hunch I added the netty-transport-native-epoll dependency, uploaded the jar, and resubmitted the job, and to my surprise that solved the problem.

        The biggest lesson from this episode: imprecise error logs can send you badly astray. When in doubt, read the source.
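In hindsight, the quickest way to confirm that a class is missing from (or present on) the classpath is to probe for it directly, rather than trusting the cast error. A minimal sketch, with a helper name of my own invention: if `io.netty.channel.epoll.EpollEventLoopGroup` only resolves after adding netty-transport-native-epoll, the missing-dependency theory is confirmed.

```java
public class ClasspathProbe {
    // Returns true if the named class can be loaded from the current classpath.
    // Passing initialize=false avoids running static initializers during the probe.
    static boolean onClasspath(String className) {
        try {
            Class.forName(className, false, ClasspathProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // EpollEventLoopGroup lives in netty-transport-native-epoll, not lettuce-core.
        System.out.println(onClasspath("io.netty.channel.epoll.EpollEventLoopGroup"));
    }
}
```

Dropping a probe like this into the Flink job's main() would have pinpointed the missing jar in one run.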

        The relevant Maven dependencies:

        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-redis</artifactId>
            <version>2.5.1</version>
        </dependency>
        <dependency>
            <groupId>io.lettuce</groupId>
            <artifactId>lettuce-core</artifactId>
            <version>6.1.2.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>io.netty</groupId>
            <artifactId>netty-transport-native-epoll</artifactId>
            <version>4.1.63.Final</version>
            <classifier>linux-x86_64</classifier>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-pool2</artifactId>
            <version>2.9.0</version>
        </dependency>

 Implementation:

public class RedisTemplateFactory {
    private static final Logger LOGGER = LoggerFactory.getLogger(RedisTemplateFactory.class);
    private static Map<Class, RedisTemplate> objectRedisTemplateMap = new ConcurrentHashMap<>();
    private final static ReentrantLock LOCK = new ReentrantLock();

    public static <K, V> RedisTemplate<K, V> getRedisTemplate(Class<V> vClass) {
        if (objectRedisTemplateMap.get(vClass) == null) {
            LOCK.lock();
            long start = System.currentTimeMillis();
            LOGGER.info("Building RedisTemplate for {}...", vClass.getName());
            try {
                // Re-check under the lock so concurrent callers build the template only once.
                if (objectRedisTemplateMap.containsKey(vClass)) {
                    return objectRedisTemplateMap.get(vClass);
                }
                Properties prop = getRedisProperties();
                String auth = prop.getProperty("redis.auth");
                String brokerListStr = prop.getProperty("redis.borker.list");
                String[] brokerList = brokerListStr.split(",");
                RedisTemplate<K, V> redisTemplate = new RedisTemplate<K, V>();

                LettuceConnectionFactory fac;
                if (brokerList.length == 1) {
                    // standalone mode
                    fac = getStandaloneConnectionFactory(auth, brokerList[0]);
                } else {
                    // cluster mode
                    fac = getClusterConnectionFactory(prop, auth, brokerList);
                }

                fac.afterPropertiesSet();
                redisTemplate.setConnectionFactory(fac);
                if (vClass.equals(String.class)) {
                    redisTemplate.setDefaultSerializer(new StringRedisSerializer());
                } else {
                    redisTemplate.setValueSerializer(new JdkSerializationRedisSerializer());
                    redisTemplate.setKeySerializer(new StringRedisSerializer());
                }
                redisTemplate.afterPropertiesSet();
                objectRedisTemplateMap.put(vClass, redisTemplate);

            } catch (Exception e) {
                LOGGER.error("Failed to build RedisTemplate", e);
            } finally {
                LOCK.unlock();
                LOGGER.info("Finished building RedisTemplate, took {}ms", System.currentTimeMillis() - start);
            }
        }
        return objectRedisTemplateMap.get(vClass);
    }

    private static Properties getRedisProperties() throws Exception {
        String redisConfigName = "redis-config-dev.properties";
        String profile = System.getProperty("profile");
        if (StringUtils.isNotEmpty(profile)) {
            redisConfigName = "redis-config-" + profile + ".properties";
        }
        InputStream stream = Thread.currentThread().getContextClassLoader().getResourceAsStream(redisConfigName);
        if (stream == null) {
            throw new Exception("Redis config file not found on classpath: " + redisConfigName);
        }
        Properties prop = new Properties();
        try {
            prop.load(stream);
        } catch (IOException e) {
            LOGGER.error("Failed to load {}, please check that the config file exists", redisConfigName);
            throw new Exception("Failed to load redis config file: " + redisConfigName, e);
        }
        return prop;
    }

    //    private static JedisConnectionFactory getClusterConnectionFactory(Properties prop, String auth,
    //        String[] brokerList) {
    //        JedisConnectionFactory fac;
    //        List<RedisNode> nodes = new ArrayList<RedisNode>();
    //        for (String broker : brokerList) {
    //            String[] hostInfo = broker.split(":");
    //            nodes.add(new RedisNode(hostInfo[0], Integer.parseInt(hostInfo[1])));
    //        }
    //        RedisClusterConfiguration rcc = new RedisClusterConfiguration();
    //        rcc.setClusterNodes(nodes);
    //        rcc.setPassword(auth);
    //
    //        JedisPoolConfig poolConfig = getJedisPoolConfig(prop);
    //
    //        fac = new JedisConnectionFactory(rcc, poolConfig);
    //        return fac;
    //    }
    //
    //    private static JedisPoolConfig getJedisPoolConfig(Properties prop) {
    //        JedisPoolConfig poolConfig = new JedisPoolConfig();
    //        poolConfig.setMaxTotal(Integer.parseInt(prop.getProperty("redis.max.total")));
    //        poolConfig.setMaxIdle(Integer.parseInt(prop.getProperty("redis.max.ldle")));
    //        poolConfig.setMinIdle(Integer.parseInt(prop.getProperty("redis.min.ldle")));
    //        poolConfig.setMaxWaitMillis(Long.parseLong(prop.getProperty("redis.max.wait.millis")));
    //        poolConfig.setTestOnBorrow(Boolean.parseBoolean(prop.getProperty("redis.test.on.borrow")));
    //        return poolConfig;
    //    }
    private static LettuceConnectionFactory getClusterConnectionFactory(Properties prop, String auth,
        String[] brokerList) {
        List<RedisNode> nodes = new ArrayList<RedisNode>();
        for (String broker : brokerList) {
            String[] hostInfo = broker.split(":");
            nodes.add(new RedisNode(hostInfo[0], Integer.parseInt(hostInfo[1])));
        }
        RedisClusterConfiguration rcc = new RedisClusterConfiguration();
        rcc.setClusterNodes(nodes);
        rcc.setPassword(auth);
        rcc.setMaxRedirects(5);
        LettuceClientConfiguration poolCfg =
            LettucePoolingClientConfiguration.builder().commandTimeout(Duration.ofMillis(5000)).poolConfig(getPoolCfg(prop)).build();
        // afterPropertiesSet() is invoked by the caller, so don't initialize the factory twice here.
        return new LettuceConnectionFactory(rcc, poolCfg);
    }

    private static GenericObjectPoolConfig getPoolCfg(Properties prop) {
        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(Integer.parseInt(prop.getProperty("redis.max.total")));
        poolConfig.setMaxIdle(Integer.parseInt(prop.getProperty("redis.max.ldle")));
        poolConfig.setMinIdle(Integer.parseInt(prop.getProperty("redis.min.ldle")));
        poolConfig.setMaxWaitMillis(Long.parseLong(prop.getProperty("redis.max.wait.millis")));
        poolConfig.setTestOnBorrow(Boolean.parseBoolean(prop.getProperty("redis.test.on.borrow")));
        return poolConfig;
    }

    private static LettuceConnectionFactory getStandaloneConnectionFactory(String auth, String broker) {
        String[] hostInfo = broker.split(":");
        RedisStandaloneConfiguration rsc = new RedisStandaloneConfiguration();
        rsc.setHostName(hostInfo[0]);
        rsc.setPort(Integer.parseInt(hostInfo[1]));
        rsc.setPassword(auth);
        return new LettuceConnectionFactory(rsc);
    }

}
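The `host:port` splitting that both factory methods rely on can be sketched in isolation. The class and method names below are illustrative, not part of the project:

```java
import java.util.ArrayList;
import java.util.List;

public class BrokerListParser {
    // One parsed redis node: host plus port.
    static final class Node {
        final String host;
        final int port;
        Node(String host, int port) { this.host = host; this.port = port; }
    }

    // Splits "host1:6379,host2:6380" into Node entries, mirroring the
    // split(",") / split(":") logic in RedisTemplateFactory, with a
    // validity check the original omits.
    static List<Node> parse(String brokerListStr) {
        List<Node> nodes = new ArrayList<>();
        for (String broker : brokerListStr.split(",")) {
            String[] hostInfo = broker.split(":");
            if (hostInfo.length != 2) {
                throw new IllegalArgumentException("Expected host:port, got: " + broker);
            }
            nodes.add(new Node(hostInfo[0], Integer.parseInt(hostInfo[1])));
        }
        return nodes;
    }

    public static void main(String[] args) {
        List<Node> nodes = parse("10.0.0.1:6379,10.0.0.2:6380");
        System.out.println(nodes.size() + " nodes, first host " + nodes.get(0).host);
    }
}
```

A single entry in the list selects the standalone factory; anything longer goes down the cluster path.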

This approach also makes RedisTemplate usable outside a Spring container. The commented-out block is the earlier Jedis version.
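Outside Spring, the only environment hook is the `-Dprofile` JVM property that `getRedisProperties()` reads to pick a config file. That filename resolution can be sketched on its own (the class name here is mine):

```java
public class RedisConfigName {
    // Mirrors getRedisProperties(): default to the dev file,
    // switch to redis-config-<profile>.properties when -Dprofile=<env> is set.
    static String resolve(String profile) {
        if (profile == null || profile.isEmpty()) {
            return "redis-config-dev.properties";
        }
        return "redis-config-" + profile + ".properties";
    }

    public static void main(String[] args) {
        System.out.println(resolve(System.getProperty("profile")));
    }
}
```

So submitting the Flink job with `-Dprofile=prod` on the client JVM would load redis-config-prod.properties from the classpath.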

All of the above is just my own experience; I hope it helps, and corrections are welcome.
