JedisException: Could not return the resource to the pool

1 Error message

2020-07-25 16:20:11,085 [analyser-thread-3] ERROR com.everdata.etl.main.ETLMain - analyser error !
redis.clients.jedis.exceptions.JedisException: Could not return the resource to the pool
        at redis.clients.util.Pool.returnResourceObject(Pool.java:69) ~[jedis-2.9.0.jar:na]
        at redis.clients.jedis.ShardedJedisPool.returnResource(ShardedJedisPool.java:62) ~[jedis-2.9.0.jar:na]
        at redis.clients.jedis.ShardedJedisPool.returnResource(ShardedJedisPool.java:14) ~[jedis-2.9.0.jar:na]
        at redis.clients.jedis.ShardedJedis.close(ShardedJedis.java:740) ~[jedis-2.9.0.jar:na]
        at com.everdata.etl.analyser.impl.GuardSignalCellAnalyser.hsetByScript(GuardSignalCellAnalyser.java:101) ~[everdata-etl-analyser-ygsh-1.2.0.jar:na]
        at com.everdata.etl.analyser.impl.GuardSignalCellAnalyser.analyse(GuardSignalCellAnalyser.java:51) ~[everdata-etl-analyser-ygsh-1.2.0.jar:na]
        at com.everdata.etl.analyser.ETLAnalyser.analyse(ETLAnalyser.java:68) ~[everdata-etl-1.2.0.jar:na]
        at com.everdata.etl.main.ETLMain$3.run(ETLMain.java:507) ~[everdata-etl-1.2.0.jar:1.2.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: java.lang.IllegalStateException: Object has already been returned to this pool or is invalid
        at org.apache.commons.pool2.impl.GenericObjectPool.returnObject(GenericObjectPool.java:551) ~[commons-pool2-2.4.2.jar:2.4.2]
        at redis.clients.util.Pool.returnResourceObject(Pool.java:67) ~[jedis-2.9.0.jar:na]
        ... 12 common frames omitted
2020-07-25 16:20:11,085 [analyser-thread-1] ERROR com.everdata.etl.main.ETLMain - analyser error !
redis.clients.jedis.exceptions.JedisException: Could not return the resource to the pool
        at redis.clients.util.Pool.returnResourceObject(Pool.java:69) ~[jedis-2.9.0.jar:na]
        at redis.clients.jedis.ShardedJedisPool.returnResource(ShardedJedisPool.java:62) ~[jedis-2.9.0.jar:na]
        at redis.clients.jedis.ShardedJedisPool.returnResource(ShardedJedisPool.java:14) ~[jedis-2.9.0.jar:na]
        at redis.clients.jedis.ShardedJedis.close(ShardedJedis.java:740) ~[jedis-2.9.0.jar:na]
        at com.everdata.etl.analyser.impl.GuardSignalCellAnalyser.hsetByScript(GuardSignalCellAnalyser.java:101) ~[everdata-etl-analyser-ygsh-1.2.0.jar:na]
        at com.everdata.etl.analyser.impl.GuardSignalCellAnalyser.analyse(GuardSignalCellAnalyser.java:51) ~[everdata-etl-analyser-ygsh-1.2.0.jar:na]
        at com.everdata.etl.analyser.ETLAnalyser.analyse(ETLAnalyser.java:68) ~[everdata-etl-1.2.0.jar:na]
        at com.everdata.etl.main.ETLMain$3.run(ETLMain.java:507) ~[everdata-etl-1.2.0.jar:1.2.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: java.lang.IllegalStateException: Object has already been returned to this pool or is invalid
        at org.apache.commons.pool2.impl.GenericObjectPool.returnObject(GenericObjectPool.java:551) ~[commons-pool2-2.4.2.jar:2.4.2]
        at redis.clients.util.Pool.returnResourceObject(Pool.java:67) ~[jedis-2.9.0.jar:na]
        ... 12 common frames omitted

2 Analysis

Reviewing the source code revealed three problems:
1. All analyser threads shared a single static jedis client instead of each borrowing a connection from a pool;
2. The Redis pool default allows only 8 active objects at most, which is far too small here, so it was raised to 1000;
3. close() was called without any check, which is bound to fail under exceptional conditions (e.g. when the resource was already returned or invalidated).
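The root cause in the stack trace — the same resource being returned to the pool twice — can be reproduced without Jedis at all. The sketch below is a minimal hand-rolled pool (not the actual commons-pool2 code) that tracks borrowed objects the way GenericObjectPool does, and throws the same IllegalStateException when an object that is no longer borrowed is handed back:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Minimal sketch (not Jedis/commons-pool2 code) of double-return detection:
// the pool tracks which objects are currently borrowed, and returning an
// object that is not in the borrowed set raises IllegalStateException --
// the same "Object has already been returned to this pool or is invalid"
// seen in the stack trace above.
public class TinyPool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Set<T> borrowed = new HashSet<>();

    public TinyPool(Iterable<T> resources) {
        for (T r : resources) idle.push(r);
    }

    public synchronized T borrow() {
        if (idle.isEmpty()) throw new IllegalStateException("pool exhausted");
        T r = idle.pop();
        borrowed.add(r);
        return r;
    }

    public synchronized void giveBack(T r) {
        if (!borrowed.remove(r)) {
            throw new IllegalStateException(
                    "Object has already been returned to this pool or is invalid");
        }
        idle.push(r);
    }

    public static void main(String[] args) {
        TinyPool<String> pool = new TinyPool<>(java.util.List.of("conn-1"));
        String conn = pool.borrow();
        pool.giveBack(conn);          // first return: fine
        try {
            pool.giveBack(conn);      // second return of the same object
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

When several threads share one static client, two threads can easily end up closing the same resource, which is exactly this double return.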

3 Solution

    private ShardedJedisPool jedisPool;

    @Override
    public void prepare(Map<String, Object> config, String[] params) {
        setParams(params);
        if (config != null) {
            ObjectMapper mapper = new ObjectMapper();
            mapper.setPropertyNamingStrategy(PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
            dmpRealTimeAnalyserConfig = mapper.convertValue(config,
                    DmpRealTimeAnalyserConfig.class);
            DmpRealTimeAnalyserConfig.RedisConfig redisConfig = dmpRealTimeAnalyserConfig
                    .getRedisConfig();
            // initialize the Redis shard list from config
            List<JedisShardInfo> shards = new ArrayList<>();
            int maxTotal = 8; // pool default; overridden from config below
            if (redisConfig != null) {
                maxTotal = redisConfig.getMaxTotal();
                String[] hostAndPorts = redisConfig.getHostAndPort();
                String auth = redisConfig.getAuth();
                if (hostAndPorts != null && hostAndPorts.length >= 1) {
                    for (String hostAndPort : hostAndPorts) {
                        if (hostAndPort.startsWith("http://")) {
                            try {
                                JedisShardInfo info = new JedisShardInfo(new URI(hostAndPort));
                                if (auth != null && !auth.trim().equals("")) {
                                    info.setPassword(auth);
                                }
                                shards.add(info);
                            } catch (URISyntaxException e) {
                                logger.error("redis uri config error .", e);
                            }
                        } else {
                            String[] arrs = hostAndPort.split(":", 2);
                            JedisShardInfo info = new JedisShardInfo(arrs[0], Integer.valueOf(arrs[1]));
                            if (auth != null && !auth.trim().equals("")) {
                                info.setPassword(auth);
                            }
                            shards.add(info);
                        }
                    }
                }
            }
            JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
            // raise the maximum number of connections (the default is only 8)
            jedisPoolConfig.setMaxTotal(maxTotal);
            jedisPool = new ShardedJedisPool(jedisPoolConfig, shards);
        } else {
            logger.error("parse analyzer config occur error!");
        }
    }


        PKShardedJedisPipeline pipeline = new PKShardedJedisPipeline();
        ShardedJedis shardedJedis = jedisPool.getResource();   // borrow from the pool
        pipeline.setShardedJedis(shardedJedis);

        // keys -> msisdn,ip,cell,st,et
        // args -> ip-v,cell-v,time-v
        pipeline.evalPrimaryKey(sb.toString(), record[0], Arrays.asList(record[0]),
                Arrays.asList(record[1], record[2], record[3]));
        pipeline.sync();
        if (shardedJedis != null) {   // return it when done -- check for null first
            shardedJedis.close();
        }
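A null check alone still leaks the connection if pipeline.sync() throws before close() is reached. Since ShardedJedis implements Closeable in Jedis 2.9, try-with-resources guarantees exactly one close per borrow even on exceptions. The sketch below illustrates the pattern with a plain stdlib stand-in class (PooledConn is hypothetical, standing in for ShardedJedis) rather than real Jedis calls:

```java
// Sketch of the try-with-resources pattern. PooledConn is a stand-in for
// ShardedJedis (which implements Closeable in Jedis 2.9): close() returns
// the resource to the pool exactly once, even if the body throws.
public class CloseOnceDemo {
    static class PooledConn implements AutoCloseable {
        private boolean closed = false;
        int closeCount = 0;

        void exec(String cmd) {
            if (closed) throw new IllegalStateException("connection already returned");
            // ... would send the command to Redis here ...
        }

        @Override
        public void close() {
            if (closed) return;   // idempotent: a second close is a no-op
            closed = true;
            closeCount++;         // in Jedis, this is where returnResource() runs
        }
    }

    public static void main(String[] args) {
        PooledConn conn = new PooledConn();
        try (PooledConn c = conn) {
            c.exec("PING");       // pipeline work goes here
        }                         // close() runs exactly once, exception or not
        System.out.println("closeCount = " + conn.closeCount);
    }
}
```

With this shape there is no window in which an exception can skip the return, and no path on which the same resource is returned twice.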

4 In production

The logs are clean after the fix; the error no longer appears:

2020-07-30 16:09:53,009 [main] INFO  com.everdata.etl.main.ETLMain - ETL HOME : /home/etl-v1.2
2020-07-30 16:09:53,023 [main] INFO  com.everdata.etl.main.ETLMain - hello etl , group : usercell-analyser , name : 1_3_2 , ip : 202.102.23.58 , host : guard1 , pid : 24536.
   _____                ____        _          _____ _____ _     
 | ____|_   _____ _ __|  _ \  __ _| |_ __ _  | ____|_   _| |    
 |  _| \ \ / / _ \ '__| | | |/ _` | __/ _` | |  _|   | | | |    
 | |___ \ V /  __/ |  | |_| | (_| | || (_| | | |___  | | | |___ 
 |_____| \_/ \___|_|  |____/ \__,_|\__\__,_| |_____| |_| |_____|
                                                                

2020-07-30 16:09:53,024 [main] INFO  com.everdata.etl.main.ETLMain - metrics plugin not work.
2020-07-30 16:09:53,097 [input-thread] INFO  org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
        auto.commit.interval.ms = 5000
        auto.offset.reset = latest
        bootstrap.servers = [guard1:9092, guard2:9092, guard3:9092]
        check.crcs = true
        client.id = 
        connections.max.idle.ms = 540000
        enable.auto.commit = true
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 500
        fetch.min.bytes = 1
        group.id = user-cell
        heartbeat.interval.ms = 3000
        interceptor.classes = null
        key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
        max.partition.fetch.bytes = 1048576
        max.poll.interval.ms = 300000
        max.poll.records = 500
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
        receive.buffer.bytes = 65536
        reconnect.backoff.ms = 50
        request.timeout.ms = 305000
        retry.backoff.ms = 100
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        send.buffer.bytes = 131072
        session.timeout.ms = 10000
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

2020-07-30 16:09:53,168 [input-thread] INFO  org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
        auto.commit.interval.ms = 5000
        auto.offset.reset = latest
        bootstrap.servers = [guard1:9092, guard2:9092, guard3:9092]
        check.crcs = true
        client.id = consumer-1
        connections.max.idle.ms = 540000
        enable.auto.commit = true
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 500
        fetch.min.bytes = 1
        group.id = user-cell
        heartbeat.interval.ms = 3000
        interceptor.classes = null
        key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
        max.partition.fetch.bytes = 1048576
        max.poll.interval.ms = 300000
        max.poll.records = 500
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
        receive.buffer.bytes = 65536
        reconnect.backoff.ms = 50
        request.timeout.ms = 305000
        retry.backoff.ms = 100
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        send.buffer.bytes = 131072
        session.timeout.ms = 10000
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

2020-07-30 16:09:53,186 [main] WARN  com.everdata.etl.main.ETLMain - etl notify module not start .
2020-07-30 16:09:53,233 [input-thread] WARN  org.apache.kafka.clients.consumer.ConsumerConfig - The configuration 'zookeeper.connect' was supplied but isn't a known config.
2020-07-30 16:09:53,236 [input-thread] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.10.2.0
2020-07-30 16:09:53,236 [input-thread] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 576d93a8dc0cf421
2020-07-30 16:09:53,238 [input-thread] INFO  com.everdata.etl.input.kafka.consumer.KafkaStringConsumer - consume strategy : %s ,topic : %s .
2020-07-30 16:09:53,241 [kafka-block-timer-thread] INFO  BlockingTimer - start block timer thread.
2020-07-30 16:09:53,391 [input-thread-1] INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator guard1:9092 (id: 2147483646 rack: null) for group user-cell.
2020-07-30 16:09:53,393 [input-thread-1] INFO  org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Revoking previously assigned partitions [] for group user-cell
2020-07-30 16:09:53,394 [input-thread-1] INFO  com.everdata.etl.input.kafka.consumer.listener.ETLConsumerRebalanceListener - on partitions revoked : []
2020-07-30 16:09:53,394 [input-thread-1] INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - (Re-)joining group user-cell
2020-07-30 16:09:53,403 [input-thread-1] INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Successfully joined group user-cell with generation 20
2020-07-30 16:09:53,404 [input-thread-1] INFO  org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Setting newly assigned partitions [cdr-1, cdr-2, cdr-0, cdr-9, cdr-7, cdr-8, cdr-5, cdr-6, cdr-3, cdr-4] for group user-cell
2020-07-30 16:09:53,404 [input-thread-1] INFO  com.everdata.etl.input.kafka.consumer.listener.ETLConsumerRebalanceListener - on partitions assigned : [cdr-1, cdr-2, cdr-0, cdr-9, cdr-7, cdr-8, cdr-5, cdr-6, cdr-3, cdr-4]
2020-07-30 16:10:08,187 [check-thread] INFO  com.everdata.etl.main.ETLMain - output timeout check , output queue size : 0 , input queue size : 0
2020-07-30 16:10:38,186 [check-thread] INFO  com.everdata.etl.main.ETLMain - output timeout check , output queue size : 0 , input queue size : 0
2020-07-30 16:11:08,186 [check-thread] INFO  com.everdata.etl.main.ETLMain - output timeout check , output queue size : 0 , input queue size : 0
2020-07-30 16:11:11,114 [input-thread-1] INFO  BlockingTimer - init blocking timer.
2020-07-30 16:11:11,218 [analyser-thread-3] INFO  com.everdata.etl.analyser.impl.GuardUserCellAnalyser - Add  19801208457 to redis