We have recently been building a real-time statistics feature for production. Early in development, I used Flink's AsyncDataStream.orderedWait() to issue asynchronous requests and write the aggregated results to the database. Along the way the job has to look up dimension-table data in MongoDB. The data volume in local testing was small, so everything worked fine, but once the job ran in production the following error appeared:
java.util.concurrent.RejectedExecutionException: java.lang.IllegalStateException: Mailbox is in state CLOSED, but is required to be in state OPEN for put operations.
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.execute(MailboxExecutorImpl.java:60)
at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$ResultHandler.processInMailbox(AsyncWaitOperator.java:335)
at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$ResultHandler.complete(AsyncWaitOperator.java:330)
at com.it.flink.base.sink.IndexStatisticsAsyncFunc.lambda$asyncInvoke$1(IndexStatisticsAsyncFunc.java:103)
at io.vertx.ext.jdbc.impl.JDBCClientImpl.lambda$null$8(JDBCClientImpl.java:295)
at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:369)
at io.vertx.core.impl.EventLoopContext.lambda$executeAsync$0(EventLoopContext.java:38)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Mailbox is in state CLOSED, but is required to be in state OPEN for put operations.
at org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkPutStateConditions(TaskMailboxImpl.java:265)
at org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.put(TaskMailboxImpl.java:193)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.execute(MailboxExecutorImpl.java:58)
... 13 more
As far as I could tell, the root cause was that there were not enough MongoDB connections available.
Fix: drop the asynchronous request path and write the results to the database directly; after that change the problem no longer occurred.
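For what it's worth, the mailbox error itself looks like a shutdown race: by the time the async lookup completes, the operator's mailbox has already been closed (for example because the task failed earlier), so the completion callback has nowhere to be delivered. The pattern can be sketched with a plain ExecutorService standing in for Flink's mailbox; the names below are illustrative, not Flink APIs:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class MailboxRaceDemo {

    // Returns what happened when the async result tried to come back.
    static String demo() throws Exception {
        // Stand-in for the operator's mailbox: a single-threaded executor.
        ExecutorService mailbox = Executors.newSingleThreadExecutor();

        // Simulate a slow async lookup completing on a separate I/O thread.
        CompletableFuture<String> lookup = CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            return "dimension-row";
        });

        // The task shuts down before the lookup completes
        // (e.g. because of an earlier failure elsewhere in the job).
        mailbox.shutdown();

        // When the callback hands its result back to the closed mailbox,
        // the submission is rejected -- analogous to "Mailbox is in state
        // CLOSED, but is required to be in state OPEN for put operations".
        return lookup.thenApply(row -> {
            try {
                mailbox.execute(() -> System.out.println("emit " + row));
                return "delivered";
            } catch (RejectedExecutionException e) {
                return "rejected";
            }
        }).get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

This is only an analogy for the shutdown ordering, not how AsyncWaitOperator is implemented internally, but it shows why a late-arriving async completion produces a RejectedExecutionException rather than a MongoDB error.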
One more note: while troubleshooting the problem above, I also tried closing the MongoDB connection after every query, roughly like this:

    try (MongoClient mongoClient = MongoConnection.getInstance()) {
        // run the lookup here; the client is auto-closed when the block exits
    }

But at runtime this failed with:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
java.lang.IllegalStateException: state should be: open
at com.mongodb.assertions.Assertions.isTrue(Assertions.java:70)
at com.mongodb.connection.BaseCluster.selectServer(BaseCluster.java:82)
at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:75)
at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:71)
at com.mongodb.binding.ClusterBinding.getReadConnectionSource(ClusterBinding.java:63)
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:402)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:510)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:81)
at com.mongodb.Mongo.execute(Mongo.java:836)
at com.mongodb.Mongo$2.execute(Mongo.java:823)
at com.mongodb.OperationIterable.iterator(OperationIterable.java:47)
at com.mongodb.FindIterableImpl.iterator(FindIterableImpl.java:151)
at com.it.flink.base.source.MongoUpdateSelectSource.getDocumentByPrimaryKey(MongoUpdateSelectSource.java:55)
at com.it.flink.base.transform.EffectiveFlatMapFunc$MongoUpdateType.getMessage(EffectiveFlatMapFunc.java:213)
at com.it.flink.base.transform.EffectiveFlatMapFunc.flatMap(EffectiveFlatMapFunc.java:59)
at com.it.flink.base.transform.EffectiveFlatMapFunc.flatMap(EffectiveFlatMapFunc.java:24)
at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:616)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:596)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:91)
at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:156)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:718)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:200)
This shows that the MongoDB client connection should not be closed after each query: closing the shared instance leaves subsequent queries with no usable connection, so they fail.
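The likely reason is that MongoClient manages a connection pool internally, and MongoConnection.getInstance() (the project's own helper) hands out a shared singleton; wrapping that singleton in try-with-resources closes the pool for every later caller. The effect can be reproduced with a plain AutoCloseable singleton (SharedClient here is an illustrative stand-in, not the MongoDB driver):

```java
public class SharedClientDemo {

    // Illustrative stand-in for a pooled client such as MongoClient.
    static class SharedClient implements AutoCloseable {
        private static final SharedClient INSTANCE = new SharedClient();
        private volatile boolean open = true;

        static SharedClient getInstance() { return INSTANCE; }

        String query() {
            if (!open) {
                // Mirrors the driver's "state should be: open" assertion.
                throw new IllegalStateException("state should be: open");
            }
            return "row";
        }

        @Override
        public void close() { open = false; }
    }

    public static void main(String[] args) {
        // First query: try-with-resources closes the SHARED instance on exit.
        try (SharedClient client = SharedClient.getInstance()) {
            System.out.println(client.query());
        }
        // Second query: the singleton is already closed, so it fails the
        // same way the FindOperation in the stack trace above did.
        try {
            SharedClient.getInstance().query();
        } catch (IllegalStateException e) {
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```

So the client should be created once and kept open for the lifetime of the operator; per-query close only makes sense for a client that is also created per query, which defeats the point of pooling.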
That resolves the immediate problem; as a follow-up I still need to dig into exactly why the asynchronous version hit the error above.