Notes on a Production Issue with Flink's AsyncDataStream Async Requests


We have recently been building a real-time statistics feature for production. Early in development I planned to use Flink's AsyncDataStream.orderedWait() to issue asynchronous requests and write the final statistics to the database. Along the way the job needs to look up dimension-table data in MongoDB. The data volume in local testing was small, so everything worked, but once the job ran in production the following error appeared:

java.util.concurrent.RejectedExecutionException: java.lang.IllegalStateException: Mailbox is in state CLOSED, but is required to be in state OPEN for put operations.
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.execute(MailboxExecutorImpl.java:60)
	at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$ResultHandler.processInMailbox(AsyncWaitOperator.java:335)
	at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$ResultHandler.complete(AsyncWaitOperator.java:330)
	at com.it.flink.base.sink.IndexStatisticsAsyncFunc.lambda$asyncInvoke$1(IndexStatisticsAsyncFunc.java:103)
	at io.vertx.ext.jdbc.impl.JDBCClientImpl.lambda$null$8(JDBCClientImpl.java:295)
	at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:369)
	at io.vertx.core.impl.EventLoopContext.lambda$executeAsync$0(EventLoopContext.java:38)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Mailbox is in state CLOSED, but is required to be in state OPEN for put operations.
	at org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkPutStateConditions(TaskMailboxImpl.java:265)
	at org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.put(TaskMailboxImpl.java:193)
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.execute(MailboxExecutorImpl.java:58)
	... 13 more

The root cause was clear: the MongoDB connection pool ran out of connections. The stack trace itself is a secondary symptom: once the task fails, its mailbox is closed, so any async callback that completes afterwards is rejected with "Mailbox is in state CLOSED".
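The shape of that failure can be reproduced with plain JDK classes (this is an analogy, not Flink internals): a late async callback has nowhere to deliver its result once the executor it reports to has shut down.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

// Plain-JDK analogy for the stack trace above (not Flink code): the
// Vert.x/MongoDB callback eventually completes and tries to hand its
// result back to the operator's mailbox, but the task has already
// failed and closed the mailbox, so the hand-off is rejected.
public class LateCallbackDemo {
    public static void main(String[] args) {
        // Stands in for the task's mailbox executor.
        ExecutorService mailbox = Executors.newSingleThreadExecutor();
        mailbox.shutdown(); // task failed/finished: no more puts accepted
        try {
            mailbox.execute(() -> System.out.println("deliver async result"));
        } catch (RejectedExecutionException e) {
            // Same shape as "Mailbox is in state CLOSED ... for put operations"
            System.out.println("late callback rejected");
        }
    }
}
```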

Fix: after removing the asynchronous requests and writing the results to the database directly, the problem no longer occurred.
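For reference, dropping async entirely is not the only mitigation: AsyncDataStream.orderedWait() takes a capacity argument that bounds the number of in-flight requests, which in turn bounds how many MongoDB connections the operator can hold at once. Below is a minimal plain-JDK sketch of that back-pressure idea (hypothetical code, not Flink's AsyncWaitOperator):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of bounded async lookups: at most `capacity` requests are in
// flight at once, so a pooled database is never asked for more than
// `capacity` connections by this caller. Same idea as orderedWait's
// capacity argument, not Flink's actual implementation.
public class BoundedAsyncLookup {

    /** Runs `tasks` fake lookups, at most `capacity` concurrently; returns the peak observed concurrency. */
    static int runWithCapacity(int capacity, int tasks) throws InterruptedException {
        Semaphore inFlight = new Semaphore(capacity);
        AtomicInteger current = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(capacity * 4);
        List<CompletableFuture<Void>> results = new ArrayList<>();

        for (int i = 0; i < tasks; i++) {
            inFlight.acquire(); // back-pressure: caller blocks once capacity is reached
            CompletableFuture<Void> f = CompletableFuture.runAsync(() -> {
                int now = current.incrementAndGet();
                peak.accumulateAndGet(now, Math::max);
                try { Thread.sleep(1); } catch (InterruptedException ignored) { } // pretend MongoDB lookup
                current.decrementAndGet();
            }, pool);
            f.whenComplete((r, t) -> inFlight.release());
            results.add(f);
        }
        CompletableFuture.allOf(results.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
        return peak.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("peak in-flight with capacity 4: " + runWithCapacity(4, 100));
    }
}
```

Sizing the capacity below the MongoDB pool size (shared across all parallel subtasks) would have kept the lookups from exhausting the pool in the first place.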

There was one more incident. While working on the problem above, I tried closing the MongoDB connection after every query:

try (MongoClient mongoClient = MongoConnection.getInstance()) {}  // try-with-resources closes the client on exit

But at runtime this failed with:

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
java.lang.IllegalStateException: state should be: open
	at com.mongodb.assertions.Assertions.isTrue(Assertions.java:70)
	at com.mongodb.connection.BaseCluster.selectServer(BaseCluster.java:82)
	at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:75)
	at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:71)
	at com.mongodb.binding.ClusterBinding.getReadConnectionSource(ClusterBinding.java:63)
	at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:402)
	at com.mongodb.operation.FindOperation.execute(FindOperation.java:510)
	at com.mongodb.operation.FindOperation.execute(FindOperation.java:81)
	at com.mongodb.Mongo.execute(Mongo.java:836)
	at com.mongodb.Mongo$2.execute(Mongo.java:823)
	at com.mongodb.OperationIterable.iterator(OperationIterable.java:47)
	at com.mongodb.FindIterableImpl.iterator(FindIterableImpl.java:151)
	at com.it.flink.base.source.MongoUpdateSelectSource.getDocumentByPrimaryKey(MongoUpdateSelectSource.java:55)
	at com.it.flink.base.transform.EffectiveFlatMapFunc$MongoUpdateType.getMessage(EffectiveFlatMapFunc.java:213)
	at com.it.flink.base.transform.EffectiveFlatMapFunc.flatMap(EffectiveFlatMapFunc.java:59)
	at com.it.flink.base.transform.EffectiveFlatMapFunc.flatMap(EffectiveFlatMapFunc.java:24)
	at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:616)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:596)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
	at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
	at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:91)
	at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:156)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:718)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:200)

This shows that the MongoDB client must not be closed after each query: MongoClient manages an internal connection pool and is meant to be created once and shared, so closing it leaves every other query unable to obtain a usable connection, and they fail with "state should be: open".
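The failure mode can be sketched with a stand-in FakeClient (hypothetical, not the MongoDB driver): a lazily initialized shared singleton works for everyone until one caller closes it, after which all later queries hit the same "state should be: open" error.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the correct lifetime for a pooled client such as MongoClient.
// FakeClient is a hypothetical stand-in for the driver class: it holds an
// "open" flag the way MongoClient holds an internal connection pool.
public class ClientLifetime {
    static final class FakeClient implements AutoCloseable {
        private final AtomicBoolean open = new AtomicBoolean(true);
        String find(String key) {
            if (!open.get()) throw new IllegalStateException("state should be: open");
            return "doc:" + key;
        }
        @Override public void close() { open.set(false); }
    }

    // One client for the whole process, initialized lazily and thread-safely.
    private static volatile FakeClient instance;
    static FakeClient getInstance() {
        if (instance == null) {
            synchronized (ClientLifetime.class) {
                if (instance == null) instance = new FakeClient();
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance().find("a"));   // works: shared client stays open
        // Anti-pattern: try-with-resources closes the shared client...
        try (FakeClient c = getInstance()) {
            System.out.println(c.find("b"));
        }
        try {
            getInstance().find("c");                   // ...so the next caller fails
        } catch (IllegalStateException e) {
            System.out.println("second query failed: " + e.getMessage());
        }
    }
}
```

The shared client should instead be closed exactly once, when the operator shuts down (for example in a RichFunction's close() method), not per query.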

Problem solved for now; a follow-up is still needed to understand exactly why the async path ran into the error above.
