Netty's Thread Model

Original article: http://netty.io/wiki/thread-model.html

To put it simply, for a channel:

  1. Regardless of its transport and type, all of its upstream (i.e. inbound) events must be fired from the thread that performs I/O for the channel (i.e. the I/O thread).
  2. All downstream (i.e. outbound) events can be triggered from any thread, including the I/O thread and non-I/O threads. However, any upstream events triggered as a side effect of a downstream event must be fired from the I/O thread. (e.g. if Channel.close() triggers channelDisconnected, channelUnbound, and channelClosed, they must be fired by the I/O thread; see the sketch below.)
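
A minimal Netty 3 handler sketch of the two rules above; the class name and printouts are illustrative, not from the original:

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class ThreadModelProbe extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Upstream (inbound) event: per rule 1, delivered on the channel's I/O thread.
        System.out.println("messageReceived on " + Thread.currentThread().getName());
        // Downstream (outbound) call: per rule 2, legal from any thread.
        ctx.getChannel().write(e.getMessage());
    }

    @Override
    public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        // Upstream event fired as a side effect of Channel.close(): per rule 2,
        // it must also arrive on the I/O thread, even if close() was called elsewhere.
        System.out.println("channelDisconnected on " + Thread.currentThread().getName());
    }
}
```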

Current problems (UGLY - causes a race condition in an upstream handler; BAD - does not cause a race condition but violates the expected thread model):

  • UGLY: Upstream events triggered as a side effect of a downstream event are fired by the caller thread (illustrated in the sketch after this list).
  • UGLY: The local transport always uses the caller thread to trigger an event.
  • BAD: channelOpen is triggered by the thread that called ChannelFactory.newChannel(), which is not an I/O thread. This is unfortunate, but otherwise it would not be possible to limit the number of concurrent active channels by closing the channel right there; doing this in the I/O thread would not be as efficient.

  • BAD: Client-side channels are run by two I/O threads: one that makes the connection attempt and another that performs the actual I/O.
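
A sketch of the first UGLY case, under the flawed model described above: if Channel.close() is called from a non-I/O thread, channelClosed may run on that caller thread and race with messageReceived on the I/O thread. The handler and its unsynchronized counter are invented here to show the hazard:

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class RacyHandler extends SimpleChannelUpstreamHandler {
    // Unsynchronized state, written assuming all upstream events share one thread.
    private int pending;

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        pending++; // always runs on the I/O thread
    }

    @Override
    public void channelClosed(ChannelHandlerContext ctx, ChannelStateEvent e) {
        // Under the flawed model, if Channel.close() was called from a non-I/O
        // thread, this runs on that caller thread and can race with
        // messageReceived above.
        pending = 0;
    }
}
```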


Action items:

  • Merge client-side boss, server-side boss, and NioWorker into a universal I/O thread that can perform all I/O operations. By doing this:
    • We solve the client-side channel problem because the thread that made the connection attempt can continue to perform reads and writes.
    • We solve the problem where Netty creates as many threads as the number of open server ports.
    • We can share a pool of NioWorkers more easily and will potentially have more flexibility in channel-worker mapping.
    • We also need to investigate if we can make an abstract I/O thread class so that all transports (socket, datagram, SCTP, ...) can extend it. We currently have too much duplication between socket, datagram, and SCTP.
  • If the caller thread is not the I/O thread, Netty triggers the upstream event later in the I/O thread. Along with this change, allow a user to trigger their own upstream event later in the I/O thread by adding a sendUpstreamLater() method to ChannelPipeline and ChannelHandlerContext (see the sketch after this list).
    • However, we cannot simply call sendUpstreamLater() whenever the current thread is not the I/O thread, because OMATPE or MATPE (OrderedMemoryAwareThreadPoolExecutor / MemoryAwareThreadPoolExecutor) will interfere with it, so we will have to let the user decide (i.e. whether to call sendUpstream() or sendUpstreamLater()).
  • ChannelFactory.newChannel() must not trigger an event immediately. newChannel() must wait until the I/O thread notifies it that the channel has been registered with the I/O thread before returning the new channel to the caller.
  • Rewrite the local transport.
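
A sketch of how the proposed sendUpstreamLater() might be used. Note that sendUpstreamLater() is the method proposed in the action item above, not an existing Netty 3 API, and fireMyEvent/onIoThread are illustrative names:

```java
import org.jboss.netty.channel.ChannelEvent;
import org.jboss.netty.channel.ChannelPipeline;

public final class UpstreamDispatch {
    // 'onIoThread' must be supplied by the caller: as noted above, Netty cannot
    // decide this automatically because OMATPE/MATPE handlers may already have
    // moved execution off the I/O thread.
    static void fireMyEvent(ChannelPipeline pipeline, ChannelEvent event, boolean onIoThread) {
        if (onIoThread) {
            pipeline.sendUpstream(event);      // fire in place, on the I/O thread
        } else {
            pipeline.sendUpstreamLater(event); // proposed API: defer to the I/O thread
        }
    }
}
```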

Questions:

  • Can we make all these changes in v3 and keep things backward-compatible? Wouldn't it be easier to get this done in v4? A fully asynchronous user application that does all of its I/O in a handler, making heavy use of ChannelFuture, shouldn't be affected by the current flawed thread model, which means a user can somehow work around this issue; so it might be better to move on to v4 instead of making the same changes on two branches. (The ChannelFuture style is sketched below.)
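
For reference, a minimal sketch of the fully asynchronous style mentioned above, where follow-up work is done in a ChannelFuture listener instead of blocking or touching channel state from a foreign thread; the class, method, and msg parameter are placeholders:

```java
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;

public final class AsyncWrite {
    static void writeAndCloseOnFailure(Channel channel, Object msg) {
        // All follow-up work happens in the listener, which Netty invokes when
        // the write completes, so no application thread blocks on the result.
        channel.write(msg).addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) {
                if (!future.isSuccess()) {
                    future.getChannel().close();
                }
            }
        });
    }
}
```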

Answers:

  • I think if it's too much work to "backport" it to v3, we should just move ahead and "ignore" it for v3. Maybe we can find some "easier" workaround for v3 that would at least help us get rid of the Channel.close() race, as this is the one that will most likely hit our users. (normanmaurer)

Translator's note: there are still many parts of this article I do not fully understand; if you have a better translation, additions are welcome.



