http://www.orbzone.org/wp-print.php?p=56

Inside Orbacus: An Overview of Orbacus Concurrency Models

Posted By sylvia On 15th September 2005 @ 17:26 In Recent Articles | No Comments

Do you use or have an interest in Orbacus?[1] Would you like to understand the server-side concurrency models in this ORB? This article gives an overview of the various server-side concurrency models in Orbacus[2]. Let's start by defining a couple of terms to make things a bit clearer.




- Downcall: a user request/reply that gets packed together and sent 'down' into the ORB for shipping across the wire.
- Upcall: a request read off the wire, unpacked, and sent to the POA for processing.




For this article we will focus on upcalls, as they are the most important for our server-side scope. Orbacus divides its server-side processing into two important stages: reading the upcall off the wire and dispatching the upcall to the POA (and eventually the proper servant). So each concurrency model in Orbacus can be thought of as a team comprising a Receptionist and a Dispatcher. The server-side concurrency models are Threaded (or Thread-per-Client), Thread-Pool, Reactive, and Leader-Follower. Of these models, two are also available on the client side: Threaded and Reactive.

Threaded (Thread-per-Client)



The Threaded concurrency model is the simplest of the server-side models (and is also used on the client side). It is the default model for both the client and the server. For each client that connects to our server, Orbacus spawns a new thread that is dedicated to both receiving requests AND dispatching them to the POA/servants. This is an important fact and is what mainly differentiates it from the Thread-Pool model. It means that the receiver thread cannot read another upcall off the wire until the servant has finished processing and a reply has been returned to the client. Essentially your requests/replies become synchronized with the client, and only one is in flight at a given time (per client, of course). If you have only one single-threaded client using a particular connection, this is a great model to choose. However, if multiple clients share the connection, or if your client has multiple threads making requests over it, you will get lackluster concurrency with this model. An even worse situation occurs when thousands of clients try to make requests to a single server: your OS will not allow you to create a new thread per client after a few hundred are reached, so scalability becomes a big issue.
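The serialization described above can be sketched in plain Python (the servant function and request values are invented for illustration; a real Orbacus receiver thread reads GIOP messages off a socket rather than iterating a list):

```python
import threading
import time

def servant(req):
    # Simulated servant: a small amount of work per request.
    time.sleep(0.01)
    return req * 2

def client_receiver(requests, replies):
    # One dedicated thread per client: it reads a request, dispatches it
    # to the servant, and replies before reading the next request, so
    # this client's requests are strictly serialized.
    for req in requests:
        replies.append(servant(req))

replies = []
receiver = threading.Thread(target=client_receiver, args=([1, 2, 3], replies))
receiver.start()
receiver.join()
print(replies)  # [2, 4, 6], produced one at a time
```

The key point is that `servant(req)` runs on the receiver thread itself, so no second request from this client can even be read until the first reply is out.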




Thread-Pool



The Thread-Pool model is an extension of the Thread-per-Client model designed to allow much better per-connection concurrency by taking advantage of the division between Receptionist and Dispatcher. Like the Threaded model, Orbacus creates a receiver thread per client. However, unlike the Threaded model, this receiver thread ONLY receives messages off the wire and does not send them to the POA/servant. When an upcall is read by the receiver thread, it is put into a queue and the receiver thread goes right back to receiving upcalls. A thread from the thread pool is woken to take an item from this upcall queue and invoke it on the proper servant (and send the reply back to the client). For a thread pool of size N, this means that up to N requests can be processed concurrently. This can be N requests per client or N requests across all clients, as the threads of the pool are not per-connection. An easy way to see the benefit is to write a servant that simply sleeps for 5 seconds before returning. If you have a multithreaded client making requests in the Threaded model, only one request will be serviced every 5 seconds, but with the Thread-Pool model, up to N requests will be serviced every 5 seconds. It is still important to realize, however, that the Thread-Pool model can suffer from scalability problems when your server is trying to handle thousands of clients; you will reach a thread limit on your OS.




Reactive



The Reactive model is a major change from the two threaded models we've just discussed. It allows for much better scalability than either of the previous models, but it sacrifices concurrency to achieve this. Rather than creating a dedicated receiver thread per client, the Reactive model simply registers the connection's transport (socket, in Orbacus IIOP terms) with a 'Reactor'. The Reactor essentially uses the select() call to determine whether a given transport is ready for reading or writing. When such events occur, it dispatches them to the reactive connections in order. In this regard, the Reactor and reactive connections are essentially single-threaded, which is why the model scales so well to large numbers of clients. But as far as concurrency goes, it's non-existent: upcalls per client, and upcalls across all clients, are all handled in a single-threaded fashion. Only one upcall is read from the wire at a time, and only one servant invocation occurs at a time. Highly scalable, poorly concurrent.
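The select()-driven event loop can be sketched with Python's select module and a pair of in-process sockets standing in for client connections (the servant here is made up; a real Reactor also handles partial reads, writes, and connection teardown):

```python
import select
import socket

# Two "client connections"; the b-ends are the transports the server
# registers with the reactor.
a1, b1 = socket.socketpair()
a2, b2 = socket.socketpair()
transports = [b1, b2]

def servant(data):
    return data.upper()

a1.sendall(b"ping")          # two clients send requests
a2.sendall(b"pong")

replies = []
while len(replies) < 2:
    # Single-threaded reactor loop: select() reports ready transports,
    # and each upcall is read and dispatched in order, one at a time.
    readable, _, _ = select.select(transports, [], [], 1.0)
    for sock in readable:
        req = sock.recv(64)
        replies.append(servant(req))  # servant runs on the reactor thread

for s in (a1, b1, a2, b2):
    s.close()
print(sorted(replies))  # [b'PING', b'PONG']
```

Note that a slow `servant` call stalls the entire loop for every client, which is the concurrency trade-off described above.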





Leader-Follower



If the Reactive model can be likened to the Thread-per-Client model, then Leader-Follower is like the Thread-Pool model. However, it is probably more apt to say that the Leader-Follower model is the Reactive model on steroids. As in the Reactive model, each connection does not start a dedicated receiver thread; instead it registers its transport (remember, think socket) with the Leader-Follower Reactor. This special Reactor has a thread pool of worker threads at its disposal. The main thread of this pool (the leader thread) enters a select() call, as in the Reactive model. However, when events are ready, instead of handling them in single-threaded order as before, it first promotes a new leader thread from the pool to enter the select() call. The previous leader then handles the batch of events in its queue in order. This means that if 5 different transports had events ready when select() returned, this thread processes them sequentially (like the Reactive model). (Note: this can increase the latency of the Leader-Follower model when there are a limited number of clients; in those cases, one of the two threaded models is a better choice.) Meanwhile the new leader is free to repeat the process of its predecessor: read events, promote a new leader, and process events. When an old leader finishes its processing, it goes back into the pool to be promoted again later. In this regard, a Leader-Follower model of size N can act like N Reactive models at once. Leader-Follower gives much more concurrency than the Reactive model with far more scalability than either of the threaded models. I like to think of it as the 'Enterprise Server Model', where you have thousands of clients connecting to a server. For small deployments (tens of clients per server), however, it has more latency than the Thread-Pool model.
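The promote-then-process hand-off can be sketched in simplified form. A queue stands in for the batch of select() events and a semaphore models the single leader role (this is a structural illustration of the pattern, not Orbacus internals):

```python
import threading
import queue
import time

events = queue.Queue()                 # stands in for ready select() events
leader_token = threading.Semaphore(1)  # only one leader waits for events
replies = []
replies_lock = threading.Lock()
STOP = object()

def servant(ev):
    time.sleep(0.02)                   # simulated servant work
    return ev * 10

def pool_member():
    while True:
        leader_token.acquire()         # become the leader
        ev = events.get()              # leader alone waits for the next event
        leader_token.release()         # promote a follower to leader first...
        if ev is STOP:
            events.put(STOP)           # let the remaining threads exit too
            return
        result = servant(ev)           # ...then the ex-leader processes its event
        with replies_lock:
            replies.append(result)

pool = [threading.Thread(target=pool_member) for _ in range(3)]
for t in pool:
    t.start()
for ev in range(6):
    events.put(ev)
events.put(STOP)
for t in pool:
    t.join()
print(sorted(replies))  # [0, 10, 20, 30, 40, 50]
```

Because a new leader is promoted before the old one starts processing, up to pool-size events are being serviced concurrently while exactly one thread waits for new ones.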




Summary

I hope this shed some light on how to configure your server-side processing in Orbacus. A lot of the discussion here is applicable to other ORBs as well, so it's useful knowledge even for those not using Orbacus. The important thing is to test your servers using the various models and measure your performance. Here is a basic breakdown of the general guidelines I would give a customer if asked.



Choose Thread-Per-Client if:
- Small to medium number (1 to ~100) of clients per server
- Clients are single-threaded
- No connection reuse among clients

Choose Thread-Pool if:
- Small to medium number (1 to ~100) of clients per server
- Clients are multithreaded
- Connection reuse among clients

Choose Reactive if:
- Small to large number of clients per server
- Servant invocations are fast
- OS highly limits number of threads
- Scalability and not concurrency are priority

Choose Leader-Follower if:
- Medium to large number (100+) of clients per server
- Scalability AND concurrency are priorities
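For concreteness, the model is selected through ORB configuration properties rather than code. The property names and values below are from memory of Orbacus 4.x and should be verified against the manual for your version; treat this as a hedged sketch, not authoritative configuration:

```
# Sketch of an Orbacus properties file -- verify names and accepted
# values against your Orbacus version's documentation.
ooc.orb.conc_model=threaded        # client side: threaded or reactive
ooc.orb.oa.conc_model=thread_pool  # server-side (POA) concurrency model
ooc.orb.oa.thread_pool=20          # pool size when thread_pool is chosen
```

Whichever names your version uses, the point stands: switching models is a configuration change, so benchmarking each candidate against your real workload is cheap.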




There are other factors involved in the decision, so it isn't as clear-cut as one might think. One example is a situation where a multithreaded client makes many simultaneous requests to a single servant. In this case, the POA could be locking access to the servant in order to preserve the integrity of the servant's data. Here, a Reactive model might perform as well as or better than Thread-Pool or Leader-Follower, since concurrency cannot be achieved anyway, and the lack of thread context switching helps the Reactive model. The point is that these guidelines can help, but performance testing is imperative.

  • [1] http://www.orbacus.com
  • [2] http://www.orbacus.com

Reposted from ITPUB blog: http://blog.itpub.net/69498/viewspace-910839/