Interprocess Communication Affects Application Response Time

Randy Stafford

Response time is critical to software usability. Few things are as frustrating as waiting for some software system to respond, especially when our interaction with the software involves repeated cycles of stimulus and response. We feel as if the software is wasting our time and affecting our productivity. However, the causes of poor response time are less well appreciated, especially in modern applications. Much performance management literature still focuses on data structures and algorithms, issues that can make a difference in some cases but are far less likely to dominate performance in modern multitier enterprise applications.
When performance is a problem in such applications, my experience has been that examining data structures and algorithms isn’t the right place to look for improvements. Response time depends most strongly on the number of remote interprocess communications (IPCs) conducted in response to a stimulus. While there can be other local bottlenecks, the number of remote IPCs usually dominates. Each remote interprocess communication contributes some nonnegligible latency to the overall response time, and these individual contributions add up, especially when they are incurred in sequence.
A prime example is ripple loading in an application using object-relational mapping. Ripple loading describes the sequential execution of many database calls to select the data needed for building a graph of objects (see Lazy Load* in Martin Fowler’s Patterns of Enterprise Application Architecture [Addison-Wesley Professional]). When the database client is a middle-tier application server rendering a web page, these database calls are usually executed sequentially in a single thread. Their individual latencies accumulate, contributing to the overall response time. Even if each database call takes only 10 milliseconds, a page requiring 1,000 calls (which is not uncommon) will exhibit at least a 10-second response time. Other examples include web-service invocation, HTTP requests from a web browser, distributed object invocation, request–reply messaging, and data-grid interaction over custom network protocols. The more remote IPCs needed to respond to a stimulus, the greater the response time will be.

* http://martinfowler.com/eaaCatalog/lazyLoad.html
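To make the pattern concrete, here is a minimal sketch in plain JDBC against a hypothetical orders/items schema (the table and column names are invented for illustration). The first method ripple-loads the item names with one SELECT per item, paying N+1 sequential round trips; the second fetches the same data in a single round trip with a join.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class RippleLoadingExample {

    // Ripple loading: one query for the order lines, then one query per item.
    // N+1 sequential round trips, each adding its latency to the response time.
    static List<String> loadItemNamesRippled(Connection conn, long orderId) throws SQLException {
        List<Long> itemIds = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT item_id FROM order_lines WHERE order_id = ?")) {
            ps.setLong(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    itemIds.add(rs.getLong(1));
                }
            }
        }
        List<String> names = new ArrayList<>();
        for (long itemId : itemIds) {  // one additional remote call per item
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT name FROM items WHERE id = ?")) {
                ps.setLong(1, itemId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        names.add(rs.getString(1));
                    }
                }
            }
        }
        return names;
    }

    // The same data fetched in a single round trip, using a join.
    static List<String> loadItemNamesJoined(Connection conn, long orderId) throws SQLException {
        List<String> names = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT i.name FROM order_lines ol JOIN items i ON i.id = ol.item_id"
                        + " WHERE ol.order_id = ?")) {
            ps.setLong(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    names.add(rs.getString(1));
                }
            }
        }
        return names;
    }
}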
There are a few relatively obvious and well-known strategies for reducing the number of remote interprocess communications per stimulus. One strategy is to apply the principle of parsimony, optimizing the interface between processes so that exactly the right data for the purpose at hand is exchanged with the minimum amount of interaction. Another strategy is to parallelize the interprocess communications where possible, so that the overall response time becomes driven mainly by the longest-latency IPC. A third strategy is to cache the results of previous IPCs, so that future IPCs may be avoided by hitting local cache instead.
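As a rough sketch of the second and third strategies (the class, the 10-millisecond latency, and the key/value shape are all invented for illustration), the following code fans independent remote calls out to a thread pool, so the overall latency approaches that of the slowest single call, and memoizes results in a concurrent map so repeat requests skip the remote call entirely.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class IpcReductionSketch {

    private final ExecutorService pool = Executors.newFixedThreadPool(8);
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    // Stand-in for a remote IPC (web-service call, distributed object invocation, etc.).
    private String fetchRemote(String key) {
        try {
            Thread.sleep(10);  // pretend each remote call costs about 10 ms
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "value-for-" + key;
    }

    // Caching: only cache misses pay the remote latency.
    public String fetch(String key) {
        return cache.computeIfAbsent(key, this::fetchRemote);
    }

    // Parallelization: independent calls run concurrently, so the overall
    // latency is driven by the slowest single call rather than the sum of all calls.
    public List<String> fetchAll(List<String> keys) {
        List<CompletableFuture<String>> futures = keys.stream()
                .map(k -> CompletableFuture.supplyAsync(() -> fetch(k), pool))
                .collect(Collectors.toList());
        return futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
    }
}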
When you’re designing an application, be mindful of the number of interprocess communications in response to each stimulus. When analyzing applications that suffer from poor performance, I have often found IPC-to-stimulus ratios of thousands-to-one. Reducing this ratio, whether by caching or parallelizing or some other technique, will pay off much more than changing data structure choice or tweaking a sorting algorithm.
