A (somewhat) brief history of the performance landscape

I’d like to enlist your help. As I’ve mentioned, last week I led a session on web performance automation for the members of the NY Web Performance Meetup group. For the session, I created a set of slides that outline my theory of how the front-end performance landscape has evolved over the past 15 years. Now I’d like to ask for your feedback and your help filling in the gaps.

Evolution: From delivery to transformation

Most companies know that if site speed is an issue for them, the problem isn’t infrastructure, and throwing more bandwidth and servers at the problem isn’t the solution. As I understand the current solution landscape, the web performance problem can be approached in two ways:

1. Delivery
Delivery-based solutions are focused on getting the data from the server to the browser more quickly. This is a $4.2 billion/year market, encompassing CDNs, network devices/accelerators, and others:

  • CDNs
    Pros: Make sites faster by shortening each round trip; easy to deploy
    Cons: Expensive; don’t take advantage of acceleration opportunities such as reducing the number of round trips or optimizing pages for the browser
  • Network devices/accelerators (e.g. load balancers)
    Pros: Proven technology; easy to implement and deploy
    Cons: Don’t address performance problems that occur at the browser level; very hard to configure, which is why many sites that use them never enable even basic features such as compression and keep-alive (see the sketch after this list)
  • Other (TCP, DNS, etc.)
    Other delivery players exist, such as DNS and TCP optimization solutions, but they are at the fringes of this market, and I consider them features rather than unique market segments when it comes to performance.
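
Since compression and keep-alive come up so often as missed basics, here’s a minimal diagnostic sketch of my own (not from any vendor, written in Python using only the standard library, with a placeholder host name) that checks whether a given site actually responds with gzip and a persistent connection:

    # Minimal diagnostic sketch: does a site serve gzip and honor keep-alive?
    # The target host and path below are placeholders, not from the post.
    import http.client

    def check_basics(host: str, path: str = "/") -> None:
        conn = http.client.HTTPSConnection(host, timeout=10)
        try:
            headers = {
                "Accept-Encoding": "gzip",       # advertise gzip support
                "Connection": "keep-alive",      # ask for a persistent connection
                "User-Agent": "perf-check/0.1",
            }
            conn.request("GET", path, headers=headers)
            resp = conn.getresponse()
            resp.read()  # drain the body so the connection could be reused

            encoding = (resp.getheader("Content-Encoding") or "").lower()
            connection = (resp.getheader("Connection") or "").lower()

            print(f"{host}: status={resp.status}")
            print(f"  gzip enabled:      {'gzip' in encoding}")
            # HTTP/1.1 connections are persistent unless the server says 'close'.
            print(f"  keep-alive in use: {connection != 'close'}")
        finally:
            conn.close()

    if __name__ == "__main__":
        check_basics("www.example.com")  # hypothetical target host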

Here’s the diagram I’ve created to show the breakdown of delivery-based solutions and the major players in this space:

Diagram: includes companies like F5, Citrix, Akamai, Limelight, Cotendo, and CDNetworks

2. Transformation
Transformation-based solutions focus on analyzing each page of a site from the browser’s perspective and optimizing it so that it is delivered to the browser as efficiently as possible. Thanks to teams at Yahoo and Google, there is an emerging set of best practices that serves as a guideline for this recoding.
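
To give a feel for what this recoding looks like, here’s a deliberately tiny sketch in Python. It is not any vendor’s actual engine, and real transformation products do far more (combining and minifying files, re-encoding images, rewriting URLs for better caching, and so on); it only strips HTML comments and collapses inter-tag whitespace, but it illustrates the core idea that the page itself is rewritten between the application and the browser:

    # Toy transformation sketch (my illustration, not a product): rewrite the
    # HTML payload itself before it reaches the browser.
    import re

    def transform_html(html: str) -> str:
        # Drop ordinary HTML comments, but keep IE conditional comments (<!--[if ...]>).
        html = re.sub(r"<!--(?!\[if).*?-->", "", html, flags=re.DOTALL)
        # Collapse whitespace runs between tags so fewer bytes travel on the wire.
        html = re.sub(r">\s+<", "><", html)
        return html.strip()

    if __name__ == "__main__":
        page = """
        <html>
          <!-- navigation -->
          <body>
            <h1>Hello</h1>
          </body>
        </html>
        """
        print(transform_html(page))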

Note that transformation is a complement to, not a replacement for, delivery-based solutions.

It is difficult to segment this emerging market, as very few players are actively involved in it. I’ve chosen to segment it by how transformation is delivered (via server, network, or cloud), as this seems to be the clearest dividing line between the various players.

  • Server: In this category I put all of the tools that sit within the datacenter, on the server itself. Here we have the pure-play server plug-ins as well as the virtual machines. I see a further distinction in this market between platform-specific products (i.e. products that work only on Apache or IIS) and solutions that work across all platforms.
  • Network: In this category I have placed all of the physical hardware devices that do transformation. You will see an eclectic mix of new and old, with 10-plus-year-old code bases like F5 and Cisco mixed in with modern transformation products.
  • Cloud: In this category I put all of the solutions you can subscribe to. This is a very small category. I really hesitated to include Akamai, as they do almost no transformation today, but they do parse HTML for the pre-fetching feature, which gets objects to the edge faster. (I also didn’t want to have a category of one.)

This is a first stab, and I’m not convinced I have it right. However, I am excited to put something down on virtual paper so that in three years I can look back, see how far our industry has evolved, and realize how naive I was.

Diagram: server-, network-, and cloud-based solution providers, including Strangeloop, Aptimize, Acceloweb, and Webo

Web performance timeline: Any trends here?

After organizing the solution providers in both the delivery and transformation camps, I thought it would be interesting to put the key players in front-end performance on a timeline and see if any patterns emerged:

Timeline: includes Gomez, Akamai, Strangeloop, SPDY, and Velocity

As you can see, in addition to showing solution providers, this timeline also shows when new browsers appeared on the market, as well as the appearance of widely embraced performance tools and reference materials. This is a brain dump, but I tried to capture the key elements I think of when it comes to front-end performance.

This historical bird’s eye view corroborates my delivery-to-transformation theory of performance evolution:

  • The early web was all about the basics: seeing content (i.e. browsers) and getting content over modems (Gzip and other server-side tricks).
  • The exuberance of the late ’90s was made possible by huge investments in basic infrastructure and foundational datacenter technology. In our world, the key developments were the first load balancers (F5/Netscaler), the introduction of Akamai, and the development of measurement tools such as Gomez and Keynote, which set the standard for web performance measurement.
  • The late ’90s was a hotbed for innovation and produced the first interesting cloud play for dynamic content (Netli) and the first real transformation play (Pivia, which was subsequently bought by Swan Labs and then swallowed by F5; this 10-year-old technology is now branded as the F5 Web Accelerator).
  • 2000-2006 was a tough time for the front-end performance market. We did see some incredible innovation in related markets, such as the branch office acceleration market (i.e. technology that speeds up Outlook and Office between branch offices). The only interesting and key innovator in my eyes was Fineground, which blazed a trail in transformation but was sold to Cisco and subsequently killed.
  • With the recovery of the web economy came greater investment in new tools and research. In 2006, I co-founded Strangeloop and we filed our first patent on the technology that formed the basis for the set of solutions now known as Site Optimizer.
  • Shortly afterward, O’Reilly published Steve Souders’ book High Performance Web Sites. On its heels came a number of developer resources and diagnostic tools, such as Webpagetest and Browserscope, as well as the Velocity conference, which quickly became an unofficial hub of the performance community.
  • In more recent times, our industry has matured, with more entrants into the transformation space and legitimization of the core premise through seminal moments like the inclusion of page speed as a ranking factor in Google’s search algorithm.

Your thoughts?

This is just my wide-angle take on the front-end web performance landscape. I’m very interested to hear yours. Is my classification scheme accurate? Have I left out any major developments or solution providers? Are there any gaps that need to be filled? Trends I’ve missed?

And what about the future of solution delivery? Given the trajectory we’re on, where do you see our industry going in the next few years?
