Yahoo's 14 Performance Optimization Rules

14 techniques for optimizing website performance and improving page load speed


Also known as the "Yahoo Fourteen Rules." It reminds me of my clueless self a year ago: a college senior who naively went off to the university district to interview for a front-end job. Back then I thought that having watched two sets of CSS video tutorials over the breaks made me hot stuff. Before setting out I even reviewed the videos — ah yes, that's the sliding-doors technique; yes, that's absolute positioning; yes, that's clearing floats…

The interviewer was Uncle Biao, though at the time I had no idea who he was. The man dressed all in black — black T-shirt, dark skin, black cap, black sunglasses, plus a bit of black stubble — that was Uncle Biao. After finishing the written test I stammered through a chat with him, and it quickly became clear I was hopeless. His first question was: what are the "Yahoo Fourteen Rules"? I froze. Pardon? I had never even heard of them, and that was the end of me. Back home I posted a journal entry on QQ Zone, though at the time I only half understood the material. Today I spent a whole day on it, and I'm posting it here to share with everyone:

The Internet has become an indispensable part of people's lives. Rich-client technologies such as Ajax and Flex let people "happily" experience many features that used to be possible only in C/S (client/server) applications. Google, for example, has already moved nearly all the basic office applications onto the web. The convenience, of course, has also made pages slower and slower. I work in front-end development, and according to Yahoo's research, the back end accounts for only 5% of performance while the front end accounts for as much as 95%, of which 88% can be optimized.

[Figure: the life cycle of a Web 2.0 page]

The figure above shows the life cycle of a Web 2.0 page. An engineer vividly divided it into four stages: "conception, birth, graduation, and marriage." If, when we click a link, we are aware of this whole process rather than treating it as a simple request–response, we can dig out many details where performance can be improved. Today I attended a talk by Taobao's Xiao Ma Ge on the Yahoo development team's research into web performance, and I learned a great deal; I'd like to share it on this blog.

I'm sure many people have heard of the 14 rules for optimizing website performance. More information is available at developer.yahoo.com.

1. Make fewer HTTP requests [content]

2. Use a CDN (Content Delivery Network) [server]

3. Add an Expires header (or Cache-Control) [server]

4. Gzip components [server]

5. Put CSS at the top of the page [css]

6. Move scripts to the bottom (including inline ones) [javascript]

7. Avoid CSS expressions [css]

8. Make JavaScript and CSS external [javascript] [css]

9. Reduce DNS lookups [content]

10. Minify JavaScript and CSS (including inline) [javascript] [css]

11. Avoid redirects [server]

12. Remove duplicate scripts [javascript]

13. Configure entity tags (ETags) [server]

14. Make Ajax cacheable

In Firefox there is a plugin called YSlow, integrated into Firebug, which you can use to conveniently check how your own site performs in each of these areas.

[Screenshot: YSlow's score for my site]

This is the result of running YSlow against my site, Xifengfang. Sadly, it scored only 51. Heh. China's major sites don't score well either — I just tested Sina and NetEase, and both scored 31. Yahoo (US), on the other hand, scores 97! It shows how much effort Yahoo has put into this. Judging from these 14 rules, plus the 20 newer points they have since added, there are many details we would simply never think of on our own, and some of the practices border on "obsessive."

 

Rule 1: Make Fewer HTTP Requests

 

HTTP requests have overhead, so finding ways to reduce their number naturally speeds up a page. Common techniques include merging CSS and JS (combining a page's CSS files into one and its JS files into one), image maps, and CSS sprites. Of course, CSS and JS are often split into multiple files for reasons of structure and reuse. The Alibaba Chinese site's approach at the time was to keep files separate during development and then merge the JS and CSS on the back end at release time, so the browser still sees a single request while developers can still work with the individual files for easier management and reuse. Yahoo even recommends writing the home page's CSS and JS directly into the page rather than referencing external files, because home-page traffic is so high; this saves two more requests. In fact, many Chinese portals do exactly that.

CSS sprites means merging a page's background images into a single image and then using different values of the CSS background-position property to pick out the piece you need as the background. Taobao and the Alibaba Chinese site both do this today; if you're curious, take a look at their background images.

http://www.csssprites.com/ is a tool site: it automatically merges the images you upload, gives you the corresponding background-position coordinates, and outputs the result as PNG or GIF.
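
To make the idea concrete, here is a minimal sketch of a sprite; the file name and the 16x16 coordinates are invented for illustration:

/* icons.png (hypothetical) stacks a "home" icon above a "search" icon, 16px each. */
.icon {
    background-image: url(icons.png);  /* one HTTP request serves both icons */
    background-repeat: no-repeat;
    width: 16px;
    height: 16px;
}
.icon-home   { background-position: 0 0; }      /* show the top segment */
.icon-search { background-position: 0 -16px; }  /* shift up to show the bottom segment */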

 

Rule 2: Use a Content Delivery Network (CDN)

 

To be honest, I don't know CDNs very well myself. Simply put, by adding a new layer of network architecture on top of the existing Internet, a site's content is published to the cache servers closest to its users. DNS-based load balancing determines where a user is coming from and routes them to the nearest cache server for the content they need: a user in Hangzhou fetches content from a server near Hangzhou, a user in Beijing from one near Beijing. This effectively cuts the time data spends traveling across the network and improves speed. For more detail, see the Baidu Baike entry on CDN. By distributing static content to a CDN, Yahoo! reduced end-user response time by 20% or more.

CDN technology diagram:


 

CDN network topology diagram:


 

Rule 3: Add an Expires or Cache-Control Header

 

More and more images, scripts, stylesheets, and Flash are embedded in pages, and visiting them inevitably triggers many HTTP requests. We can cache these files by setting an Expires header. Expires simply uses an HTTP header to specify how long a given type of file may be kept in the browser's cache. Most images and Flash don't need frequent modification after release; once they're cached, the browser no longer has to download them from the server and can read them straight from its cache, which makes repeat visits to the page much faster. A typical set of HTTP/1.1 response headers:

HTTP/1.1 200 OK

Date: Fri, 30 Oct 1998 13:19:41 GMT

Server: Apache/1.3.3 (Unix)

Cache-Control: max-age=3600, must-revalidate

Expires: Fri, 30 Oct 1998 14:19:41 GMT

Last-Modified: Mon, 29 Jun 1998 02:28:12 GMT

ETag: "3e86-410-3596fbbc"

Content-Length: 1040

Content-Type: text/html

Setting Cache-Control and Expires can be done from a server-side script.

For example, to set a 30-day expiry in PHP:


<?php
// Require revalidation once the cached copy becomes stale...
header("Cache-Control: must-revalidate");
// ...and mark the copy as fresh for the next 30 days.
$offset = 60 * 60 * 24 * 30;
$ExpStr = "Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT";
header($ExpStr);
?>

It can also be done by configuring the server itself, though I'm not too clear on that side of things, heh. Readers who want to learn more can consult http://www.web-caching.com/.

As far as I know, the Alibaba Chinese site currently uses a 30-day Expires time. There have been problems along the way, though; the expiry time for scripts in particular deserves careful thought, otherwise after you update a script's functionality the clients may take a very long time to "notice" the change. I ran into exactly this on the [suggest project]. So think carefully about what should and shouldn't be cached.
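
One common remedy, mentioned again in the English notes near the end, is to embed a version number in the script's file name, so that an updated script gets a new URL and skips the stale cached copy. The file names here are hypothetical:

<!-- Release N: cached with a far-future Expires header -->
<script type="text/javascript" src="suggest_1.0.3.js"></script>
<!-- Release N+1: a new file name, so the old cached copy is simply never requested again -->
<script type="text/javascript" src="suggest_1.0.4.js"></script>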

 

Rule 4: Gzip Components

 

The idea behind gzip is to compress files on the server before transferring them, which significantly reduces the transfer size; after the transfer, the browser decompresses the content and processes it. Current browsers all support gzip "well" — and not only browsers: the major crawlers recognize it too, so SEO folks can relax. Gzip's compression ratio is substantial, typically around 75%, meaning a 100 KB page on the server can be compressed to roughly 25 KB before being sent to the client. For the details of how gzip compression works, see the CSDN article "The gzip compression algorithm." Yahoo specifically stresses that all text content should be gzipped: HTML (PHP), JS, CSS, XML, TXT… Our site does well on this point — it gets an A. Our home page wasn't always an A, because it carries a lot of third-party ad-serving JS, and when those ad owners' sites serve their JS without gzip, it drags our site down too.
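
As a sketch of the server-side configuration (assuming Apache 2.x, whose mod_deflate module the English notes below mention; the exact list of types is up to you):

# httpd.conf: compress the text types Yahoo recommends.
# Images, Flash, and PDF are already compressed, so leave them out.
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript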

The three points above mostly concern the server side, and my own understanding of them is shallow; corrections are welcome.

 

Rule 5: Put Stylesheets at the Top

 

Why put CSS at the very top of the page? Because browsers such as IE and Firefox render nothing until all the CSS has arrived. The reasoning, as Xiao Ma Ge put it, is simple. CSS stands for Cascading Style Sheets: cascading means a later rule can override an earlier one, and a higher-priority rule can override a lower-priority one. I touched on this hierarchy at the bottom of the article [css !important]; here all we need to know is that CSS rules can be overridden. Since earlier rules may be overridden later, it is entirely reasonable for the browser to wait until the CSS has fully loaded before rendering. In many browsers, IE among them, the problem with putting stylesheets at the bottom of the page is that it prevents the content from displaying progressively: the browser blocks rendering to avoid having to repaint page elements, and the user is left staring at a blank page. Firefox does not block rendering, but that means some elements may need to be repainted once the stylesheet arrives, which causes flicker. So we should get the CSS loaded as early as possible.

Following this line of thought, there is still more to optimize if we dig deeper. Take the two CSS files this site includes: <link rel="stylesheet" href="http://www.space007.com/themes/google/style/google.css" type="text/css" media="screen" /> and <link rel="stylesheet" href="http://www.space007.com/css/print.css" type="text/css" media="print" />. The media attribute shows that the first stylesheet targets the screen and the second targets printing. In terms of user behavior, printing a page always happens after the page has been displayed, so a better approach is to attach the print stylesheet dynamically after the page has finished loading, which buys a little more speed. (Ha!)
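
A minimal sketch of that idea, reusing the print.css URL from the example above (everything else is illustrative):

// After the page has rendered, attach the print-only stylesheet so it
// never competes with the screen CSS during the initial load.
window.onload = function () {
    var link = document.createElement("link");
    link.rel = "stylesheet";
    link.type = "text/css";
    link.media = "print";
    link.href = "http://www.space007.com/css/print.css";
    document.getElementsByTagName("head")[0].appendChild(link);
};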

 

Rule 6: Put Scripts at the Bottom

 

There are two reasons for putting scripts at the bottom of the page. 1. It prevents script execution from blocking the download of the page. While a page is loading, the moment the browser reaches a JS statement it interprets it completely before reading anything further. If you don't believe it, write an infinite JS loop and see whether anything below it on the page ever appears (see the snippet below). (setTimeout and setInterval behave a little like multithreading: until the timer fires, rendering of the content below continues.) The browser's logic is that the script might at any moment call location.href or some other function that aborts the page load entirely, so naturally it must finish executing before loading continues. Putting scripts at the very end therefore effectively shortens the load time of the page's visible elements. 2. The second problem scripts cause is that they block parallel downloads. The HTTP/1.1 spec suggests browsers make no more than two parallel downloads per hostname (IE is fixed at two; other browsers such as Firefox also default to two, though the new IE8 can reach six). So if you spread your image files across multiple hosts, you can get more than two parallel downloads. While a script file is downloading, however, the browser will not start any other downloads, even from other hostnames.
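
A deliberately pathological illustration of point 1 — never ship this; it exists only to show that parsing halts at the script:

<p>You will see this paragraph.</p>
<script type="text/javascript">
    while (true) {}  // the parser stops here; nothing below ever renders
</script>
<p>You will never see this one.</p>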

Of course, whether every site can realistically load all its scripts at the bottom is debatable. Take the Alibaba Chinese site's pages: there is inline JS in many places, and the page's display depends heavily on it. I admit this is a long way from the ideal of unobtrusive scripting, but many "legacy problems" are not so easy to fix.

 

Rule 7: Avoid CSS Expressions

 

But that adds two layers of meaningless nesting, which is certainly no good; a better approach is still needed.
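
The full explanation of this rule appears in the English notes near the end of this post; the canonical example given there is a CSS expression that sets the background color and gets re-evaluated on every render, resize, scroll, and even mouse move:

background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );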

 

Rule 8: Make JavaScript and CSS External

 

This one is easy to understand, I think, and it's done not only for performance: from the standpoint of maintainable code it's the right thing too. Writing CSS and JS into the page itself saves two requests but enlarges the page; if the CSS and JS files are already cached, those two extra HTTP requests disappear anyway. And, as I said earlier, for certain special pages developers still choose to inline the CSS and JS.

 

Rule 9: Reduce DNS Lookups

 

On the Internet, domain names correspond to IP addresses. A domain name (kuqin.com) is easy for people to remember, but computers don't recognize it; for machines to "recognize" one another it has to be turned into an IP address, and every computer on the network has its own. The translation between domain name and IP address is called domain name resolution, also known as a DNS lookup. A single DNS resolution takes 20–120 milliseconds, and until the lookup finishes, the browser downloads nothing from that domain. So reducing DNS lookup time speeds up page loading. Yahoo recommends keeping the number of domains referenced by a page to 2–4, which requires good planning of the page as a whole. We currently do poorly here; all the beaconing and ad-serving systems drag us down.

 

Rule 10: Minify JavaScript and CSS

 

The point of minifying JS and CSS is obvious: fewer bytes in the page. A smaller page naturally loads faster. Besides shrinking the payload, minification also provides a degree of protection for the source. We do well on this point. Common tools include JSMin and the YUI Compressor, and http://dean.edwards.name/packer/ offers a very handy online packer. On the jQuery site you can see the size difference between the minified and unminified versions of the JS file:

[Screenshot: size difference between jQuery's minified and unminified files]

One downside of minification, of course, is that code readability is gone. I'm sure many front-end developers have run into this: Google's effects look cool, but view the source and it's a huge wad of characters squeezed together, with even the function names replaced — sweat! Maintaining our own code in that state would be terribly inconvenient. So the Alibaba Chinese site's current approach is to minify the JS and CSS on the server side at release time, which lets us maintain our own code in comfort.

 

Rule 11: Avoid Redirects

 

Not long ago I read "Internet Explorer and Connection Limits" on IEBlog. For example, when you type http://www.kuqin.com the server automatically issues a 301 redirect to http://www.kuqin.com/ — you can see it happen in the browser's address bar. A redirect like this naturally costs time. That's just one example; redirects happen for many other reasons, but what never changes is that every additional redirect adds another web request, so they should be kept to a minimum.

 

Rule 12: Remove Duplicate Scripts

 

This one goes without saying: it's not just a performance consideration but a matter of code hygiene. Yet we have to admit that we often add possibly duplicated code for the sake of a quick result. Perhaps a unified CSS framework and JS framework could largely solve the problem. Xiaozhu's point is spot on: the goal isn't merely no duplication — it's reusability.

 

Rule 13: Configure Entity Tags (ETags)

 

I don't really understand this one either, heh. I found a fairly detailed explanation on InfoQ, "Using ETags to Reduce Web Application Bandwidth and Load"; interested readers can take a look.

 

Rule 14: Make Ajax Cacheable

 

Cache Ajax? When making Ajax requests we often go out of our way to append a timestamp precisely to defeat caching. It's important to remember that "asynchronous" does not imply "instantaneous." And remember: even though Ajax responses are generated dynamically, and may apply to only a single user, they can still be cached.
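
A minimal sketch of the address-book technique described in the English notes near the end: key the Ajax URL on the data's last-modified time, so an unchanged response is served from the browser cache. The URL, element id, and timestamp here are illustrative:

var lastModified = 1190241612;  // e.g. emitted by the server: the last time the data changed
var xhr = new XMLHttpRequest();
// If nothing changed, the URL is identical to last time and the cached response
// (sent with a far-future Expires header) is reused without a network round trip.
xhr.open("GET", "/addressbook?t=" + lastModified, true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById("book").innerHTML = xhr.responseText;
    }
};
xhr.send(null);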


What we can manage right now is the CSS side: spriting, minifying away redundancy, and writing sensibly organized rules, so that our CSS scores an "A" across the board in YSlow. As for the server-side items, there's plenty of time; we'll learn them step by step… As long as the enthusiasm lasts, we'll master them all sooner or later…


A postscript: the original fourteen rules have since been expanded considerably. A detailed analysis can be found in this article:

http://uicss.cn/yslow/#more-12319

In YSlow you can now see as many as 23 rules; see the screenshot below:

[Screenshot: YSlow's expanded list of 23 rules]

    1. Minimize HTTP requests
      Combine images, CSS, and JS to improve the wait time for first-time visitors.
    2. Use a CDN
      Nearby caching ==> intelligent routing ==> load balancing ==> WSA full-site dynamic acceleration
    3. Avoid empty src and href
      When a link tag's href attribute or a script tag's src attribute is empty, the browser uses the current page's URL as the value when rendering, loading the page's own content in as the value.
    4. Specify Expires in the response headers
      Makes content cacheable and avoids unnecessary HTTP requests on subsequent page views.
    5. Gzip-compress content
      Compressing any text-type response, including XML and JSON, is worthwhile.
    6. Put CSS at the top
    7. Put JS at the bottom
      Prevents JS loading from blocking the resources that come after it.
    8. Avoid CSS expressions
    9. Put CSS and JS in external files
      The goal is caching, but sometimes, to cut requests, they are written directly into the page; weigh this against your site's ratio of page views to unique visitors (PV to IP).
    10. Balance the number of DNS lookups
      Fewer hostnames save response time. But note that fewer hostnames also mean fewer parallel downloads on the page.
      IE downloads only two files at a time from the same domain, so when a page displays many images, IE users' image download speed suffers. That is why Sina spreads its images across a bunch of second-level domains.
    11. Minify CSS and JS
    12. Avoid redirects
      Same domain: watch out for redirects caused by a missing trailing slash "/";
      Cross-domain: use Alias or mod_rewrite, or set up a CNAME (a DNS record that maps one domain name to another).
    13. Remove duplicate JS and CSS
      Loading the same script twice adds extra HTTP requests and wastes time on repeated evaluation. In both IE and Firefox, JavaScript is re-evaluated regardless of whether the script is cacheable.
    14. Configure ETags
      An ETag tells the browser whether an element in its cache matches the one on the origin server. It is more flexible than the last-modified date: if a file is modified 10 times within one second, an ETag can combine the Inode (the file's index-node number), MTime (modification time), and Size for a precise check, sidestepping the fact that UNIX records MTime only to the second. On server clusters, use only the latter two components. See "Using ETags to Reduce Web Application Bandwidth and Load."
    15. Make Ajax cacheable
      "Asynchronous" does not mean "instantaneous": Ajax does not guarantee that users won't spend time waiting for asynchronous JavaScript and XML responses.
    16. Use GET for Ajax requests
      With XMLHttpRequest, POST in the browser is a two-step process: send the headers first, then send the data. So when you are only fetching data, GET makes more sense.
    17. Reduce the number of DOM elements
      Is there a more fitting tag for the job? Use semantic markup and avoid piling on meaningless tags.
    18. Avoid 404s
      Some sites turn the 404 response into a "Did you mean X?" page, which improves the user experience but also wastes server resources (database and so on). Worst of all is when a link to an external JavaScript file is wrong and returns a 404: first, the download blocks parallel downloads; second, the browser will try to execute whatever parts of the returned 404 response body look usable as JavaScript.
    19. Reduce cookie size
    20. Use cookie-free domains
      For images, CSS, and the like. Yahoo!'s static files all live on yimg.com, so requests for static files avoid repeatedly sending cookies that belong to the main domain (yahoo.com).
    21. Don't use filters
      The IE6 semi-transparent PNG24 trick — don't use it casually; calmly cut your images into PNG8 plus JPG instead.
    22. Don't scale images in HTML
    23. Shrink favicon.ico and make it cacheable

(The original English text of the extended rules follows.)
      1. Minimize HTTP Requests

        tag: content

        80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.

        One way to reduce the number of components in the page is to simplify the page's design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.

        Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.

        CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.

        Image maps combine multiple images into a single image. The overall size is about the same, but reducing the number of HTTP requests speeds up the page. Image maps only work if the images are contiguous in the page, such as a navigation bar. Defining the coordinates of image maps can be tedious and error prone. Using image maps for navigation is not accessible either, so it's not recommended.

        Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages. Inline images are not yet supported across all major browsers.

        Reducing the number of HTTP requests in your page is the place to start. This is the most important guideline for improving performance for first time visitors. As described in Tenni Theurer's blog post Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.


        Use a Content Delivery Network

        tag: server

        The user's proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective. But where should you start?

        As a first step to implementing geographically dispersed content, don't attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.

        Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it's better to first disperse your static content. This not only achieves a bigger reduction in response times, but it's easier thanks to content delivery networks.

        A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity. For example, the server with the fewest network hops or the server with the quickest response time is chosen.

        Some large Internet companies own their own CDN, but it's cost-effective to use a CDN service provider, such as Akamai Technologies, EdgeCast, or level3. For start-up companies and private web sites, the cost of a CDN service can be prohibitive, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times. At Yahoo!, properties that moved static content off their application web servers to a CDN (both 3rd party as mentioned above as well as Yahoo's own CDN) improved end-user response times by 20% or more. Switching to a CDN is a relatively easy code change that will dramatically improve the speed of your web site.


        Add an Expires or a Cache-Control Header

        tag: server

        There are two aspects to this rule:

        • For static components: implement "Never expire" policy by setting far future Expires header
        • For dynamic components: use an appropriate Cache-Control header to help the browser with conditional requests

        Web page designs are getting richer and richer, which means more scripts, stylesheets, images, and Flash in the page. A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.

        Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won't be stale until April 15, 2010.

              Expires: Thu, 15 Apr 2010 20:00:00 GMT

        If your server is Apache, use the ExpiresDefault directive to set an expiration date relative to the current date. This example of the ExpiresDefault directive sets the Expires date 10 years out from the time of the request.

              ExpiresDefault "access plus 10 years"

        Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component's filename, for example, yahoo_2.0.6.js.

        Using a far future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests when a user visits your site for the first time and the browser's cache is empty. Therefore the impact of this performance improvement depends on how often users hit your pages with a primed cache. (A "primed cache" already contains all of the components in the page.) We measured this at Yahoo! and found the number of page views with a primed cache is 75-85%. By using a far future Expires header, you increase the number of components that are cached by the browser and re-used on subsequent page views without sending a single byte over the user's Internet connection.


        Gzip Components

        tag: server

        The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It's true that the end-user's bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.

        Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.

              Accept-Encoding: gzip, deflate

        If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.

              Content-Encoding: gzip

        Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you're likely to see is deflate, but it's less effective and less popular.

        Gzipping generally reduces the response size by about 70%. Approximately 90% of today's Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.

        There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

        Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It's also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it's worthwhile to compress any text response including XML and JSON. Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.

        Gzipping as many file types as possible is an easy way to reduce page weight and accelerate the user experience.


        Put Stylesheets at the Top

        tag: css

        While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages appear to be loading faster. This is because putting stylesheets in the HEAD allows the page to render progressively.

        Front-end engineers that care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case the HTML page is the progress indicator! When the browser loads the page progressively the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.

        The problem with putting stylesheets near the bottom of the document is that it prohibits progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements of the page if their styles change. The user is stuck viewing a blank white page.

        The HTML specification clearly states that stylesheets are to be included in the HEAD of the page: "Unlike A, [LINK] may only appear in the HEAD section of a document, although it may appear any number of times." Neither of the alternatives, the blank white screen or flash of unstyled content, are worth the risk. The optimal solution is to follow the HTML specification and load your stylesheets in the document HEAD.


        Put Scripts at the Bottom

        tag: javascript

        The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.

        In some situations it's not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page's content, it can't be moved lower in the page. There might also be scoping issues. In many cases, there are ways to workaround these situations.

        An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn't support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.


        Avoid CSS Expressions

        tag: css

        CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They were supported in Internet Explorer starting with version 5, but were deprecated starting with IE8. As an example, the background color could be set to alternate every hour using CSS expressions:

              background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );

        As shown here, the expression method accepts a JavaScript expression. The CSS property is set to the result of evaluating the JavaScript expression. The expression method is ignored by other browsers, so it is useful for setting properties in Internet Explorer needed to create a consistent experience across browsers.

        The problem with expressions is that they are evaluated more frequently than most people expect. Not only are they evaluated when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to the CSS expression allows us to keep track of when and how often a CSS expression is evaluated. Moving the mouse around the page can easily generate more than 10,000 evaluations.

        One way to reduce the number of times your CSS expression is evaluated is to use one-time expressions, where the first time the expression is evaluated it sets the style property to an explicit value, which replaces the CSS expression. If the style property must be set dynamically throughout the life of the page, using event handlers instead of CSS expressions is an alternative approach. If you must use CSS expressions, remember that they may be evaluated thousands of times and could affect the performance of your page.


        Make JavaScript and CSS External

        tag: javascript, css

        Many of these performance rules deal with how external components are managed. However, before these considerations arise you should ask a more basic question: Should JavaScript and CSS be contained in external files, or inlined in the page itself?

        Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded every time the HTML document is requested. This reduces the number of HTTP requests that are needed, but increases the size of the HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.

        The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using various metrics. If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, there is a greater potential benefit from cached external files.

        Many web sites fall in the middle of these metrics. For these sites, the best solution generally is to deploy the JavaScript and CSS as external files. The only exception where inlining is preferable is with home pages, such as Yahoo!'s front page and My Yahoo!. Home pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.

        For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser's cache.


        Reduce DNS Lookups

        tag: content

        The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people's names to their phone numbers. When you type www.yahoo.com into your browser, a DNS resolver contacted by the browser returns that server's IP address. DNS has a cost. It typically takes 20-120 milliseconds for DNS to lookup the IP address for a given hostname. The browser can't download anything from this hostname until the DNS lookup is completed.

        DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user's ISP or local area network, but there is also caching that occurs on the individual user's computer. The DNS information remains in the operating system's DNS cache (the "DNS Client service" on Microsoft Windows). Most browsers have their own caches, separate from the operating system's cache. As long as the browser keeps a DNS record in its own cache, it doesn't bother the operating system with a request for the record.

        Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)

        When the client's DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page's URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.

        Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.


        Minify JavaScript and CSS

        tag: javascript, css

        Minification is the practice of removing unnecessary characters from code to reduce its size thereby improving load times. When code is minified all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and YUI Compressor. The YUI compressor can also minify CSS.

        Obfuscation is an alternative optimization that can be applied to source code. It's more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.

        In addition to minifying external scripts and styles, inlined <script> and <style> blocks can and should also be minified. Even if you gzip your scripts and styles, minifying them will still reduce the size by 5% or more. As the use and size of JavaScript and CSS increases, so will the savings gained by minifying your code.


        Avoid Redirects

        tag: content

        Redirects are accomplished using the 301 and 302 status codes. Here's an example of the HTTP headers in a 301 response:

              HTTP/1.1 301 Moved Permanently
              Location: http://example.com/newuri
              Content-Type: text/html

        The browser automatically takes the user to the URL specified in the Location field. All the information necessary for a redirect is in the headers. The body of the response is typically empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate it should be. The meta refresh tag and JavaScript are other ways to direct users to a different URL, but if you must do a redirect, the preferred technique is to use the standard 3xx HTTP status codes, primarily to ensure the back button works correctly.

        The main thing to remember is that redirects slow down the user experience. Inserting a redirect between the user and the HTML document delays everything in the page since nothing in the page can be rendered and no components can start being downloaded until the HTML document has arrived.

        One of the most wasteful redirects happens frequently and web developers are generally not aware of it. It occurs when a trailing slash (/) is missing from a URL that should otherwise have one. For example, going to http://astrology.yahoo.com/astrology results in a 301 response containing a redirect to http://astrology.yahoo.com/astrology/ (notice the added trailing slash). This is fixed in Apache by using Alias or mod_rewrite, or the DirectorySlash directive if you're using Apache handlers.

        Connecting an old web site to a new one is another common use for redirects. Others include connecting different parts of a website and directing the user based on certain conditions (type of browser, type of user account, etc.). Using a redirect to connect two web sites is simple and requires little additional coding. Although using redirects in these situations reduces the complexity for developers, it degrades the user experience. Alternatives for this use of redirects include using Alias and mod_rewrite if the two code paths are hosted on the same server. If a domain name change is the cause of using redirects, an alternative is to create a CNAME (a DNS record that creates an alias pointing from one domain name to another) in combination with Alias or mod_rewrite.


        Remove Duplicate Scripts

        tag: javascript

        It hurts performance to include the same JavaScript file twice in one page. This isn't as unusual as you might think. A review of the ten top U.S. web sites shows that two of them contain a duplicated script. Two main factors increase the odds of a script being duplicated in a single web page: team size and number of scripts. When it does happen, duplicate scripts hurt performance by creating unnecessary HTTP requests and wasted JavaScript execution.

        Unnecessary HTTP requests happen in Internet Explorer, but not in Firefox. In Internet Explorer, if an external script is included twice and is not cacheable, it generates two HTTP requests during page loading. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page.

        In addition to generating wasteful HTTP requests, time is wasted evaluating the script multiple times. This redundant JavaScript execution happens in both Firefox and Internet Explorer, regardless of whether the script is cacheable.

        One way to avoid accidentally including the same script twice is to implement a script management module in your templating system. The typical way to include a script is to use the SCRIPT tag in your HTML page.

              <script type="text/javascript" src="menu_1.0.17.js"></script>

        An alternative in PHP would be to create a function called insertScript.

              <?php insertScript("menu.js") ?>

        In addition to preventing the same script from being inserted multiple times, this function could handle other issues with scripts, such as dependency checking and adding version numbers to script filenames to support far future Expires headers.


        Configure ETags

        tag: server

        Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser's cache matches the one on the origin server. (An "entity" is another word for "component": images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraints are that the string be quoted. The origin server specifies the component's ETag using the ETag response header.

              HTTP/1.1 200 OK
              Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
              ETag: "10c24bc-4ab-457e1c1f"
              Content-Length: 12195

        Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned reducing the response by 12195 bytes for this example.

              GET /i/yahoo.gif HTTP/1.1
              Host: us.yimg.com
              If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
              If-None-Match: "10c24bc-4ab-457e1c1f"
              HTTP/1.1 304 Not Modified

        The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won't match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.

        The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.

        IIS 5.0 and 6.0 have a similar issue with ETags. The format for ETags on IIS is Filetimestamp:ChangeNumber. A ChangeNumber is a counter used to track configuration changes to IIS. It's unlikely that the ChangeNumber is the same across all IIS servers behind a web site.

        The end result is ETags generated by Apache and IIS for the exact same component won't match from one server to another. If the ETags don't match, the user doesn't receive the small, fast 304 response that ETags were designed for; instead, they'll get a normal 200 response along with all the data for the component. If you host your web site on just one server, this isn't a problem. But if you have multiple servers hosting your web site, and you're using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you're consuming greater bandwidth, and proxies aren't caching your content efficiently. Even if your components have a far future Expires header, a conditional GET request is still made whenever the user hits Reload or Refresh.

        If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether. The Last-Modified header validates based on the component's timestamp. And removing the ETag reduces the size of the HTTP headers in both the response and subsequent requests. This Microsoft Support article describes how to remove ETags. In Apache, this is done by simply adding the following line to your Apache configuration file:

              FileETag none


        Make Ajax Cacheable

        tag: content

        One of the cited benefits of Ajax is that it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using Ajax is no guarantee that the user won't be twiddling his thumbs waiting for those asynchronous JavaScript and XML responses to return. In many applications, whether or not the user is kept waiting depends on how Ajax is used. For example, in a web-based email client the user will be kept waiting for the results of an Ajax request to find all the email messages that match their search criteria. It's important to remember that "asynchronous" does not imply "instantaneous".

        To improve performance, it's important to optimize these Ajax responses. The most important way to improve the performance of Ajax is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Some of the other rules also apply to Ajax:

        • Gzip Components
        • Reduce DNS Lookups
        • Minify JavaScript
        • Avoid Redirects
        • Configure ETags


        Let's look at an example. A Web 2.0 email client might use Ajax to download the user's address book for autocompletion. If the user hasn't modified her address book since the last time she used the email web app, the previous address book response could be read from cache if that Ajax response was made cacheable with a future Expires or Cache-Control header. The browser must be informed when to use a previously cached address book response versus requesting a new one. This could be done by adding a timestamp to the address book Ajax URL indicating the last time the user modified her address book, for example, &t=1190241612. If the address book hasn't been modified since the last download, the timestamp will be the same and the address book will be read from the browser's cache, eliminating an extra HTTP roundtrip. If the user has modified her address book, the timestamp ensures the new URL doesn't match the cached response, and the browser will request the updated address book entries.

        Even though your Ajax responses are created dynamically, and might only be applicable to a single user, they can still be cached. Doing so will make your Web 2.0 apps faster.


        Flush the Buffer Early

        tag: server

        When users request a page, it can take anywhere from 200 to 500ms for the backend server to stitch together the HTML page. During this time, the browser is idle as it waits for the data to arrive. In PHP you have the function flush(). It allows you to send your partially ready HTML response to the browser so that the browser can start fetching components while your backend is busy with the rest of the HTML page. The benefit is mainly seen on busy backends or light frontends.

        A good place to consider flushing is right after the HEAD because the HTML for the head is usually easier to produce and it allows you to include any CSS and JavaScript files for the browser to start fetching in parallel while the backend is still processing.

        Example:

              ... <!-- css, js -->
            </head>
            <?php flush(); ?>
            <body>
              ... <!-- content -->
        

        Yahoo! search pioneered research and real user testing to prove the benefits of using this technique.


        Use GET for AJAX Requests

        tag: server

        The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in the browsers as a two-step process: sending the headers first, then sending data. So it's best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K data you might not be able to use GET.

        An interesting side effect is that POST without actually posting any data behaves like GET. Based on the HTTP specs, GET is meant for retrieving information, so it makes sense (semantically) to use GET when you're only requesting data, as opposed to sending data to be stored server-side.

         


        Post-load Components

        tag: content

        You can take a closer look at your page and ask yourself: "What's absolutely required in order to render the page initially?". The rest of the content and components can wait.

        JavaScript is an ideal candidate for splitting before and after the onload event. For example if you have JavaScript code and libraries that do drag and drop and animations, those can wait, because dragging elements on the page comes after the initial rendering. Other places to look for candidates for post-loading include hidden content (content that appears after a user action) and images below the fold.

        Tools to help you out in your effort: YUI Image Loader allows you to delay images below the fold and the YUI Get utility is an easy way to include JS and CSS on the fly. For an example in the wild take a look at Yahoo! Home Page with Firebug's Net Panel turned on.

        It's good when the performance goals are inline with other web development best practices. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can improve the user experience but you have to make sure the page works even without JavaScript. So after you've made sure the page works fine, you can enhance it with some post-loaded scripts that give you more bells and whistles such as drag and drop and animations.


        Preload Components

        tag: content

        Preload may look like the opposite of post-load, but it actually has a different goal. By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you'll need in the future. This way when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user.

        There are actually several types of preloading:

        • Unconditional preload - as soon as onload fires, you go ahead and fetch some extra components (see the sketch after this list). Check google.com for an example of how a sprite image is requested onload. This sprite image is not needed on the google.com homepage, but it is needed on the consecutive search result page.
        • Conditional preload - based on a user action you make an educated guess where the user is headed next and preload accordingly. On search.yahoo.com you can see how some extra components are requested after you start typing in the input box.
        • Anticipated preload - preload in advance before launching a redesign. It often happens after a redesign that you hear: "The new site is cool, but it's slower than before". Part of the problem could be that the users were visiting your old site with a full cache, but the new one is always an empty cache experience. You can mitigate this side effect by preloading some components before you even launch the redesign. Your old site can use the time the browser is idle and request images and scripts that will be used by the new site.
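
        A minimal sketch of an unconditional preload, as referenced in the list above (the image URL is hypothetical):

          // Once onload fires and the browser is idle, warm the cache with a
          // component that the *next* page will need.
          window.onload = function () {
              var img = new Image();
              img.src = "http://static.example.org/next_page_sprite.png";
          };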


        Reduce the Number of DOM Elements

        tag: content

        A complex page means more bytes to download and it also means slower DOM access in JavaScript. It makes a difference if you loop through 500 or 5000 DOM elements on the page when you want to add an event handler for example.

        A high number of DOM elements can be a symptom that there's something that should be improved with the markup of the page without necessarily removing content. Are you using nested tables for layout purposes? Are you throwing in more <div>s only to fix layout issues? Maybe there's a better and more semantically correct way to do your markup.

        A great help with layouts are the YUI CSS utilities: grids.css can help you with the overall layout, fonts.css and reset.css can help you strip away the browser's defaults formatting. This is a chance to start fresh and think about your markup, for example use <div>s only when it makes sense semantically, and not because it renders a new line.

        The number of DOM elements is easy to test, just type in Firebug's console:
        document.getElementsByTagName('*').length

        And how many DOM elements are too many? Check other similar pages that have good markup. For example the Yahoo! Home Page is a pretty busy page and still under 700 elements (HTML tags).


        Split Components Across Domains

        tag: content

        Splitting components allows you to maximize parallel downloads. Make sure you're using not more than 2-4 domains because of the DNS lookup penalty. For example, you can host your HTML and dynamic content on www.example.org and split static components between static1.example.org and static2.example.org.

        For more information check "Maximizing Parallel Downloads in the Carpool Lane" by Tenni Theurer and Patty Chi.


        Minimize the Number of iframes

        tag: content

        Iframes allow an HTML document to be inserted in the parent document. It's important to understand how iframes work so they can be used effectively.

        <iframe> pros:

        • Helps with slow third-party content like badges and ads
        • Security sandbox
        • Download scripts in parallel

        <iframe> cons:

        • Costly even if blank
        • Blocks page onload
        • Non-semantic


        No 404s

        tag: content

        HTTP requests are expensive so making an HTTP request and getting a useless response (i.e. 404 Not Found) is totally unnecessary and will slow down the user experience without any benefit.

        Some sites have helpful 404s "Did you mean X?", which is great for the user experience but also wastes server resources (like database, etc). Particularly bad is when the link to an external JavaScript is wrong and the result is a 404. First, this download will block parallel downloads. Next the browser may try to parse the 404 response body as if it were JavaScript code, trying to find something usable in it.


        Reduce Cookie Size

        tag: cookie

        HTTP cookies are used for a variety of reasons such as authentication and personalization. Information about cookies is exchanged in the HTTP headers between web servers and browsers. It's important to keep the size of cookies as low as possible to minimize the impact on the user's response time.

        For more information check "When the Cookie Crumbles" by Tenni Theurer and Patty Chi. The take-home of this research:

         

        • Eliminate unnecessary cookies
        • Keep cookie sizes as low as possible to minimize the impact on the user response time
        • Be mindful of setting cookies at the appropriate domain level so other sub-domains are not affected
        • Set an Expires date appropriately. An earlier Expires date or none removes the cookie sooner, improving the user response time


        Use Cookie-free Domains for Components

        tag: cookie

        When the browser makes a request for a static image and sends cookies together with the request, the server doesn't have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there.

        If your domain is www.example.org, you can host your static components on static.example.org. However, if you've already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free. Yahoo! uses yimg.com, YouTube uses ytimg.com, Amazon uses images-amazon.com and so on.

        Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use example.org or www.example.org for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to *.example.org, so for performance reasons it's best to use the www subdomain and write the cookies to that subdomain.


        Minimize DOM Access

        tag: javascript

        Accessing DOM elements with JavaScript is slow so in order to have a more responsive page, you should:

        • Cache references to accessed elements
        • Update nodes "offline" and then add them to the tree (see the sketch after this list)
        • Avoid fixing layout with JavaScript
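
        A minimal sketch of the "offline" update from the list above — build the nodes in a detached fragment and attach them once; the element id and item count are illustrative:

          var list = document.getElementById("results");     // cache the reference, look it up once
          var fragment = document.createDocumentFragment();  // not yet part of the live tree
          for (var i = 0; i < 1000; i++) {
              var item = document.createElement("li");
              item.appendChild(document.createTextNode("item " + i));
              fragment.appendChild(item);
          }
          list.appendChild(fragment);  // a single insertion instead of 1000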

        For more information check the YUI theatre's "High Performance Ajax Applications" by Julien Lecomte.


        Develop Smart Event Handlers

        tag: javascript

        Sometimes pages feel less responsive because of too many event handlers attached to different elements of the DOM tree which are then executed too often. That's why using event delegation is a good approach. If you have 10 buttons inside a div, attach only one event handler to the div wrapper, instead of one handler for each button. Events bubble up so you'll be able to catch the event and figure out which button it originated from.
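
        A minimal sketch of that delegation pattern, in the plain cross-browser JavaScript of the era (the wrapper id is illustrative):

          // One handler on the div wrapper instead of one handler per button.
          var wrapper = document.getElementById("button-wrapper");
          wrapper.onclick = function (e) {
              e = e || window.event;                  // IE exposes the event globally
              var target = e.target || e.srcElement;  // IE uses srcElement
              if (target.nodeName.toLowerCase() === "button") {
                  alert("You clicked " + target.id);  // the event bubbled up from this button
              }
          };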

        You also don't need to wait for the onload event in order to start doing something with the DOM tree. Often all you need is the element you want to access to be available in the tree. You don't have to wait for all images to be downloaded. DOMContentLoaded is the event you might consider using instead of onload, but until it's available in all browsers, you can use the YUI Event utility, which has an onAvailable method.

        For more information check the YUI theatre's "High Performance Ajax Applications" by Julien Lecomte.


        Choose <link> over @import

        tag: css

        One of the previous best practices states that CSS should be at the top in order to allow for progressive rendering.

        In IE @import behaves the same as using <link> at the bottom of the page, so it's best not to use it.


        Avoid Filters

        tag: css

        The IE-proprietary AlphaImageLoader filter aims to fix a problem with semi-transparent true color PNGs in IE versions < 7. The problem with this filter is that it blocks rendering and freezes the browser while the image is being downloaded. It also increases memory consumption and is applied per element, not per image, so the problem is multiplied.

        The best approach is to avoid AlphaImageLoader completely and use gracefully degrading PNG8 instead, which are fine in IE. If you absolutely need AlphaImageLoader, use the underscore hack _filter so as not to penalize your IE7+ users.


        Optimize Images

        tag: images

        After a designer is done with creating the images for your web page, there are still some things you can try before you FTP those images to your web server.

        • You can check the GIFs and see if they are using a palette size corresponding to the number of colors in the image. Using imagemagick it's easy to check using 
          identify -verbose image.gif 
          When you see an image using 4 colors and 256 color "slots" in the palette, there is room for improvement.
        • Try converting GIFs to PNGs and see if there is a saving. More often than not, there is. Developers often hesitate to use PNGs due to the limited support in browsers, but this is now a thing of the past. The only real problem is alpha-transparency in true color PNGs, but then again, GIFs are not true color and don't support variable transparency either. So anything a GIF can do, a palette PNG (PNG8) can do too (except for animations). This simple imagemagick command results in totally safe-to-use PNGs:
          convert image.gif image.png 
          "All we are saying is: Give PiNG a Chance!"
        • Run pngcrush (or any other PNG optimizer tool) on all your PNGs. Example: 
          pngcrush image.png -rem alla -reduce -brute result.png
        • Run jpegtran on all your JPEGs. This tool does lossless JPEG operations such as rotation and can also be used to optimize and remove comments and other useless information (such as EXIF information) from your images. 
          jpegtran -copy none -optimize -perfect src.jpg dest.jpg


        Optimize CSS Sprites

        tag: images

        • Arranging the images in the sprite horizontally as opposed to vertically usually results in a smaller file size.
        • Combining similar colors in a sprite helps you keep the color count low, ideally under 256 colors so to fit in a PNG8.
        • "Be mobile-friendly" and don't leave big gaps between the images in a sprite. This doesn't affect the file size as much but requires less memory for the user agent to decompress the image into a pixel map. 100x100 image is 10 thousand pixels, where 1000x1000 is 1 million pixels


        Don't Scale Images in HTML

        tag: images

        Don't use a bigger image than you need just because you can set the width and height in HTML. If you need 
        <img width="100" height="100" src="mycat.jpg" alt="My Cat" /> 
        then your image (mycat.jpg) should be 100x100px rather than a scaled down 500x500px image.


        Make favicon.ico Small and Cacheable

        tag: images

        The favicon.ico is an image that stays in the root of your server. It's a necessary evil because even if you don't care about it the browser will still request it, so it's better not to respond with a 404 Not Found. Also since it's on the same server, cookies are sent every time it's requested. This image also interferes with the download sequence, for example in IE when you request extra components in the onload, the favicon will be downloaded before these extra components.

        So to mitigate the drawbacks of having a favicon.ico make sure:

        • It's small, preferably under 1K.
        • Set Expires header with what you feel comfortable (since you cannot rename it if you decide to change it). You can probably safely set the Expires header a few months in the future. You can check the last modified date of your current favicon.ico to make an informed decision.

        Imagemagick can help you create small favicons


        Keep Components under 25K

        tag: mobile

        This restriction is related to the fact that iPhone won't cache components bigger than 25K. Note that this is the uncompressed size. This is where minification is important because gzip alone may not be sufficient.

        For more information check "Performance Research, Part 5: iPhone Cacheability - Making it Stick" by Wayne Shea and Tenni Theurer.


        Pack Components into a Multipart Document

        tag: mobile

        Packing components into a multipart document is like an email with attachments, it helps you fetch several components with one HTTP request (remember: HTTP requests are expensive). When you use this technique, first check if the user agent supports it (iPhone does not).

        Avoid Empty Image src

        tag: server

        An image with an empty string src attribute occurs more often than one would expect. It appears in two forms:

        1. straight HTML
          <img src="">
        2. JavaScript
          var img = new Image();
          img.src = "";

         

        Both forms cause the same effect: the browser makes another request to your server.

        • Internet Explorer makes a request to the directory in which the page is located.
        • Safari and Chrome make a request to the actual page itself.
        • Firefox 3 and earlier versions behave the same as Safari and Chrome, but version 3.5 addressed this issue [bug 444931] and no longer sends a request.
        • Opera does not do anything when an empty image src is encountered.

         


        Why is this behavior bad?

        1. Cripple your servers by sending a large amount of unexpected traffic, especially for pages that get millions of page views per day.
        2. Waste server computing cycles generating a page that will never be viewed.
        3. Possibly corrupt user data. If you are tracking state in the request, either by cookies or in another way, you have the possibility of destroying data. Even though the image request does not return an image, all of the headers are read and accepted by the browser, including all cookies. While the rest of the response is thrown away, the damage may already be done.

         


        The root cause of this behavior is the way that URI resolution is performed in browsers. This behavior is defined in RFC 3986 - Uniform Resource Identifiers. When an empty string is encountered as a URI, it is considered a relative URI and is resolved according to the algorithm defined in section 5.2. This specific example, an empty string, is listed in section 5.4. Firefox, Safari, and Chrome are all resolving an empty string correctly per the specification, while Internet Explorer is resolving it incorrectly, apparently in line with an earlier version of the specification, RFC 2396 - Uniform Resource Identifiers (this was obsoleted by RFC 3986). So technically, the browsers are doing what they are supposed to do to resolve relative URIs. The problem is that in this context, the empty string is clearly unintentional.

        HTML5 adds to the description of the <img> tag's src attribute to instruct browsers not to make an additional request in section 4.8.2:

        The src attribute must be present, and must contain a valid URL referencing a non-interactive, optionally animated, image resource that is neither paged nor scripted. If the base URI of the element is the same as the document's address, then the src attribute's value must not be the empty string.
        Hopefully, browsers will not have this problem in the future. Unfortunately, there is no such clause for <script src=""> and <link href="">. Maybe there is still time to make that adjustment to ensure browsers don't accidentally implement this behavior.

         

        This rule was inspired by Yahoo!'s JavaScript guru Nicholas C. Zakas. For more information check out his article "Empty image src can destroy your site".

Reposted from: https://www.cnblogs.com/zgblog/p/3323332.html

余额充值