SSL handshake latency and HTTPS optimizations.

Source: http://www.semicomplete.com/blog/geekery/ssl-latency.html


At work today, I started investigating the latency differences for similar requests between HTTP and HTTPS. Historically, I was running with the assumption that higher latency on HTTPS (SSL) traffic was to be expected since SSL handshakes are more CPU intensive. I didn't really think about the network consequences of SSL until today.

It's all in the handshake.

The TCP handshake is a 3-packet event. The client sends 2 packets, the server sends 1. Best case, you're looking at one round-trip for establishing your connection. We can show this empirically by comparing ping and tcp connect times:

% fping -q -c 5 www.csh.rit.edu
www.csh.rit.edu : xmt/rcv/%loss = 5/5/0%, min/avg/max = 112/115/123

Average is 115ms for ping round-trip. How about TCP? Let's ask curl how long tcp connect takes:
% seq 5 | xargs -I@ -n1 curl -so /dev/null -w "%{time_connect}\n" http://www.csh.rit.edu
0.117
0.116
0.117
0.116
0.116

There's your best case. This is because when you (the client) receive the 2nd packet in the handshake (SYN+ACK), you reply with ACK and consider the connection open. Exactly 1 round-trip is required before you can send your http request.
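You can see the same thing in code: connect() returns as soon as the client has seen SYN+ACK and sent its ACK, i.e. after exactly one round-trip. A minimal sketch (loopback only, so the round-trip is near zero; the listener here is just a stand-in for a real server):

```python
import socket
import time

# Open a local listener so we have something to connect to.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
host, port = listener.getsockname()

# connect() blocks for exactly one round-trip: SYN out, SYN+ACK back,
# then the final ACK is sent and the call returns.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
start = time.perf_counter()
client.connect((host, port))
elapsed = time.perf_counter() - start

print(f"tcp connect took {elapsed * 1000:.3f} ms")

client.close()
listener.close()
```

Point the connect at a remote host instead and elapsed should track your ping time, which is what the fping/curl comparison above demonstrates.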

What about when using SSL? Let's ask curl again:

% curl -kso /dev/null -w "tcp:%{time_connect}, ssldone:%{time_appconnect}\n" https://www.csh.rit.edu
tcp:0.117, ssldone:0.408

# How about to google?
% curl -kso /dev/null -w "tcp:%{time_connect}, ssldone:%{time_appconnect}\n" https://www.google.com
tcp:0.021, ssldone:0.068

A 3.5x jump in latency just for adding SSL to the mix, and this is before we've sent the http request.
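The 3.5x figure is just the ratio of the two curl timings above:

```python
# Timings from the curl runs above (seconds).
tcp_connect = 0.117   # time_connect: TCP handshake done
ssl_done = 0.408      # time_appconnect: TCP + SSL handshakes done

ratio = ssl_done / tcp_connect
print(f"SSL adds {ratio:.1f}x latency before the first request")
# The google numbers tell the same story: 0.068 / 0.021 is about 3.2x.
```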

The reason for this is easily shown with tcpdump. For this test, I'll use tcpdump to sniff https traffic and then use openssl s_client to simply connect to the http server over ssl and do nothing else. Start tcpdump first, then run openssl s_client.

terminal1 % sudo tcpdump -ttttt -i any 'port 443 and host www.csh.rit.edu'
...

terminal2 % openssl s_client -connect www.csh.rit.edu:443
...

Tcpdump output trimmed for content:

# Start TCP Handshake
00:00:00.000000 IP snack.home.40855 > csh.rit.edu.https: Flags [S] ...
00:00:00.114298 IP csh.rit.edu.https > snack.home.40855: Flags [S.] ...
00:00:00.114341 IP snack.home.40855 > csh.rit.edu.https: Flags [.] ...
# TCP Handshake complete.

# Start SSL Handshake.
00:00:00.114769 IP snack.home.40855 > csh.rit.edu.https: Flags [P.] ...
00:00:00.226456 IP csh.rit.edu.https > snack.home.40855: Flags [.] ...
00:00:00.261945 IP csh.rit.edu.https > snack.home.40855: Flags [.] ...
00:00:00.261960 IP csh.rit.edu.https > snack.home.40855: Flags [P.] ...
00:00:00.261985 IP snack.home.40855 > csh.rit.edu.https: Flags [.] ...
00:00:00.261998 IP snack.home.40855 > csh.rit.edu.https: Flags [.] ...
00:00:00.273284 IP snack.home.40855 > csh.rit.edu.https: Flags [P.] ...
00:00:00.398473 IP csh.rit.edu.https > snack.home.40855: Flags [P.] ...
00:00:00.436372 IP snack.home.40855 > csh.rit.edu.https: Flags [.] ...

# SSL handshake complete, ready to send HTTP request. 
# At this point, openssl s_client is sitting waiting for you to type something
# into stdin.

Summarizing the above tcpdump data for this ssl handshake:
  • 12 packets for SSL, vs 3 for TCP alone
  • TCP handshake took 114ms
  • Total SSL handshake time was 436ms
  • Number of network round-trips was 3.
  • SSL portion took 322ms (network and crypto)
The server tested above has a 2048 bit ssl cert. Running 'openssl speed rsa' on the webserver shows it can do a signature in 22ms:

                  sign    verify    sign/s verify/s
rsa 2048 bits 0.022382s 0.000542s     44.7   1845.4

Anyway. The point is, no matter how fast your SSL accelerators (hardware loadbalancer, etc), if your SSL end points aren't near the user, then your first connect will be slow. As shown above, it's 22ms for the crypto piece of the SSL handshake, which means about 300ms of the SSL portion above was likely network latency and some other overhead.
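The arithmetic in that breakdown is easy to check against the tcpdump timestamps:

```python
# Timestamps from the tcpdump capture above (seconds since first packet).
tcp_done = 0.114341   # client's ACK: TCP handshake complete
ssl_done = 0.436372   # final ACK: SSL handshake complete
sign_time = 0.022     # per 'openssl speed rsa' on the server (2048 bit)

ssl_portion = ssl_done - tcp_done
print(f"SSL portion of the handshake: {ssl_portion * 1000:.0f} ms")

# Only ~22ms of that is crypto; the rest is round-trips and overhead.
network_and_overhead = ssl_portion - sign_time
print(f"network + other overhead: {network_and_overhead * 1000:.0f} ms")
```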

Once SSL is established, though, it switches to a block cipher (3DES, etc) which is much faster, and the resource (network, cpu) overhead is pretty tiny by comparison.

Summarizing from above: Using SSL incurs a 3.5x latency overhead for each handshake, but afterwards it's generally fast like plain TCP. If you accept this conclusion, let's examine how this can affect website performance.

Got firebug? Open any website. Seriously. Watch the network activity. How many HTTP requests are made? Can you tell how many of those that go to the same domain use http pipelining (or keepalive)? How many initiate new requests each time? You can track this with tcpdump by looking for 'syn' packets if you want (tcpdump 'tcp[tcpflags] == tcp-syn').

What about the street wisdom for high-performance web servers? HAProxy's site says:

"If a site needs keep-alive, there is a real problem. Highly loaded sites often disable keep-alive to support the maximum number of simultaneous clients. The real downside of not having keep-alive is a slightly increased latency to fetch objects. Browsers double the number of concurrent connections on non-keepalive sites to compensate for this."

Disabling keep-alive on SSL connections means every single http request is going to take 3 round-trips before even asking for data. If your server is 100ms away, and you have 10 resources to serve on a single page, that's 3 seconds of network latency before you include SSL crypto or resource transfer time. With keep-alive, you could eat that handshake cost only once instead of 10 times.
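That arithmetic is easy to play with. A sketch of the latency model (round-trips only, one connection at a time; crypto and transfer time ignored, which understates both cases equally):

```python
rtt = 0.100              # seconds of round-trip time to the server
resources = 10           # objects on the page
rtts_per_handshake = 3   # TCP + SSL handshakes, per the capture above

# No keep-alive: every resource pays the full handshake cost, plus one
# round-trip for the request/response itself.
no_keepalive = resources * rtts_per_handshake * rtt

# Keep-alive: pay the handshake once, then one round-trip per request.
keepalive = rtts_per_handshake * rtt + resources * rtt

print(f"no keep-alive: {no_keepalive:.1f}s of handshake latency")
print(f"keep-alive:    {keepalive:.1f}s total")
```

Real browsers fetch in parallel, which shrinks both numbers, but the 10x difference in handshake cost is the part keep-alive eliminates.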

Many browsers will open multiple simultaneous connections to any given webserver if they need to fetch multiple resources. The idea is that parallelism gets you more tasty http resources in a shorter time. If the browser opens two connections in parallel, you'll still incur many sequential SSL handshakes that slow your resource fetching down. More SSL handshakes in parallel means a higher CPU burden, too, and memory (per open connection) scales more cheaply than CPU time does - think: above, one active connection cost 22ms of handshake time (most of it spent in CPU), which is far more expensive than the memory that connection holds, and it's easier to grow memory than cpu.

For some data, Google and Facebook both permit keep-alive:

% URL=https://s-static.ak.facebook.com/rsrc.php/zPET4/hash/9e65hu86.js
% curl  -w "tcp: %{time_connect} ssl:%{time_appconnect}\n" -sk -o /dev/null $URL -o /dev/null $URL
tcp: 0.038 ssl:0.088
tcp: 0.000 ssl:0.000

% URL=https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js
% curl  -w "tcp: %{time_connect} ssl:%{time_appconnect}\n" -sk -o /dev/null $URL -o /dev/null $URL
tcp: 0.054 ssl:0.132
tcp: 0.000 ssl:0.000

The 2nd line of output reports zero time spent in tcp and ssl handshaking. Further, if you tell curl to output response headers (curl -D -) you'll see "Connection: keep-alive". This is data showing that at least some of the big folks with massive qps are using keep-alive.
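You can watch the same reuse happen in code. A local sketch with Python's http.client (the handler and server here are stand-ins, not anything the sites above run): two requests on one HTTPConnection ride the same TCP socket, so the second one pays no handshake at all.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 defaults to keep-alive

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
conn.getresponse().read()
first_sock = conn.sock          # remember the socket of request #1

conn.request("GET", "/")        # request #2: no new TCP handshake
conn.getresponse().read()
reused = conn.sock is first_sock
print("connection reused:", reused)

conn.close()
server.shutdown()
```

Over HTTPS the savings are bigger still, since the reused connection skips the SSL handshake as well, exactly as the zeroed-out curl timings show.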

Remember that new handshakes are high cpu usage, but existing SSL connections generally aren't, as they are using a cheaper block cipher after the handshake. Disabling keep-alive ensures that every request will incur an SSL handshake, which can quickly overload a moderately-used server without SSL acceleration hardware if you have a large ssl key (2048 or 4096 bit).

Even if you have SSL offloading to special hardware, you're still incurring the higher network latency, which can't be compensated for by faster hardware. Frankly, in most cases it's more cost effective to buy a weaker SSL certificate (1024 bit) than it is to buy SSL hardware - see Google's Velocity 2010 talk on SSL.

By the way, on modern hardware you can do a decent number of SSL handshakes per second with 1024 bit keys, but 2048 bit and 4096 bit keys are much harder:

# 'openssl speed rsa' done on an Intel X5550 (2.66GHz)
                  sign    verify    sign/s verify/s
rsa 1024 bits 0.000496s 0.000027s   2016.3  36713.2
rsa 2048 bits 0.003095s 0.000093s    323.1  10799.2
rsa 4096 bits 0.021688s 0.000345s     46.1   2901.5

Fixing SSL latency is not totally trivial. The CPU intensive part can be handled by special hardware if you can afford it, but the only sure way to solve network round-trip latency is to be closer to your user and/or to work on minimizing the total number of round-trips. You can be further from your users if you don't force things like keep-alive to be off, which can save you money in the long run by letting you have better choices of datacenter locations.
