First, some background. There is a worker that expands/resolves batches of short URLs:
http://t.co/example -> http://example.com
So we simply follow the redirects, nothing more. We never read any data from the connection. Once we get a 200, we return the final URL and close the InputStream.
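A minimal sketch of what such a resolver looks like (the class and method names here are my own reconstruction, not the original `ru.twitter.times.http.URLProcessor` code), using plain `HttpURLConnection`:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class UrlResolver {

    // Follows redirects and returns the final URL; the response body is never read.
    public static String resolve(String shortUrl) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(shortUrl).openConnection();
        conn.setConnectTimeout(5_000);
        conn.setReadTimeout(5_000);
        conn.setInstanceFollowRedirects(true); // the default; shown for clarity

        conn.getResponseCode();                // sends the request and follows 3xx
        String finalUrl = conn.getURL().toString(); // URL after all redirects
        conn.getInputStream().close();         // the close() that hangs in production
        return finalUrl;
    }
}
```

Note that even though the body is never read, that final `close()` is exactly where the thread below is stuck.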
Now, the problem itself. On a production server, one of the resolver threads is hung inside the InputStream.close() call:
"ProcessShortUrlTask" prio=10 tid=0x00007f8810119000 nid=0x402b runnable [0x00007f882b044000]
java.lang.Thread.State: RUNNABLE
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.skip(BufferedInputStream.java:352)
- locked <0x0000000561293aa0> (a java.io.BufferedInputStream)
at sun.net.www.MeteredStream.skip(MeteredStream.java:134)
- locked <0x0000000561293a70> (a sun.net.www.http.KeepAliveStream)
at sun.net.www.http.KeepAliveStream.close(KeepAliveStream.java:76)
at java.io.FilterInputStream.close(FilterInputStream.java:155)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.close(HttpURLConnection.java:2735)
at ru.twitter.times.http.URLProcessor.resolve(URLProcessor.java:131)
at ru.twitter.times.http.URLProcessor.resolve(URLProcessor.java:55)
at ...
After a little research, I learned that before close() returns the connection to the connection pool, skip() is called to drain whatever is left in the stream (when keep-alive is enabled?). I still don't understand how to avoid this situation, and I can't tell whether the fault is a design flaw in our code or a problem in the JDK.
So, the questions are:

> Is it possible to avoid hanging in close(), e.g. by guaranteeing some reasonable timeout?

> Is it possible to avoid reading data from the connection at all? Remember, I only want the final URL. Actually, I suppose I don't want skip() to be called at all…
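Both questions have plausible mitigations, sketched below under my own assumptions (the helper name is hypothetical, not from the original code). A read timeout bounds every blocking socket read, including the fill() that skip() performs inside close(); a HEAD request means there is no response body to drain in the first place (though not every server handles HEAD for redirect chains correctly); and `Connection: close` tells the JDK not to return the socket to the keep-alive cache, so close() tears it down instead of draining it.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class SafeResolverConfig {

    // Hypothetical helper: configure the connection so that close() cannot hang.
    public static HttpURLConnection open(String url) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(5_000);
        conn.setReadTimeout(5_000);        // bounds blocking reads, incl. inside close()
        conn.setRequestMethod("HEAD");     // no response body -> nothing to skip()
        conn.setRequestProperty("Connection", "close"); // bypass keep-alive drain
        return conn;
    }
}
```

Whether these are acceptable depends on the workload: `Connection: close` and disabling keep-alive give up connection reuse, which matters at high resolve rates.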
Update:
KeepAliveStream, line 79, the close() method:
// Skip past the data that's left in the Inputstream because
// some sort of error may have occurred.
// Do this ONLY if the skip won't block. The stream may have
// been closed at the beginning of a big file and we don't want
// to hang around for nothing. So if we can't skip without blocking
// we just close the socket and, therefore, terminate the keepAlive
// NOTE: Don't close super class
try {
    if (expected > count) {
        long nskip = (long) (expected - count);
        if (nskip <= available()) {
            long n = 0;
            while (n < nskip) {
                nskip = nskip - n;
                n = skip(nskip);
            } ...
It seems to me that there is a bug in the JDK itself. Unfortunately, it's very hard to reproduce…
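Until the underlying behavior changes, one commonly suggested workaround (my assumption, not something confirmed in the original post) is to disable the JDK's HTTP keep-alive cache process-wide; with it off, HttpInputStream.close() closes the socket directly instead of routing through KeepAliveStream.skip():

```java
public class DisableKeepAlive {
    public static void main(String[] args) {
        // Must be set before the first HTTP connection is opened.
        // Trade-off: sockets are no longer reused, so every request
        // pays the full TCP (and TLS) setup cost again.
        System.setProperty("http.keepAlive", "false");
        System.out.println(System.getProperty("http.keepAlive"));
    }
}
```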