How do I terminate an HttpWebRequest connection in C# when it takes too long? It doesn't work even with Timeout or ReadWriteTimeout set...

I want to terminate an HttpWebRequest when the connection takes too long.

Here is a simple piece of code that I wrote:

```
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(URL);
request.Timeout = 5000;
request.ReadWriteTimeout = 5000;
request.Proxy = new WebProxy("http://" + proxyUsed + "/", true);
request.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.01; Windows NT 5.0)";

using (WebResponse myResponse = request.GetResponse())
{
    using (Stream s = myResponse.GetResponseStream())
    {
        s.ReadTimeout = 5000;
        s.WriteTimeout = 5000;
        using (StreamReader sr = new StreamReader(s, System.Text.Encoding.UTF8))
        {
            result = sr.ReadToEnd();
            httpLink = myResponse.ResponseUri.AbsoluteUri;
            sr.Close();
        }
        s.Close();
    }
    myResponse.Close();
}
```

However, sometimes the connection takes about 15 minutes to get a response.

What happens is that after 15 minutes I do get a response, but not the full source code of the URL.

My guess is that the connection is so slow that the URL sends me a little data within each timeout window (say, 1 byte every 5 seconds), so the timeout never expires, yet the request drags on for a very long time.

How can I terminate the connection?

Thanks:)

Solution

You might find that the timeout is actually working, but the thread hangs when trying to close the stream. I don't know why it happens, but sometimes it does. I've never used ReadToEnd, but I've run across this when using Read.

I fixed the problem by calling Abort on the request, before I close the stream. It's a bit of a kluge, but it's effective. The abbreviated code below shows the technique.

```
HttpWebResponse response = null;
StreamReader sr = null;
try
{
    response = (HttpWebResponse)request.GetResponse(...);
    Stream s = response.GetResponseStream();
    sr = new StreamReader(s, Encoding.UTF8);
    // do your reading here
}
finally
{
    request.Abort(); // !! Yes, abort the request
    if (sr != null)
        sr.Dispose();
    if (response != null)
        response.Close();
}
```

What I've found is that the ReadTimeout and ReadWriteTimeout work as expected. That is, when the read times out, execution really does go to the finally block. And if the request.Abort isn't there, the call to sr.Dispose will hang.
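Note that in your trickle scenario each individual read completes within ReadWriteTimeout, so neither timeout ever fires and only an explicit Abort will end the request. One way to enforce an overall deadline is to schedule the Abort call yourself from a timer. A minimal sketch of that idea (the URL and the 30-second deadline are placeholders, not values from your code):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;
using System.Threading;

class OverallDeadlineDemo
{
    static void Main()
    {
        // Hypothetical URL for illustration.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/");
        request.Timeout = 5000;          // connect / first-byte timeout
        request.ReadWriteTimeout = 5000; // per-read timeout only

        // Overall deadline: abort the request after 30 seconds, no matter
        // how steadily (or slowly) data is trickling in.
        using (var deadline = new Timer(_ => request.Abort(), null, 30000, Timeout.Infinite))
        {
            try
            {
                using (WebResponse response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
                {
                    string result = reader.ReadToEnd();
                    Console.WriteLine("Read {0} characters.", result.Length);
                }
            }
            catch (WebException ex) when (ex.Status == WebExceptionStatus.RequestCanceled)
            {
                // Abort() surfaces as a WebException with status RequestCanceled.
                Console.WriteLine("Request aborted after exceeding the overall deadline.");
            }
        }
    }
}
```

The timer fires on a thread-pool thread, and Abort is safe to call from there; whichever read is in flight then fails immediately instead of waiting out the trickle.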

There are many possible causes for request timeouts, such as network problems or a slow-responding server. When sending POST requests with Scrapy, you can try the following approaches to resolve timeouts:

1. Increase the timeout: set the DOWNLOAD_TIMEOUT parameter in Scrapy's settings.py file to raise the request timeout, for example:

```
DOWNLOAD_TIMEOUT = 20
```

2. Use RetryMiddleware: Scrapy's RetryMiddleware can retry requests automatically; you can configure the number of retries and which HTTP codes trigger a retry. Add the following to settings.py:

```
RETRY_TIMES = 3
RETRY_HTTP_CODES = [500, 502, 503, 504, 400, 403, 404, 408]
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    'scrapy_proxies.RandomProxy': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}
```

3. Use proxies: proxies in Scrapy can work around transient network failures; the scrapy_proxies library provides this functionality. Add the following to settings.py:

```
PROXY_LIST = '/path/to/proxy/list.txt'
PROXY_MODE = 0
RANDOM_UA_PER_PROXY = True
```

Here PROXY_LIST is the path to the proxy IP list file, and PROXY_MODE is the proxy selection mode (0 picks a proxy IP at random, 1 goes through them in order). RANDOM_UA_PER_PROXY controls whether a random User-Agent is used with each proxy IP.

4. Use the requests library: if POST requests sent through Scrapy still time out, you can try sending them with the requests library instead. The scrapy-requests library integrates requests into Scrapy; see its documentation for usage details: https://github.com/scrapy-plugins/scrapy-requests
