1. Solutions:
a. Have the browser access the site through rotating proxies
b. Rotate the IP address by routing requests through proxies
Under the Scrapy project, create the proxy middleware in middlewares.py (referenced below as pythontab.middlewares):
# Import base64 only because the proxy may require authentication
import base64

# The middleware class
class ProxyMiddleware(object):
    # Override process_request
    def process_request(self, request, spider):
        # Point the request at the proxy
        request.meta['proxy'] = "http://YOUR_PROXY_IP:PORT"

        # Use the following lines only if your proxy requires authentication
        proxy_user_pass = "USERNAME:PASSWORD"
        # Set up HTTP Basic authentication for the proxy
        # (base64.encodestring was removed in Python 3; b64encode is the replacement)
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
Then register both middlewares in the project's settings.py (the first path is for Scrapy >= 1.0; older versions used scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware):

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    'pythontab.middlewares.ProxyMiddleware': 100,
}
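
The middleware above always routes requests through a single fixed proxy. To get the rotation described in step b, process_request can instead pick a proxy at random on every request. Below is a minimal sketch of that idea; the PROXY_POOL list and the RotatingProxyMiddleware name are illustrative, not part of the original article:

import base64
import random

# Hypothetical pool of proxies in "USERNAME:PASSWORD@IP:PORT" form
PROXY_POOL = [
    "user1:pass1@10.10.1.1:8080",
    "user2:pass2@10.10.1.2:8080",
]

class RotatingProxyMiddleware(object):
    def process_request(self, request, spider):
        # Choose a different proxy (and its credentials) for each request
        proxy = random.choice(PROXY_POOL)
        user_pass, _, host_port = proxy.partition('@')
        request.meta['proxy'] = "http://" + host_port
        encoded_user_pass = base64.b64encode(user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

Register it in DOWNLOADER_MIDDLEWARES in place of ProxyMiddleware (e.g. 'pythontab.middlewares.RotatingProxyMiddleware': 100) and the spider will switch IPs on every request.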
