Adding proxy IPs and browser headers in a Scrapy spider middleware

This article shows how to implement a Scrapy spider middleware in Middle.py that pulls IPs from a proxy pool and rotates the User-Agent on each request to keep the crawl anonymous. The focus is on filtering out dead or transparent proxies and applying a working one to the crawl, which improves throughput and helps avoid bans.
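
The core of the proxy-filtering step is a single anonymity test: fetch httpbin.org through the candidate proxy and see whether the real client IP leaks through. Here is a minimal standalone sketch of that test (the helper name is_anonymous is not part of the original code; it assumes httpbin.org is reachable and a proxy address such as "1.2.3.4:8080"):

import requests

def is_anonymous(proxy_addr, timeout=10):
    # Route the request through the candidate proxy; show_env=1 makes httpbin
    # echo back the client IP(s) it sees.
    proxies = {"http": "http://{}".format(proxy_addr)}
    res = requests.get("http://httpbin.org/get?show_env=1",
                       proxies=proxies, timeout=timeout)
    # A transparent proxy forwards the real IP as well, so "origin" then holds
    # several comma-separated addresses; an anonymous proxy shows only one.
    return "," not in res.json()["origin"]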

Add the following code to middle.py:

import json
import random
import time

import requests
from scrapy import signals


class Demo1SpiderMiddleware:
    # Candidate User-Agent strings; one is chosen at random for each request.
    user_agent = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 OPR/26.0.1656.60",
        "Opera/8.0 (Windows NT 5.1; U; en)",
        "Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0 Opera 9.50",
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 9.50",
        # Firefox
        "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0",
        "Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10",
        # Safari
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.57.2 (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2",
        # chrome
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
        "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.133 Safari/534.16"
    ]
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s
    def process_spider_input(self, response, spider):
        return None
    def process_spider_output(self, response, result, spider):
        for i in result:
            yield i
    def process_spider_exception(self, response, exception, spider):
        pass
    def process_start_requests(self, start_requests, spider):
        # Reuse the class-level User-Agent list rather than redefining it here.
        user_agent = self.user_agent
        head = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 OPR/26.0.1656.60"}
        # Ask the local proxy pool service how many usable proxies it currently
        # holds, then pull that many candidates from it.
        a = requests.get("http://127.0.0.1:5010/get_status/").json().get("useful_proxy")
        b = []
        for i in range(a):
            IP = requests.get("http://127.0.0.1:5010/get/").json().get("proxy")
            b.append(IP)
        d = []  # proxies that pass the anonymity check
        for c in reversed(b):
            thisProxy = {
                "http": "http://{}".format(c)
            }
            print(thisProxy)
            time.sleep(2)  # short pause between checks
            try:
                # Request httpbin through the proxy; show_env=1 makes it echo
                # back the client IP(s) that the target server sees.
                res = requests.get(url="http://httpbin.org/get?show_env=1",
                                   timeout=60, proxies=thisProxy, headers=head)
                print("Proxy is reachable")
                fdict = json.loads(res.text)
                # A transparent proxy also forwards the real IP, so "origin"
                # then contains several comma-separated addresses.
                count = fdict['origin'].count(",")
                print(count)
                if count == 0:
                    # Anonymous proxy found: keep it and stop checking.
                    d.append(c)
                    break
                else:
                    # Transparent proxy: drop it from the pool.
                    requests.get("http://127.0.0.1:5010/delete/?proxy={}".format(c))
            except Exception:
                # Unreachable or timed-out proxy: drop it from the pool as well.
                print(c + " will be removed from the pool")
                requests.get("http://127.0.0.1:5010/delete/?proxy={}".format(c))
        print(d)
        for request in start_requests:
            # Rotate the User-Agent and route every start request through the
            # first proxy that passed the anonymity check.
            request.headers['User-Agent'] = random.choice(user_agent)
            request.headers['Referer'] = 'https://www.baidu.com'
            if d:
                request.meta['proxy'] = "http://" + d[0]
            print("Starting requests with proxy and headers applied")
            yield request
    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
    def get_proxy(self):
        # Helper: fetch one proxy record from the local proxy pool service.
        return requests.get("http://127.0.0.1:5010/get/").json()

The code above pulls IPs from the proxy pool, validates them against httpbin.org, and applies a working proxy plus a random User-Agent to every start request.
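
For the middleware to take effect it still has to be enabled in the project's settings.py. A minimal sketch, assuming the Scrapy project is named demo1; adjust the dotted path to the module that actually holds Demo1SpiderMiddleware (middle.py in this article):

SPIDER_MIDDLEWARES = {
    # dotted path and priority value are assumptions for a project called demo1
    "demo1.middle.Demo1SpiderMiddleware": 543,
}

Because the proxy and User-Agent are set in process_start_requests, they only apply to the spider's start requests; requests yielded later from callbacks pass through process_spider_output unchanged.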
