A Summary of Anti-Ban Strategies for Scrapy Crawlers

1. Strategy 1: Set download_delay

download_delay can be configured in settings.py or directly on the spider.
(1) In settings.py:
DOWNLOAD_DELAY = 3
(2) In the spider:
class CSDNBlogCrawlSpider(CrawlSpider):

    name = "CSDNBlogCrawlSpider"
    # set the download delay for this spider
    download_delay = 2
    allowed_domains = ['blog.csdn.net']
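
A perfectly fixed delay is itself a fingerprint. Scrapy can randomize it for you: with RANDOMIZE_DOWNLOAD_DELAY enabled (the default), each wait is DOWNLOAD_DELAY multiplied by a random factor between 0.5 and 1.5. A settings.py fragment:

```python
# settings.py
DOWNLOAD_DELAY = 3
RANDOMIZE_DOWNLOAD_DELAY = True  # default; actual wait is 0.5x to 1.5x DOWNLOAD_DELAY
```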

2. Strategy 2: Disable cookies

Cookies are pieces of data (usually encrypted) that some websites store on the user's local terminal (client side) to identify them. Disabling cookies prevents sites that would otherwise use cookies to trace a crawler's footprint from doing so.
Usage:
Set COOKIES_ENABLED = False in settings.py. This turns off the cookies middleware, so no cookies are sent to the web server.
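
Cookies can also be suppressed per request instead of project-wide: Scrapy's Request supports a dont_merge_cookies meta key that skips cookie handling for that request only. An illustrative fragment (the spider name and URL are placeholders):

```python
import scrapy

class NoCookieSpider(scrapy.Spider):
    name = "nocookie"  # hypothetical spider name

    def start_requests(self):
        # dont_merge_cookies=True disables cookie handling for this request only
        yield scrapy.Request("http://example.com/",
                             meta={"dont_merge_cookies": True})
```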

3. Strategy 3: Use a user agent pool

The user agent is a string sent in the HTTP request headers that describes the client: browser, operating system, and so on. The server uses it to infer whether the current visitor is a browser, a mail client, or a web crawler. It can be inspected via request.headers.
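
The rotation itself can be as simple as drawing a random entry from a list each time a request is built. A minimal sketch (the pool here is a short excerpt; in the Scrapy project below the full list lives in settings.py):

```python
import random

# Illustrative pool -- in a Scrapy project this list would live in settings.py
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
]

def random_headers():
    """Build request headers with a user agent drawn at random from the pool."""
    return {"User-Agent": random.choice(USER_AGENTS)}
```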

4. Strategy 4: Use an IP pool

One of the ways a web server deals with crawlers is to ban your IP, or even your whole IP range, outright. When an IP gets banned, simply switch to another IP and keep crawling.
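
Scrapy switches proxies by setting request.meta['proxy'] to a URL of the form http://ip:port. A small helper that draws a random entry from a pool (the two pool entries here are copied from the settings.py below for illustration):

```python
import random

# Illustrative pool -- real entries would come from a scraped and validated list
PROXIES = [
    {"ip_port": "111.206.239.40:8081", "user_pass": ""},
    {"ip_port": "218.92.209.74:8080", "user_pass": ""},
]

def random_proxy_url(pool):
    """Pick a random pool entry and format it the way Scrapy expects
    in request.meta['proxy']."""
    proxy = random.choice(pool)
    return "http://%s" % proxy["ip_port"]
```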

5. Strategy 5: Distributed crawling

There is a lot more to say on this topic; for Scrapy specifically, there are GitHub repos dedicated to distributed crawling.
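
One widely used option is scrapy-redis, which swaps Scrapy's scheduler and duplicate filter for Redis-backed versions so that several spider processes can share a single request queue. A settings.py sketch, assuming the scrapy-redis package is installed and a Redis server is reachable (the REDIS_URL is illustrative):

```python
# settings.py -- scrapy-redis sketch (assumes the scrapy-redis package)
SCHEDULER = "scrapy_redis.scheduler.Scheduler"              # shared Redis-backed queue
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"  # dedup shared across processes
REDIS_URL = "redis://localhost:6379"                        # hypothetical local Redis
```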

Case study: crawling xicidaili.com for proxy IPs

XiCiSpiders.py

# -*- coding: utf-8 -*-
import scrapy
from xici.items import XiciItem
class XiCiSpiders(scrapy.Spider):
    name = 'xici'
    start_urls = ['http://www.xicidaili.com/']

    def start_requests(self):
        # Generate one request per listing page (pages 1 to 4 here;
        # raise the upper bound to crawl more pages)
        for i in range(1, 5):
            yield scrapy.Request("http://www.xicidaili.com/nn/%s" % i)

    def parse(self, response):
        trs = response.xpath('//table[@id="ip_list"]/tr')
        for ip in trs[1:]:  # skip the header row
            pre_item = XiciItem()
            pre_item['IP'] = ip.xpath('td[2]/text()')[0].extract()
            pre_item['PORT'] = ip.xpath('td[3]/text()')[0].extract()
            pre_item['POSITION'] = ip.xpath('td[4]/a/text()')[0].extract()
            pre_item['TYPE'] = ip.xpath('td[6]/text()')[0].extract()
            # speed is stored in the td's title attribute; pull the number
            # out with a regex
            pre_item['SPEED'] = ip.xpath(
                'td[7]/div[@class="bar"]/@title').re(r'\d{0,2}\.\d{0,}')[0]
            pre_item['LAST_CHECK_TIME'] = ip.xpath('td[9]/text()')[0].extract()
            yield pre_item

items.py

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class XiciItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    IP = scrapy.Field()
    PORT = scrapy.Field()
    POSITION = scrapy.Field()
    TYPE = scrapy.Field()
    SPEED = scrapy.Field()
    LAST_CHECK_TIME = scrapy.Field()

middlewares.py

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html

from xici.settings import PROXIES
import base64
import random
class RandomUserAgent(object):
    def __init__(self,agents):
        self.agents = agents

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.getlist('USER_AGENTS'))

    def process_request(self, request, spider):
        request.headers.setdefault('User-Agent', random.choice(self.agents))

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        # Only set Proxy-Authorization when the entry actually carries
        # credentials (an empty user_pass string means no auth is needed)
        if proxy['user_pass']:
            encoded_user_pass = base64.b64encode(
                proxy['user_pass'].encode('utf-8')).decode('ascii')
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

settings.py

USER_AGENTS = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
]
PROXIES = [
    {'ip_port': '111.206.239.40:8081', 'user_pass': ''},
    {'ip_port': '218.92.209.74:8080', 'user_pass': ''},
    {'ip_port': '106.39.160.135:8888', 'user_pass': ''},
    {'ip_port': '61.157.248.35:80', 'user_pass': ''},
    {'ip_port': '218.32.94.77:8080', 'user_pass': ''},
    {'ip_port': '222.132.145.122:53281', 'user_pass': ''},
    {'ip_port': '222.244.51.205:9000', 'user_pass': ''},
    {'ip_port': '182.139.81.17:80', 'user_pass': ''},
    {'ip_port': '115.58.174.225:8118', 'user_pass': ''},
    {'ip_port': '61.132.238.92:9999', 'user_pass': ''},
]
COOKIES_ENABLED = False
DOWNLOAD_DELAY = 3
DOWNLOADER_MIDDLEWARES = {
    'xici.middlewares.RandomUserAgent': 1,
    'xici.middlewares.ProxyMiddleware': 100,
}

Run:
scrapy crawl xici -o xici.csv
