Using Scrapy, the Python crawler framework

Installation

Install from PyPI (the -i flag points pip at the Tsinghua mirror, which is faster inside China):

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple scrapy

Create a crawler project

scrapy startproject mypachong

Project structure
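The startproject command generates Scrapy's standard template; the resulting layout looks roughly like this (file names from the default template):

```
mypachong/
├── scrapy.cfg            # deployment configuration
└── mypachong/
    ├── __init__.py
    ├── items.py          # item definitions
    ├── middlewares.py    # spider/downloader middlewares
    ├── pipelines.py      # item pipelines
    ├── settings.py       # project settings
    └── spiders/          # spiders live here
        └── __init__.py
```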
Create a spider

scrapy genspider quotes book.zongheng.com

(genspider takes the spider name followed by the domain it is allowed to crawl.)

Parse the text content

import scrapy

from mypachong.items import MypachongItem


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['book.zongheng.com']
    start_urls = ['http://book.zongheng.com/showchapter/2313244.html']

    def parse(self, response):
        # Each <li> in the chapter list links to one chapter page.
        chapters = response.css('.chapter-list li')
        for chapter in chapters:
            href = chapter.css('a::attr(href)').extract_first()
            url = response.urljoin(href)
            yield scrapy.Request(url=url, callback=self.parse_content)

    def parse_content(self, response):
        # The book title is the third breadcrumb link; the chapter
        # title sits in the .title_txtbox element.
        book = response.css('.reader_crumb a::text').extract()[2]
        chapter = response.css('.title_txtbox::text').extract_first()
        for content in response.css('.content'):
            item = MypachongItem()
            item['text'] = content.css('p::text').extract()
            item['book'] = book
            item['chapter'] = chapter
            yield item
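response.urljoin above resolves the relative href from each chapter link against the current page URL; it behaves like the standard library's urllib.parse.urljoin. A quick sketch (the chapter path below is a made-up example, not a real link from the site):

```python
from urllib.parse import urljoin

# Hypothetical relative href, as might come from a chapter <a> tag.
page = 'http://book.zongheng.com/showchapter/2313244.html'
href = '/chapter/2313244/39443490.html'

# An absolute path replaces the page's path component.
print(urljoin(page, href))
# http://book.zongheng.com/chapter/2313244/39443490.html
```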

The item class goes in items.py:

import scrapy

class MypachongItem(scrapy.Item):
    text = scrapy.Field()
    book = scrapy.Field()
    chapter = scrapy.Field()

Randomize the User-Agent header (a downloader middleware):

import random

# Pool of desktop User-Agent strings to rotate through.
USER_AGENTS = [
    'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36',
    'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11',
    'Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko'
]

class RandomUA(object):
    def process_request(self, request, spider):
        # Attach a randomly chosen User-Agent to each outgoing request.
        ua = random.choice(USER_AGENTS)
        request.headers.setdefault('User-Agent', ua)
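The middleware only takes effect once it is registered in settings.py. A sketch, assuming RandomUA lives in the project's middlewares.py (adjust the dotted path if you put it elsewhere; the number is the middleware's priority):

```python
# settings.py (sketch -- module path assumed, not from the original post)
DOWNLOADER_MIDDLEWARES = {
    'mypachong.middlewares.RandomUA': 543,
}
```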

Process the scraped items (an item pipeline in pipelines.py):

import json

class MypachongPipeline(object):
    def __init__(self):
        self.file = open('items.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # Write each item as one JSON object per line (JSON Lines).
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    def close_spider(self, spider):
        # Close the output file when the spider finishes.
        self.file.close()
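Like the middleware, the pipeline must be enabled in settings.py before Scrapy will call it. A sketch, assuming the class sits in the project's pipelines.py:

```python
# settings.py (sketch -- module path assumed, not from the original post)
ITEM_PIPELINES = {
    'mypachong.pipelines.MypachongPipeline': 300,
}
```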

Run the spider

scrapy crawl quotes
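Because the pipeline writes one standalone JSON object per line, the output can be parsed back line by line with json.loads. A self-contained round-trip sketch with a made-up item:

```python
import json

# A made-up item, shaped like what the pipeline writes to items.json.
line = json.dumps(
    {'book': 'ExampleBook', 'chapter': 'Chapter 1', 'text': ['Hello.']},
    ensure_ascii=False,
)

# Parsing one line back recovers the original dict.
item = json.loads(line)
print(item['book'], item['chapter'])
```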