Scrapy

Installing Scrapy
pip install Scrapy

What is Scrapy

  • Scrapy is an application framework written for crawling websites and extracting structured data; we only need to implement a small amount of code to start scraping quickly
  • Scrapy is built on the Twisted asynchronous networking framework, which speeds up our downloads

Scrapy framework flow: [architecture diagram omitted]

The difference between asynchronous and non-blocking

  • Asynchronous: once a call has been issued, it returns immediately, whether or not a result is available
  • Non-blocking: concerns the state of the program while it waits for a call's result; it means that before the result can be obtained, the call does not block the current thread (toy sketch below)
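A toy sketch of the non-blocking case (illustrative only; example.com is a placeholder host):

import socket

sock = socket.socket()
sock.setblocking(False)                  # non-blocking mode: calls return immediately
try:
    sock.connect(('example.com', 80))    # returns before the connection is actually established
except BlockingIOError:
    pass                                 # no result yet, but the current thread was never blocked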

Scrapy framework components

  • Scrapy Engine: the overall commander, responsible for passing data and signals between the different modules; this flow is implemented by Scrapy automatically
  • Scheduler: a queue that stores the requests sent over by the engine and hands them back to the engine; already implemented by Scrapy
  • Downloader: fetches the requests handed over by the engine and returns the responses; already implemented by Scrapy
  • Spider: processes the responses handed over by the engine, extracting data and new URLs and passing them to the engine; must be written by hand
  • Item Pipeline: processes the data passed over by the engine, e.g. storing it; must be written by hand
  • Downloader Middlewares: customizable download extensions, e.g. setting a proxy; usually not written by hand
  • Spider Middlewares: customize requests and filter responses; usually not written by hand

Scrapy Quick Start

Creating a project

  • To create a project with the Scrapy framework, you create it from the command line: first cd into the directory where the project should live, then run the following command
  • scrapy startproject [project_name]

Creating a spider

  • Step 2: scrapy genspider [spider_name] [domain_to_crawl]

Running the spider

  • Run from a terminal such as cmd or cmder (a concrete sequence for this article's project follows below)
  • scrapy crawl [spider_name]
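For the project built later in this article, the concrete sequence is:

scrapy startproject bookdata
cd bookdata
scrapy genspider get_book gushiwen.org
scrapy crawl get_book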

Suppressing Scrapy's framework log output
- In the project's settings.py file, add:
- LOG_LEVEL = 'WARNING'

Directory structure

  • items.py: defines the models for the scraped data
  • middlewares.py: holds the various middleware classes
  • pipelines.py: stores the item models to local disk (i.e. how the scraped data is saved)
  • settings.py: configuration for this crawler (e.g. request headers, how often to send requests, proxy IPs)
  • spiders package: all spiders live inside this package (generated layout shown below)
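For reference, scrapy startproject bookdata (the project name used throughout this article) generates roughly this layout:

bookdata/
    scrapy.cfg            # project deployment configuration
    bookdata/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py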

scrapy.Request essentials

  • callback: specifies which parse function the response for this URL is handed to
  • meta: passes data between different parse functions; meta also carries some information by default, such as the download delay and the request depth
  • dont_filter: stops Scrapy's deduplication from filtering out this URL; Scrapy deduplicates URLs by default, so this matters for URLs that must be requested repeatedly (see the sketch below)
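A minimal sketch (hypothetical spider; the URL is a placeholder) of how these three parameters are typically used together:

import scrapy

class DemoSpider(scrapy.Spider):
    name = 'demo'
    start_urls = ['http://example.com/']

    def parse(self, response):
        for href in response.xpath('//a/@href').getall():
            yield scrapy.Request(
                response.urljoin(href),
                callback=self.parse_detail,       # callback: which function parses this response
                meta={'list_url': response.url},  # meta: data handed over to the callback
                # dont_filter=True would bypass URL deduplication for this request
            )

    def parse_detail(self, response):
        # read the data passed through meta
        yield {'url': response.url, 'from_list': response.meta['list_url']}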

Scrapy in depth: scrapy shell

  • scrapy shell is an interactive console that lets us try out and debug code and test XPath expressions without starting a spider (quick session below)
  • response.url: the URL of the current response
  • response.request.url: the URL of the request that produced the current response
  • response.headers: the response headers
  • response.body: the response body
  • response.request.headers: the request headers of the current response
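A quick session against a placeholder site (example.com stands in for a real target):

scrapy shell "http://example.com/"
>>> response.url
'http://example.com/'
>>> response.xpath('//title/text()').get()
'Example Domain'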

settings.py overview

# -*- coding: utf-8 -*-

# Scrapy settings for bookdata project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'bookdata'   # name of the Scrapy project; also used to build the default User-Agent

SPIDER_MODULES = ['bookdata.spiders']
NEWSPIDER_MODULE = 'bookdata.spiders'

LOG_LEVEL = 'WARNING'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'bookdata (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False     # whether to obey robots.txt rules (disabled here)

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# DOWNLOAD_DELAY = 3   # download delay

# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16     # maximum concurrent requests per domain
# CONCURRENT_REQUESTS_PER_IP = 16         # maximum concurrent requests per single IP

# Disable cookies (enabled by default)
# COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
# SPIDER_MIDDLEWARES = {
#    'bookdata.middlewares.BookdataSpiderMiddleware': 543,
# }

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'bookdata.middlewares.BookdataDownloaderMiddleware': 543,
# }

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'bookdata.pipelines.BookdataPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True

# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5

# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
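One note on the ITEM_PIPELINES value above: the number (here 300) is a priority from 0 to 1000, and pipelines with lower numbers run first. A hypothetical second pipeline (CleanPipeline is an invented name) would be ordered like this:

ITEM_PIPELINES = {
    'bookdata.pipelines.CleanPipeline': 100,     # hypothetical: would run first
    'bookdata.pipelines.BookdataPipeline': 300,  # runs afterwards
}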

Scraping gushiwen.org (a classical Chinese poetry site)

Target site: https://www.gushiwen.org/
The spider file get_book.py under spiders/:

# -*- coding: utf-8 -*-
import scrapy
from ..items import BookdataItem
"""
get : 在scrapy框架中是查找第一个返回
getall : 是查找所有并返回

"""


class GetBookSpider(scrapy.Spider):
    name = 'get_book'
    allowed_domains = ['gushiwen.org']
    start_urls = ['http://gushiwen.org/']

    def parse(self, response):
        results = response.xpath('//div[@class="left"]/div[@class="sons"]')
        for result in results:
            title = result.xpath('.//b/text()').get()    # title of the work
            source = result.xpath('.//p[@class="source"]/a/text()').getall()
            years = source[0]       # year / dynasty
            author = source[1]      # author
            book_text = result.xpath('.//div[@class="contson"]//text()').getall()   # getall() returns a list of text fragments
            book_content = ''.join(book_text).strip()  # join the fragments and strip surrounding whitespace
            item = BookdataItem(title=title, years=years, author=author, book_content=book_content)
            yield item

        html_href = response.xpath('//a[@id="amore"]/@href').get()
        if html_href:
            html_urls = response.urljoin(html_href)   # resolves the relative href against the current response URL
            yield scrapy.Request(html_urls)           # no callback given, so parse() also handles the next page
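With BookdataPipeline enabled in ITEM_PIPELINES (see settings.py above), running the spider writes one JSON line per poem:

scrapy crawl get_book      # items flow through BookdataPipeline into 古诗文.txt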

items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class BookdataItem(scrapy.Item):
    title = scrapy.Field()          # title
    years = scrapy.Field()          # year
    author = scrapy.Field()         # author
    book_content = scrapy.Field()   # content

middlewares.py:

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals


class BookdataSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Request, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class BookdataDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

pipelines.py:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

import csv
import json


class BookdataPipeline(object):
    # Writing items out as JSON lines
    def open_spider(self, spider):
        self.fp = open('古诗文.txt', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.fp.write(json.dumps(dict(item), ensure_ascii=False)+'\n')
        return item

    def close_spider(self, spider):
        self.fp.close()

    # Alternative: writing items to CSV
    # def __init__(self):
    #     self.fp = open('古诗文.csv', 'w', encoding='utf-8', newline='')
    #     self.writer = csv.DictWriter(self.fp, fieldnames=['title', 'years', 'author', 'book_content'])
    #     self.writer.writeheader()
    #
    # def process_item(self, item, spider):
    #     self.writer.writerow(item)
    #     return item
    #
    # def close_spider(self, spider):
    #     self.fp.close()

settings.py:

(Identical to the annotated settings.py shown in the overview above.)
