scrapy

Basics

1. Installing scrapy and its dependencies
# 1. Before installing scrapy, install the required dependency libraries first, then install scrapy itself. The steps are:
    (1). Install lxml: pip install lxml
    (2). Install wheel: pip install wheel
    (3). Install twisted: pip install <path to the downloaded Twisted .whl file>
    	(Twisted has to be downloaded and installed locally; download it from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted)
    	(Pick the build that matches your Python version and architecture, as shown in the figure below)
    (4). Install pywin32: pip install pywin32
    (5). Install scrapy: pip install scrapy
    	(Note: make sure every step above completes without errors; if a step fails, fix it before moving on)
    (6). Verify the installation: run scrapy in a cmd window; if it prints "Scrapy 1.6.0 - no active project", the installation succeeded

[image: Twisted wheel version selection]
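Besides typing scrapy at the command line, you can also confirm the install from Python itself; a minimal check (the printed version depends on what you installed, 1.6.0 at the time of writing):

# quick sanity check: scrapy imports and reports its version
import scrapy
print(scrapy.__version__)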


2. Creating a project
1. Manually create a directory, e.g. test
2. Inside the test folder, create a spider project named spiderpro: scrapy startproject spiderpro
3. Enter the project folder: cd spiderpro
4. Create a spider file: scrapy genspider <spider name> <domain>
5. Run the project: scrapy crawl demo (the spider name)

3. Project layout
spiderpro
  spiderpro # project package
    __init__.py
    spiders: spider directory
      __init__.py
      tests.py: spider file
    items.py: defines the data structures used to persist the scraped data
    middlewares.py: middleware definitions
    pipelines.py: pipelines, responsible for persistence
    settings.py: configuration file
  venv: virtual environment directory
  scrapy.cfg: scrapy project configuration file

Notes:

(1). spiders: contains the Spider implementations; each Spider lives in its own file
(2). items.py: defines the Item data structures, i.e. which fields the scraped data is stored in
(3). pipelines.py: defines the Item Pipeline implementations
(4). settings.py: the project's global configuration
(5). middlewares.py: defines middleware, both spider middleware and downloader middleware
(6). scrapy.cfg: the scrapy project configuration file; it records the path of the settings module, deployment information, and so on

4. Scrapy architecture: the 5 core components and the data flow

[image: Scrapy architecture and data-flow diagram]

(1). Architecture:

  Scrapy Engine: the engine. It handles the communication, signals and data transfer between the Spiders, Item Pipeline, Downloader and Scheduler.

  Scheduler: receives the requests sent over by the engine, organizes and enqueues them in a certain order, and hands them back to the engine when the engine asks for the next request.

  Downloader: downloads all the Requests sent by the engine and returns the Responses it obtains to the engine, which passes them on to the Spiders for processing.

  Spiders: process all Responses, analyse and extract the data needed for the Item fields, and submit any follow-up URLs to the engine so they can be scheduled again.

  Item Pipeline: processes the Items produced by the Spiders, e.g. deduplication and persistence (writing to a database, writing to a file - in short, saving the data).

  Downloader Middlewares: components you can customize to extend the download behaviour.

  Spider Middlewares: components that let you extend and hook into the communication between the engine and the Spiders (e.g. the Responses going into the Spiders and the Requests coming out of them).
(2). Data flow:

  1. The spider sends requests to the engine, and the engine forwards each request to the scheduler for scheduling.

  2. The scheduler hands the next request back to the engine, and the engine passes it to the downloader, going through the downloader middlewares on the way.

  3. The downloader fetches the request from the server and returns the scraped response to the engine, which passes the response back to the spider.

  4. The spider feeds the response into its parse method for data extraction and item construction, then returns the items to the engine, which forwards them to the pipeline.

  5. The pipeline receives the items and persists the data.

  6. This cycle repeats until the crawl terminates.
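The flow above maps directly onto what a spider yields; a minimal, hypothetical sketch (spider name and URLs are placeholders) showing which yielded object goes where:

# every yielded Request goes engine -> scheduler -> downloader; every yielded item goes engine -> pipelines
import scrapy

class DemoSpider(scrapy.Spider):
    name = 'demo'                              # placeholder spider name
    start_urls = ['http://example.com/']       # placeholder URL

    def parse(self, response):
        # follow-up URL: handed to the engine, scheduled, downloaded, then parsed again
        yield scrapy.Request(url='http://example.com/page2', callback=self.parse)
        # extracted data: handed to the engine and pushed through the item pipelines
        yield {'url': response.url}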

5. Crawling Qiushibaike with the scrapy framework

# Goal: crawl the hot section of Qiushibaike - the title, "funny" count, comment count and author of each entry - parse the scraped data, define a custom item structure for it, and finally store it in a MongoDB database.
# Create the project:
scrapy startproject qsbk # create the project
cd qsbk # switch to the project directory
scrapy genspider qsbk_hot www.qiushibaike.com # create the spider file; qsbk_hot is the spider name, www...com is the crawl scope
# Define the storage fields in the item file:
import scrapy

class QsbkItem(scrapy.Item):
    title = scrapy.Field()    # title
    lau = scrapy.Field()      # "funny" count
    comment = scrapy.Field()  # comment count
    auth = scrapy.Field()     # author
# Define the parsing logic in the spider file
import scrapy
from ..items import QsbkItem

class QsbkHotSpider(scrapy.Spider):
    name = 'qsbk_hot'
    # allowed_domains = ['www.qiushibaike.com'] # not needed, can be commented out
    start_urls = ['http://www.qiushibaike.com/']

    # Idea: each hot entry corresponds to one li tag on the page, so grab all the li tags of a page first, then work on each of them
    def parse(self, response):

        li_list = response.selector.xpath('//div[@class="recommend-article"]/ul/li')

        # Loop over the list of li tags: instantiate an item, extract the fields we need, and assign them to the item
        for li in li_list:

            # instantiate the item
            item = QsbkItem()

            # extract title, lau ("funny" count), comment (comment count) and auth (author)
            title = li.xpath('./div[@class="recmd-right"]/a/text()').extract_first()
            lau = li.xpath('./div[@class="recmd-right"]/div[@class="recmd-detail clearfix"]/div/span[1]/text()').extract_first()
            comment = li.xpath('./div[@class="recmd-right"]/div[@class="recmd-detail clearfix"]/div/span[4]/text()').extract_first()
            auth = li.xpath('./div[@class="recmd-right"]/div[@class="recmd-detail clearfix"]/a/span/text()').extract_first()

            # Some hot entries do not have comments or a "funny" count yet, so handle the missing values
            if not lau:
                lau = None
            if not comment:
                comment = None

            # store the extracted values on the item
            item["title"] = title
            item["lau"] = lau
            item["comment"] = comment
            item["auth"] = auth

            # yield the item; the framework automatically sends it to the registered pipeline classes
            yield item
# Define the pipeline class that stores the data (the pipeline must be enabled in settings first)
import pymongo

class QsbkPipeline(object):
    # connect to the MongoDB database
    conn = pymongo.MongoClient("localhost", 27017)
    db = conn.qiubai
    table = db.qb_hot

    def process_item(self, item, spider):
        # insert the data into the collection
        self.table.insert_one(dict(item))

        # return item so that the next pipeline class (if any) can also receive it
        return item

    def close_spider(self, spider):
        # close the database connection
        self.conn.close()
# Settings used by this example - note this is not the full settings file, only the entries added or changed for this project

# ignore the robots protocol
ROBOTSTXT_OBEY = False

# UA spoofing
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36'

# register the pipeline class
ITEM_PIPELINES = {
    'qsbk.pipelines.QsbkPipeline': 300,
}
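The project is normally started with scrapy crawl qsbk_hot from the command line; if you prefer launching it from a script, a minimal sketch using Scrapy's CrawlerProcess (run it from the project root so the project settings are found):

# run.py - start the qsbk_hot spider from Python instead of the command line
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('qsbk_hot')
process.start()  # blocks until the crawl is finished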

6. Crawling names and image links from xiaohuar.com with scrapy
# Goal: crawl the image src and the name of every entry on the default first page of the "university" section of xiaohuar.com, and store them in MongoDB through a pipeline
# Create the project:
scrapy startproject xiaohuaspider # create the project
cd xiaohuaspider # switch to the project directory
scrapy genspider hua www.baidu.com # create the spider file; hua is the spider name, www.baidu.com is the crawl scope
# Define the item class used to hold the parsed data
import scrapy
class XiaohuaspiderItem(scrapy.Item):
    name = scrapy.Field()
    src = scrapy.Field()
# Define the crawling behaviour and the parsing logic in the spider
import scrapy
from ..items import XiaohuaspiderItem


class HuaSpider(scrapy.Spider):
    name = 'hua'
    # allowed_domains = ['www.baidu.com']
    start_urls = ['http://www.xiaohuar.com/hua/']

    def parse(self, response):
        div_list = response.xpath('//div[@class="img"]')
        for div in div_list:
            item = XiaohuaspiderItem()
            name = div.xpath('.//span/text()').extract_first()
            src = div.xpath('./a/img/@src').extract_first()
            item["name"] = name
            item["src"] = src
            yield item
# Item pipeline: persist the data to MongoDB
import pymongo

class XiaohuaspiderPipeline(object):
    conn = pymongo.MongoClient('localhost', 27017)
    db = conn.xiaohua
    table = db.hua
    def process_item(self, item, spider):
        self.table.insert_one(dict(item))
        return item
    def close_spider(self, spider):
        self.conn.close()
# Settings:
# UA spoofing:
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36'

# ignore the robots protocol:
ROBOTSTXT_OBEY = False

# enable the pipeline class
ITEM_PIPELINES = {
   'xiaohuaspider.pipelines.XiaohuaspiderPipeline': 300,
}

Details

1. Multi-page crawling with scrapy
# Spider: building on the previous example, construct the URLs of the other pages and issue new requests with scrapy.Request, keeping parse as the callback.
# page and base_url are class attributes of the spider; the if-block goes at the end of parse():
page = 1
base_url = 'http://www.xiaohuar.com/list-1-%s.html'

if self.page < 4:
    page_url = self.base_url % self.page
    self.page += 1
    yield scrapy.Request(url=page_url, callback=self.parse)
# (the other files stay unchanged)

2. Crawling detail pages with scrapy
# Goal: crawl joke titles and detail-page links, then follow each link and crawl the joke content from the detail page
# Item: define the fields to persist
import scrapy
class JokeItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    content = scrapy.Field()
# The spider:
# -*- coding: utf-8 -*-
import scrapy
from ..items import JokeItem

class XhSpider(scrapy.Spider):
    name = 'xh'
    # allowed_domains = ['www.baidu.com']
    start_urls = ['http://www.jokeji.cn/list.htm']

    def parse(self, response):
        li_list = response.xpath('//div[@class="list_title"]/ul/li')
        for li in li_list:
            title = li.xpath('./b/a/text()').extract_first()
            link = 'http://www.jokeji.cn' + li.xpath('./b/a/@href').extract_first()
            yield scrapy.Request(url=link, callback=self.detail_parse, meta={"title":title})

    def detail_parse(self, response):
        joke_list = response.xpath('//span[@id="text110"]//text()').extract()
        title = response.meta["title"]
        content = ''
        for s in joke_list:
            content += s
        item = JokeItem()
        item["title"] = title
        item["content"] = content
        yield item
# Pipeline: the actual persistence logic
import pymongo

class JokePipeline(object):
    conn = pymongo.MongoClient('localhost', 27017)
    db = conn.haha
    table = db.hahatable

    def process_item(self, item, spider):
        self.table.insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.conn.close()
# Settings for this project (a sketch follows below):
# - UA spoofing (USER_AGENT)
# - robots protocol (ROBOTSTXT_OBEY)
# - ITEM_PIPELINES
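These are the same three entries used in the earlier examples; a hedged sketch, assuming the project was created as scrapy startproject joke (adjust the module path to your real project name):

# settings.py (the module path 'joke.pipelines.JokePipeline' is an assumption)
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36'
ROBOTSTXT_OBEY = False
ITEM_PIPELINES = {
    'joke.pipelines.JokePipeline': 300,
}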

3. Sending POST requests with scrapy
import scrapy
import json


class FySpider(scrapy.Spider):
    name = 'fy'
    # allowed_domains = ['www.baidu.com']
    start_urls = ['https://fanyi.baidu.com/sug']
    def start_requests(self):
        data = {
            'kw':'boy'
        }
        yield scrapy.FormRequest(url=self.start_urls[0], callback=self.parse, formdata=data)

    def parse(self, response):
        print(response.text)
        print(json.loads(response.text))
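FormRequest sends form-encoded data (the formdata values must be strings). For an API that expects a JSON body instead, a hedged alternative is a plain scrapy.Request with method='POST'; the spider below is purely illustrative and posts to an echo endpoint:

# illustrative JSON POST; httpbin.org/post simply echoes the request body
import json
import scrapy

class JsonPostSpider(scrapy.Spider):
    name = 'json_post_demo'
    start_urls = ['https://httpbin.org/post']

    def start_requests(self):
        payload = {'kw': 'boy'}
        yield scrapy.Request(
            url=self.start_urls[0],
            method='POST',
            body=json.dumps(payload),
            headers={'Content-Type': 'application/json'},
            callback=self.parse,
        )

    def parse(self, response):
        print(json.loads(response.text))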

4. Scrapy middleware
# Middleware types:
	- downloader middleware: DownloaderMiddleware
	- spider middleware: SpiderMiddleware
# What middleware is for:
	- downloader middleware: intercepts requests and responses and can modify them
	- spider middleware: intercepts requests, responses and pipeline items; can modify requests and responses and process items
# Main methods of a downloader middleware:
process_request
process_response
process_exception
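A hedged skeleton showing the return-value semantics of the three hooks (the class name is arbitrary; register it in DOWNLOADER_MIDDLEWARES):

class DemoDownloaderMiddleware(object):

    def process_request(self, request, spider):
        # return None to continue normally, a Response to skip the download,
        # or a Request to send a different request back to the scheduler
        return None

    def process_response(self, request, response, spider):
        # must return a Response (this one or a new one) or a Request to retry
        return response

    def process_exception(self, request, exception, spider):
        # called when the download raises an exception;
        # return None to let other middlewares handle it, or a Response/Request
        return None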
# Example: a downloader middleware that intercepts requests and sets a proxy IP
# The spider:
import scrapy
class DlproxySpider(scrapy.Spider):
    name = 'dlproxy'
    # allowed_domains = ['www.baidu.com']
    start_urls = ['https://www.baidu.com/s?wd=ip']

    def parse(self, response):
        with open('baiduproxy.html', 'w', encoding='utf-8') as f:
            f.write(response.text)
# The downloader middleware (a method inside the middleware class in middlewares.py):
    def process_request(self, request, spider):
        request.meta['proxy'] = 'http://111.231.90.122:8888'
        return None
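The middleware only takes effect once it is enabled in the settings; a hedged sketch, assuming the project package and class use the default names scrapy generated (adjust them to your own middlewares.py):

# settings.py (module path and class name are assumptions)
DOWNLOADER_MIDDLEWARES = {
    'dlproxy.middlewares.DlproxyDownloaderMiddleware': 543,
}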

5. Rotating the User-Agent with a downloader middleware
# The spider:
class DlproxySpider(scrapy.Spider):
    name = 'dlproxy'
    # allowed_domains = ['www.baidu.com']
    start_urls = ['https://www.baidu.com/','https://www.baidu.com/','https://www.baidu.com/','https://www.baidu.com/','https://www.baidu.com/']
    
    def parse(self, response):
        pass
# The middleware:
from scrapy import signals
from fake_useragent import UserAgent
import random

# build a pool of 100 Chrome user agents up front
ua = UserAgent()
ua_list = []
for i in range(100):
    ua_list.append(ua.chrome)

class UADownloaderMiddleware(object):  # use the middleware class name from your own middlewares.py
    ua_pool = ua_list

    def process_request(self, request, spider):
        # request.meta['proxy'] = 'http://111.231.90.122:8888'
        request.headers['User-Agent'] = random.choice(self.ua_pool)
        return None

    def process_response(self, request, response, spider):
        # print the UA that was actually sent, to verify the rotation works
        print(request.headers["User-Agent"])
        return response
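fake_useragent fetches its user-agent data over the network and can fail at startup; a hedged fallback is a small static pool (the UA strings below are just examples):

import random

UA_POOL = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.5 Safari/605.1.15',
]

class StaticUAMiddleware(object):
    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(UA_POOL)
        return None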

Integrating Selenium

1. Using selenium with scrapy
Selenium can scrape dynamically rendered data.
Scrapy by itself cannot execute JavaScript: if the data comes from an ajax request you can call the API directly, but if it is rendered dynamically by JS you need to combine scrapy with selenium.
import scrapy
from selenium import webdriver
from ..items import WynewsItem
from selenium.webdriver import ChromeOptions


class NewsSpider(scrapy.Spider):
    name = 'news'
    # allowed_domains = ['www.baidu.com']
    start_urls = ['https://news.163.com/domestic/']
    option = ChromeOptions()
    option.add_experimental_option('excludeSwitches', ['enable-automation'])
    bro = webdriver.Chrome(executable_path=r'C:\Users\Administrator\Desktop\news\wynews\wynews\spiders\chromedriver.exe', options=option)
    # bro = webdriver.Chrome(executable_path=r'C:\Users\Administrator\Desktop\news\wynews\wynews\spiders\chromedriver.exe')

    def detail_parse(self, response):
        content_list = response.xpath('//div[@id="endText"]/p//text()').extract()
        content = ''
        title = response.meta['title']
        for s in content_list:
            content += s
        item = WynewsItem()
        item["title"] = title
        item["content"] = content
        yield item

    def parse(self, response):
        div_list = response.xpath('//div[contains(@class, "data_row")]')
        for div in div_list:
            link = div.xpath('./a/@href').extract_first()
            title = div.xpath('./div/div[1]/h3/a/text()').extract_first()
            yield scrapy.Request(url=link, callback=self.detail_parse, meta={"title":title})
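The spider creates a shared Chrome instance as a class attribute but never closes it; a hedged way to clean up is a closed() hook on the spider, which Scrapy calls when the spider finishes:

    # inside NewsSpider
    def closed(self, reason):
        # quit the shared browser once the crawl is over
        self.bro.quit()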
# The middleware:
import time
from scrapy.http import HtmlResponse

class WynewsDownloaderMiddleware(object):
    def process_response(self, request, response, spider):
        bro = spider.bro
        if request.url in spider.start_urls:
            bro.get(request.url)
            time.sleep(3)
            js = 'window.scrollTo(0, document.body.scrollHeight)'
            bro.execute_script(js)
            time.sleep(3)
            response_selenium = bro.page_source
            return HtmlResponse(url=bro.current_url, body=response_selenium, encoding="utf-8", request=request)

        return response
# The pipeline:
import pymongo

class WynewsPipeline(object):
    conn = pymongo.MongoClient('localhost', 27017)
    db = conn.wynews
    table = db.newsinfo
    def process_item(self, item, spider):
        self.table.insert_one(dict(item))
        return item

2. Data persistence with pipelines
# Overview:
	1. pipelines are used for data persistence
	2. there are many ways to persist data: MongoDB, MySQL, Redis, CSV
	3. process_item is the one method that must be implemented
# The core methods:
open_spider(self, spider): called when the spider is opened
close_spider(self, spider): called when the spider is closed
from_crawler(cls, crawler): a class method, marked with @classmethod, used to read configuration from the settings
process_item(self, item, spider): interacts with the database and stores the data; this method must be implemented *****
# MongoDB:
import pymongo

class MongoPipeline(object):
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        self.db['news'].insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()
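The pipeline above pulls its connection details from the settings; a hedged example of the corresponding entries (values and project name are placeholders):

# settings.py (values are examples; replace yourproject with your package name)
MONGO_URI = 'mongodb://localhost:27017'
MONGO_DB = 'news'

ITEM_PIPELINES = {
    'yourproject.pipelines.MongoPipeline': 300,
}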
# MySQL:
import pymysql

class MysqlPipeline(object):
    def __init__(self, host, database, user, password, port):
        self.host = host
        self.database = database
        self.user = user
        self.password = password
        self.port = port

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            host=crawler.settings.get('MYSQL_HOST'),
            database=crawler.settings.get('MYSQL_DATABASE'),
            user=crawler.settings.get('MYSQL_USER'),
            password=crawler.settings.get('MYSQL_PASSWORD'),
            port=crawler.settings.get('MYSQL_PORT')
        )

    def open_spider(self, spider):
        self.db = pymysql.connect(host=self.host, user=self.user, password=self.password,
                                  database=self.database, charset='utf8', port=self.port)
        self.cursor = self.db.cursor()

    def process_item(self, item, spider):
        data = dict(item)

        table = 'news'  # target table name; adjust it to your own schema
        keys = ','.join(data.keys())
        values = ','.join(['%s'] * len(data))
        sql = 'insert into %s (%s) values (%s)' % (table, keys, values)
        self.cursor.execute(sql, tuple(data.values()))
        self.db.commit()
        return item
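As with the MongoDB version, the MYSQL_* values are read from the settings; a hedged example (all values are placeholders):

# settings.py (values are examples)
MYSQL_HOST = 'localhost'
MYSQL_DATABASE = 'spider'
MYSQL_USER = 'root'
MYSQL_PASSWORD = '123456'
MYSQL_PORT = 3306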
# A pipeline class for downloading files
# The spider:
import scrapy
from ..items import XhxhItem
class XhSpider(scrapy.Spider):
    name = 'xh'
    # allowed_domains = ['www.baidu.com']
    start_urls = ['http://www.521609.com/qingchunmeinv/']
    def parse(self, response):
        li_list = response.xpath('//div[@class="index_img list_center"]/ul/li')
        for li in li_list:
            item = XhxhItem()
            link = li.xpath('./a[1]/img/@src').extract_first()
            item['img_link'] = 'http://www.521609.com' + link
            print(item)
            yield item
# The item:
import scrapy
class XhxhItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    img_link = scrapy.Field()
# The pipelines:
import scrapy
from scrapy.pipelines.images import ImagesPipeline

class XhxhPipeline(object):
    def process_item(self, item, spider):
        return item


class ImgPipeLine(ImagesPipeline):

    def get_media_requests(self, item, info):
        yield scrapy.Request(url=item['img_link'])

    def file_path(self, request, response=None, info=None):
        url = request.url
        file_name = url.split('/')[-1]
        return file_name

    def item_completed(self, results, item, info):
        return item

# The settings:
ITEM_PIPELINES = {
   'xhxh.pipelines.XhxhPipeline': 300,
   'xhxh.pipelines.ImgPipeLine': 301,
}
IMAGES_STORE = './mvs'
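item_completed above simply returns the item; if you also want to know where each image was stored, results is a list of (success, file_info) tuples, and the stored path can be read from it, e.g. (a sketch; attaching it to the item would require an extra Field):

    def item_completed(self, results, item, info):
        # collect the relative paths (under IMAGES_STORE) of the successfully downloaded images
        image_paths = [file_info['path'] for ok, file_info in results if ok]
        print(image_paths)
        return item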

Please follow the rules when crawling.
