Crawling + Data Analysis - 6: Middleware, and Scraping NetEase News with Scrapy + Selenium

I. Full-site data crawling (manual pagination)

- yield scrapy.Request(url, callback): callback names the function that will parse the response data
# Crawl the first five pages of the Sunshine Hotline (阳光热线) complaint board

import scrapy
from sunLinePro.items import SunlineproItem


class SunSpider(scrapy.Spider):
    name = 'sun'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']
    # Generic URL template for the other pages (do not modify)
    url = 'http://wz.sun0769.com/index.php/question/questionType?type=4&page=%d'
    page = 1

    def parse(self, response):
        print('--------------------------page=', self.page)
        tr_list = response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr')
        for tr in tr_list:
            title = tr.xpath('./td[2]/a[2]/text()').extract_first()
            status = tr.xpath('./td[3]/span/text()').extract_first()
            item = SunlineproItem()
            item['title'] = title
            item['status'] = status
            yield item

        if self.page < 5:
            # the page query parameter grows by 30 per page
            count = self.page * 30
            new_url = format(self.url % count)
            self.page += 1
            # manually send a request for the next page, reusing parse as the callback
            yield scrapy.Request(url=new_url, callback=self.parse)
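The items.py of the sunLinePro project is not shown in the original post; a minimal sketch matching the two fields the spider assigns (title and status) might look like this:

# sunLinePro/items.py -- minimal sketch, assuming only the two fields used by the spider
import scrapy


class SunlineproItem(scrapy.Item):
    title = scrapy.Field()   # complaint title
    status = scrapy.Field()  # processing status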

 

II. Sending POST requests and handling cookies

 

  1. Sending POST requests

    - To send a POST request:
        - override the parent class's start_requests(self) method
        - inside that method, simply yield scrapy.FormRequest(url, callback, formdata)

 

import scrapy


class PostdemoSpider(scrapy.Spider):
    name = 'postDemo'
    # allowed_domains = ['www.xxx.com']
    #https://fanyi.baidu.com/sug
    start_urls = ['https://fanyi.baidu.com/sug']
    # The parent-class start_requests simply sends a GET request for each URL in start_urls:
    # def start_requests(self):
    #     for url in self.start_urls:
    #         yield scrapy.Request(url=url,callback=self.parse)

    def start_requests(self):
        for url in self.start_urls:
            data = {
                'kw':'cat'
            }
            # FormRequest is what is used to send a POST request manually
            yield scrapy.FormRequest(url=url,callback=self.parse,formdata=data)

    def parse(self, response):
        print(response.text)

 

  2. Cookie handling

    - Cookie handling: by default, Scrapy manages cookies automatically, so cookies set by a response are carried on subsequent requests (see the settings sketch below for the related options).
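If you ever need to change that default behaviour, Scrapy exposes the COOKIES_ENABLED and COOKIES_DEBUG settings; a minimal settings.py sketch (not part of the original post):

# settings.py -- cookie-related options (sketch; True is already Scrapy's default for COOKIES_ENABLED)
COOKIES_ENABLED = True   # let the built-in cookie middleware store and resend cookies
COOKIES_DEBUG = True     # log cookies sent in requests and received in responses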

 

III. Passing data between requests (request meta)

Passing parameters along with a request:
    - Use case: if the data you need is not all on the same page (e.g. a list page plus a detail page), you must pass data between requests.
    - Coding workflow:
        - Goal: scrape each movie's name from the list page and its synopsis from the detail page (full-site crawl).
        - Parse the start URL (parse):
            - Extract:
                - the movie name
                - the detail-page URL
                - Manually request the detail-page URL (callback parse_detail) and pass the item along via meta;
                    meta is delivered to the parse_detail callback
                - Build a generic URL template for the other page numbers
                - Outside the for loop, manually request the other pages' URLs (callback => parse)
            - Define the parse_detail callback and parse the movie synopsis inside it. Once parsed, the movie name
                and the synopsis must be packed into the same item.
                - Retrieve the item passed in, store the parsed synopsis in it, and submit the item to the pipeline.

 

# -*- coding: utf-8 -*-
# Crawl the first five pages of the movie site


import scrapy
from moviePro.items import MovieproItem

class MovieSpider(scrapy.Spider):
    name = 'movie'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.4567tv.tv/index.php/vod/show/class/动作/id/1.html']
    url = 'https://www.4567tv.tv/index.php/vod/show/class/动作/id/1/page/%d.html'
    page_num = 2

    def parse_detail(self,response):
        desc = response.xpath('/html/body/div[1]/div/div/div/div[2]/p[5]/span[2]/text()').extract_first()
        # retrieve the item passed in from the originating request via meta
        item = response.meta['item']
        item['desc'] = desc

        yield item
    def parse(self, response):
        li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
        for li in li_list:
            name = li.xpath('./div/a/@title').extract_first()
            detail_url = 'https://www.4567tv.tv'+li.xpath('./div/a/@href').extract_first()
            item = MovieproItem()
            item['name'] = name
            # meta is a dict; it is delivered to the callback as response.meta
            yield scrapy.Request(detail_url,callback=self.parse_detail,meta={'item':item})

        if self.page_num <= 5:
            new_url = format(self.url%self.page_num)
            self.page_num += 1
            yield scrapy.Request(new_url,callback=self.parse)

 

 

 IV. Middleware

 

    - Role of the downloader middleware: intercept, in one place, every request and response issued by the project
    - Intercepting requests:
        - User-Agent spoofing
        - proxy IPs
    - Intercepting responses:

 

   1. UA pool and proxy pool


UA pool: a pool of User-Agent strings - purpose: disguise the project's requests as coming from as many different browser identities as possible.


 Proxy pool: IP proxies

  - Purpose: route the project's requests through as many different IP addresses as possible.

 

 In middlewares.py:

import random

# Intercepts all requests and responses in bulk
class MiddlewearproDownloaderMiddleware(object):
    # UA pool
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
        "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
    ]
    # proxy pool
    PROXY_http = [
        '153.180.102.104:80',
        '195.208.131.189:56055',
    ]
    PROXY_https = [
        '120.83.49.90:9000',
        '95.189.112.214:35508',
    ]

    # Intercepts normal requests: request is the intercepted request, spider is the spider instance
    def process_request(self, request, spider):
        print('this is process_request!!!')
        # UA spoofing
        request.headers['User-Agent'] = random.choice(self.user_agent_list)
        return None

    # Intercepts every response
    def process_response(self, request, response, spider):

        return response

    # Intercepts request objects whose download raised an exception
    def process_exception(self, request, exception, spider):
        print('this is process_exception!!!!')
        # Assign a proxy IP; request.meta['proxy'] must be a full URL including the scheme
        if request.url.split(':')[0] == 'http':
            request.meta['proxy'] = 'http://' + random.choice(self.PROXY_http)
        else:
            request.meta['proxy'] = 'https://' + random.choice(self.PROXY_https)

        # Return the corrected request so it is re-scheduled for download
        return request

 

 In settings.py:
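The original post leaves this part blank. For the middleware above to take effect it has to be enabled in settings.py; a minimal sketch, assuming the project is named middlewearPro as the class name suggests:

# settings.py -- enable the custom downloader middleware (sketch; project name assumed)
DOWNLOADER_MIDDLEWARES = {
   'middlewearPro.middlewares.MiddlewearproDownloaderMiddleware': 543,
}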

 

 

 

 

   2. Intercepting responses and using Selenium

Intercepting responses:
  modify the process_response method in the middleware file



Selenium browser automation:
  - define a bro attribute (the webdriver instance) in the spider class
  - override the parent class's closed method in the spider class and close bro there
  - write the Selenium automation logic in the middleware class's process_response
  

 

Example: scraping NetEase News (网易新闻) data

In the spider file:

 

# -*- coding: utf-8 -*-
import scrapy
from wangyiPro.items import WangyiproItem
from selenium import webdriver
class WangyiSpider(scrapy.Spider):
    name = 'wangyi'
    # allowed_domains = ['www.xxx.com']
    # NetEase News home page
    start_urls = ['https://news.163.com/']

    def __init__(self):
        self.bro = webdriver.Chrome(executable_path=r'C:\Users\oldboy\Desktop\爬虫+数据\tools\chromedriver.exe')

    five_model_urls = []  # stores the URLs of the five target news sections
    # Parse the URLs of the five sections from the NetEase News home page
    def parse(self, response):
        # li elements for all of the sections on the home page
        li_list = response.xpath('//*[@id="index2016_wrap"]/div[1]/div[2]/div[2]/div[2]/div[2]/div/ul/li')
        alist = [3,4,6,7,8]
        for a in alist:
            # li for one of the five target sections
            li = li_list[a]
            # URL of that section's page
            news_url = li.xpath('./a/@href').extract_first()
            self.five_model_urls.append(news_url)
            # request each of the five section pages
            yield scrapy.Request(news_url,callback=self.new_parse)

    # response is the response object of one of the five sections;
    # its body does NOT contain the dynamically loaded news data
    def new_parse(self,response):
        # parse the news entries inside each section
        div_list = response.xpath('/html/body/div/div[3]/div[4]/div[1]/div/div/ul/li/div/div')
        for div in div_list:
            # news title and detail-page URL
            title = div.xpath('./div/div[1]/h3/a/text()').extract_first()
            detail_url = div.xpath('./div/div[1]/h3/a/@href').extract_first()

            if detail_url is not None:
                item = WangyiproItem()
                item['title'] = title
                # request the news detail page to fetch the article body
                yield scrapy.Request(detail_url,callback=self.detail_parse,meta={'item':item})
    def detail_parse(self,response):
        item = response.meta['item']
        content = response.xpath('//*[@id="endText"]//text()').extract()
        content = ''.join(content)

        item['content'] = content

        yield item


    def closed(self,spider):
        self.bro.quit()

 

 

In the middleware file:

 

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
from scrapy.http import HtmlResponse

from time import sleep
class WangyiproDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.


    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None
    # Intercepts responses (1 home page + 5 sections + n detail pages)
    def process_response(self, request, response, spider):
        # This method sees every response (1 + 5 + n);
        # only the five section responses need to be tampered with.
        # To single out those five responses:
        #   locate the target request by its URL,
        #   then the response belonging to that request is the one to replace.
        urls = spider.five_model_urls
        if request.url in urls:
            # Replace the original (insufficient) response with a new one that meets our needs:
            # render the page with Selenium, wrap the rendered source in a new response object, and return it
            bro = spider.bro
            bro.get(request.url)
            sleep(2)
            js = 'window.scrollTo(0,document.body.scrollHeight)'
            bro.execute_script(js)
            sleep(1)
            bro.execute_script(js)
            sleep(1)
            bro.execute_script(js)
            sleep(1)
            # The page source now contains the dynamically loaded news; page_text becomes the body of the new response
            page_text = bro.page_source

            new_response = HtmlResponse(url=bro.current_url,body=page_text,encoding='utf-8',request=request)
            return new_response
        else:
            return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

In settings.py:

BOT_NAME = 'wangyiPro'
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

SPIDER_MODULES = ['wangyiPro.spiders']
NEWSPIDER_MODULE = 'wangyiPro.spiders'



LOG_LEVEL = 'ERROR'

DOWNLOADER_MIDDLEWARES = {
   'wangyiPro.middlewares.WangyiproDownloaderMiddleware': 543,
}


ITEM_PIPELINES = {
   'wangyiPro.pipelines.WangyiproPipeline': 300,
}

 

 Notes

1. Point the spider at the browser driver executable (chromedriver).

2. Modify settings.py.

3. Modify items.py.

4. For persistent storage, modify the pipeline file (items.py and pipeline sketches follow below).
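The original post does not show items.py or pipelines.py for the wangyiPro project. A minimal sketch matching the two fields the spider assigns (title and content) follows; the print-only pipeline is an assumption, and real persistence (file, MySQL, Redis, ...) would replace it:

# wangyiPro/items.py -- sketch with the two fields the spider fills in
import scrapy


class WangyiproItem(scrapy.Item):
    title = scrapy.Field()    # news headline
    content = scrapy.Field()  # article body text


# wangyiPro/pipelines.py -- sketch; swap the print for real persistence
class WangyiproPipeline(object):
    def process_item(self, item, spider):
        print(item['title'])
        return item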

 

Reprinted from: https://www.cnblogs.com/lw1095950124/p/11120879.html
