Python crawler: scraping NetEase News with a Scrapy downloader middleware

Project overview: crawl the news titles and article content of the Domestic (国内), International (国际), Data (数读), Military (军事) and Aviation (航空) sections of NetEase News, using a Scrapy downloader middleware together with Selenium to handle the dynamically loaded section pages.
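
For reference, the project skeleton can be created with the standard Scrapy commands; the project and spider names below match the code that follows, and the domain passed to genspider is simply the start URL's host:

scrapy startproject wangyiPro
cd wangyiPro
scrapy genspider wangyi news.163.com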

Main spider code, wangyi.py:

import scrapy
from selenium import webdriver
from selenium.webdriver.edge.service import Service
from wangyiPro.items import WangyiproItem


class WangyiPySpider(scrapy.Spider):
    name = "wangyi"
    # allowed_domains = ["www.xxx.com"]
    start_urls = ["https://news.163.com/"]
    models_url = []     # holds the URLs of the five section pages

    def __init__(self):
        # instantiate a browser object; the downloader middleware reuses it via spider.bro
        service = Service(r'D:\PycharmProjects\Scrapykuangjia\msedgedriver.exe')
        self.bro = webdriver.Edge(service=service)

    def parse(self, response):
        # parse the home page and collect the URLs of the five target sections
        li_list = response.xpath('//*[@id="index2016_wrap"]/div[3]/div[2]/div[2]/div[2]/div/ul/li')
        alist = [2, 3, 4, 5, 6]     # positions of the five sections in the navigation list
        for index in alist:
            model_url = li_list[index].xpath('./a/@href').extract_first()
            self.models_url.append(model_url)

        # request each section page in turn
        for url in self.models_url:
            yield scrapy.Request(url, callback=self.parse_model)

    # the news list on every section page is loaded dynamically, so the corresponding
    # responses are replaced with Selenium-rendered ones in the downloader middleware
    def parse_model(self, response):
        # parse each section page for the news titles and detail-page URLs
        div_list = response.xpath('/html/body/div/div[3]/div[3]/div[1]/div[1]/div/ul/li/div')
        for div in div_list:
            title = div.xpath('./div/div[1]/div/h3/a/text()').extract_first()
            new_detail_url = div.xpath('./div/div[1]/div/h3/a/@href').extract_first()

            item = WangyiproItem()
            item['title'] = title

            # request the news detail page and pass the item along via meta
            yield scrapy.Request(url=new_detail_url, callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        # parse the article body
        content = response.xpath('//*[@id="content"]/div[2]//text()').extract()
        content = ''.join(content)
        item = response.meta['item']
        item['content'] = content

        yield item

    def closed(self, reason):
        # called when the spider finishes; shut down the Selenium browser
        self.bro.quit()
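
If you would rather not have an Edge window pop up while the spider runs, the browser can be started headless. A minimal sketch of an alternative __init__, assuming Selenium 4.x; add the extra import at the top of wangyi.py and the driver path is the same local path used above:

from selenium.webdriver.edge.options import Options

    def __init__(self):
        options = Options()
        options.add_argument('--headless')      # run Edge without opening a visible window
        service = Service(r'D:\PycharmProjects\Scrapykuangjia\msedgedriver.exe')
        self.bro = webdriver.Edge(service=service, options=options)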

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class WangyiproItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    content = scrapy.Field()

middlewares.py

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
from scrapy.http import HtmlResponse
from time import sleep
class WangyiproDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.



    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None
    # this method intercepts the responses of the five section pages and replaces them
    def process_response(self, request, response, spider):     # spider: the running spider instance
        bro = spider.bro    # the browser object created in the spider class
        # pick out only the responses that need replacing:
        # the URL identifies the request, and the request identifies the response
        if request.url in spider.models_url:
            bro.get(request.url)    # load the section page in the real browser
            sleep(2)
            page_text = bro.page_source     # now includes the dynamically loaded news data
            # build a new response object that contains the dynamically loaded news data
            # and return it in place of the original one
            new_response = HtmlResponse(url=request.url, body=page_text, encoding='utf-8', request=request)
            return new_response
        else:
            # every other response is passed through unchanged
            return response



    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass
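
The fixed sleep(2) in process_response is the simplest way to wait for the dynamic content, but it either wastes two seconds or may be too short on a slow connection. Below is a sketch of an explicit wait instead; the CSS selector is only illustrative and would need to be matched to the real page structure:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_news_list(bro, timeout=10):
    # block until at least one headline link is present in the rendered page
    WebDriverWait(bro, timeout).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, 'li div h3 a'))
    )

With this helper, the sleep(2) call becomes wait_for_news_list(bro).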


pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class WangyiproPipeline:
    def process_item(self, item, spider):
        print(item)
        return item
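
WangyiproPipeline above only prints each item. If the results should also be persisted, here is a minimal sketch of an extra pipeline that appends every item to a JSON Lines file (the file name wangyi_news.jl is arbitrary); register it in ITEM_PIPELINES next to, or instead of, the existing pipeline:

import json

class WangyiproJsonlPipeline:
    def open_spider(self, spider):
        # called once when the spider starts
        self.fp = open('wangyi_news.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # write one JSON object per line
        self.fp.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        # called once when the spider finishes
        self.fp.close()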

settings.py

# Scrapy settings for wangyiPro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "wangyiPro"

SPIDER_MODULES = ["wangyiPro.spiders"]
NEWSPIDER_MODULE = "wangyiPro.spiders"


# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.76"

LOG_LEVEL = "ERROR"

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    "wangyiPro.middlewares.WangyiproSpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   "wangyiPro.middlewares.WangyiproDownloaderMiddleware": 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   "wangyiPro.pipelines.WangyiproPipeline": 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
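
With all of the pieces in place, the crawl is started from the project root. The optional -O flag uses Scrapy's built-in feed export to dump the items to a JSON file in addition to whatever the pipeline does:

scrapy crawl wangyi
scrapy crawl wangyi -O wangyi_news.json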

After running, each crawled item (title and content) is printed to the console by the pipeline. (Output screenshot omitted.)
