[Following the Example] Scrapy Douyu Cover Crawling Notes

This post walks through using the Scrapy framework to crawl streamer cover images from the Douyu live-streaming platform. First create the project and define the spider with its start URL, then parse the JSON response in the parse callback to extract each streamer's nickname and cover image URL. The images are saved locally through an ImagesPipeline, with each file renamed to <nickname>.jpg. settings.py configures the image storage path and request headers. Finally, the crawl command is run from PyCharm.


Creating the project

scrapy startproject douyu
scrapy genspider douyua www.douyu.com
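
Running these two commands generates the project skeleton. Roughly the following layout is created (exact files can vary slightly between Scrapy versions):

douyu/
    scrapy.cfg
    douyu/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            douyua.py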

Main spider code (douyu/spiders/douyua.py)

import scrapy
import json
from douyu.items import DouyuItem


class DouyuaSpider(scrapy.Spider):
    name = 'douyua'
    # allowed_domains should list bare domains, not full URLs
    allowed_domains = ['douyu.com']
    base_url = 'https://m.douyu.com/api/room/list?page={}&type=yz'
    offset = 0
    start_urls = [base_url.format(offset)]

    def parse(self, response):
        res = json.loads(response.body)['data']
        # Stop once the API returns an empty page
        if len(res['list']) == 0:
            return
        for room in res['list']:
            item = DouyuItem()
            item['nn'] = room['nickname']      # streamer nickname
            item['img'] = room['verticalSrc']  # cover image URL
            yield item
        # Request the next page
        self.offset += 1
        url = self.base_url.format(self.offset)
        # dont_filter=True bypasses the duplicate-request filter
        yield scrapy.Request(url, callback=self.parse, dont_filter=True)
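
The parse callback assumes the m.douyu.com room-list API returns JSON shaped like {"data": {"list": [{"nickname": ..., "verticalSrc": ...}, ...]}}. A quick standalone sanity check of that assumption (a throwaway script using the requests library, not part of the project) could look like this:

import requests

# Same endpoint the spider paginates over; page 0, category 'yz'
url = 'https://m.douyu.com/api/room/list?page=0&type=yz'
headers = {'User-Agent': 'Mozilla/5.0'}

data = requests.get(url, headers=headers).json()['data']
for room in data['list'][:3]:
    # These are the two fields the spider copies into DouyuItem
    print(room['nickname'], room['verticalSrc'])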


items.py code

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class DouyuItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    nn = scrapy.Field()   # streamer nickname
    img = scrapy.Field()  # cover image URL

pipelines.py code

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
# ImagesPipeline does the actual image downloading and saving
import os
import scrapy
from scrapy.pipelines.images import ImagesPipeline


class DouyuPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # Issue a download request for each cover image URL
        yield scrapy.Request(item['img'])

    def item_completed(self, results, item, info):
        # Must point at the same directory as IMAGES_STORE in settings.py
        path = 'G:\\py\\douyu\\douyu\\spiders\\imgs\\'
        # results is a list of (success, file_info) tuples
        if results and results[0][0]:
            # Rename the hash-named download to <nickname>.jpg
            os.rename(path + results[0][1]['path'], path + item['nn'] + '.jpg')
        return item
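
A possible alternative that skips the rename step: newer Scrapy versions (2.4 and later) pass the item into file_path, so the cover can be saved under the nickname directly. This is a sketch under that assumption, not part of the original post; to use it, point ITEM_PIPELINES at this class instead.

import scrapy
from scrapy.pipelines.images import ImagesPipeline


class DouyuNamedPipeline(ImagesPipeline):
    # Hypothetical variant: name the saved file by nickname up front
    def get_media_requests(self, item, info):
        yield scrapy.Request(item['img'])

    def file_path(self, request, response=None, info=None, *, item=None):
        # Path is relative to IMAGES_STORE, so this saves <IMAGES_STORE>/<nickname>.jpg
        return item['nn'] + '.jpg'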

settings.py code

# Scrapy settings for douyu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'douyu'

SPIDER_MODULES = ['douyu.spiders']
NEWSPIDER_MODULE = 'douyu.spiders'

IMAGES_STORE='G:\\py\\douyu\\douyu\\spiders\\imgs\\'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'douyu (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
 }

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'douyu.middlewares.DouyuSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'douyu.middlewares.DouyuDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'douyu.pipelines.DouyuPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Running the spider from PyCharm

from scrapy import cmdline

cmdline.execute('scrapy crawl douyua --nolog'.split())
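
Save this as a small script in the project root (any filename works) and run it from PyCharm. An equivalent sketch using CrawlerProcess, which also loads the project's settings.py, would be:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from douyu.spiders.douyua import DouyuaSpider

# Load settings.py from the project, then run the spider in-process
process = CrawlerProcess(get_project_settings())
process.crawl(DouyuaSpider)
process.start()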

Result screenshot (image not included here)
