Scrapy: crawling image assets from 站长素材 (sc.chinaz.com)

This post walks through using the Scrapy framework to crawl images from the 站长素材 (sc.chinaz.com) material site: spoofing the User-Agent, configuring LOG_LEVEL and ROBOTSTXT_OBEY, parsing the HTML to extract the image src values, and subclassing ImagesPipeline to download and store the images.

Crawling 站长素材 with Scrapy, step by step:

1. Create the project: scrapy startproject <project name> (a consolidated shell session follows this list).

2. Generate the spider: scrapy genspider <spider name> <domain>.

3. Add UA spoofing in settings.py (set USER_AGENT to a real browser string).

4. Set the LOG_LEVEL and ROBOTSTXT_OBEY = False.

5. In the spider file, request the target site and parse the data.

6. Declare the scraped fields in items.py.

7. Add the pipeline configuration to settings.py.

8. Customize the pipeline in pipelines.py: from scrapy.pipelines.images import ImagesPipeline, issue a request per media resource, and implement the file-storage naming; note the split('/')[-1] trick.

9. In settings.py, set IMAGES_STORE (folder name + path) for the downloaded images, e.g. IMAGES_STORE = './img_wws' to match the configuration shown below.

10. Enable the pipeline in settings.py via ITEM_PIPELINES (the entry must name the pipeline class exactly).

11. Run the spider: scrapy crawl img.
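
Putting steps 1, 2 and 11 together, the shell session looks roughly like this; the project name imgsPro and spider name img come from the code below, while the genspider domain argument is an assumption based on the target site:

scrapy startproject imgsPro
cd imgsPro
scrapy genspider img sc.chinaz.com
# ... fill in settings.py, items.py, pipelines.py and spiders/img.py as shown below ...
scrapy crawl img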

img.py

import scrapy
from imgsPro.items import ImgsproItem


class ImgSpider(scrapy.Spider):
    name = 'img'
    # allowed_domains = ['www.xxx.com']
    # page 2 of the listing looks like: https://sc.chinaz.com/tupian/rentiyishu_2.html
    start_urls = ['https://sc.chinaz.com/tupian/rentiyishu.html']
    # URL template for the paginated listing pages (page 2 onwards)
    url = 'https://sc.chinaz.com/tupian/rentiyishu_%d.html'
    page = 2

    def parse(self, response):
        div_list = response.xpath('//*[@id="container"]/div')
        for div in div_list:
            # the page lazy-loads its images: the real URL sits in the custom
            # src2 attribute, while src only holds a placeholder
            src = div.xpath('./div/a/img/@src2').extract_first()
            if src is None:
                continue  # skip container divs that carry no image
            src = 'https:' + src
            item = ImgsproItem()
            item['src'] = src
            yield item

        # follow the next listing pages built from the URL template above
        if self.page <= 3:  # crawl only the first few pages; raise the bound as needed
            new_url = self.url % self.page
            self.page += 1
            yield scrapy.Request(new_url, callback=self.parse)


print('Done!')  # module-level, so this prints when the file is imported, not when the crawl ends
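
To see what the src2 extraction in parse() is doing, here is a minimal, self-contained sketch; the sample HTML is invented for illustration and the real page markup may differ:

from scrapy.selector import Selector

# invented sample mimicking the lazy-loaded markup on the listing page
sample = '''
<div id="container">
  <div><div><a href="/tupian/1.html">
    <img src="placeholder.gif" src2="//scpic.chinaz.net/files/pic/1.jpg">
  </a></div></div>
</div>
'''

sel = Selector(text=sample)
for div in sel.xpath('//*[@id="container"]/div'):
    src = div.xpath('./div/a/img/@src2').extract_first()
    if src:
        print('https:' + src)  # -> https://scpic.chinaz.net/files/pic/1.jpg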

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class ImgsproItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    src = scrapy.Field()  # URL of one image, filled in by the spider
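
Items behave like dicts restricted to their declared fields; a quick illustrative sketch of how the spider fills the field (the URL here is made up):

from imgsPro.items import ImgsproItem

item = ImgsproItem()
item['src'] = 'https://scpic.chinaz.net/files/pic/example.jpg'
print(item['src'])   # -> the stored URL
print(dict(item))    # -> {'src': 'https://scpic.chinaz.net/files/pic/example.jpg'}

Assigning to a key that was not declared in the Item class (e.g. item['title']) raises a KeyError, which catches field-name typos early.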

pipelines.py (the pipeline)

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


import scrapy
from scrapy.pipelines.images import ImagesPipeline


# ImagesPipeline is a pipeline class dedicated to downloading files;
# its downloads are asynchronous and multi-threaded out of the box.
class ImgsPipeline(ImagesPipeline):

    def get_media_requests(self, item, info):
        """Issue a request for each media resource in the item."""
        # the request is built straight from the image URL; no callback is
        # needed, because ImagesPipeline consumes the response itself
        yield scrapy.Request(item['src'])

    def file_path(self, request, response=None, info=None, *, item=None):
        """Name the stored file; the returned path is relative to IMAGES_STORE."""
        # PyCharm may warn that this signature does not match the base method;
        # the keyword-only form used here matches Scrapy >= 2.4
        imgName = request.url.split('/')[-1]  # keep only the last URL segment
        return imgName

    def item_completed(self, results, item, info):
        # the return value is passed on to the next pipeline class, if any
        return item
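
The split('/')[-1] naming rule from step 8, checked in plain Python (the URL is a made-up example):

url = 'https://scpic.chinaz.net/files/pic/pic9/202104/apic32213.jpg'
print(url.split('/')[-1])  # -> apic32213.jpg, saved as ./img_wws/apic32213.jpg

Because each image keeps only its last URL segment, two images with the same file name would overwrite each other; prefixing the name with a hash or page number is a common workaround.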

settings.py (configuration file)

# Scrapy settings for imgsPro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'imgsPro'

SPIDER_MODULES = ['imgsPro.spiders']
NEWSPIDER_MODULE = 'imgsPro.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'
# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
# COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
# }

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
# SPIDER_MIDDLEWARES = {
#    'imgsPro.middlewares.ImgsproSpiderMiddleware': 543,
# }

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'imgsPro.middlewares.ImgsproDownloaderMiddleware': 543,
# }

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # must match the class name defined in pipelines.py
    'imgsPro.pipelines.ImgsPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


# Folder (name + path) where ImagesPipeline stores the downloaded images:
IMAGES_STORE = './img_wws'
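
After a crawl finishes, a quick way to confirm that ImagesPipeline actually wrote files into IMAGES_STORE (a throwaway check script, not part of the project):

import os

store = './img_wws'  # must match IMAGES_STORE above
if os.path.isdir(store):
    for name in sorted(os.listdir(store))[:5]:
        print(name, os.path.getsize(os.path.join(store, name)), 'bytes')
else:
    print('no images downloaded yet')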