Scrapy learning case: scraping movie info from 电影天堂 (dytt8.net)
Difference from yesterday's case: here the image resource and the name resource are not on the same page.
1. Create the Scrapy project
scrapy startproject scrapy_movie
2. Create the spider file (in ../scrapy_movie/scrapy_movie/spiders)
scrapy genspider mv https://dytt8.net/html/gndy/china/index.html
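Before writing any parse logic, it can help to sanity-check the XPath expressions used below interactively. A quick sketch with scrapy shell (assuming the site is reachable from your machine):

scrapy shell "https://dytt8.net/html/gndy/china/index.html"
>>> response.xpath('//div[@class="co_content8"]//td[2]//a[2]/text()').getall()
>>> response.xpath('//div[@class="co_content8"]//td[2]//a[2]/@href').getall()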
Crawling approach
- Use XPath to parse the response data and locate the outermost element wrapping each record
a_list = response.xpath('//div[@class="co_content8"]//td[2]//a[2]')
- Loop over a_list and use XPath again on each element to extract the data (name, href)
name = a.xpath('./text()').extract_first()
href = a.xpath('./@href').extract_first()
- The href only points to the second page; the actual image lives inside it
url = 'https://www.dytt8.net' + href
- Because the image on this second page is fetched with different logic, write a second parse-style callback (parse_second) to extract the image resource; the name that was already extracted can be passed into that callback via the meta parameter
yield scrapy.Request(url=url, callback=self.parse_second, meta={'name': name})
- Instantiate the class from items.py and yield the object to the pipeline
def parse_second(self, response):
    src = response.xpath('//div[@id="Zoom"]//img/@src').extract_first()
    # pick up the value passed along with the request
    name = response.meta['name']
    movie = ScrapyMovieItem(src=src, name=name)
    yield movie
- Define the item in items.py
class ScrapyMovieItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    src = scrapy.Field()
- Set up the pipeline in pipelines.py
class ScrapyMoviePipeline:
    def open_spider(self, spider):
        self.fp = open('movie.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.fp.write(str(item))
        return item

    def close_spider(self, spider):
        self.fp.close()
- Uncomment the ITEM_PIPELINES setting in settings.py
ITEM_PIPELINES = {
"scrapy_movie.pipelines.ScrapyMoviePipeline": 300,
}
3. Run the spider
scrapy crawl mv
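As an aside, Scrapy's built-in feed export can produce valid JSON without any custom pipeline; since FEED_EXPORT_ENCODING = "utf-8" is already set (see settings.py below), Chinese titles stay readable:

scrapy crawl mv -O movie.json

(-O overwrites the file and requires Scrapy 2.0+; lowercase -o appends instead.)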
Anything unclear here was already explained in detail in the previous post.
Code
../scrapy_movie/scrapy_movie/spiders/mv.py
import scrapy

from scrapy_movie.items import ScrapyMovieItem


class MvSpider(scrapy.Spider):
    name = "mv"
    allowed_domains = ["dytt8.net"]
    start_urls = ["https://dytt8.net/html/gndy/china/index.html"]

    def parse(self, response):
        # the name comes from the first page, the image from the second
        a_list = response.xpath('//div[@class="co_content8"]//td[2]//a[2]')
        for a in a_list:
            # grab the name and the link to click through from the first page
            name = a.xpath('./text()').extract_first()
            href = a.xpath('./@href').extract_first()
            # address of the second page
            url = 'https://www.dytt8.net' + href
            yield scrapy.Request(url=url, callback=self.parse_second, meta={'name': name})

    def parse_second(self, response):
        src = response.xpath('//div[@id="Zoom"]//img/@src').extract_first()
        # pick up the value passed along via meta
        name = response.meta['name']
        movie = ScrapyMovieItem(src=src, name=name)
        yield movie
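One optional refinement to the spider above (not part of the original code): instead of hard-coding the host, response.urljoin resolves href against the URL of the page that was just fetched, so the line building url could be written as:

# equivalent to 'https://www.dytt8.net' + href, but follows the response's own scheme and host
url = response.urljoin(href)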
../scrapy_movie/scrapy_movie/items.py
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy
class ScrapyMovieItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    src = scrapy.Field()
../scrapy_movie/scrapy_movie/pipelines.py
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
class ScrapyMoviePipeline:
    def open_spider(self, spider):
        self.fp = open('movie.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.fp.write(str(item))
        return item

    def close_spider(self, spider):
        self.fp.close()
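Note that self.fp.write(str(item)) concatenates Python reprs back to back, so the resulting movie.json is not actually valid JSON. A minimal sketch of an alternative pipeline (the class name ScrapyMovieJsonLinesPipeline is made up here) that emits one JSON object per line, using the ItemAdapter already imported above:

import json

from itemadapter import ItemAdapter


class ScrapyMovieJsonLinesPipeline:
    def open_spider(self, spider):
        self.fp = open('movie.jsonl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # serialize the item as one JSON object per line (JSON Lines format);
        # ensure_ascii=False keeps Chinese movie names human-readable
        self.fp.write(json.dumps(ItemAdapter(item).asdict(), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()

To use it, it would be registered in ITEM_PIPELINES the same way as ScrapyMoviePipeline.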
../scrapy_movie/scrapy_movie/settings.py
# Scrapy settings for scrapy_movie project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = "scrapy_movie"
SPIDER_MODULES = ["scrapy_movie.spiders"]
NEWSPIDER_MODULE = "scrapy_movie.spiders"
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = "scrapy_movie (+http://www.yourdomain.com)"
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
# "Accept-Language": "en",
#}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# "scrapy_movie.middlewares.ScrapyMovieSpiderMiddleware": 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# "scrapy_movie.middlewares.ScrapyMovieDownloaderMiddleware": 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# "scrapy.extensions.telnet.TelnetConsole": None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
"scrapy_movie.pipelines.ScrapyMoviePipeline": 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"
# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
Only the ITEM_PIPELINES block above was uncommented.
The other files are unchanged.
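One caveat worth knowing: with ROBOTSTXT_OBEY = True, Scrapy silently drops requests that the site's robots.txt disallows. If the spider runs but yields nothing, this setting is one of the first things to check; for a throwaway learning exercise it can be disabled (mind the site's terms of use):

ROBOTSTXT_OBEY = False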
Run result
- A movie.json file is generated under ../scrapy_movie/scrapy_movie/spiders (strictly speaking, in whatever directory scrapy crawl was run from, since the pipeline opens the file with a relative path)
Takeaways from this case:
- Passing values from the current callback into another callback via the meta parameter (a dict); a newer alternative, cb_kwargs, is sketched after this list
- Sending:
yield scrapy.Request(url=url, callback=self.parse_second, meta={'name': name})
- Receiving:
name = response.meta['name']
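For reference, Scrapy 1.7+ also offers cb_kwargs, which delivers values as plain keyword arguments of the callback instead of going through meta; a sketch of the same hand-off:

# send: each key in cb_kwargs becomes a keyword argument of the callback
yield scrapy.Request(url=url, callback=self.parse_second, cb_kwargs={'name': name})

# receive: declare the extra parameter directly in the callback's signature
def parse_second(self, response, name):
    ...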