I recently finished learning Python and felt that the only way to really understand a programming language is to keep using it, so I picked web scraping as a relatively easy practice project. Without further ado, here are my notes:
Runtime environment:
OS: Win10
Python: 3.6
Scrapy: 1.5.1
IDE: PyCharm
Project code:
So far I've built a small spider that crawls the Douban Top 250 movies. The code applies what I just learned from the Scrapy tutorial, so feedback from more experienced readers is very welcome.
1. The settings.py file
# -*- coding: utf-8 -*-
import time
# Scrapy settings for scrawlDianpingData project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'scrawlDianpingData'
SPIDER_MODULES = ['scrawlDianpingData.spiders']
NEWSPIDER_MODULE = 'scrawlDianpingData.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent.
# Here a random User-Agent is sent instead, via the scrapy-fake-useragent package.
RANDOM_UA_TYPE = 'random'
DOWNLOADER_MIDDLEWARES = {
    # disable Scrapy's built-in User-Agent middleware so the random one takes over
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
}
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    'scrawlDianpingData.middlewares.ScrawldianpingdataSpiderMiddleware': 543,
}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
# 'scrawlDianpingData.middlewares.ScrawldianpingdataDownloaderMiddleware': 543,
# }
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'scrawlDianpingData.pipelines.ScrawldianpingdataPipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
# Don't treat 403 responses as errors; let the spider still see those pages
HTTPERROR_ALLOWED_CODES = [403]
# Export the results to a timestamped CSV file
FEED_FORMAT = "csv"
FEED_EXPORT_ENCODING = 'utf-8-sig'
FEED_URI = "file:///C:/Spider_Output/%s.csv" % ("report_" + time.strftime("%Y%m%d%H%M%S", time.localtime()))
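One note on this file: FEED_URI is evaluated once when Scrapy loads the settings, so each run writes to a fresh timestamped CSV. A quick standalone check of the path it produces (same format string as above):

import time
# prints e.g. file:///C:/Spider_Output/report_20181022153045.csv
print("file:///C:/Spider_Output/%s.csv" % ("report_" + time.strftime("%Y%m%d%H%M%S", time.localtime())))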
2. The main spider, crawlmovie.py
import scrapy
import os

class CrawlMovie(scrapy.Spider):
    name = "movie"
    start_urls = [
        'https://movie.douban.com/top250',
    ]

    def parse(self, response):
        # each div.item block holds one movie entry
        for item in response.css('div.item'):
            yield {
                '排名': item.css('div.pic em::text').extract_first().strip(),
                '电影名': item.css('span.title::text').extract_first().strip(),
                '链接': item.css('div.hd a::attr(href)').extract_first().strip(),
                '详细': item.css('div.bd p::text').extract_first().strip(),
                '评分': item.css('span.rating_num::text').extract_first().strip(),
                '评分人数': item.css('div.star span:nth-child(4)::text').extract_first().strip(),
                # the quote can be missing, so do not chain .strip() here
                '格言': item.css('span.inq::text').extract_first(),
            }
        # follow the "next page" link until the last page is reached
        next_page = response.css('span.next a::attr(href)').extract_first()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
        # delete empty output files left over from earlier runs
        path = 'C:/Spider_Output/'
        for file in os.listdir(path):
            if os.stat(path + file).st_size == 0:
                try:
                    os.remove(path + file)
                    self.log('delete %s success!!!' % file)
                except OSError:
                    print("file is in use.")
3. The program entry point, mainly used for the run configuration (so the spider can be launched from PyCharm)
from scrapy import cmdline
cmdline.execute("scrapy crawl movie".split())
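This is equivalent to running the command below from the project root in a terminal; keeping it in a .py file just makes for a convenient PyCharm run configuration:

scrapy crawl movie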
Problems I ran into during this exercise:
1. Encoding: if the scraped content contains Chinese characters, add the line below to settings.py
FEED_EXPORT_ENCODING = 'utf-8-sig'
I tried the other solutions out there, but this is the only one that worked for me; utf-8-sig writes a BOM, which lets Excel open the CSV with the correct encoding.
2. Some fields may be missing from a page, so use extract_first() for each extraction, and be careful about chaining .strip() after it: extract_first() returns None when nothing matches, and calling .strip() on None raises an AttributeError (which is why the quote field above is not stripped); see the helper sketch after this list.
3. If a site blocks crawlers, remember to add the fake-useragent package (install note after this list).
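For problem 2, a small helper of my own (not part of the Scrapy API) keeps parse() from crashing on missing fields; note that extract_first() also accepts a default= argument for the same purpose:

def safe_extract(selector, query, default=''):
    # extract_first() returns None when nothing matches, so guard before strip()
    value = selector.css(query).extract_first()
    return value.strip() if value is not None else default

# usage inside parse(), e.g.:
# '格言': safe_extract(item, 'span.inq::text'),

For problem 3, the package referenced in settings.py is installed with pip; the RANDOM_UA_TYPE and DOWNLOADER_MIDDLEWARES entries shown earlier are all the configuration it needs:

pip install scrapy-fake-useragent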
Finally, I'm sharing the results as well.