Set the log level to WARNING so the spider stops flooding the console with DEBUG output while it runs; when the spider misbehaves, turn logging back up to find the cause.
LOG_LEVEL = 'WARNING'
Uncomment the setting below and replace the default string with a real browser User-Agent to mimic a browser.
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'twinkl (+http://www.yourdomain.com)'
Adjust the maximum number of concurrent requests; the default is 16, and resist the urge to crank it up just for speed.
# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 32
Set the download delay; raise it when you need to slow the crawl down and go easier on the target site.
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
Disable cookies for sites that don't require login; it makes the crawler harder to identify.
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
Default request headers are set here; you can add a User-Agent and other headers.
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}
Middlewares: enable the spider middlewares if you need to override some of their methods with your own. I also wrote a random User-Agent middleware, registered below under the downloader middlewares.
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    'twinkl.middlewares.TwinklSpiderMiddleware': 543,
}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    # 'twinkl.middlewares.TwinklDownloaderMiddleware': 543,
    'twinkl.middlewares.RandomUserAgent': 544,
}
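The `RandomUserAgent` middleware registered above lives in `middlewares.py`, which is not shown here. As a minimal sketch (the UA strings and the implementation are my illustration, not the author's actual code), such a downloader middleware could look like this:

```python
import random


class RandomUserAgent:
    """Downloader middleware that picks a random User-Agent per request.
    The UA list below is illustrative; use real, current browser strings."""

    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
        '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 '
        '(KHTML, like Gecko) Version/17.0 Safari/605.1.15',
    ]

    def process_request(self, request, spider):
        # Scrapy calls this hook for every outgoing request; returning None
        # lets the request continue through the rest of the middleware chain.
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
        return None
```

Because Scrapy middlewares are duck-typed, the class needs no base class; registering it in DOWNLOADER_MIDDLEWARES with a priority is enough.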
This is the item pipeline section. When the scraped data needs saving and you have written the save logic in a pipeline, don't forget to enable it here, otherwise nothing gets saved. My pipeline inherits from FilesPipeline, so I also added a file storage path.
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'twinkl.pipelines.TwinklPipeline': 300,
}
FILES_STORE = r'C:\Users\Desktop\twink'
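The `TwinklPipeline` itself is not shown; in a real project it would do `from scrapy.pipelines.files import FilesPipeline` and typically override `file_path()` to control where each file lands. As a hypothetical, standalone sketch of just that naming logic (class and field names are my illustration):

```python
import os


class TwinklPipelineSketch:
    """Standalone sketch of the file-naming logic a FilesPipeline subclass
    might use. A real pipeline would subclass
    scrapy.pipelines.files.FilesPipeline and override file_path()."""

    def file_path(self, request_url, item=None):
        # Group each downloaded file under a folder named after the item's
        # title; fall back to 'misc' when no title was scraped. The returned
        # path is relative to FILES_STORE.
        folder = (item or {}).get('title', 'misc')
        return '{}/{}'.format(folder, os.path.basename(request_url))
```

Note that FilesPipeline only activates when FILES_STORE is set to a valid directory, and without the override it names files by a hash of the URL.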
Whether to obey robots.txt. Some sites won't let you crawl unless this is set to False, so it's best to change it as soon as you start the project.
# Obey robots.txt rules
ROBOTSTXT_OBEY = False