Configuring a Custom Scrapy Template When Creating a Project in Python
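When you run `scrapy startproject`, Scrapy copies its project template directory and renders every file ending in `.tmpl` with Python's `string.Template`, substituting variables such as `$project_name` and `${ProjectName}`. So customizing what every new project's settings.py contains comes down to editing `settings.py.tmpl`, which typically lives at `scrapy/templates/project/module/settings.py.tmpl` inside the installed Scrapy package (the exact path depends on your installation). A minimal sketch of the substitution mechanism follows; Scrapy's real helper lives in scrapy.utils.template, and this is only an illustrative re-implementation:

import string

# Render a one-line template the way `scrapy startproject` renders .tmpl files.
raw = "BOT_NAME = '$project_name'"
rendered = string.Template(raw).substitute(project_name='myproject')
print(rendered)  # BOT_NAME = 'myproject'

The listing below is the customized settings.py.tmpl this article uses.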

# Scrapy settings for $project_name project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = '$project_name'

SPIDER_MODULES = ['$project_name.spiders']
NEWSPIDER_MODULE = '$project_name.spiders'

'''Scrapy provides 5 log levels:

CRITICAL - critical errors
ERROR    - regular errors
WARNING  - warning messages
INFO     - informational messages
DEBUG    - debugging messages'''
LOG_LEVEL = 'WARNING'
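With LOG_LEVEL = 'WARNING', only messages at WARNING level or above reach the log; Scrapy's INFO and DEBUG output (including its per-request log lines) is suppressed. A quick sketch of how the threshold behaves inside a spider, using Scrapy's built-in per-spider logger (the spider name and URL are placeholders):

import scrapy

class DemoSpider(scrapy.Spider):
    name = 'demo'  # hypothetical spider, for illustration only
    start_urls = ['https://example.com']

    def parse(self, response):
        self.logger.debug('filtered out at LOG_LEVEL=WARNING')
        self.logger.warning('still printed at LOG_LEVEL=WARNING')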

'''Some websites don't like being visited by crawler programs, so they inspect whoever
is connecting. If the visitor is a crawler, i.e. not a human clicking through pages,
the site will refuse further access. To keep the program running normally, we need to
hide the crawler's identity. We can do that by setting the User Agent header; "User
Agent" is commonly abbreviated to UA.'''

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = '$project_name (+http://www.yourdomain.com)'

USER_AGENT = 'Mozilla/5.0'

'''To pick one user agent at random when the crawl starts, USER_AGENT can instead be
drawn from a pool. Note that USER_AGENT must be a plain string, not a
{"User-Agent": ...} dict, and this form needs `import random` at the top of the file:

USER_AGENT = random.choice([
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6',
    'Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)',
    'Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6',
    'Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1',
    'Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.3 Mobile/14E277 Safari/603.1.30',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36',
])'''
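The pool above is consulted only once, when the settings module is imported, so the whole crawl still uses a single user agent. To rotate the UA on every request, the usual approach is a small downloader middleware; a sketch follows (the class name, module path, and pool variable are illustrative, not part of the generated template):

# middlewares.py -- illustrative sketch of per-request UA rotation
import random

USER_AGENT_POOL = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36',
]

class RandomUserAgentMiddleware:
    def process_request(self, request, spider):
        # Overwrite the User-Agent header before the request is downloaded.
        request.headers['User-Agent'] = random.choice(USER_AGENT_POOL)

To activate it, register it in DOWNLOADER_MIDDLEWARES, e.g. {'$project_name.middlewares.RandomUserAgentMiddleware': 400}.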

'''Obey robots.txt rules

robots.txt is a file following the Robots exclusion protocol, stored on the website's
server. Its purpose is to tell search-engine crawlers which directories of the site
should not be crawled and indexed. After Scrapy starts, one of the first things it does
is fetch the site's robots.txt file and use it to decide the crawl scope.

Of course, we are not building a search engine, and in some cases the content we want
is exactly what robots.txt forbids. So at times we set this option to False and decline
to obey the Robots protocol.'''
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1  # Delay between downloads, to avoid getting banned

# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16
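Note that by default Scrapy already randomizes the actual wait: with RANDOMIZE_DOWNLOAD_DELAY left at its default of True, each delay is drawn between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY, which makes the request rhythm look less mechanical. A sketch of the related knobs (values are illustrative):

DOWNLOAD_DELAY = 1                  # base delay, in seconds
RANDOMIZE_DOWNLOAD_DELAY = True     # default; actual wait is 0.5x to 1.5x the base delay
CONCURRENT_REQUESTS_PER_DOMAIN = 8  # illustrative per-domain concurrency cap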

# Disable cookies (enabled by default)
# COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
# SPIDER_MIDDLEWARES = {
#     '$project_name.middlewares.${ProjectName}SpiderMiddleware': 543,
# }

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#     '$project_name.middlewares.${ProjectName}DownloaderMiddleware': 543,
# }

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# Disable the Telnet console extension (avoids twisted.internet.error.CannotListenError)
EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    '$project_name.pipelines.${ProjectName}Pipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False
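The ${ProjectName}Pipeline registered in ITEM_PIPELINES above is generated by the companion pipelines.py.tmpl template. A minimal sketch of what that class looks like for a project named myproject, where ${ProjectName} expands to Myproject (the empty-item guard is illustrative, not part of the default template):

# pipelines.py -- minimal sketch
from scrapy.exceptions import DropItem

class MyprojectPipeline:
    def process_item(self, item, spider):
        # Called once per item the spider yields; must return the item,
        # or raise DropItem to discard it.
        if not item:
            raise DropItem('empty item')  # illustrative guard
        return item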

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
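Once this file is saved over the default settings.py.tmpl in the Scrapy installation (path as noted at the top; it may differ per environment), every project created with `scrapy startproject myproject` starts out with these defaults baked in. You can confirm by checking that the generated myproject/myproject/settings.py contains LOG_LEVEL = 'WARNING'.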
