1. Debugging
1.1 Using the logging module
import scrapy
import logging

# Create a logger named after the current module, so log lines
# show which spider they came from.
logger = logging.getLogger(__name__)

class QbSpider(scrapy.Spider):
    name = 'qb'
    allowed_domains = ['qiushibaike.com']
    start_urls = ['http://qiushibaike.com/']

    def parse(self, response):
        for i in range(10):
            item = {}
            item['content'] = "haha"
            # logging.warning(item)
            logger.warning(item)
            yield item
1.2 Saving the log to a local file
In the settings file, set LOG_FILE = './log.log'
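A minimal settings.py fragment might look like the following; LOG_LEVEL is not mentioned above and is shown here only as a common companion setting:

```python
# settings.py (fragment)
LOG_FILE = './log.log'   # write all Scrapy log output to this file
LOG_LEVEL = 'WARNING'    # optional: only record WARNING and above
```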
2. Notes on scrapy.Request

scrapy.Request(url, callback=None, method='GET', headers=None, body=None,
               cookies=None, meta=None, encoding='utf-8', priority=0,
               dont_filter=False, errback=None, flags=None)

Commonly used parameters:
callback: specifies which parse function the response for this URL is handed to
meta: passes data between different parse functions; by default meta also carries some built-in information, such as the download delay and the request depth
dont_filter: tells Scrapy's deduplication not to filter this URL; Scrapy deduplicates URLs by default, so this is important for URLs that need to be requested repeatedly
3. Introduction and usage of Item

import scrapy

class TencentItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    position = scrapy.Field()
    date = scrapy.Field()
4. Understanding Scrapy's log output

### Scrapy version and dependency component info
2019-01-19 09:50:48 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: tencent)
2019-01-19 09:50:48 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.9.0, Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0i 14 Aug 2018), cryptography 2.3.1, Platform Windows