Scraping blog post information with Scrapy
This article uses Scrapy, a Python crawling framework, to collect information about the posts published on cnblogs.com (博客园).
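If Scrapy is not installed yet, it is available from PyPI:

pip install scrapy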
- Create the cnblog project:
scrapy startproject cnblog
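This command creates the standard Scrapy project skeleton; the files edited in the steps below all live under the inner cnblog/ package:

cnblog/
    scrapy.cfg
    cnblog/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py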
- Generate the spider cnblog_spider:
scrapy genspider cnblog_spider cnblogs.com
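This creates cnblog/spiders/cnblog_spider.py containing a boilerplate spider class roughly like the one below; step (4) replaces it with the real parsing logic:

import scrapy

class CnblogSpiderSpider(scrapy.Spider):
    name = 'cnblog_spider'
    allowed_domains = ['cnblogs.com']
    start_urls = ['http://cnblogs.com/']

    def parse(self, response):
        pass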
(1) Add the following settings in settings.py:
BOT_NAME = 'cnblog'
SPIDER_MODULES = ['cnblog.spiders']
NEWSPIDER_MODULE = 'cnblog.spiders'
ROBOTSTXT_OBEY = False
# Override the default request headers
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': "copy this from a real browser request"
}
# Enable the item pipeline defined in step (2)
ITEM_PIPELINES = {
    'cnblog.pipelines.FilePipeline': 300
}
LOG_LEVEL = 'ERROR'
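To double-check that the project picks up these settings, the scrapy settings subcommand (run from the project directory) can print a value back, for example:

scrapy settings --get BOT_NAME
# prints: cnblog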
(2) Edit pipelines.py:
class FilePipeline(object):
    def process_item(self, item, spider):
        # Append each title/link pair of the current item to cnblog.txt
        data = ''
        with open('cnblog.txt', 'a', encoding='utf-8') as f:
            titles = item['title']
            links = item['link']
            for title, link in zip(titles, links):
                data += title + ' ' + link + '\n'
            f.write(data)
        return item
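Reopening cnblog.txt for every item works for a small crawl, but Scrapy pipelines also provide open_spider/close_spider hooks, so the file can be opened only once per run. A minimal sketch of that variant (same output format, same pipeline name):

class FilePipeline(object):
    def open_spider(self, spider):
        # Open the output file once when the crawl starts
        self.file = open('cnblog.txt', 'a', encoding='utf-8')

    def close_spider(self, spider):
        # Close it once when the crawl finishes
        self.file.close()

    def process_item(self, item, spider):
        for title, link in zip(item['title'], item['link']):
            self.file.write(title + ' ' + link + '\n')
        return item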
(3) Edit items.py:
import scrapy

class CnblogItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
(4) Edit the spider file cnblog_spider.py:
import scrapy
from cnblog.items import CnblogItem

class CnblogSpiderSpider(scrapy.Spider):
    name = 'cnblog_spider'
    allowed_domains = ['cnblogs.com']
    url = 'https://www.cnblogs.com/sitehome/p/'
    offset = 1
    start_urls = [url + str(offset)]

    def parse(self, response):
        item = CnblogItem()
        # Collect all post titles and links on the current listing page
        item['title'] = response.xpath('//a[@class="post-item-title"]/text()').extract()
        item['link'] = response.xpath('//a[@class="post-item-title"]/@href').extract()
        yield item
        print("Page {0} crawled".format(self.offset))
        # Follow the next listing page until page 10
        if self.offset < 10:
            self.offset += 1
            url2 = self.url + str(self.offset)
            yield scrapy.Request(url=url2, callback=self.parse)
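The XPath expressions above assume each post title on the listing page is an &lt;a&gt; element with class="post-item-title" (cnblogs markup may change over time); they can be tried out interactively with Scrapy's shell before running the full crawl:

scrapy shell "https://www.cnblogs.com/sitehome/p/1"
>>> response.xpath('//a[@class="post-item-title"]/text()').extract()[:3]
>>> response.xpath('//a[@class="post-item-title"]/@href').extract()[:3]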
- Run the spider via a launcher script main.py (sketched below):
python main.py
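Scrapy does not generate main.py; it is just a small launcher placed next to scrapy.cfg so the crawl can be started from an IDE. A minimal sketch, assuming it simply delegates to the scrapy crawl command (running scrapy crawl cnblog_spider from the project directory does the same thing):

from scrapy import cmdline

# Equivalent to running "scrapy crawl cnblog_spider" in the project directory
cmdline.execute('scrapy crawl cnblog_spider'.split())

When the crawl finishes, the collected titles and links are in cnblog.txt.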