Environment
Windows 8, Python 3.7, PyCharm
Walkthrough
1. Installing the Scrapy framework
Run the following in a cmd window:
pip install scrapy
This installs the Scrapy framework.
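To verify the installation, you can print the installed version with Scrapy's own subcommand:
scrapy version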
2. Creating a Scrapy project
In a cmd window, switch to the directory you want; mine is C:\Users\Administrator\PycharmProjects\untitled\tests\scrapy
Run the command below to generate the jianshu project folder inside the current "scrapy" directory:
scrapy startproject jianshu
The folder structure is as follows:
items.py: defines the items (fields) to scrape
middlewares.py: defines the spider and downloader middlewares used during crawling
pipelines.py: defines the item pipelines for processing and storing data
settings.py: the project configuration file
scrapy.cfg: the configuration file used when deploying Scrapy
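For reference, the generated layout looks roughly like this (a sketch; details vary slightly across Scrapy versions):
jianshu/
    scrapy.cfg
    jianshu/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py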
3. Creating the Jianshu spider
Run the following commands in cmd to create a jianshuspider.py file under the "jianshu/spiders" directory:
cd jianshu
scrapy genspider jianshuspider jianshuspider.toscrape.com
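genspider fills the new file with a skeleton roughly like the following (the exact template varies by Scrapy version); the sections below replace it step by step:
import scrapy

class JianshuspiderSpider(scrapy.Spider):
    name = 'jianshuspider'
    allowed_domains = ['jianshuspider.toscrape.com']
    start_urls = ['http://jianshuspider.toscrape.com/']

    def parse(self, response):
        pass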
4. Defining the items to scrape
In items.py, declare the four pieces of information to collect for each hot collection: title, description, article count, and fan count.
import scrapy
from scrapy.item import Item, Field

class JianshuItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = Field()    # collection title
    content = Field()  # collection description
    article = Field()  # article count
    fans = Field()     # fan count
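A Scrapy Item behaves much like a dict whose keys are restricted to the declared Fields, which is what the spider below relies on; a quick sketch:
from jianshu.items import JianshuItem

item = JianshuItem()
item['title'] = 'example'   # OK: title is a declared Field
# item['author'] = 'x'      # would raise KeyError: no such Field
print(dict(item))           # {'title': 'example'}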
5. Writing the main spider
Jianshu's hot collections page loads its content asynchronously. In the browser's Network panel, filter by XHR to find the URL behind the async loading: https://www.jianshu.com/recommendations/collections?page=(1,2,3,4.....)&order_by=hot
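Before writing the spider, you can sanity-check that endpoint outside Scrapy (a sketch using the third-party requests library, which is not otherwise part of this project; the class name checked is the one the spider extracts below):
import requests

url = 'https://www.jianshu.com/recommendations/collections?page=2&order_by=hot'
resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
print(resp.status_code)                 # expect 200
print('collection-wrap' in resp.text)   # the div class the spider parses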
Write the main program in jianshuspider.py:
import scrapy
from scrapy.spiders import CrawlSpider
from scrapy.selector import Selector
from jianshu.items import JianshuItem
from scrapy.http import Request

class Jianshu(CrawlSpider):
    name = 'jianshu'
    # use the real site's domain, otherwise the follow-up requests
    # yielded below would be filtered out as offsite
    allowed_domains = ['jianshu.com']
    start_urls = ['https://www.jianshu.com/recommendations/collections?page=1&order_by=hot']

    def parse(self, response):
        # wrap the page source in a selector
        selector = Selector(response)
        # parse with XPath
        infos = selector.xpath('//div[@class="collection-wrap"]')
        for info in infos:
            # create a fresh item for every collection
            item = JianshuItem()
            title = info.xpath('a[1]/h4/text()').extract()[0]
            content = info.xpath('a[1]/p/text()').extract()
            article = info.xpath('div/a/text()').extract()[0]
            fans = info.xpath('div/text()').extract()[0]
            # if content exists use content[0], otherwise ''
            content = content[0] if content else ''
            item['title'] = title
            item['content'] = content
            item['article'] = article
            item['fans'] = fans
            yield item
        # list comprehension generating the URLs of the remaining pages
        # (Scrapy's dupefilter drops the repeats yielded on later pages)
        urls = ['https://www.jianshu.com/recommendations/collections?page={0}&order_by=hot'.format(str(page)) for page in range(2, 37)]
        for url in urls:
            yield Request(url, callback=self.parse)
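If any of the XPaths comes back empty, it is easiest to test them interactively in scrapy shell before re-running the whole crawl, e.g.:
scrapy shell "https://www.jianshu.com/recommendations/collections?page=1&order_by=hot"
>>> response.xpath('//div[@class="collection-wrap"]/a[1]/h4/text()').extract()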
6. Saving to MongoDB
Use an item pipeline to store the scraped data in MongoDB; in pipelines.py write:
import pymongo

class JianshuPipeline(object):
    def __init__(self):
        '''connect to MongoDB'''
        client = pymongo.MongoClient(host='localhost')
        db = client['test']
        self.post = db['jianshu']

    def process_item(self, item, spider):
        '''write the item into MongoDB'''
        info = dict(item)
        # insert_one replaces the insert() method deprecated in pymongo 3
        self.post.insert_one(info)
        return item
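After a crawl finishes, you can confirm the documents landed (a minimal check against the same localhost instance assumed above):
import pymongo

client = pymongo.MongoClient(host='localhost')
collection = client['test']['jianshu']
print(collection.count_documents({}))  # number of stored items
print(collection.find_one())           # one sample document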
7. Configuring settings.py
BOT_NAME = 'jianshu'
SPIDER_MODULES = ['jianshu.spiders']
NEWSPIDER_MODULE = 'jianshu.spiders'
# copy the User-Agent from the site's request headers
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
ROBOTSTXT_OBEY = True
# wait 5 seconds between requests
DOWNLOAD_DELAY = 5
# enable the item pipeline; 300 is its priority (lower values run first)
ITEM_PIPELINES = {
    'jianshu.pipelines.JianshuPipeline': 300,
}
8. Creating main.py
Create a main.py file in the jianshu project directory, so the crawler can be launched (and debugged) directly from PyCharm, and add the following code:
from scrapy import cmdline
cmdline.execute('scrapy crawl jianshu'.split())
9. Running main.py
Before running, make sure the MongoDB service has been started; once the crawl finishes, the scraped data ends up in the test.jianshu collection.
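If MongoDB was installed as a Windows service (the service name is typically MongoDB, but it may differ on your machine), it can be started from an elevated cmd window with:
net start MongoDB
Otherwise, start the mongod server process manually before running main.py.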