-
Installing Scrapy
pip install Scrapy
If the install asks for Visual C++, it is probably because Twisted needs to be compiled. You can instead download a prebuilt wheel from https://www.lfd.uci.edu/~gohlke/pythonlibs/ , then open a command prompt in the download folder (type cmd in the Explorer address bar) and install it with pip install Twisted-18.7.0-cp37-cp37m-win_amd64.whl .
A "No module named 'win32api'" error can be fixed with pip install pypiwin32 .
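To check that everything installed correctly, you can open a command prompt and run
scrapy version
which should print the installed Scrapy version.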
-
Creating a new Scrapy project
In a local folder, type cmd in the Explorer address bar to open a command prompt, then run:
scrapy startproject xxx (xxx is the project name)
The generated folder (xxx) contains the following files/directories:
scrapy.cfg : the project's configuration file
xxx/ : the project's Python module; you will add your code here
xxx/items.py : the item definitions for the project
xxx/pipelines.py : the pipelines for the project
xxx/settings.py : the project's settings
xxx/spiders/ : the directory where spider code goes
The project can then be imported into PyCharm.
-
Example: scraping Jianshu (jianshu.com)
1. Define the item in items.py; it is roughly the equivalent of a Java entity class. At first I did not know it had to go in this file, wrote one somewhere else, and it could not be found at runtime.
The file already contains a template; to make the difference easy to see I have pasted the whole file here. JsItem is the class we added.
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class QuotesItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass


class JsItem(scrapy.Item):
    # category
    leibie = scrapy.Field()
    # title
    biaoti = scrapy.Field()
    # body text
    zhengwen = scrapy.Field()
    # word count
    zishu = scrapy.Field()
    # view count
    yuedu = scrapy.Field()
    # comment count
    pinglun = scrapy.Field()
    # like count
    dianzan = scrapy.Field()
    # last edit time
    shijian = scrapy.Field()
    # author
    zuozhe = scrapy.Field()
    # custom id
    zid = scrapy.Field()
    # original url
    yuanwen = scrapy.Field()
I never managed to grab the view count, comment count, and like count later on, so those fields can be ignored.
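For reference, a scrapy.Item is used like a dict: fields are set and read with square brackets, and only declared fields are accepted. A minimal sketch (not part of the project code):

from quotes.items import JsItem

item = JsItem()
item['biaoti'] = 'some title'   # assign a declared field
print(item['biaoti'])           # read it back like a dict entry
# item['foo'] = 'x'             # would raise KeyError: 'foo' is not a declared Field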
2. Write the spider file under the spiders folder.
# -*- coding:utf-8 -*-
import uuid

import scrapy

from quotes.items import JsItem
# from . import jianshuDic


class ToScrapeSpiderXPath(scrapy.Spider):
    name = 'jianshu'
    start_urls = [
        'https://www.jianshu.com/c/V2CqjW',
        # 'https://www.jianshu.com/c/fcd7a62be697',
        # 'https://www.jianshu.com/c/8c92f845cd4d', 'https://www.jianshu.com/c/yD9GAd',
        # 'https://www.jianshu.com/c/1hjajt', 'https://www.jianshu.com/c/cc7808b775b4',
        # 'https://www.jianshu.com/c/7b2be866f564', 'https://www.jianshu.com/c/5AUzod',
        # 'https://www.jianshu.com/c/742422443ad3', 'https://www.jianshu.com/c/vHz3Uc',
        # 'https://www.jianshu.com/c/70b8514fb442', 'https://www.jianshu.com/c/NEt52a',
        # 'https://www.jianshu.com/c/bd38bd199ec6', 'https://www.jianshu.com/c/accb04610749',
        # 'https://www.jianshu.com/c/dqfRwQ', 'https://www.jianshu.com/c/qqfxgN', 'https://www.jianshu.com/c/xYuZYD',
        # 'https://www.jianshu.com/c/263e0ef8c3c3', 'https://www.jianshu.com/c/6fba5273f339',
        # 'https://www.jianshu.com/c/ad41ba5abc09', 'https://www.jianshu.com/c/f6b4ca4bb891',
        # 'https://www.jianshu.com/c/e50258a6a44b', 'https://www.jianshu.com/c/Jgq3Wc', 'https://www.jianshu.com/c/LLCyGH'
    ]
    # https://blog.csdn.net/u014271114/article/details/53082676/
    # https://www.tuicool.com/articles/jyQF32V
    # https://www.jianshu.com/p/acdf9740ec79
    # saving to a database: https://www.jianshu.com/p/acdf9740ec79

    def parse(self, response):
        for d in response.xpath('//ul[@class="note-list"]/li'):
            # get the link to the article
            pageurl = d.xpath('a/@href').extract_first()
            if pageurl is not None:
                link = 'http://www.jianshu.com' + pageurl
                # item = self.load_item(response)
                item = JsItem()
                item['leibie'] = response.xpath('//a[@class="name"]/text()').extract_first()
                item['yuanwen'] = link
                # print(item)
                yield scrapy.Request(link, meta={'item': item}, callback=self.parse_item)

    def parse_item(self, response):
        item = response.meta['item']
        item['biaoti'] = response.xpath('//h1[@class="title"]/text()').extract_first()
        item['shijian'] = response.xpath('//span[@class="publish-time"]/text()').extract_first()
        item['yuedu'] = response.xpath('//span[@class="views-count"]/text()').extract_first()
        item['zishu'] = response.xpath('//span[@class="wordage"]/text()').extract_first()
        item['pinglun'] = response.xpath('//span[@class="comments-count"]/text()').extract_first()
        item['dianzan'] = response.xpath('//span[@class="likes-count"]/text()').extract_first()
        item['zuozhe'] = response.xpath('//span[@class="name"]/a/text()').extract_first()
        # custom id: https://www.cnblogs.com/dkblog/archive/2011/10/10/2205200.html
        item['zid'] = str(uuid.uuid1()).replace('-', '')
        # body text
        zw = ''
        # zws = response.xpath('//div[@class="show-content-free"]/*').extract()
        zws = response.xpath('//div[@class="show-content-free"]/descendant::p/descendant::text()').extract()
        for i in zws:
            zw += i + '\n'
        item['zhengwen'] = zw
        return item
I have commented out most of the start URLs and kept only one, because the full list is too much data for testing. Scrolling to the bottom of the page makes the site load the next page; I have not handled that yet either. The URLs in the comments are some references I consulted and are worth a look. Note the difference between yield and return.
In parse we obtain the link to the article detail page and then make a further request. This is the same situation described in https://www.tuicool.com/articles/jyQF32V (for example a blog or forum, where the list page has the title, summary, and URL, and the detail page has the full content): at the end of parse we request the detail-page link, and via meta and callback the response of this second request is received in parse_item, where the values are assigned to the item. (It took me a long time to wrap my head around this, and then it suddenly just worked, so my explanation may be a bit convoluted.)
How the body text is processed also depends on how the data will be used later on, which determines whether to keep the tags, images, and so on.
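For example, with the same show-content-free div used above, you could keep either the raw HTML or plain text only, which is what the spider does (a sketch; the actual page markup may change):

# 1) keep the original HTML, tags and <img> elements included
html_body = response.xpath('//div[@class="show-content-free"]').extract_first()
# 2) keep plain text only
text_body = '\n'.join(response.xpath('//div[@class="show-content-free"]//text()').extract())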
3. Settings
This part essentially makes the requests look like they come from a browser, by adding default request headers.
# change this to False
ROBOTSTXT_OBEY = False
DEFAULT_REQUEST_HEADERS = {
    'accept': 'image/webp,*/*;q=0.8',
    'accept-language': 'zh-CN,zh;q=0.8',
    'referer': 'https://www.jianshu.com/',
    'user-agent': 'Mozilla/5.0 (Windows NT 6.3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36',
}
4. Writing to the database
Create a table in the database; nothing special here. I am using a local MySQL instance.
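For reference, a table layout that matches the INSERT statement in the pipeline below could be created like this (the column types and lengths are my own guesses; adjust them to your data):

import pymysql

ddl = """
CREATE TABLE IF NOT EXISTS t_jianshu (
    zid      VARCHAR(64) PRIMARY KEY,   -- custom id generated with uuid
    leibie   VARCHAR(255),              -- category
    biaoti   VARCHAR(255),              -- title
    zhengwen TEXT,                      -- body text
    zishu    VARCHAR(64),               -- word count
    shijian  VARCHAR(64),               -- last edit time
    zuozhe   VARCHAR(255),              -- author
    yuanwen  VARCHAR(512)               -- original url
) DEFAULT CHARSET=utf8
"""

conn = pymysql.connect(host="localhost", user="root", passwd="root", charset="utf8")
with conn.cursor() as cursor:
    cursor.execute("USE sale")
    cursor.execute(ddl)
conn.commit()
conn.close()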
Configure the database handling in pipelines.py; see https://www.jianshu.com/p/acdf9740ec79 for reference.
Again, pay attention to the class name, otherwise it will not be found (it has to match the ITEM_PIPELINES setting below).
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql


def dbHandle():
    conn = pymysql.connect(
        host="localhost",
        user="root",
        passwd="root",
        charset="utf8",
        use_unicode=False
    )
    return conn


class QuotesPipeline(object):
    def process_item(self, item, spider):
        return item


class jianshuDB(object):
    def process_item(self, item, spider):
        dbObject = dbHandle()
        cursor = dbObject.cursor()
        cursor.execute("USE sale")
        sql = "INSERT INTO t_jianshu(zid,leibie,biaoti,zhengwen,zishu,shijian,zuozhe,yuanwen) VALUES(%s,%s,%s,%s,%s,%s,%s,%s)"
        try:
            cursor.execute(sql, (item['zid'], item['leibie'], item['biaoti'], item['zhengwen'],
                                 item['zishu'], item['shijian'], item['zuozhe'], item['yuanwen']))
            cursor.connection.commit()
        except BaseException as e:
            print("error here >>>>>>>>>>>>>", e, "<<<<<<<<<<<<< error here")
            dbObject.rollback()
        return item
Then enable it in settings.py (the 300 is the pipeline's priority; lower values run earlier):
# write output to the database
ITEM_PIPELINES = {
    'quotes.pipelines.jianshuDB': 300,
}
5. I almost forgot to explain how to run it in PyCharm.
You can actually run the spider even without writing to the database. The scrapy.cfg file sits in the project root; create a new .py file in that root directory (the name is arbitrary, e.g. start.py) with the content below, and run that file.
# -*- coding:utf-8 -*-
from scrapy import cmdline
cmdline.execute("scrapy crawl jianshu".split())
Here scrapy crawl is fixed; jianshu corresponds to the name defined in the spider file from step 2.
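If you just want to see the results without the database pipeline, the same launcher can also dump the items to a file; the -o option is a built-in Scrapy feature, and the file name here is only an example:

# -*- coding:utf-8 -*-
from scrapy import cmdline
cmdline.execute("scrapy crawl jianshu -o jianshu.json".split())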
There are still some issues I have not handled properly yet; I am just noting them down here for now.