Lately, whenever I get tired of studying Java, I play around with web scraping for a while.
I started with urllib, moved on to requests and regular expressions, used the BeautifulSoup library, used the Selenium library to impersonate a browser and simulate clicks, dealt with proxies and anti-scraping measures, and today I'm using the pyspider framework to fetch JS-rendered data. Honestly, the most important and hardest part is still analyzing the web pages themselves; I'm learning that bit by bit.
Today I wrote a small crawler that scrapes basic game info from the TGBUS (电玩巴士) game library. In pyspider you just click the links to crawl the data. Later I'd like to make it fully automatic, but the TGBUS game library pages describe each game's detail page differently in their source code, so I'm still experimenting.
A quick word about pyspider: a few dozen lines are usually enough to finish a crawler.
pyspider is a powerful web crawler system written by a Chinese developer, with a powerful WebUI. It is written in Python and provides multiprocess handling, deduplication, retry on error, a distributed architecture, and support for multiple database backends. The WebUI includes a script editor, task monitor, project manager, and result viewer.
Filling in code inside the framework is very comfortable.
The steps are as follows.
First, install pyspider:
in cmd, run pip install pyspider (assuming you already have a Python environment installed).
Second, download PhantomJS and configure the environment variable,
i.e. add the path to the PhantomJS directory to the Path user variable.
Third, start pyspider (run pyspider all in cmd), then open a browser and go to localhost:5000,
and you're in the pyspider WebUI.
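The setup steps above can be sketched as commands (assuming Windows cmd; the PhantomJS path below is a placeholder for wherever you unpacked it):

```shell
:: install pyspider (requires an existing Python environment with pip)
pip install pyspider

:: add the PhantomJS directory to Path (placeholder path, use your own)
set Path=%Path%;C:\tools\phantomjs\bin

:: start all pyspider components (scheduler, fetcher, processor, WebUI)
pyspider all

:: then browse to http://localhost:5000 to open the WebUI
```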
From here, just follow the comments in my code.
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# Created on 2018-10-02 18:18:42
# Project: bashi
from pyspider.libs.base_handler import *
import pymongo
class Handler(BaseHandler):
    crawl_config = {
    }
    # database setup, using MongoDB
    client = pymongo.MongoClient("localhost")
    db = client["bashi"]
    # the starting URL
    @every(minutes=24 * 60)
    def on_start(self):
        self.crawl('http://game.tgbus.com/', callback=self.index_page)
    # follow every http link with a CSS selector to reach the detail pages
    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        for each in response.doc('a[href^="http"]').items():
            # for JS-rendered pages, passing fetch_type='js' would hand fetching to PhantomJS
            self.crawl(each.attr.href, callback=self.detail_page)
    # grab the data from the detail page
    @config(priority=2)
    def detail_page(self, response):
        url = response.url
        name = response.doc('#app > div.gl-view-game.gl-view > div.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined.gl-title.glp-theme_dark > div > h1 > span.gl-title_main').text()
        # game_type rather than type, to avoid shadowing the built-in
        game_type = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(1)').text()
        topic = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(2)').text()
        platform = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(3)').text()
        Developers = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(4)').text()
        saleDate = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(6)').text()
        Pattern = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(8)').text()
        view = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(9)').text()
        engine = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(10)').text()
        series = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(11)').text()
        outlook = response.doc('#app > div.gl-view-game.gl-view > div.view-game-news.vd-swimlane.vdp-theme_dark.vdp-background-color_undefined > div > div > div.vd-flexbox.vdp-flex_33 > div > div:nth-child(3) > div > div.vd-card_inner > div:nth-child(12)').text()
        return {
            "name": name,
            "type": game_type,
            "topic": topic,
            "platform": platform,
            "Developers": Developers,
            "saleDate": saleDate,
            "Pattern": Pattern,
            "view": view,
            "engine": engine,
            "series": series,
            "outlook": outlook,
        }
    # store the result
    def on_result(self, result):
        if result:
            self.save_to_mongo(result)
    # insert into MongoDB
    def save_to_mongo(self, result):
        # insert_one replaces the deprecated insert() in pymongo 3+
        if self.db['aodesai'].insert_one(result):
            print('save to mongo', result)
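The detail_page selectors above differ only in the trailing nth-child index, so they could be generated in a loop instead of repeated ten times. A minimal sketch (the field names and indices come from the code above; BASE is shortened here for readability, in the real script it would be the full selector prefix; doc stands in for response.doc):

```python
# mapping of result fields to their nth-child index inside div.vd-card_inner
# (indices taken from the detail_page method above)
FIELDS = {
    "type": 1, "topic": 2, "platform": 3, "Developers": 4,
    "saleDate": 6, "Pattern": 8, "view": 9, "engine": 10,
    "series": 11, "outlook": 12,
}

# BASE is shortened for readability; in the real script it is the full
# '#app > div.gl-view-game... > div.vd-card_inner' prefix used above
BASE = 'div.vd-card_inner > div:nth-child({})'

def extract_fields(doc):
    # doc plays the role of response.doc: a callable taking a CSS selector
    # and returning an object with a .text() method
    return {name: doc(BASE.format(idx)).text() for name, idx in FIELDS.items()}
```

Inside detail_page you could then build the result as dict(name=name, **extract_fields(response.doc)), so adding a field means editing one table entry rather than pasting another long selector.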
Just for fun, really; I'll need the data for a project later on.