Scrapy-Splash Rendering

yum-utils provides the yum-config-manager utility; the other two packages are dependencies of the devicemapper storage driver.

[root@localhost visitor]# yum install yum-utils device-mapper-persistent-data lvm2

[root@localhost visitor]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

[root@localhost visitor]# yum list docker-ce --showduplicates | sort -r

[root@localhost visitor]# yum install docker-ce

[root@localhost visitor]# systemctl start docker

[root@localhost visitor]# systemctl status docker

● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2018-03-12 14:38:08 CST; 2s ago

[root@localhost visitor]# systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@localhost ~]# docker pull scrapinghub/splash

[root@localhost ~]# docker run -p 8050:8050 -p 8051:8051 scrapinghub/splash

[root@localhost ~]# firewall-cmd --zone=public --add-port=8050/tcp --permanent

success

[root@localhost ~]# firewall-cmd --reload

success

Open the page http://10.16.14.6:8050/ in a browser to confirm that Splash is running.
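The Splash HTTP API can also be checked from code before wiring it into Scrapy. Below is a minimal sketch, assuming the container started above is reachable at 10.16.14.6:8050 and that the requests library is installed; it asks the standard render.html endpoint to render the same search page the spider uses later.

# Quick check of the Splash render.html endpoint (host/port from the setup above).
import requests

params = {
    'url': 'https://s.taobao.com/search?q=%E7%BE%8E%E9%A3%9F/',
    'wait': 1,  # let the page's JavaScript run for one second before returning
}
resp = requests.get('http://10.16.14.6:8050/render.html', params=params, timeout=60)
print(resp.status_code, len(resp.text))  # the response body is the rendered HTML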

[root@localhost ~]# pip3 install scrapy-splash

Collecting scrapy-splash
Downloading scrapy_splash-0.7.2-py2.py3-none-any.whl
Installing collected packages: scrapy-splash
Successfully installed scrapy-splash-0.7.2

[visitor@localhost demo3]$ python3 -m scrapy genspider taobao s.taobao.com

Created spider 'taobao' using template 'basic' in module:
demo3.spiders.taobao

[visitor@localhost demo3]$ python3 -m scrapy crawl taobao

taobao.py

# -*- coding: utf-8 -*-
import scrapy

class TaobaoSpider(scrapy.Spider):
    name = 'taobao'
    allowed_domains = []
    start_urls = ['https://s.taobao.com/search?q=%E7%BE%8E%E9%A3%9F/']

    def parse(self, response):
        # Extract the product titles from the search-results page
        title = response.xpath('//div[@class="row row-2 title"]/a/text()').extract()
        print('Titles:', title)

Run as-is, this spider usually prints an empty title list, because the Taobao search results are filled in by JavaScript. To render the page with Splash, modify taobao.py and settings.py as follows.

taobao.py

# -*- coding: utf-8 -*-
import scrapy
from scrapy_splash import SplashRequest

class TaobaoSpider(scrapy.Spider):
    name = 'taobao'
    allowed_domains = []
    start_urls = ['https://s.taobao.com/search?q=%E7%BE%8E%E9%A3%9F/']

    def start_requests(self):
        # Route each request through Splash's render.html endpoint,
        # waiting 1 second for the page's JavaScript to run
        for url in self.start_urls:
            yield SplashRequest(url=url, callback=self.parse,
                                args={'wait': 1}, endpoint='render.html')

    def parse(self, response):
        # The response now contains the JavaScript-rendered HTML
        title = response.xpath('//div[@class="row row-2 title"]/a/text()').extract()
        print('Titles:', title)

settings.py

# URL of the Splash rendering service
SPLASH_URL = 'http://127.0.0.1:8050'

# Spider middleware. (The original snippet registered the cache-storage class
# scrapy_splash.SplashAwareFSCacheStorage here, which raised "__init__() missing
# 1 required positional argument: 'settings'"; the spider middleware documented
# by scrapy-splash is the one below, and the cache storage belongs in
# HTTPCACHE_STORAGE instead.)
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
# HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# Splash-aware deduplication filter
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
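
For pages that need more control than a fixed wait, Splash also accepts a Lua script through its execute endpoint, which scrapy-splash exposes via the lua_source argument of SplashRequest. The following is a minimal sketch of an alternative spider written that way; the class name and the script itself are illustrative, not part of the original project.

# -*- coding: utf-8 -*-
# Sketch: drive Splash with a Lua script via the 'execute' endpoint.
import scrapy
from scrapy_splash import SplashRequest

LUA_RENDER = """
function main(splash, args)
    assert(splash:go(args.url))   -- load the page inside the Splash browser
    assert(splash:wait(1))        -- give its JavaScript one second to finish
    return splash:html()          -- hand the rendered HTML back to Scrapy
end
"""

class TaobaoLuaSpider(scrapy.Spider):
    name = 'taobao_lua'
    start_urls = ['https://s.taobao.com/search?q=%E7%BE%8E%E9%A3%9F/']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url=url, callback=self.parse,
                                endpoint='execute',
                                args={'lua_source': LUA_RENDER})

    def parse(self, response):
        titles = response.xpath('//div[@class="row row-2 title"]/a/text()').extract()
        print('Titles:', titles)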