Basic usage of scrapy-splash

1. Create the Scrapy project

scrapy startproject jingdong

2. Create the spider (the spider name must not be the same as the project name)

scrapy genspider jd jd.com

3. Start the Splash service (via Docker)

sudo docker run -p 8050:8050 scrapinghub/splash
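Once the container is up, you can sanity-check the service by asking Splash to render a page through its HTTP API (the `render.html` endpoint); this only works while the container from the command above is running:

```shell
# Ask the local Splash instance to render a page and return its HTML.
# 'wait' pauses after the page loads, like the spider's args={'wait': '0.5'}.
curl 'http://localhost:8050/render.html?url=https://example.com&wait=0.5'
```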

4. Install the scrapy-splash package

pip install scrapy-splash

5. Configure the settings.py file

ROBOTSTXT_OBEY = False

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPLASH_URL = 'http://localhost:8050'

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
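Note that SPLASH_URL must point at wherever port 8050 is actually reachable. If Splash runs on a different machine than the spider (for example inside a Docker Machine VM), use that host's address instead of localhost; the address below is only a placeholder:

```python
# settings.py -- hypothetical address of a remote Splash instance;
# replace with the host where the Docker container actually runs.
SPLASH_URL = 'http://192.168.59.103:8050'
```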

6. Override the spider's start_requests method so requests go through Splash

def start_requests(self):
    for url in self.start_urls:
        yield SplashRequest(url,
                            self.parse,
                            args={'wait': '0.5'})
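Beyond args={'wait': ...}, scrapy-splash can also run a Lua script on the Splash side through the `execute` endpoint. A minimal sketch, where the script just loads the page, waits, and returns the rendered HTML; only the argument-building part runs standalone, so the request itself is shown as a comment:

```python
# Sketch of driving Splash with a Lua script via the 'execute' endpoint.
lua_script = """
function main(splash, args)
    splash:go(args.url)
    splash:wait(0.5)
    return splash:html()
end
"""

splash_args = {'lua_source': lua_script}

# Inside a spider's start_requests you would then write:
# yield SplashRequest(url, self.parse, endpoint='execute', args=splash_args)
print('lua_source' in splash_args)  # True
```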

Complete example:

import scrapy
from scrapy_splash import SplashRequest

class JdSpider(scrapy.Spider):
    name = 'jd'
    # allowed_domains = ['jd.com', 'book.jd.com']
    start_urls = ['https://book.jd.com/']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url,
                                self.parse,
                                args={'wait': '0.5'})

    def parse(self, response):
        div_list = response.xpath('//div[@class="book_nav_body"]/div')
        for div in div_list:
            # extract_first() returns the text itself rather than a Selector
            title = div.xpath('./div//h3[@class="item_header_title"]/a/text()').extract_first()
            print(title)
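The XPath logic in parse can be tried out without Scrapy or Splash at all. The snippet below approximates it with the standard library's xml.etree.ElementTree on a made-up fragment shaped like the markup the spider targets (not the real jd.com page, which is JavaScript-rendered; that is precisely why the spider needs Splash):

```python
import xml.etree.ElementTree as ET

# Hypothetical, well-formed fragment mimicking the structure the spider's
# XPath expects.
html = """
<body>
  <div class="book_nav_body">
    <div><div><h3 class="item_header_title"><a>Novels</a></h3></div></div>
    <div><div><h3 class="item_header_title"><a>Biography</a></h3></div></div>
  </div>
</body>
"""

root = ET.fromstring(html)
# Mirrors //div[@class="book_nav_body"]/div and then ./div//h3[...]/a/text()
divs = root.findall(".//div[@class='book_nav_body']/div")
titles = [div.find(".//h3[@class='item_header_title']/a").text for div in divs]
print(titles)  # -> ['Novels', 'Biography']
```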
