Basic usage of scrapy-splash

Basic usage of scrapy-splash:
    1. Install Splash via Docker
    docker info                                    # show Docker information
    docker images                                  # list all images
    docker pull scrapinghub/splash                 # pull the scrapinghub/splash image
    docker run -p 8050:8050 scrapinghub/splash &   # run Splash on port 8050
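Once the container is running, Splash exposes an HTTP API on port 8050; its render.html endpoint returns a page's HTML after JavaScript has executed. A minimal sketch of building such a request URL (the target URL and wait value here are just examples):

```python
from urllib.parse import urlencode

SPLASH_URL = 'http://localhost:8050'


def render_url(url, wait=0.5):
    """Build a URL for Splash's render.html endpoint, which returns
    the page HTML after JavaScript has executed."""
    return '%s/render.html?%s' % (SPLASH_URL, urlencode({'url': url, 'wait': wait}))


# Fetching this URL (with requests, curl, or a browser) returns the rendered HTML.
print(render_url('http://www.biqugedu.com/0_25/'))
```

Opening that URL in a browser is a quick way to confirm the Splash container is up before wiring it into Scrapy.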

    2. Install the Python package: pip install scrapy-splash

    3. Scrapy configuration (in settings.py):
    SPLASH_URL = 'http://localhost:8050'
    DOWNLOADER_MIDDLEWARES = {
       'scrapy_splash.SplashCookiesMiddleware': 723,
       'scrapy_splash.SplashMiddleware': 725,
       'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
    }
    SPIDER_MIDDLEWARES = {
        'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
    }
    DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
    HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

    4. Usage in a spider
    from scrapy_splash import SplashRequest
    yield SplashRequest(self.start_urls[0], callback=self.parse, args={'wait': 0.5})
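Besides the default render.html endpoint used above, Splash can run a custom Lua script through its execute endpoint, which helps when a page needs extra waiting or interaction before the HTML stabilizes. A sketch of such a script (the wait time and returned fields are illustrative, not from the original post):

```python
# A minimal Lua script for Splash's /execute endpoint; the wait time
# and the returned fields are illustrative assumptions.
lua_script = """
function main(splash, args)
    assert(splash:go(args.url))
    assert(splash:wait(0.5))
    return {html = splash:html()}
end
"""

# In a spider, the script would be sent with:
# yield SplashRequest(url, self.parse, endpoint='execute',
#                     args={'lua_source': lua_script})
print('function main' in lua_script)
```

The returned `html` field becomes the response body that the spider's callback receives.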

 

Test code:

import datetime
import os

import scrapy
from scrapy_splash import SplashRequest

from ..settings import LOG_DIR


class SplashSpider(scrapy.Spider):
    name = 'splash'
    allowed_domains = ['biqugedu.com']
    start_urls = ['http://www.biqugedu.com/0_25/']

    custom_settings = {
        'LOG_FILE': os.path.join(LOG_DIR, '%s_%s.log' % (name, datetime.date.today().strftime('%Y-%m-%d'))),
        'LOG_LEVEL': 'INFO',
        'CONCURRENT_REQUESTS': 8,
        'AUTOTHROTTLE_ENABLED': True,
        'AUTOTHROTTLE_TARGET_CONCURRENCY': 8,

        'SPLASH_URL': 'http://localhost:8050',
        'DOWNLOADER_MIDDLEWARES': {
            'scrapy_splash.SplashCookiesMiddleware': 723,
            'scrapy_splash.SplashMiddleware': 725,
            'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
        },
        'SPIDER_MIDDLEWARES': {
            'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
        },
        'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
        'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage',

    }

    def start_requests(self):
        yield SplashRequest(self.start_urls[0], callback=self.parse, args={'wait': 0.5})

    def parse(self, response):
        """Log the rendered page and check whether the JS-loaded cover
        image URL is present in the HTML.
        """
        response_str = response.body.decode('utf-8', 'ignore')
        self.logger.info(response_str)
        # find() returns -1 if the URL is absent, i.e. the JavaScript did not run
        self.logger.info(response_str.find('http://www.biqugedu.com/files/article/image/0/25/25s.jpg'))
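The parse method logs the result of str.find for the cover-image URL: without Splash, the image tag is inserted by JavaScript and the URL is absent from the raw HTML, so find returns -1; with Splash it appears at a real offset. A self-contained illustration (the HTML snippets are invented stand-ins for the real page):

```python
# The cover-image URL checked by the spider above.
cover = 'http://www.biqugedu.com/files/article/image/0/25/25s.jpg'

# Invented stand-ins for the body before and after JS rendering.
raw_html = '<html><body><div id="fmimg"></div></body></html>'
rendered_html = '<html><body><img src="%s"/></body></html>' % cover

print(raw_html.find(cover))       # -1: URL absent without JS rendering
print(rendered_html.find(cover))  # non-negative index once rendered
```

Seeing a non-negative index in the spider's log is therefore the signal that scrapy-splash delivered the JS-rendered page.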

 

Log output showing that scrapy-splash received the JS-rendered content:

 
