Crawling dynamic data with Python + Docker + Scrapy + Splash

Prerequisites:

Docker installation guide: https://yeasy.gitbooks.io/docker_practice/content/install/ubuntu.html

Scrapy installation guide: http://scrapy-chs.readthedocs.io/zh_CN/1.0/intro/overview.html
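In practice the Python side reduces to one command (note that this post also relies on the scrapy-splash bridge package, which the guides above do not mention):

pip install scrapy scrapy-splash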

Splash installation guide: http://devdoc.net/python/splash-doc-3.2/install.html (use the first option, Linux + Docker)
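The Linux + Docker route from the Splash install docs boils down to two commands (assuming Docker is already running):

docker pull scrapinghub/splash
docker run -p 8050:8050 scrapinghub/splash

Splash's HTTP API then listens on port 8050 of the Docker host; that address is what goes into SPLASH_URL in settings.py below.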


Create a crawler project: on the command line, run scrapy startproject myproject, where myproject is your own project name.

The commands are documented in detail at http://scrapy-chs.readthedocs.io/zh_CN/1.0/topics/commands.html. The directory structure looks roughly as follows; settings.py is the configuration file.
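For reference, scrapy startproject generates a layout like this (per the Scrapy docs):

myproject/
    scrapy.cfg            # deploy configuration file
    myproject/            # project's Python module
        __init__.py
        items.py          # project items definition file
        pipelines.py      # project pipelines file
        settings.py       # project settings file
        spiders/          # directory where the spiders live
            __init__.py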


The code in my project, file by file, is as follows:

domz.py (Splash scripts are written in Lua, so splash_args carries a Lua script)

import scrapy
from scrapy_splash import SplashRequest

class DmozSpider(scrapy.Spider):
    name = "csdn"

    def start_requests(self):
        # The script below is Lua (the 'run' endpoint wraps it in main()):
        # disable private mode, set a desktop user agent, load the page,
        # wait 3 s for the JavaScript to render, then return the rendered HTML.
        splash_args = {"lua_source": """
            --splash.response_body_enabled = true
            splash.private_mode_enabled = false
            splash:set_user_agent("Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36")
            assert(splash:go("http://product.dangdang.com/23992019.html"))
            splash:wait(3)
            return {html = splash:html()}
        """}
        # url=None is fine here: scrapy-splash substitutes 'about:blank',
        # and the real navigation happens inside the Lua script.
        yield SplashRequest(None,
                            endpoint='run',
                            args=splash_args,
                            callback=self.onSave)

    def onSave(self, response):
        # response contains the rendered HTML returned by the script,
        # so the Dangdang price element (#dd-price) is present
        value = response.xpath('//*[@id="dd-price"]/text()').extract()
        print(value)
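If a custom Lua script isn't needed, scrapy-splash's default render.html endpoint does the navigate-and-wait in one call; a minimal sketch of the same request:

        yield SplashRequest("http://product.dangdang.com/23992019.html",
                            callback=self.onSave,
                            args={"wait": 3})  # let the JS render before snapshotting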

run.py (this file is simply the equivalent of launching the crawl from the command line)

from scrapy import cmdline

name = 'csdn'                          # spider name, as declared in domz.py
cmd = 'scrapy crawl {0}'.format(name)
cmdline.execute(cmd.split())           # same as running the command in a shell
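A sketch of an equivalent run.py that stays inside Python via Scrapy's documented CrawlerProcess API, instead of re-parsing a shell command:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # picks up settings.py
process.crawl('csdn')    # spider name, same as 'scrapy crawl csdn'
process.start()          # blocks until the crawl finishes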


settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for myScrapy project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'myScrapy'

SPIDER_MODULES = ['myScrapy.spiders']
NEWSPIDER_MODULE = 'myScrapy.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'myScrapy (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'myScrapy.middlewares.MyscrapySpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,  # required; without it no data is extracted
}
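Note: the scrapy-splash README also registers a spider middleware. It isn't strictly required for this example, but it deduplicates Splash arguments and is needed if you later use cache_args:

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}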

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'myScrapy.pipelines.MyscrapyPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
HTTPCACHE_ENABLED = False
HTTPCACHE_EXPIRATION_SECS = 0
HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
SPLASH_URL = "http://192.168.1.123:8050"    # address of the Splash instance in your own Docker setup
DUPEFILTER_CLASS = "scrapy_splash.SplashAwareDupeFilter"
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
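Before launching the spider, it's worth verifying that Splash is reachable; a quick sanity check against its render.html HTTP endpoint (assumes the requests package; adjust the IP to your own Docker host):

import requests

resp = requests.get("http://192.168.1.123:8050/render.html",
                    params={"url": "https://example.com", "wait": 1})
print(resp.status_code)   # 200 means Splash rendered the page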

With that, everything is configured and the crawl can begin (I crawled a product price from JD; note the spider code above targets a Dangdang page, and the same approach works for both):

Link: https://item.jd.com/5089239.html

Without Splash, Scrapy fetches the raw, unrendered HTML, so the extracted content comes back empty; with Splash it fetches the rendered HTML and the price can be extracted. The program output looked like this:

[screenshot: the price list printed by onSave]

But without Splash, the result was:

[screenshot: an empty list]
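For comparison, the no-Splash variant is just a plain Request inside the same spider; the price element is filled in by JavaScript, so the same XPath comes back empty:

    def start_requests(self):
        # plain Request: only the raw, unrendered HTML is fetched
        yield scrapy.Request("http://product.dangdang.com/23992019.html",
                             callback=self.onSave)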
That is the whole process. Splash can also capture data via HAR records, JS click actions, and other techniques, all covered in the Splash documentation; to dig deeper, see:

http://devdoc.net/python/splash-doc-3.2/index.html
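As one example, a click-then-capture script (a sketch based on the Splash scripting docs; the '#btn' selector is hypothetical) drops into splash_args exactly like the script in domz.py:

splash_args = {"lua_source": """
    assert(splash:go("http://example.com"))
    splash:runjs("document.querySelector('#btn').click()")
    splash:wait(1)
    -- splash:har() returns a HAR record of all network activity
    return {har = splash:har(), html = splash:html()}
"""}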

