Web Scraping: Crawling Dynamic Pages with Scrapy and Splash

  1. Install scrapy-splash:
    pip install scrapy-splash
  2. Install Splash:
    sudo docker pull scrapinghub/splash
  3. Run Splash (a quick reachability check follows the command):
    docker run -it -d -p 8050:8050 --name splash scrapinghub/splash
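
With the container up, you can confirm Splash is rendering pages by calling its render.html HTTP endpoint directly. A minimal sketch, assuming Splash is listening on localhost:8050 (substitute your server's address):

import requests

# Ask Splash to load the page, run its JavaScript for 0.5s, and return the HTML.
resp = requests.get(
    'http://localhost:8050/render.html',
    params={'url': 'https://www.toutiao.com/', 'wait': 0.5},
)
print(resp.status_code)  # 200 means Splash rendered the page
print(resp.text[:200])   # first 200 characters of the rendered HTML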
  4. Write the Scrapy project:
    1. Configure settings.py (two more recommended settings are noted after this snippet):

SPLASH_URL = 'http://xxx.xxx.xxx.xxx:8050'  # URL of the Splash server

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
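
The scrapy-splash README also recommends a Splash-aware dupe filter and cache storage, so that two requests to the same URL with different Splash arguments are not collapsed into one. These go in settings.py as well:

# Recommended by the scrapy-splash README: make request deduplication and
# the HTTP cache take Splash arguments into account.
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'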
    2. Write the spider, using Toutiao as the example:
import scrapy
from scrapy.selector import Selector
from scrapy_splash import SplashRequest


class MySpider(scrapy.Spider):
    name = 'ddd'

    def start_requests(self):
        url = 'https://www.toutiao.com/'
        # Route the request through Splash; 'wait' gives the page 0.5s to
        # finish executing its JavaScript before the HTML is rendered.
        yield SplashRequest(url=url, callback=self.parse,
                            args={'wait': 0.5}, dont_filter=True)

    def parse(self, response):
        xbody = Selector(response=response)
        titles = xbody.xpath("//p[@class='title']/text()").extract()
        for title in titles:
            # On Python 3 this prints UTF-8 text directly; the old Python 2
            # version needed .encode("gbk", 'ignore') to avoid garbled output.
            print(title)
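
If the page needs more than a fixed wait (scrolling, clicking, multi-step loading), Splash can also run a Lua script through its execute endpoint. A minimal sketch, assuming the same Splash instance and settings as above (the spider name and script are illustrative):

import scrapy
from scrapy_splash import SplashRequest

# Illustrative Lua script: load the page, let its JavaScript run for 1s,
# then hand the rendered HTML back to Scrapy.
LUA_SCRIPT = """
function main(splash, args)
    splash:go(args.url)
    splash:wait(1.0)
    return splash:html()
end
"""


class LuaSpider(scrapy.Spider):
    name = 'ddd_lua'  # hypothetical name for this variant

    def start_requests(self):
        yield SplashRequest(
            url='https://www.toutiao.com/',
            callback=self.parse,
            endpoint='execute',              # run the Lua script instead of render.html
            args={'lua_source': LUA_SCRIPT},
        )

    def parse(self, response):
        # response.text is the HTML returned by splash:html()
        self.logger.info(response.text[:200])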