Scrapy: custom commands

Reference: http://www.tuicool.com/articles/UnUBbuJ

1. Create a commands package (a directory containing an empty __init__.py) at the same level as the spiders directory.
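
The resulting layout looks roughly like this (myproject is just a placeholder for the project's package name, not something from the original post):

myproject/
    scrapy.cfg
    myproject/
        settings.py
        spiders/
            ...
        commands/
            __init__.py
            crawlall.py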

2. Inside the commands package, create a new file crawlall.py:

__author__ = 'fuhan'

from scrapy.commands import ScrapyCommand
from scrapy.exceptions import UsageError
from scrapy.utils.conf import arglist_to_dict


class Command(ScrapyCommand):

    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def add_options(self, parser):
        ScrapyCommand.add_options(self, parser)
        parser.add_option("-a", dest="spargs", action="append", default=[], metavar="NAME=VALUE",
                          help="set spider argument (may be repeated)")
        parser.add_option("-o", "--output", metavar="FILE",
                          help="dump scraped items into FILE (use - for stdout)")
        parser.add_option("-t", "--output-format", metavar="FORMAT",
                          help="format to use for dumping items with -o")

    def process_options(self, args, opts):
        ScrapyCommand.process_options(self, args, opts)
        try:
            opts.spargs = arglist_to_dict(opts.spargs)
        except ValueError:
            raise UsageError("Invalid -a value, use -a NAME=VALUE", print_help=False)

    def run(self, args, opts):
        # Schedule every spider in the project (or only the ones named on the
        # command line), then start the reactor once so they all run together.
        spider_loader = self.crawler_process.spider_loader
        for spidername in args or spider_loader.list():
            print("*********crawlall spidername************ " + spidername)
            self.crawler_process.crawl(spidername, **opts.spargs)
        self.crawler_process.start()

The key points here are self.crawler_process.spider_loader.list(), which returns the names of all spiders in the project, and self.crawler_process.crawl(), which schedules each of them; a single call to self.crawler_process.start() then runs them all.
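
The same idea also works outside of a custom command. Below is a minimal standalone sketch (the filename run_all.py is just a placeholder) that uses CrawlerProcess directly; it assumes it is executed from the project root so that get_project_settings() can locate scrapy.cfg:

# run_all.py -- run every spider in the project (standalone sketch)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
process = CrawlerProcess(settings)

# spider_loader.list() returns the names of all spiders found in the project
for name in process.spider_loader.list():
    process.crawl(name)

# Start the Twisted reactor once; it blocks until all spiders have finished
process.start()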

3. Add the following setting to settings.py:

COMMANDS_MODULE = 'your_project_name.commands'
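
Here your_project_name should be replaced with the project's actual package name. Once COMMANDS_MODULE points at the commands package, the new command appears in scrapy -h and can be run like any built-in command, for example scrapy crawlall, or scrapy crawlall -a category=books to pass the same -a NAME=VALUE argument to every spider (category is only an illustrative name).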






Reposted from: https://my.oschina.net/u/2367514/blog/601471
