pyspider architecture -- official documentation

Original article: http://docs.pyspider.org/en/latest/Architecture/

Architecture

This document describes the reasons why I made pyspider and its architecture.

Why

Two years ago, I was working on a vertical search engine. We were facing the following needs for crawling:

  1. collect 100-200 websites; they may go on/offline or change their templates at any time

    We need a really powerful monitor to find out which websites are changing, and a good tool to help us write the script/template for each website.

  2. data should be collected within 5 minutes of a website update

    We solve this problem by checking the index page frequently and using something like 'last update time' or 'last reply time' to determine which pages have changed. In addition, we recheck pages after X days to prevent omissions.
    pyspider will never stop, as the WWW is changing all the time.

Furthermore, we have some APIs from our partners; the APIs may need POST, a proxy, request signatures, etc. Full control from the script is more convenient than some global parameters on the components.

Overview

The following diagram shows an overview of the pyspider architecture with its components and an outline of the data flow that takes place inside the system.

[pyspider architecture diagram]

Components are connected by message queues. Every component, including the message queue, runs in its own process/thread and is replaceable. That means that when processing is slow, you can run many instances of the Processor to make full use of multiple CPUs, or deploy to multiple machines. This architecture makes pyspider really fast; see the benchmarking results.
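
As a hedged illustration of this layout (the connection strings, ports and credentials below are assumptions for the example, not values from this document), a shared config.json can point every component at the same message queue and databases, and each component can then be started in its own process or on its own machine:

    # A minimal deployment sketch (assumed DSNs/ports): every component reads the
    # same config, so extra fetcher/processor instances can be added freely.
    import json

    config = {
        # databases shared by all components (hypothetical MySQL DSNs)
        "taskdb": "mysql+taskdb://user:password@127.0.0.1:3306/taskdb",
        "projectdb": "mysql+projectdb://user:password@127.0.0.1:3306/projectdb",
        "resultdb": "mysql+resultdb://user:password@127.0.0.1:3306/resultdb",
        # message queue connecting scheduler, fetcher, processor and result worker
        "message_queue": "redis://127.0.0.1:6379/0",
    }

    with open("config.json", "w") as f:
        json.dump(config, f, indent=2)

    # Each component then runs as its own process, e.g.:
    #   pyspider -c config.json scheduler
    #   pyspider -c config.json fetcher
    #   pyspider -c config.json processor
    #   pyspider -c config.json result_worker
    #   pyspider -c config.json webui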

Components

Scheduler

The Scheduler receives tasks from the newtask_queue sent by the Processor. It decides whether a task is new or requires a re-crawl, sorts tasks according to priority, and feeds them to the Fetcher with traffic control (token bucket algorithm). It takes care of periodic tasks, lost tasks and failed tasks, and retries them later.

All of the above can be set via the self.crawl API.
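
As a hedged sketch (the URL and option values are illustrative assumptions, not taken from this document), a script typically passes these scheduling hints straight to self.crawl:

    # A minimal sketch of scheduler-facing options on self.crawl (illustrative values).
    from pyspider.libs.base_handler import *


    class Handler(BaseHandler):
        @every(minutes=24 * 60)           # periodic task: re-run on_start once a day
        def on_start(self):
            self.crawl(
                'http://example.com/',    # hypothetical entry URL
                callback=self.index_page,
                priority=2,               # higher-priority tasks are scheduled first
                age=10 * 24 * 60 * 60,    # consider the page valid for 10 days (no re-crawl before)
                retries=3,                # retry a failed task up to 3 times
            )

        def index_page(self, response):
            return {'title': response.doc('title').text()}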

Note that in the current implementation of the scheduler, only one scheduler instance is allowed.

Fetcher

The Fetcher is responsible for fetching web pages and then sending the results to the Processor. For flexibility, the Fetcher supports Data URIs and pages rendered by JavaScript (via phantomjs). The fetch method, headers, cookies, proxy, etag, etc. can be controlled by the script via the API.
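
A hedged sketch of that control from the script side (the endpoint, payload, proxy and cookie values are illustrative assumptions):

    # A minimal sketch of fetch-related options on self.crawl (illustrative values).
    from pyspider.libs.base_handler import *


    class Handler(BaseHandler):
        def on_start(self):
            self.crawl(
                'http://example.com/api/search',           # hypothetical API endpoint
                method='POST',                             # fetch method
                data={'keyword': 'pyspider'},              # form body for the POST
                headers={'X-Requested-With': 'XMLHttpRequest'},
                cookies={'session': 'xxxx'},               # hypothetical session cookie
                proxy='127.0.0.1:8888',                    # send the request through a proxy
                etag=False,                                # disable ETag-based caching for this task
                callback=self.parse_api,
            )

        def parse_api(self, response):
            return response.json                           # response body parsed as JSON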

Phantomjs Fetcher

The Phantomjs Fetcher works like a proxy. It is connected to the general Fetcher, fetches and renders pages with JavaScript enabled, and returns plain HTML back to the Fetcher:

scheduler -> fetcher -> processor
                |
            phantomjs
                |
             internet
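
From a script, this path is selected per request with fetch_type='js'; a hedged sketch (the URL and the injected script are illustrative assumptions):

    # A minimal sketch of rendering a JavaScript-heavy page through phantomjs.
    from pyspider.libs.base_handler import *


    class Handler(BaseHandler):
        def on_start(self):
            self.crawl(
                'http://example.com/ajax-page',   # hypothetical JS-rendered page
                fetch_type='js',                  # route this request through the phantomjs fetcher
                js_script='''
                    function() {
                        // runs inside the page, e.g. to trigger lazy loading
                        window.scrollTo(0, document.body.scrollHeight);
                    }
                ''',
                callback=self.parse_rendered,
            )

        def parse_rendered(self, response):
            # the fetcher hands back plain rendered HTML, parsed as usual
            return {'title': response.doc('title').text()}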

Processor

The Processor is responsible for running the script written by users to parse and extract information. Your script runs in an unrestricted environment: although we provide various tools (like PyQuery) for extracting information and links, you can use anything you want to deal with the response. You may refer to Script Environment and API Reference for more information about scripts.

The Processor captures exceptions and logs, sends status (task track) and new tasks to the Scheduler, and sends results to the Result Worker.
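
A hedged sketch of a typical parsing callback (the selectors and field names are illustrative assumptions): each self.crawl call becomes a new task sent back to the Scheduler, and the dict returned by a callback becomes the result handed to the Result Worker.

    # A minimal processor-side sketch: extract links and fields with PyQuery via response.doc.
    from pyspider.libs.base_handler import *


    class Handler(BaseHandler):
        def on_start(self):
            self.crawl('http://example.com/', callback=self.index_page)

        def index_page(self, response):
            # every self.crawl here becomes a new task for the scheduler
            for each in response.doc('a[href^="http"]').items():
                self.crawl(each.attr.href, callback=self.detail_page)

        def detail_page(self, response):
            # the returned dict is the result passed on to the result worker
            return {
                'url': response.url,
                'title': response.doc('title').text(),
            }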

Result Worker (optional)

The Result Worker receives results from the Processor. pyspider has a built-in result worker to save results to resultdb. Override it to handle results according to your needs.
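
A hedged sketch of such an override, assuming the built-in base class is importable as pyspider.result.ResultWorker (the output file is an illustrative assumption):

    # A minimal sketch of a custom result worker: override on_result instead of
    # writing to resultdb (output path is an illustrative assumption).
    import json

    from pyspider.result import ResultWorker


    class MyResultWorker(ResultWorker):
        def on_result(self, task, result):
            if not result:
                return
            # e.g. append every result as one JSON line to a local file
            with open('results.jsonl', 'a') as f:
                f.write(json.dumps({
                    'taskid': task['taskid'],
                    'url': task['url'],
                    'result': result,
                }) + '\n')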

WebUI

WebUI is a web frontend for everything. It contains:

  • script editor, debugger
  • project manager
  • task monitor
  • result viewer, exporter

The WebUI may be the most attractive part of pyspider. With this powerful UI, you can debug your scripts step by step just as pyspider does, start or stop a project, find out which project is going wrong and which request has failed, and try it again with the debugger.

Data flow

The data flow in pyspider is just as shown in the diagram above:

  1. Each script has a callback named on_start. When you press the Run button on the WebUI, a new on_start task is submitted to the Scheduler as the entry point of the project (see the annotated script sketch after this list).
  2. The Scheduler dispatches this on_start task, with a Data URI, as a normal task to the Fetcher.
  3. The Fetcher makes a request and a response for it (for a Data URI it's a fake request and response, but there is no difference from other normal tasks), then feeds it to the Processor.
  4. The Processor calls the on_start method and generates some new URLs to crawl. The Processor sends a message to the Scheduler that this task is finished, and sends the new tasks via the message queue to the Scheduler (there are usually no results for on_start; if there are results, the Processor sends them to the result_queue).
  5. The Scheduler receives the new tasks, looks them up in the database, determines whether each task is new or requires a re-crawl, and if so puts it into the task queue. It then dispatches tasks in order.
  6. The process repeats (from step 3) and won't stop till the WWW is dead ;-). The Scheduler will check periodic tasks to crawl the latest data.
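
Put together, a hedged sketch of a minimal script annotated with the steps above (the URL is an illustrative assumption):

    # A minimal sketch mapping the data-flow steps onto a script.
    from pyspider.libs.base_handler import *


    class Handler(BaseHandler):
        @every(minutes=60)               # step 6: periodic re-scheduling keeps the loop producing fresh tasks
        def on_start(self):              # step 1: entry point submitted when Run is pressed
            self.crawl('http://example.com/', callback=self.index_page)

        def index_page(self, response):  # steps 3-4: parse the fetched page, send new tasks to the scheduler
            for each in response.doc('a[href^="http"]').items():
                self.crawl(each.attr.href, callback=self.detail_page)

        def detail_page(self, response): # step 4: the returned dict goes to the result_queue
            return {'url': response.url, 'title': response.doc('title').text()}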

Reposted from: https://www.cnblogs.com/davidwang456/p/7577115.html
