I keep a record of snippets I find myself reusing while writing; the following is about how to run a Scrapy crawler inside a thread.
When you run the Scrapy crawler from a program, the code blocks until the crawler has finished. This is due to how Twisted (the underlying asynchronous networking library) works, and it makes it awkward to use the Scrapy crawler from scripts or other code.
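To see the blocking behaviour in isolation, here is a minimal Twisted-only sketch (not part of the original snippet): reactor.run() does not return until reactor.stop() is called, so whatever code starts the crawl does not regain control until the crawl is over.

from twisted.internet import reactor

def finish():
    # stop the reactor so that reactor.run() can return
    reactor.stop()

reactor.callLater(2, finish)  # pretend the "crawl" takes two seconds
reactor.run()                 # blocks here until reactor.stop() is called
print "only reached after the reactor has stopped"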
To circumvent this issue, you can run the Scrapy crawler in a thread using the code below.
Keep in mind that this code is mainly for illustrative purposes and far from production-ready. It was also only tested with Scrapy 0.8 and will probably need some adjustments for newer versions (since the core API isn't stable yet), but you get the idea.
"""
Code to run Scrapy crawler in a thread - works on Scrapy 0.8
"""

import threading, Queue

from twisted.internet import reactor

from scrapy.xlib.pydispatch import dispatcher
from scrapy.core.manager import scrapymanager
from scrapy.core.engine import scrapyengine
from scrapy.core import signals


class CrawlerThread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.running = False

    def run(self):
        # runs in the crawler thread: start Scrapy and the Twisted reactor
        self.running = True
        scrapymanager.configure(control_reactor=False)
        scrapymanager.start()
        reactor.run(installSignalHandlers=False)

    def crawl(self, *args):
        # schedule a crawl and block the calling thread until the spider closes
        if not self.running:
            raise RuntimeError("CrawlerThread not running")
        self._call_and_block_until_signal(signals.spider_closed,
            scrapymanager.crawl, *args)

    def stop(self):
        reactor.callFromThread(scrapyengine.stop)

    def _call_and_block_until_signal(self, signal, f, *a, **kw):
        # call f(*a, **kw) inside the reactor thread and wait for `signal`
        q = Queue.Queue()
        def unblock():
            q.put(None)
        dispatcher.connect(unblock, signal=signal)
        reactor.callFromThread(f, *a, **kw)
        q.get()
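Note how the blocking behaviour is implemented: crawl() hands the actual scrapymanager.crawl() call to the reactor thread via reactor.callFromThread() and then waits on a Queue that is only filled when the spider_closed signal fires, so each crawl() call returns only once that particular crawl has finished.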
Usage example below:
import os
os.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'myproject.settings')

from scrapy.xlib.pydispatch import dispatcher
from scrapy.core import signals
from scrapy.conf import settings
from scrapy.crawler import CrawlerThread

settings.overrides['LOG_ENABLED'] = False # avoid log noise

def item_passed(item):
    print "Just scraped item:", item

dispatcher.connect(item_passed, signal=signals.item_passed)

crawler = CrawlerThread()
print "Starting crawler thread..."
crawler.start()

print "Crawling somedomain.com..."
crawler.crawl('somedomain.com') # blocking call
print "Crawling anotherdomain.com..."
crawler.crawl('anotherdomain.com') # blocking call

print "Stopping crawler thread..."
crawler.stop()
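The scrapymanager/scrapyengine singletons and scrapy.xlib.pydispatch used above are specific to the 0.x series and are gone in current Scrapy releases. As a rough sketch only (against the current public API, not part of the original snippet), driving a crawl from a script in a modern release looks roughly like this:

# Sketch only: the modern-Scrapy way to run a crawl from a script.
# CrawlerProcess manages the Twisted reactor itself; start() blocks until
# all scheduled crawls have finished.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from myproject.spiders.example import ExampleSpider  # hypothetical spider class

process = CrawlerProcess(get_project_settings())
process.crawl(ExampleSpider)  # schedule the spider
process.start()               # blocks until crawling is finished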