Python - Scrapy spider doesn't receive the spider_idle signal

I have a spider that uses meta to chain requests together, so that a single item is built from data collected across several requests.
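
For context, the meta-chaining pattern looks roughly like this. This is a minimal sketch, not my actual spider; the URLs and field names are placeholders:

import scrapy

class ChainSketchSpider(scrapy.Spider):
    name = 'chain_sketch'
    start_urls = ['https://example.com/page1']  # placeholder URL

    def parse(self, response):
        item = {'first': response.css('h1::text').get()}
        # Pass the partially built item along to the next request via meta.
        yield scrapy.Request('https://example.com/page2',
                             callback=self.parse_second,
                             meta={'item': item})

    def parse_second(self, response):
        item = response.meta['item']
        item['second'] = response.css('h1::text').get()
        yield item  # the item now holds data from both responses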

The way I currently generate the requests is to fire all of them the first time the parse function is called. However, when there are too many links to request, not all of the requests get scheduled, and in the end I don't get everything I need.

To work around this, I am trying to make the spider request 5 products at a time, and request more whenever it goes idle (by connecting a handler to the spider_idle signal in from_crawler).

The problem is that, with my code as it stands, spider_idle never runs the request function and the spider closes immediately. It is as if the spider never becomes idle.
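
For comparison, the pattern from the Scrapy signals documentation for keeping a spider alive from spider_idle looks roughly like this. It is a sketch, not my code: the handler is a plain callback rather than a generator, requests are handed directly to the engine, and build_next_batch is a hypothetical helper standing in for whatever produces the next requests:

from scrapy import signals
from scrapy.exceptions import DontCloseSpider

# inside the spider class:
@classmethod
def from_crawler(cls, crawler, *args, **kwargs):
    spider = super().from_crawler(crawler, *args, **kwargs)
    crawler.signals.connect(spider.on_idle, signal=signals.spider_idle)
    return spider

def on_idle(self):
    # Signal handlers are ordinary callbacks; anything they yield is
    # ignored, so requests must be scheduled on the engine explicitly.
    for request in self.build_next_batch():  # hypothetical helper
        self.crawler.engine.crawl(request)   # older Scrapy: crawl(request, self)
    # Raising DontCloseSpider keeps the spider running while it is idle.
    raise DontCloseSpider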

Here is some of the code:

import scrapy
from scrapy import signals
from scrapy.exceptions import DontCloseSpider

# Header and Product are the project's item classes; domains, start and
# parsing_functions_for_every_site are placeholders for the real values.

class ProductSpider(scrapy.Spider):
    name = 'products'
    allowed_domains = [domains]
    handle_httpstatus_list = [400, 404, 403, 503, 504]
    start_urls = [start]

    def __init__(self, *args, **kwargs):
        super(ProductSpider, self).__init__(*args, **kwargs)
        self.parsed_data = []
        self.header = {}
        # file.csv: a ';'-separated header row followed by product rows
        with open('file.csv', 'r') as f:
            f_data = [[x.strip()] for x in f]
        count = 1
        first = 'smth'  # sentinel so the first row is treated as the header
        for product in f_data:
            if first != '':
                # map each column name to its column index
                header = product[0].split(';')
                for each in range(len(header[1:])):
                    self.header[header[each + 1]] = each + 1
                first = ''
            else:
                product = product[0].split(';')
                product.append(count)  # remember the original row order
                count += 1
                self.parsed_data.append(product)

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(ProductSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.request, signal=signals.spider_idle)
        return spider

    def next_link(self, response):
        product = response.meta['product']
        there_is_next = False
        # follow the next non-empty link for this product, if any
        for each in range(response.meta['each'] + 1, len(product) - 1):
            if product[each] != '':
                there_is_next = True
                yield scrapy.Request(
                    product[each],
                    callback=response.meta['func_dict'][each],
                    meta={'func_dict': response.meta['func_dict'],
                          'product': product,
                          'each': each,
                          'price_dict': response.meta['price_dict'],
                          'item': response.meta['item']},
                    dont_filter=True)
                break
        if not there_is_next:
            # no more links for this product: the item is complete
            item = response.meta['item']
            item['prices'] = response.meta['price_dict']
            yield item

    # [...] chain parsing functions for each request

    def get_products(self):
        # take up to 5 products off the parsed data
        products = []
        data = self.parsed_data
        for each in range(5):
            if data:
                products.append(data.pop())
        return products

    def request(self):
        item = Header()
        item['first'] = True
        item['sellers'] = self.header
        yield item

        func_dict = {parsing_functions_for_every_site}

        products = self.get_products()
        if not products:
            return

        for product in products:
            item = Product()
            price_dict = {1: product[1]}
            item['name'] = product[0]
            item['order'] = product[-1]
            # request the first non-empty link; next_link() chains the rest
            for each in range(2, len(product) - 1):
                if product[each] != '':
                    yield scrapy.Request(
                        product[each],
                        callback=func_dict[each],
                        meta={'func_dict': func_dict,
                              'product': product,
                              'each': each,
                              'price_dict': price_dict,
                              'item': item})
                    break

        raise DontCloseSpider

    def parse(self, response=None):
        pass
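
As an aside, the manual header/row parsing in __init__ could also be written with the standard csv module. A minimal sketch assuming the same layout (header row first, ';'-separated columns, rows numbered from 1):

import csv

def load_products(path='file.csv'):
    with open(path, newline='') as f:
        rows = csv.reader(f, delimiter=';')
        header_row = next(rows)
        # map each column name (after the first column) to its index
        header = {name: i + 1 for i, name in enumerate(header_row[1:])}
        # append the 1-based row number to each product row
        parsed = [row + [count] for count, row in enumerate(rows, start=1)]
    return header, parsed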
