Compared with populating an Item directly, Item Loaders collect data through add-methods, which is more convenient, concise, and easier to maintain.
In other words, Items provide the container of scraped data, while Item Loaders provide the mechanism for populating that container. Item Loaders are designed to provide a flexible, efficient and easy mechanism for extending and overriding different field parsing rules, either by spider, or by source format (HTML, XML, etc.), without becoming a nightmare to maintain. For example:
from scrapy.loader import ItemLoader
from myproject.items import Product

def parse(self, response):
    # NOTE: every extracted value is collected into an internal list
    l = ItemLoader(item=Product(), response=response)
    l.add_xpath('name', '//div[@class="product_name"]')  # l.add_css() works the same way
    l.add_xpath('name', '//div[@class="product_title"]')
    l.add_xpath('price', '//p[@id="price"]')
    l.add_css('stock', 'p#stock')
    l.add_value('last_updated', 'today')  # literal values can be added directly
    return l.load_item()
NOTE: Both input and output processors must receive an iterable as their first argument. The output of those functions can be anything. The result of input processors will be appended to an internal list (in the Loader) containing the collected values (for that field). The result of the output processor is the value that will be finally assigned to the item.
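This collect-then-finalize flow can be sketched in plain Python. The `MiniLoader` class below is an illustrative model only, not Scrapy's actual implementation: the input processor runs on every added value, and the output processor runs once over the accumulated list.

```python
# Simplified model of an ItemLoader's processor flow (illustrative only,
# not Scrapy's real ItemLoader).

class MiniLoader:
    def __init__(self, input_processor, output_processor):
        self.input_processor = input_processor
        self.output_processor = output_processor
        self._values = []  # internal list of collected values

    def add_value(self, value):
        # The input processor runs immediately; its result is appended
        # to the internal list of collected values.
        self._values.extend(self.input_processor([value]))

    def load_item(self):
        # The output processor runs once, over all collected values;
        # its result is the final field value.
        return self.output_processor(self._values)

loader = MiniLoader(
    input_processor=lambda values: [v.strip() for v in values],
    output_processor=lambda values: " ".join(values),
)
loader.add_value("  Hello ")
loader.add_value("World  ")
print(loader.load_item())  # -> Hello World
```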
The input and output processors for each field are declared in items.py, for example:
import scrapy
from scrapy.loader.processors import Join, MapCompose, TakeFirst  # built-in processors
from w3lib.html import remove_tags

def filter_price(value):
    # Keep the value only if it is all digits; returning None
    # makes MapCompose drop the value.
    if value.isdigit():
        return value

class Product(scrapy.Item):
    name = scrapy.Field(
        input_processor=MapCompose(remove_tags),
        output_processor=Join(),
    )
    price = scrapy.Field(
        input_processor=MapCompose(remove_tags, filter_price),
        output_processor=TakeFirst(),
    )
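The filtering effect of filter_price is easiest to see in isolation. The `map_compose` helper below is a simplified stand-in for Scrapy's MapCompose (it applies each function to every element and drops None results); it is a sketch, not the library's actual class.

```python
def filter_price(value):
    # Same function as in items.py: non-digit values map to None.
    if value.isdigit():
        return value

def map_compose(*functions):
    # Simplified stand-in for Scrapy's MapCompose: apply each function
    # to every element in turn, dropping None results.
    def wrapper(values):
        for func in functions:
            result = []
            for v in values:
                out = func(v)
                if out is not None:
                    result.append(out)
            values = result
        return values
    return wrapper

processor = map_compose(str.strip, filter_price)
print(processor(["  100 ", "no price", "25"]))  # -> ['100', '25']
```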
Built-in processors:
1. TakeFirst(): returns the first non-null, non-empty value.
2. Join(separator=u' '): returns the collected values joined by the given separator; the default is a single space.
3. Compose(*functions, **default_loader_context): passes the whole list of values through the given functions in sequence (one or more).
4. MapCompose(*functions, **default_loader_context): the input is iterable; each function is applied to every element, results are passed on to the next function, and None results are dropped.
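The semantics of these built-ins can be illustrated with plain-Python equivalents (sketches of the behavior, not Scrapy's actual classes):

```python
from functools import reduce

values = ["", "Hello", "World"]

# TakeFirst: first non-null, non-empty value
take_first = next((v for v in values if v is not None and v != ""), None)
print(take_first)  # -> Hello

# Join: concatenate with a separator (Scrapy's default is a space)
joined = " ".join(v for v in values if v)
print(joined)  # -> Hello World

# Compose: pass the whole list through each function in turn
compose_result = reduce(lambda acc, f: f(acc), [sorted, list], values)
print(compose_result)  # -> ['', 'Hello', 'World']

# MapCompose: apply each function to every element, dropping empties
map_compose_result = [v.upper() for v in values if v]
print(map_compose_result)  # -> ['HELLO', 'WORLD']
```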