Python Scrapy Crawler Example: Web Scraping with the Scrapy Framework, a Case Implementation [Part 22]

Scraping a Novel

spider

```python
import scrapy

from xiaoshuo.items import XiaoshuoItem


class XiaoshuoSpiderSpider(scrapy.Spider):
    name = 'xiaoshuo_spider'
    allowed_domains = ['zy200.com']
    url = 'http://www.zy200.com/5/5943/'
    # Start from the first chapter page
    start_urls = [url + '11667352.html']

    def parse(self, response):
        # The chapter text lives in the text nodes of div#content;
        # extract() returns them as a list of strings
        info = response.xpath("/html/body/div[@id='content']/text()").extract()
        # The third link in div.zfootbar points to the next chapter
        href = response.xpath("//div[@class='zfootbar']/a[3]/@href").extract_first()

        xs_item = XiaoshuoItem()
        xs_item['content'] = info
        yield xs_item

        # The last chapter links back to index.html; stop following there
        if href != 'index.html':
            new_url = self.url + href
            yield scrapy.Request(new_url, callback=self.parse)
```
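The spider keeps requesting the "next chapter" link until it points back to `index.html`. That follow logic can be sketched in isolation with plain Python (the chapter hrefs below are made up for illustration):

```python
def next_chapter_url(base, href):
    """Mirror the spider's follow logic: stop at the index page,
    otherwise append the relative href to the chapter-list URL."""
    if href == 'index.html':
        return None  # the last chapter links back to the index: stop crawling
    return base + href


base = 'http://www.zy200.com/5/5943/'
# hypothetical hrefs for illustration
print(next_chapter_url(base, '11667353.html'))  # a next-chapter link is followed
print(next_chapter_url(base, 'index.html'))     # the index link ends the crawl
```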

items

```python
import scrapy


class XiaoshuoItem(scrapy.Item):
    # define the fields for your item here like:
    content = scrapy.Field()  # list of text nodes for one chapter
    href = scrapy.Field()     # link to the next chapter
```
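Because the spider stores the raw `extract()` result, the `content` field holds a list of text-node strings rather than a single string; flattening it is left to the pipeline. A minimal sketch of that flattening, with made-up sample nodes standing in for what the XPath query would return:

```python
# Hypothetical text nodes, as response.xpath(...).extract() would return them
text_nodes = ["第一章", "\r\n", "    正文第一段。", "\r\n", "    正文第二段。"]

# Strip per-node whitespace, drop the empty nodes, join into one string
content = "\n".join(line.strip() for line in text_nodes if line.strip())
print(content)
```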

pipeline

```python
class XiaoshuoPipeline(object):
    def __init__(self):
        # One output file for the whole crawl
        self.filename = open("dp1.txt", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # 'content' is a list of text nodes; join it into one string.
        # (XiaoshuoItem defines no 'title' field, so only the content is written.)
        content = ''.join(item["content"]) + '\n'
        self.filename.write(content)
        self.filename.flush()
        return item

    def close_spider(self, spider):
        self.filename.close()
```
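For the pipeline to run at all, it must be enabled in the project's `settings.py`; the number is the pipeline's order (0–1000, lower runs first):

```python
# settings.py
ITEM_PIPELINES = {
    'xiaoshuo.pipelines.XiaoshuoPipeline': 300,
}
```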
