Python Scrapy spider: xpath().extract()[0] raises "IndexError: list index out of range"

I'm a beginner following a tutorial to write a Scrapy spider. When I use the xpath().extract()[0] pattern to grab content, I get "IndexError: list index out of range". How do I fix this? Urgent, waiting online. (I tried removing .extract()[0], but that raises "TypeError: Request url must be str or unicode" instead.) Code below:
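For context on the first error: extract() returns a plain list of matched strings, and when the XPath matches nothing the list is empty, so indexing [0] raises IndexError. A minimal stdlib-only sketch of the failure and the usual guard (Scrapy's own SelectorList.extract_first(default=...) wraps the same check):

```python
# extract() returns a plain list of matched strings.  When the XPath
# matches nothing, the list is empty and [0] raises IndexError --
# exactly the error reported above.
matches = []  # stand-in for xpath(...).extract() with no match
try:
    first = matches[0]
except IndexError:
    first = None  # the failure the question reports

# The usual guard: index only when the list is non-empty.
def first_or_default(results, default=None):
    return results[0] if results else default

print(first_or_default([], default=""))            # no match -> default
print(first_or_default(["http://example.com/a"]))  # match -> first string
```

So an empty result list usually means the XPath expression does not match the live page markup; check the class names in the expression against the actual HTML before indexing.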

cnblog_spider.py

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import scrapy
from bs4 import BeautifulSoup
from scrapy import Selector
from p1.items import CnblogsSpiderItem


class CnblogsSpider(scrapy.spiders.Spider):
    name = "cnblogs"  # spider name
    allowed_domains = ["cnblogs.com"]  # allowed domains
    start_urls = [
        "http://www.cnblogs.com/qiyeboy/default.html?page=1"
    ]

    def parse(self, response):
        # Parse the page:
        # first, extract all the posts
        papers = response.xpath(".//*[@class='day']")  # .extract()
        # then pull the data out of each post
        # soup = BeautifulSoup(papers, "html.parser", from_encoding="utf-8")
        # print papers
        for paper in papers:
            url = paper.xpath(".//*[@class='pastTitle']/a/@href").extract()[0]
            title = paper.xpath(".//*[@class='pastTitle']/a").extract()[0]
            time = paper.xpath(".//*[@class='dayTitle']/a").extract()[0]
            content = paper.xpath(".//*[@class='postCon']/a").extract()[0]
            # print url, title, time, content
            item = CnblogsSpiderItem(url=url, title=title, time=time, content=content)
            request = scrapy.Request(url=url, callback=self.parse_body)
            request.meta['item'] = item  # stash the item on the request
            yield request
            # yield item
        next_page = Selector(response).re(u'下一页')
        if next_page:
            yield scrapy.Request(url=next_page[0], callback=self.parse)

    def parse_body(self, response):
        item = response.meta['item']
        body = response.xpath(".//*[@class='postBody']")
        item['cimage_urls'] = body.xpath('.//img//@src').extract()
        yield item
