Scrapy CrawlSpider for AJAX content

I am attempting to crawl a site for news articles. My start_url contains a listing of news articles and a "More" button that triggers an AJAX call to load additional articles.

A parameter of the AJAX call is "page", which is incremented each time the "More" button is clicked. For example, clicking "More" once loads an additional n articles and updates the page parameter in the "More" button's onClick event, so that the next time "More" is clicked, page 2 of articles is loaded (assuming page 0 was loaded initially and page 1 was loaded on the first click).
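
In other words, each "page" of results has its own URL, differing only in that page value. A quick sketch of the pattern (the example.com URL and tsla slug are the placeholders used in my code below; ajax_page_url is just a name for illustration):

def ajax_page_url(page):
    ## placeholder URL pattern, matching the spider code further down
    return ('http://example.com/account/ajax_headlines_content'
            '?type=in_focus_articles&page=%d'
            '&slugs=tsla&is_symbol_page=true' % page)

ajax_page_url(0)  ## loaded initially with the symbol page
ajax_page_url(1)  ## loaded after the first "More" click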

For each "page" I would like to scrape the contents of each article using Rules, but I do not know how many "pages" there are and I do not want to choose some arbitrary m (e.g., 10k). I can't seem to figure out how to set this up.

From this question, Scrapy Crawl URLs in Order, I have tried to create a list of potential URLs, but I can't determine how and where to send a new URL from the pool after parsing the previous one and confirming it contains news links, when using a CrawlSpider. My Rules send responses to a parse_item callback, where the article contents are parsed.

Is there a way to observe the contents of the links page (similar to the BaseSpider example) before applying the Rules and calling parse_item, so that I know when to stop crawling?

Simplified code (I removed several of the fields I'm parsing for clarity):

from scrapy import log
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

## NewsItem comes from the project's items.py (import omitted here)


class ExampleSite(CrawlSpider):

    name = "so"
    download_delay = 2
    more_pages = True
    current_page = 0

    allowed_domains = ['example.com']

    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                  '&slugs=tsla&is_symbol_page=true']

    ##could also use
    ##start_urls = ['http://example.com/symbol/tsla']

    ## pre-built pool of AJAX "page" URLs (arbitrary upper bound of 1000)
    ajax_urls = []
    for i in range(1, 1000):
        ajax_urls.append('http://example.com/account/ajax_headlines_content?type=in_focus_articles&page='+str(i)+
                         '&slugs=tsla&is_symbol_page=true')

    rules = (
        ## follow the symbol page itself
        Rule(SgmlLinkExtractor(allow=('/symbol/tsla', ))),
        ## send individual article pages to parse_item
        Rule(SgmlLinkExtractor(allow=('/news-article.*tesla.*', '/article.*tesla.*', )), callback='parse_item')
    )

    ##need something like this??
    ##override parse?
    ##    if response.body == 'no results':
    ##        self.more_pages = False
    ##        ##stop crawler??
    ##    else:
    ##        self.current_page = self.current_page + 1
    ##        yield Request(self.ajax_urls[self.current_page], callback=self.parse_start_url)

    def parse_item(self, response):
        self.log("Scraping: %s" % response.url, level=log.INFO)
        hxs = Selector(response)
        item = NewsItem()
        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()')
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()')
        yield item
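
Expressed as an actual method, the commented-out idea above would look roughly like this. This is only a sketch of what I'm imagining, untested; the 'no results' check is a placeholder for whatever an exhausted AJAX page actually returns:

    def parse_start_url(self, response):
        ## rough, untested sketch of the commented-out idea above
        ## 'no results' is a placeholder for whatever an empty page returns
        if 'no results' in response.body:
            self.more_pages = False   ## nothing left; stop requesting pages
        else:
            self.current_page += 1
            ## pull the next "page" from the pre-built pool and send its
            ## response back through this same check
            yield Request(self.ajax_urls[self.current_page],
                          callback=self.parse_start_url)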
