Scrapy + Elasticsearch

Prerequisites

1. Scrapy is installed

2. Elasticsearch is installed
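
Both can be verified from the command line first (assuming Elasticsearch is reachable at 192.168.4.215:9200, the address used in the settings below):

scrapy version
curl http://192.168.4.215:9200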

 

Create a project called scrapyes

scrapy startproject scrapyes

Directory structure

.
├── scrapy.cfg
└── scrapyes
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders
        └── __init__.py

 

Install ScrapyElasticSearch

pip install ScrapyElasticSearch
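
To confirm the install worked, the pipeline class referenced in ITEM_PIPELINES below should be importable:

python -c "from scrapyelasticsearch.scrapyelasticsearch import ElasticSearchPipeline"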

Configure settings.py

...

ITEM_PIPELINES = {
  'scrapyelasticsearch.scrapyelasticsearch.ElasticSearchPipeline': 300,
}

ELASTICSEARCH_SERVERS = ['192.168.4.215']
ELASTICSEARCH_PORT = 9200 # If port 80 leave blank
ELASTICSEARCH_USERNAME = ''
ELASTICSEARCH_PASSWORD = ''
ELASTICSEARCH_INDEX = 'scrapy.course'
ELASTICSEARCH_TYPE = 'course'
ELASTICSEARCH_UNIQ_KEY = 'url'

...

The configuration options are documented at https://github.com/knockrentals/scrapy-elasticsearch
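
Before running the crawl, it may be worth confirming that the server listed in ELASTICSEARCH_SERVERS is reachable. A minimal sketch, assuming the elasticsearch-py client (pip install elasticsearch) is available; any HTTP client against port 9200 would do the same job:

from elasticsearch import Elasticsearch

# Connect to the same host/port configured in settings.py
es = Elasticsearch([{'host': '192.168.4.215', 'port': 9200}])
print(es.info())  # prints cluster name and version if the node answers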

Write a spider for the online course pages

import scrapy


class ESCourseSpider(scrapy.Spider):
    name = 'es_course'

    def start_requests(self):
        # Course detail pages on the demo site are numbered 1 to 29
        urls = []
        for i in range(1, 30):
            urls.append('http://demo.edusoho.com/course/' + str(i))

        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # Each yielded dict becomes one Elasticsearch document via the pipeline
        yield {
            'title': response.css('span.course-detail-heading::text').extract_first(),
            # 'pirce-num' matches the site's own (misspelled) class name
            'price': response.css('b.pirce-num::text').extract_first(),
            'url': response.url,
        }
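
The two CSS selectors can be tried out interactively with scrapy shell before running the full crawl:

scrapy shell 'http://demo.edusoho.com/course/1'
>>> response.css('span.course-detail-heading::text').extract_first()
>>> response.css('b.pirce-num::text').extract_first()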

Run the spider

scrapy crawl es_course -o es_course.json

 

The -o flag also writes the scraped items to a newly created file, es_course.json:

[
{"url": "http://demo.edusoho.com/course/1", "price": "免费", "title": "\n               课程功能体验\n                        "},
{"url": "http://demo.edusoho.com/course/20", "price": "0.01", "title": "\n               官方主题\n                        "},
{"url": "http://demo.edusoho.com/course/24", "price": "999.00", "title": "\n               会员专区\n                        "},
{"url": "http://demo.edusoho.com/course/22", "price": "免费", "title": "\n               第三方主题\n                        "},
{"url": "http://demo.edusoho.com/course/27", "price": "0.01", "title": "\n               优惠码\n                        "}
]
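
Note that the titles keep the page's surrounding whitespace. If cleaner values are wanted, a strip() in parse would remove it; a sketch (the original spider stores the raw text):

    def parse(self, response):
        title = response.css('span.course-detail-heading::text').extract_first()
        yield {
            'title': title.strip() if title else None,  # drop surrounding whitespace
            'price': response.css('b.pirce-num::text').extract_first(),
            'url': response.url,
        }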

Now check the data in Elasticsearch with the following query:

GET scrapy.course*/_search
{
  "query": {
    "match_all": {}
  },
  "from": 0,
  "size": 50
}
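
The same query can also be issued from Python; a sketch, again assuming the elasticsearch-py client:

from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': '192.168.4.215', 'port': 9200}])
res = es.search(index='scrapy.course*',
                body={'query': {'match_all': {}}, 'from': 0, 'size': 50})
for hit in res['hits']['hits']:
    print(hit['_source']['url'], hit['_source']['price'])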

Result

{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 5,
    "max_score": 1,
    "hits": [
      {
        "_index": "scrapy.course",
        "_type": "course",
        "_id": "6306093149d91c35eabc1c59f28d68355cc4de9d",
        "_score": 1,
        "_source": {
          "url": "http://demo.edusoho.com/course/1",
          "price": "免费",
          "title": "\n               课程功能体验\n                        "
        }
      },
      {
        "_index": "scrapy.course",
        "_type": "course",
        "_id": "6a090cfe8f9dbf3d21248d64d9907eab4b31bc4d",
        "_score": 1,
        "_source": {
          "url": "http://demo.edusoho.com/course/24",
          "price": "999.00",
          "title": "\n               会员专区\n                        "
        }
      },

...

This confirms the data has been stored in Elasticsearch. Note that each document's _id is a hash derived from its url field (the configured ELASTICSEARCH_UNIQ_KEY), so re-crawling the same pages updates the existing documents instead of creating duplicates.

 

 

Reposted from: https://my.oschina.net/stanleysun/blog/1506007
