Web Scraping: Crawling JD.com's Computer Books


Scraping computer-category books from JD.com.

1. Tools: requests, PyCharm, Scrapy, MongoDB

2. Page extraction: XPath

1. Analyze the JD.com page:

Open the JD book listing and view the page source: it is not dynamically rendered, and the books are plain list elements, so it is easy to parse. Let's start the analysis.

We only need to extract the book's title, link, publisher, author, comment count, and price.

 


Note that the price and comment count do not appear in the page source, which means they are loaded by AJAX requests; so capture the traffic in the browser's network panel and look for them.

Capturing the traffic turns up the comment count:

url: https://club.jd.com/comment/productCommentSummaries.action?my=pinglun&referenceIds=11936238

referenceIds is the book's ID; the response is JSON.
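A quick standalone check of this endpoint with requests looks like the sketch below; the response shape in the comment is inferred from how the spider later reads it, not from any official documentation:

import requests

url = ("https://club.jd.com/comment/productCommentSummaries.action"
       "?my=pinglun&referenceIds=11936238")
data = requests.get(url, timeout=10).json()
# Shape inferred from the spider code below:
# {"CommentsCount": [{"SkuId": 11936238, "CommentCount": ..., ...}]}
print(data["CommentsCount"][0]["CommentCount"])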

Next, find the price:

It can be captured as well: in the response, m is the book's original price and p is the current price.

url: https://p.3.cn/prices/mgets?ext=11000000&pin=&type=1&area=1_72_4137_0&skuIds=J_11936238

skuIds is the book's ID; the response is JSON.
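The price endpoint can be checked the same way; the list-of-one shape in the comment is inferred from the spider's data[0].get('p') access below:

import requests
import urllib3

urllib3.disable_warnings()  # the spider also calls this endpoint with verify=False

url = ("https://p.3.cn/prices/mgets?ext=11000000&pin=&type=1"
       "&area=1_72_4137_0&skuIds=J_11936238")
data = requests.get(url, verify=False, timeout=10).json()
# Shape inferred from the spider code below:
# [{"id": "J_11936238", "p": "current price", "m": "original price"}]
print(data[0]["p"], data[0]["m"])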

2. Writing the code:

 
scrapy startproject jd
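For reference, startproject generates the standard Scrapy skeleton; the files edited in the following steps live here (the spider file name is my choice):

jd/
├── scrapy.cfg
└── jd/
    ├── items.py
    ├── pipelines.py      # step 2: the MongoDB pipeline
    ├── settings.py       # step 3: MONGO_URI and the pipeline registration
    └── spiders/
        └── jd_spider.py  # step 1: the spider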

 

1. Write the spider:

#  coding: utf-8
import time
from scrapy.selector import Selector
from scrapy.http import Request
from scrapy.spiders import Spider
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning

# Suppress the InsecureRequestWarning triggered by the verify=False requests below
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
'''
time: 2018-05-19
by: jianmoumou233
Crawls JD.com books, IT category
'''

class Page(Spider):
    name = "jd"
    mongo_collections = "jd"

    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
        "Upgrade-Insecure-Requests": "1",
        "Connection": "keep-alive",
        "Cache-Control": "max-age=0",
    }

    def start_requests(self):
        # pages 1-279 of the IT books listing; range() replaces Python 2's xrange
        for i in range(1, 280):
            url = 'https://list.jd.com/list.html?cat=1713,3287,3797&page=%d' % i
            # send the browser-like headers on the list requests as well
            yield Request(url, headers=self.headers, dont_filter=True)

    def parse(self, response):
        '''
        Parse one list page and yield one item per book.

        Item fields:
            url: book page link
            title: book's name
            author: book's author
            shop: publisher / store name
            _id: book id, reused as MongoDB's _id
            price: book's current price
            old_price: book's original price
            comment_count: book's number of comments
        '''
        xbody = Selector(response)
        item = dict()
        _li = xbody.xpath("//*[@id='plist']/ul/li")
        for i in _li:
            item['url'] = i.xpath("./div/div[1]/a/@href").extract_first()
            item['title'] = i.xpath("./div/div[contains(@class,'p-name')]/a/em/text()").extract_first()
            item['author'] = i.xpath(
                "./div/div[contains(@class,'p-bookdetails')]//span[contains(@class,'author_type_1')]/a/text()").extract_first()
            item['shop'] = i.xpath(
                "./div/div[contains(@class,'p-bookdetails')]/span[contains(@class,'p-bi-store')]/a/@title").extract_first()
            item["_id"] = i.xpath("./div/@data-sku").extract_first()
            item["spidertime"] = time.strftime("%Y-%m-%d %H:%M:%S")

            for k, v in item.items():
                if v:
                    item[k] = str(v).strip()

            if item.get('_id'):
                try:
                    item['price'], item["old_price"] = self.price(item['_id'], self.headers)
                    time.sleep(2)  # be polite to the AJAX endpoints
                    item['comment_count'] = self.buy(item['_id'], self.headers)
                except Exception as e:
                    print(e)
                # list-page hrefs are protocol-relative (//item.jd.com/...),
                # so join the scheme with a colon
                if not str(item['url']).startswith("http"):
                    item['url'] = "https:" + item['url']
                yield item

    @staticmethod
    def price(id, headers):
        # price API: 'p' is the current price, 'm' the original price
        url = "https://p.3.cn/prices/mgets?ext=11000000&pin=&type=1&area=1_72_4137_0&skuIds=J_%s&pdbp=0&pdtk=&pdpin=&pduid=15229474889041156750382&source=list_pc_front" % id
        data = requests.get(url, headers=headers, verify=False).json()
        return data[0].get('p'), data[0].get("m")

    @staticmethod
    def buy(id, headers):
        # comment-summary API: returns the book's total comment count
        url = 'https://club.jd.com/comment/productCommentSummaries.action?my=pinglun&referenceIds=%s' % id
        data = requests.get(url, headers=headers, verify=False).json()
        return data.get('CommentsCount')[0].get("CommentCount")
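Before launching the full crawl, the XPath expressions can be spot-checked in scrapy shell (a standard Scrapy tool); the sku in the sample output is the example ID from earlier and is only illustrative:

scrapy shell "https://list.jd.com/list.html?cat=1713,3287,3797&page=1"
>>> response.xpath("//*[@id='plist']/ul/li/div/@data-sku").extract_first()
'11936238'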

2. Write the MongoDB pipeline (pipelines.py):

# -*- coding: utf-8 -*-

import pymongo


class Mongo(object):
    mongo_uri = None
    mongo_db = None
    client = None
    db = None

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):

        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'test'),

        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(host=self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        try:
            # _id is the JD sku, so a re-crawled book raises DuplicateKeyError
            self.db[spider.mongo_collections].insert_one(dict(item))
        except pymongo.errors.DuplicateKeyError:
            pass
        return item
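Because _id is the JD sku, inserting a re-crawled book raises DuplicateKeyError, which the pipeline above simply skips. If you would rather refresh existing records on a re-crawl, a replace-with-upsert variant (my suggestion, not part of the original post) could look like this:

    def process_item(self, item, spider):
        doc = dict(item)
        # overwrite the existing document for this sku, or insert it if new
        self.db[spider.mongo_collections].replace_one(
            {"_id": doc["_id"]}, doc, upsert=True)
        return item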

3. Configure settings:

MONGO_URI = "mongodb://127.0.0.1:27017"
MONGO_DATABASE = "jd"

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'jd.pipelines.Mongo': 300,
}

4. Run the spider:

scrapy crawl jd
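To sanity-check the spider without MongoDB, Scrapy's built-in feed export can dump the items to a file instead:

scrapy crawl jd -o books.json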

5. Results:

The url field should be an https link; the hrefs on the list page are protocol-relative (//item.jd.com/...), so the scheme must be joined as "https:" with the colon. I originally concatenated "https" without it (corrected in the spider above).

Summary:

   I crawled a few thousand items and have not been blocked yet; add a delay between requests, and include as many realistic browser headers as you can.
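Both pieces of advice map onto standard Scrapy settings; a sketch for settings.py (the values are my suggestions, not what the original run used):

# settings.py
DOWNLOAD_DELAY = 2                # pause between requests, like the time.sleep(2) in the spider
RANDOMIZE_DOWNLOAD_DELAY = True   # jitter the delay so the traffic looks less mechanical

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',  # substitute a real browser UA
}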

 
