Dangdang.com Crawler

I traversed all of Dangdang's top-level categories and crawled the product listings under each one. It is a fairly simple crawl: I did not drill down into subcategories to fetch every single product.
Below is the crawler's spider:

import scrapy
from dangdang.items import DangdangItem


class SpiderSpider(scrapy.Spider):
    name = "spider"
    # Use the registered domain so category.dangdang.com is allowed too.
    allowed_domains = ['dangdang.com']

    def start_requests(self):
        # Alternative entry points that were also tried:
        # start_urls = 'http://category.dangdang.com/cid4009733.html'
        # start_urls = 'http://book.dangdang.com/01.03.htm?ref=book-01-A'
        start_urls = 'http://category.dangdang.com/?ref=www-0-C'
        yield scrapy.Request(url=start_urls, callback=self.parse, dont_filter=True)

    def parse(self, response):
        # Walk every category link in the left-hand classification menu.
        menu_list = response.xpath('//*[contains(@class,"classify_left")]//a')
        for menu in menu_list:
            url = menu.xpath('@href').extract_first()
            if url and 'javascript' not in url:
                yield scrapy.Request(url=url, callback=self.parse_cid, dont_filter=True)

    def parse_cid(self, response):
        note_list = response.xpath('//div[contains(@id,"search_nature_rg")]/ul/li')
        for note in note_list:
            # Create a fresh item per product; reusing a single instance
            # across yields would clobber items still queued in the pipeline.
            item = DangdangItem()
            item['name'] = note.xpath('./p[contains(@class,"name")]/a/text()').extract_first()
            item['price'] = note.xpath('./p[@class="price"]/span/text()').extract_first()
            item['level'] = note.xpath('./p[contains(@class,"star")]/a/text()').extract_first()
            # Third-party shops carry a link here; otherwise the seller is Dangdang itself.
            shop = note.xpath('./p[contains(@class,"link")]/a/text()').extract_first()
            item['shop'] = shop or u'当当自营'
            yield item
        # Follow the "next page" arrow until it degenerates into javascript:void(0).
        next_href = response.xpath('//a[contains(@class,"arrow_r")]/@href').extract_first()
        if next_href and 'javascript:void(0);' not in next_href:
            next_page = 'http://category.dangdang.com/' + next_href
            yield scrapy.Request(url=next_page, callback=self.parse_cid, dont_filter=True)
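
The spider imports DangdangItem from dangdang.items, which the post does not show. A minimal sketch, inferred from the four fields the spider fills in (the original file may define more):

import scrapy


class DangdangItem(scrapy.Item):
    # Fields inferred from the spider above; the actual items.py is not shown in the post.
    name = scrapy.Field()   # product title
    price = scrapy.Field()  # listed price text
    level = scrapy.Field()  # star/review link text
    shop = scrapy.Field()   # shop name, or u'当当自营' for self-operated listings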

Below is the crawler's pipeline:

import MySQLdb
from dangdang.settings import *  # MYSQL_HOST, MYSQL_USER, MYSQL_PASSWD, MYSQL_DBNAME, table_name


class DangdangPipeline(object):
    def __init__(self):
        self.db = MySQLdb.connect(MYSQL_HOST, MYSQL_USER, MYSQL_PASSWD, MYSQL_DBNAME,
                                  charset='utf8mb4', use_unicode=True)
        self.cursor = self.db.cursor()
        self.insert_sql = """
            INSERT INTO {table_name} (name, price, level, shop)
            VALUES (%s, %s, %s, %s)
        """.format(table_name=table_name)

    def process_item(self, item, spider):
        # Insert and commit each item as it arrives.
        params = (item['name'], item['price'], item['level'], item['shop'])
        self.cursor.execute(self.insert_sql, params)
        self.db.commit()
        return item

    def close_spider(self, spider):
        # Release the connection when the crawl ends.
        self.cursor.close()
        self.db.close()
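
The wildcard import above pulls the MySQL constants and table_name from settings.py, which the post also does not show. A plausible sketch with placeholder values (only the variable names are implied by the pipeline); the target table needs name, price, level and shop columns, ideally utf8mb4-encoded to match the connection charset:

# settings.py (hypothetical values; only the variable names are implied by the pipeline)
MYSQL_HOST = 'localhost'
MYSQL_USER = 'root'
MYSQL_PASSWD = 'secret'
MYSQL_DBNAME = 'dangdang'
table_name = 'products'

# The pipeline must also be enabled for items to reach MySQL:
ITEM_PIPELINES = {
    'dangdang.pipelines.DangdangPipeline': 300,
}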

Finally, some screenshots:
[screenshots of the crawled data; images not preserved]
The single-machine run did not finish; it collected roughly 100k records.
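
With a standard Scrapy project layout named dangdang, the crawl is started from the project root with (spider matches the name attribute defined in the spider class):

scrapy crawl spider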
