Crawling book information from Quanshuwang (全书网) with the Scrapy framework.

Content to crawl: book title, author name, and book description from all 5041 list pages of Quanshuwang, written to both a MySQL database and a .txt file.

1. Create the Scrapy project

scrapy startproject numberone

2. Generate the spider

cd numberone

scrapy genspider quanshuwang www.quanshuwang.com
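
After the two commands above, the generated project should have roughly the standard Scrapy layout (quanshuwang.py is the spider just created):

numberone/
├── scrapy.cfg
└── numberone/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── quanshuwang.py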

3. Set the request header (User-Agent) in settings.py

USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36"
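
The original post only sets the User-Agent. If requests get blocked or you want to crawl more politely, two more settings.py entries can help (optional suggestions, not part of the original setup):

ROBOTSTXT_OBEY = False   # skip robots.txt checks if they block the list pages
DOWNLOAD_DELAY = 0.5     # small delay between requests to be gentler on the site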

4. Define the fields to crawl in items.py

class NumberoneItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    book_author = scrapy.Field()
    book_name = scrapy.Field()
    book_desc = scrapy.Field()

5. Write the main data-extraction code in quanshuwang.py

# -*- coding: utf-8 -*-
import scrapy
from numberone.items import NumberoneItem

class QuanshuwangSpider(scrapy.Spider):
    name = 'quanshuwang'
    # allowed_domains limits the crawl scope; the original author prefers to leave it commented out
    # allowed_domains = ['www.quanshuwang.com']
    # URL the crawl starts from
    start_urls = ['http://www.quanshuwang.com/list/0_1.html']

    def parse(self, response):
        book_list = response.xpath('//ul[@class="seeWell cf"]/li')
        for i in book_list:
            item = NumberoneItem()
            item['book_name'] = i.xpath('./span/a/text()').extract_first()
            item['book_author'] = i.xpath('./span/a[2]/text()').extract_first()
            item['book_desc'] = i.xpath('./span/em/text()').extract_first()
            yield item
        # Follow the "next page" link until there is none left
        next_url = response.xpath('//a[@class="next"]/@href').extract_first()
        if next_url:
            yield scrapy.Request(response.urljoin(next_url), callback=self.parse)
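
Optionally, the XPath expressions above can be verified interactively with scrapy shell before running the full crawl (not part of the original post):

scrapy shell "http://www.quanshuwang.com/list/0_1.html"
>>> li = response.xpath('//ul[@class="seeWell cf"]/li')[0]
>>> li.xpath('./span/a/text()').extract_first()      # book title
>>> li.xpath('./span/a[2]/text()').extract_first()   # author
>>> li.xpath('./span/em/text()').extract_first()     # description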

6. In the pipelines.py file, write pipelines that persist the data to a .txt file and to MySQL.

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql

# Pipeline that writes each item to a text file
class NumberonePipeline(object):
    f = None
    def open_spider(self, spider):
        self.f = open('全书网.txt', 'a+', encoding='utf-8')
    def process_item(self, item, spider):
        print(item['book_name'] + ': writing to file...')
        book_name = item['book_name']
        book_author = item['book_author']
        book_desc = item['book_desc']
        self.f.write('Title: ' + book_name + '\n' + 'Author: ' + book_author + '\n' + 'Description: ' + book_desc + '\n\n')
        return item
    def close_spider(self, spider):
        self.f.close()

# Pipeline that writes each item to MySQL
class MysqlPipeline(object):
    conn = None
    mycursor = None
    def open_spider(self, spider):
        self.conn = pymysql.connect(host='172.16.25.4', user='root', password='root',
                                    db='quanshuwang', charset='utf8mb4')
        self.mycursor = self.conn.cursor()
    def process_item(self, item, spider):
        print(item['book_name'] + ': writing to database...')
        book_name = item['book_name']
        book_author = item['book_author']
        book_desc = item['book_desc']
        # Parameterized query so the driver escapes quotes inside the text fields
        sql = 'insert into qsw VALUES (null, %s, %s, %s)'
        self.mycursor.execute(sql, (book_name, book_author, book_desc))
        self.conn.commit()
        return item
    def close_spider(self, spider):
        # Close the cursor before the connection
        self.mycursor.close()
        self.conn.close()
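
The INSERT statement above assumes the quanshuwang database already contains a table qsw with an auto-increment id followed by three text columns. A minimal one-off script consistent with that statement might look like this (the column names are my assumption; only the column order matters for the INSERT):

import pymysql

# One-off script to create the table that MysqlPipeline inserts into
conn = pymysql.connect(host='172.16.25.4', user='root', password='root',
                       db='quanshuwang', charset='utf8mb4')
with conn.cursor() as cursor:
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS qsw (
            id INT AUTO_INCREMENT PRIMARY KEY,
            book_name VARCHAR(255),
            book_author VARCHAR(255),
            book_desc TEXT
        ) CHARACTER SET utf8mb4
    ''')
conn.commit()
conn.close()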

7. Enable the pipelines in settings.py (lower priority numbers run first).

ITEM_PIPELINES = {
   'numberone.pipelines.NumberonePipeline': 300,
   'numberone.pipelines.MysqlPipeline': 400,
}

8. Run the spider

scrapy crawl quanshuwang --nolog
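
--nolog keeps the console output readable but also hides errors; if nothing gets written, a re-run with the log redirected to a file can help (optional, not in the original):

scrapy crawl quanshuwang -s LOG_FILE=quanshuwang.log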

9. Console output

贵府嫡女: writing to database...
随身空间农女翻身记: writing to file...
随身空间农女翻身记: writing to database...
阴间商人: writing to file...
阴间商人: writing to database...
我的美味有属性: writing to file...
我的美味有属性: writing to database...
剑仙修炼纪要: writing to file...
剑仙修炼纪要: writing to database...
在阴间上班的日子: writing to file...
在阴间上班的日子: writing to database...
轮回之鸿蒙传说: writing to file...
轮回之鸿蒙传说: writing to database...
末日星城: writing to file...
末日星城: writing to database...
异域神州道: writing to file...
异域神州道: writing to database...

10. Open the file and the database to check that the data was written successfully.

Done.

 

Reposted from: https://www.cnblogs.com/nmsghgnv/p/11341424.html
