Python crawler: the second crawler (XPath, Excel files, CSV files)

Previously covered: Python basics, the requests library, the BeautifulSoup library, and regular expressions.

Python Web Scraping

1. Using the lxml library (XPath syntax):
Reference: Web crawler study notes, part 5: using the lxml library
Reference: The lxml parsing library in Python 3
Reference: HTML Tutorial
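
A minimal sketch of what those references cover, assuming lxml is installed (pip install lxml): parse an HTML string with etree.HTML and query the tree with XPath.

from lxml import etree

html_text = '''
<ul>
    <li class="item"><a href="link1.html">first item</a></li>
    <li class="item"><a href="link2.html">second item</a></li>
</ul>
'''

# etree.HTML repairs the fragment into a full document (adds <html>/<body>)
selector = etree.HTML(html_text)

# //li[@class="item"] finds every matching <li> anywhere in the tree
for li in selector.xpath('//li[@class="item"]'):
    print(li.xpath('a/@href')[0], li.xpath('a/text()')[0])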

2. Saving data to a CSV file:
Reference: Writing data to a CSV file with Python

import csv

# newline='' stops the csv module from inserting a blank line after every row on Windows
fp = open('C:/Users/16579/Desktop/11.csv', 'w+', newline='')
writer = csv.writer(fp)
writer.writerow(('id', 'name'))
writer.writerow(('3', '小萌'))
writer.writerow(('我喜欢', '喜羊羊'))
writer.writerow(('也喜欢', '灰太狼'))
fp.close()  # release the file handle so other programs can open the file
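
To check the result, the file can be read back with csv.reader; a small sketch assuming the same path:

import csv

with open('C:/Users/16579/Desktop/11.csv', newline='') as fp:
    for row in csv.reader(fp):
        print(row)  # each row comes back as a list of strings, e.g. ['id', 'name']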

If opening the file raises PermissionError: [Errno 13] Permission denied: 'doubanbook.csv', the likely causes are:

  • The file is already open in another program (for example Excel); close it and retry
  • The path passed to open points to a folder (directory), not a file
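
Opening the file with a with block also sidesteps the case where your own script is the one still holding the file open: the handle is closed automatically as soon as the block exits, even on an exception. A minimal sketch:

import csv

# the file is guaranteed to be closed once the with block exits
with open('C:/Users/16579/Desktop/11.csv', 'w', newline='') as fp:
    writer = csv.writer(fp)
    writer.writerow(('id', 'name'))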

3. Saving data to an Excel file:

import xlwt
import xlrd

# create a workbook and a sheet, then write cells by (row, column) index
book = xlwt.Workbook(encoding='utf-8')
sheet = book.add_sheet('Sheet1')
sheet.write(0, 0, '小红')       # row 0, column 0
sheet.write(1, 8, 'xiaoming')   # row 1, column 8
book.save('test1.xls')  # xlwt only writes the legacy .xls format

'''
Reading XLS/XLSX files
(note: xlrd 2.0+ only reads .xls; use openpyxl for .xlsx)
'''
# xlrd.open_workbook opens an existing Excel file; it raises an error if the file does not exist
workbook = xlrd.open_workbook('test1.xls')
# get a sheet by index with workbook.sheet_by_index, or by name with workbook.sheet_by_name
booksheet = workbook.sheet_by_index(0)
# booksheet.nrows is the total number of rows; booksheet.ncols the total number of columns
for i in range(booksheet.nrows):
    # booksheet.row_values(i) returns one whole row as a list
    print(booksheet.row_values(i))
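
Individual cells and whole columns can also be read directly; a small sketch continuing from the same booksheet:

# cell_value(row, col) reads one cell; both indices are zero-based
print(booksheet.cell_value(0, 0))   # -> '小红'
# col_values(col) returns every value in a column as a list
print(booksheet.col_values(0))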

4. Scraping books from Douban
Basic XPath locating: it is worth going through lxml's XPath rules carefully; the sketch below shows the patterns this crawler relies on.
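
A minimal sketch of those locating rules, run against an inline snippet (the markup here is invented for illustration):

from lxml import etree

doc = etree.HTML('''
<table>
  <tr class="item">
    <td><a href="/book/1" title="Book One">Book One</a></td>
    <td><span class="rating">9.1</span></td>
  </tr>
</table>
''')

# // searches the whole tree; [@attr="value"] filters on an attribute
for row in doc.xpath('//tr[@class="item"]'):
    print(row.xpath('td/a/@title')[0])        # @name extracts an attribute value
    print(row.xpath('td/a/text()')[0])        # text() extracts the text node
    print(row.xpath('td[2]/span/text()')[0])  # [2] is 1-based positional indexing

Note that lxml's HTML parser, unlike a browser, does not insert a <tbody> element, which is why the //tr path in the code below works on the real Douban page.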

from lxml import etree
import requests
import csv

# newline='' avoids blank rows on Windows; encoding='utf-8-sig' would also let Excel detect the encoding
fp = open('C:/Users/LP/Desktop/doubanbook.csv', 'wt', newline='', encoding='utf-8')
writer = csv.writer(fp)
writer.writerow(('name', 'url', 'author', 'publisher', 'date', 'price', 'rate', 'comment'))

# the Top 250 list is paginated 25 books per page: start = 0, 25, ..., 225
urls = ['https://book.douban.com/top250?start={}'.format(str(i)) for i in range(0,250,25)]

headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
}

for url in urls:
    html = requests.get(url,headers=headers)
    selector = etree.HTML(html.text)
    # each book sits in a <tr class="item"> row
    infos = selector.xpath('//tr[@class="item"]')
    for info in infos:
        name = info.xpath('td/div/a/@title')[0]
        # use a separate name so the outer loop variable url is not overwritten
        book_url = info.xpath('td/div/a/@href')[0]
        # the info line looks like "author / publisher / date / price"
        book_infos = info.xpath('td/p/text()')[0]
        author = book_infos.split('/')[0]
        publisher = book_infos.split('/')[-3]
        date = book_infos.split('/')[-2]
        price = book_infos.split('/')[-1]
        rate = info.xpath('td/div/span[2]/text()')[0]
        comments = info.xpath('td/p/span/text()')
        comment = comments[0] if len(comments) != 0 else '空'  # '空' is the placeholder for "no quote"
        writer.writerow((name,book_url,author,publisher,date,price,rate,comment))
fp.close()
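
One fragile spot: split('/') assumes the info line is always laid out as author / publisher / date / price. Books listing a translator, or with a slash inside a name, shift the fields. A sketch of a more defensive parse (the right-to-left field order is still an assumption about the page):

def parse_book_infos(book_infos):
    # trim whitespace around every '/'-separated field
    parts = [p.strip() for p in book_infos.split('/')]
    if len(parts) < 4:
        return None  # layout differs from the assumed author/publisher/date/price
    # take price, date, publisher from the right and fold the rest back into the author,
    # so an extra '/' in the author or translator part does not shift the other fields
    price, date, publisher = parts[-1], parts[-2], parts[-3]
    author = ' / '.join(parts[:-3])
    return author, publisher, date, price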

5. Scraping the fantasy novel list from Qisuu (qisuu.la):

import xlwt
import requests
from lxml import etree
import time

all_info_list = []

headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
}

def get_info(url):
    html = requests.get(url,headers=headers)
    selector = etree.HTML(html.text)
    # each novel is one <li> in the category list
    infos = selector.xpath('/html/body/div[4]/div[2]/div/ul/li')
    for info in infos:
        title = info.xpath('a/text()')[0]
        # fields look like "作者:xxx"; split on the colon and keep the value
        author = info.xpath('div[1]/text()[1]')[0].split(':')[1]
        word = info.xpath('div[1]/text()[2]')[0].split(':')[1]
        time1 = info.xpath('div[1]/text()[4]')[0].split(':')[1]
        introduce = info.xpath('div[2]/text()')[0].strip().replace(' ','')
        p1 = info.xpath('div[3]/a/text()')[0].split(':')[1]
        info_list = [title,author,word,time1,p1,introduce]
        all_info_list.append(info_list)
    time.sleep(1)  # pause between pages to be polite to the site

if __name__ == '__main__':
    # list pages index_1.html ... index_9.html of the fantasy category
    urls = ['https://www.qisuu.la/soft/sort01/index_{}.html'.format(str(i)) for i in range(1,10)]

    book = xlwt.Workbook(encoding='utf-8')
    sheet = book.add_sheet('Sheet1')
    # column headers: title, author, size (word count), last updated, latest chapter, synopsis
    header = ['书名', '作者', '大小(字数)', '最近更新时间', '最新章节', '内容简介']
    for h, title in enumerate(header):
        sheet.write(0, h, title)
    for count, url in enumerate(urls, start=1):
        get_info(url)
        print('page {} done'.format(count))
    # write the collected rows below the header row;
    # note: do not name the loop variable "list", that shadows the built-in
    for i, info_list in enumerate(all_info_list, start=1):
        for j, data in enumerate(info_list):
            sheet.write(i, j, data)

    book.save('奇书网玄幻小说列表.xls')

XPath actually offers many more extraction patterns that I have not mastered yet; I plan to study them properly over the summer break. Two of the most common ones are sketched below.
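
For reference, a minimal sketch of those two patterns (both are standard XPath 1.0 functions that lxml supports):

from lxml import etree

doc = etree.HTML('<div><a class="btn btn-primary" href="/page_1">next</a></div>')

# contains() matches a substring, handy for multi-valued class attributes
print(doc.xpath('//a[contains(@class, "btn-primary")]/text()'))  # ['next']

# starts-with() matches an attribute prefix
print(doc.xpath('//a[starts-with(@href, "/page")]/@href'))       # ['/page_1']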
