11. Web Scraping in Practice (XPath): Douban Books Top 250

Target URL: https://book.douban.com/top250

Fields to scrape: book title, title link, rating, number of ratings, and the one-line review.

When scraping the book titles, the XPath copied straight from the browser runs into a pitfall.

This is because browsers such as Chrome and Firefox normalize the HTML and automatically insert a tbody element under every table tag.

(Compare [View Page Source] with [Inspect] and the difference is plain to see.)

So the /tbody after table has to be removed by hand.

Original XPath (as copied from Chrome): //*[@id="content"]/div/div[1]/div/table[1]/tbody/tr/td[2]/div[1]/a

Corrected XPath (tbody removed): //*[@id="content"]/div/div[1]/div/table[1]/tr/td[2]/div[1]/a
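
This is easy to verify in code: the tbody exists only in the browser's DOM, not in the raw HTML that requests receives. A minimal check (the User-Agent header is an assumption; Douban may reject the library's default one):

import requests

headers = {'User-Agent': 'Mozilla/5.0'}  # assumption: browser-style UA to avoid being blocked
r = requests.get('https://book.douban.com/top250', headers=headers)

print('<table' in r.text)   # expected True: the tables are in the raw source
print('<tbody' in r.text)   # expected False: tbody is added by the browser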

To scrape several books, list the XPaths of a few of them and compare: only the table index changes, increasing by one per book. Using the title as an example:

//*[@id="content"]/div/div[1]/div/table[1]/tr/td[2]/div[1]/a

//*[@id="content"]/div/div[1]/div/table[2]/tr/td[2]/div[1]/a

//*[@id="content"]/div/div[1]/div/table[3]/tr/td[2]/div[1]/a

//*[@id="content"]/div/div[1]/div/table[4]/tr/td[2]/div[1]/a

Dropping the index, the XPath that matches every book title on the page is therefore:

//*[@id="content"]/div/div[1]/div/table/tr/td[2]/div[1]/a

Code for a single page:

import requests
from lxml import etree

url = 'https://book.douban.com/top250'
r = requests.get(url)
# print(r.status_code)  # uncomment to check that the response is 200

selector = etree.HTML(r.text)

book_names = selector.xpath('//*[@id="content"]/div/div[1]/div/table/tr/td[2]/div[1]/a/@title')
ratings = selector.xpath('//*[@id="content"]/div/div[1]/div/table/tr/td[2]/div[2]/span[2]/text()')
rating_nums = selector.xpath('//*[@id="content"]/div/div[1]/div/table/tr/td[2]/div[2]/span[3]/text()')
comments = selector.xpath('//*[@id="content"]/div/div[1]/div/table/tr/td[2]/p[2]/span/text()')
book_links = selector.xpath('//*[@id="content"]/div/div[1]/div/table/tr/td[2]/div[1]/a/@href')

print(book_names, ratings, rating_nums, comments, book_links)
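
A caveat on the request itself: Douban has been known to reject the requests library's default User-Agent (often with a 418 status), in which case every xpath call above returns an empty list. If print(r.status_code) does not show 200, passing a browser-style header usually helps; a sketch (the UA string is illustrative):

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}  # illustrative UA string
r = requests.get(url, headers=headers)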

To crawl all the pages, observe the pattern in the page URLs and build the list of pages:

urls = ['https://book.douban.com/top250?start={}'.format(i * 25) for i in range(10)]
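
Each page lists 25 books and the start parameter advances in steps of 25, so the comprehension expands to:

urls[0]  # 'https://book.douban.com/top250?start=0'
urls[1]  # 'https://book.douban.com/top250?start=25'
# ...
urls[9]  # 'https://book.douban.com/top250?start=225'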

The full code follows. Some books have no one-line review, so a try statement guards that lookup:

import requests
from lxml import etree
import time

urls = ['https://book.douban.com/top250?start={}'.format(i * 25) for i in range(10)]

for url in urls:
    r = requests.get(url)
    selector = etree.HTML(r.text)
    books = selector.xpath('//*[@id="content"]/div/div[1]/div/table/tr/td[2]')
    for book in books:
        book_name = book.xpath('./div[1]/a/@title')[0]
        rating = book.xpath('./div[2]/span[2]/text()')[0]
        rating_num = book.xpath('./div[2]/span[3]/text()')[0].strip('()\n ')  # trim "(", ")", newlines and spaces from both ends
        try:
            comment = book.xpath('./p[2]/span/text()')[0]
        except IndexError:  # books without a one-line review return an empty list
            comment = ""
        book_link = book.xpath('./div[1]/a/@href')[0]
        print(book_name, rating, rating_num, comment, book_link)
    time.sleep(3)  # be polite: pause between pages

Three further refinements:

simplify the XPath expressions;

replace the try statement with an if...else conditional expression;

store the data in a CSV file.

import requests
from lxml import etree
import time
import csv

f = open('F://book_top250.csv', 'w', newline='')
writer = csv.writer(f)
writer.writerow(('Title', 'Rating', 'Number of ratings', 'One-line review', 'Link'))

urls = ['https://book.douban.com/top250?start={}'.format(i * 25) for i in range(10)]

for url in urls:
    r = requests.get(url)
    selector = etree.HTML(r.text)
    books = selector.xpath('//tr[@class="item"]')  # much shorter: every book row has class="item"
    for book in books:
        book_name = book.xpath('td/div/a/@title')[0]
        rating = book.xpath('td/div/span[2]/text()')[0]
        rating_num = book.xpath('td/div/span[3]/text()')[0].strip('()\n 人评价')  # trim "(", ")", newlines, spaces and the characters of "人评价" (ratings) from both ends
        comment_t = book.xpath('td/p/span/text()')  # p[2] can be shortened to p
        comment = comment_t[0] if len(comment_t) != 0 else 'N/A'
        book_link = book.xpath('td/a/@href')[0]
        writer.writerow((book_name, rating, rating_num, comment, book_link))
    time.sleep(3)

f.close()
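
As an aside, the open()/close() pair can be replaced with a with block, which closes the file even if the crawl loop raises an exception; a sketch of the same structure:

with open('F://book_top250.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    # header row and crawl loop exactly as above
# the file is closed automatically here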

Two final refinements: factor the script into a function, and add the author, publisher, publication date and price fields.

One remaining issue: the CSV opens fine in Notepad but shows garbled text in Excel. Workaround: open the file in Notepad and re-save it as UTF-8, after which Excel displays it correctly.
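
The garbling happens because Excel assumes a local ANSI code page when a UTF-8 file carries no byte-order mark, while Notepad's re-save typically adds one. The same fix can be applied directly from Python by writing the BOM yourself:

f = open('F://book_top250.csv', 'w', newline='', encoding='utf-8-sig')  # 'utf-8-sig' prepends a BOM so Excel detects UTF-8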

import requests
from lxml import etree
import time
import csv

def get_info(url):
    r = requests.get(url)
    selector = etree.HTML(r.text)
    books = selector.xpath('//tr[@class="item"]')
    for book in books:
        book_name = book.xpath('td/div/a/@title')[0]
        rating = book.xpath('td/div/span[2]/text()')[0]
        rating_num = book.xpath('td/div/span[3]/text()')[0].strip('()\n 人评价')  # trim "(", ")", newlines, spaces and the characters of "人评价" from both ends
        comment_t = book.xpath('td/p/span/text()')  # p[2] shortened to p
        comment = comment_t[0] if len(comment_t) != 0 else 'N/A'
        book_infos = book.xpath('td/p[1]/text()')[0]  # slash-separated publication line
        author = book_infos.split('/')[0]
        publisher = book_infos.split('/')[-3]
        date = book_infos.split('/')[-2]
        price = book_infos.split('/')[-1]
        book_link = book.xpath('td/a/@href')[0]
        # writer is the module-level csv writer created in __main__ below
        writer.writerow((book_name, rating, rating_num, comment, author, publisher, date, price, book_link))

if __name__ == '__main__':
    f = open('F://book_top250.csv', 'w', newline='', encoding='utf-8')
    writer = csv.writer(f)
    writer.writerow(('Title', 'Rating', 'Number of ratings', 'One-line review', 'Author', 'Publisher', 'Publication date', 'Price', 'Link'))
    urls = ['https://book.douban.com/top250?start={}'.format(i * 25) for i in range(10)]
    for url in urls:
        get_info(url)
        time.sleep(3)
    f.close()
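
Why the negative indices when splitting book_infos: the publication line is slash-separated, and the author part may itself contain slashes (translators, co-authors), so only the last three fields are positionally stable. A hypothetical sample line (illustrative, not taken from the live page):

info = '[法] 圣埃克苏佩里 / 马振聘 / 人民文学出版社 / 2003-8 / 22.00元'
parts = info.split('/')
print(parts[0])    # author: '[法] 圣埃克苏佩里 '
print(parts[-3])   # publisher: ' 人民文学出版社 '
print(parts[-2])   # date: ' 2003-8 '
print(parts[-1])   # price: ' 22.00元'

Calling .strip() on each field would tidy the surrounding spaces.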
