Scraping the Douban Movies Top 250 with Spyder, saving to a CSV file
Without further ado, here's the code:
# -*- coding: utf-8 -*-
"""
Created on Fri May 1 16:59:13 2020
@author: ASUS
"""
import requests
from lxml import etree
import csv
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/81.0.4044.122 Safari/537.36'
}
# Open in 'w' mode so rerunning the script does not append a duplicate
# header and a second copy of the data
movie_file = open('douban_top250.csv', mode='w', newline='', encoding='utf8')
writer = csv.writer(movie_file)
# Write the CSV header row
writer.writerow(['rank', 'name', 'link', 'score', 'comment'])
urls = ['https://movie.douban.com/top250?start={}&filter='.format(i * 25) for i in range(10)]
i = 0
for url in urls:
    res = requests.get(url, headers=headers)
    html = etree.HTML(res.text)
    divs = html.xpath('//div[@class="item"]')
    for div in divs:
        try:
            # Rank
            rank = div.xpath('.//em/text()')[0]
            # Movie title
            name = div.xpath('.//a/span[1]/text()')[0]
            # Detail-page link
            link = div.xpath('.//div[@class="hd"]/a/@href')[0]
            # Rating
            score = div.xpath('.//div[@class="star"]//span[2]//text()')[0]
            # One-line quote (some entries have none)
            comments = div.xpath('.//p[@class="quote"]/span/text()')
            comment = comments[0] if comments else 'N/A'
            # Write one row to the CSV file
            writer.writerow([rank, name, link, score, comment])
        except IndexError:
            # Skip entries missing an expected field
            continue
    i += 1
    print('Page {} crawled'.format(i))
# Close the file
movie_file.close()
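As a quick sanity check, the csv round-trip the script relies on can be tested on a couple of sample rows. The file path and row values below are purely illustrative, not real scraped data:

```python
import csv
import os
import tempfile

# Write two rows the same way the scraper does (newline='' prevents
# blank lines between rows on Windows), then read them back.
path = os.path.join(tempfile.gettempdir(), 'douban_check.csv')
with open(path, 'w', newline='', encoding='utf8') as f:
    writer = csv.writer(f)
    writer.writerow(['rank', 'name', 'link', 'score', 'comment'])
    writer.writerow(['1', 'The Shawshank Redemption',
                     'https://example.com/movie/1', '9.7', 'Hope sets you free.'])

with open(path, newline='', encoding='utf8') as f:
    rows = list(csv.reader(f))

print(rows[1][1])  # → The Shawshank Redemption
```

If the quote text contains commas, the csv module quotes the field automatically, which is why writing rows by hand with `','.join(...)` is best avoided.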
The code still has room for improvement, for example splitting it into functions to make it more readable.
There are still many shortcomings; corrections and technical discussion are all welcome.
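The modularization mentioned above could look roughly like this. It is only a sketch: the function names (`fetch_page`, `parse_page`, `save_rows`) are my own choices, while the XPath expressions are the same ones used in the script:

```python
import csv
import requests
from lxml import etree

HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/81.0.4044.122 Safari/537.36'
}

def fetch_page(url):
    """Download one listing page and return its HTML text."""
    res = requests.get(url, headers=HEADERS, timeout=10)
    res.raise_for_status()
    return res.text

def parse_page(html_text):
    """Extract [rank, name, link, score, comment] rows from one page."""
    rows = []
    for div in etree.HTML(html_text).xpath('//div[@class="item"]'):
        try:
            rank = div.xpath('.//em/text()')[0]
            name = div.xpath('.//a/span[1]/text()')[0]
            link = div.xpath('.//div[@class="hd"]/a/@href')[0]
            score = div.xpath('.//div[@class="star"]//span[2]//text()')[0]
            quotes = div.xpath('.//p[@class="quote"]/span/text()')
            rows.append([rank, name, link, score, quotes[0] if quotes else 'N/A'])
        except IndexError:
            continue  # skip entries missing an expected field
    return rows

def save_rows(rows, path='douban_top250.csv'):
    """Write the header plus all collected rows in one go."""
    with open(path, 'w', newline='', encoding='utf8') as f:
        writer = csv.writer(f)
        writer.writerow(['rank', 'name', 'link', 'score', 'comment'])
        writer.writerows(rows)

# Crawl usage (requires network access):
#   rows = []
#   for i in range(10):
#       url = 'https://movie.douban.com/top250?start={}&filter='.format(i * 25)
#       rows.extend(parse_page(fetch_page(url)))
#   save_rows(rows)
```

Keeping parsing separate from downloading also makes the parser testable against a saved HTML snippet, without touching the network.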