These are my notes on a crawler for the Douban Top 250 movie chart.
Source code
import re
import requests
import csv

row_url = "https://movie.douban.com/top250"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
}
# Pre-compile the pattern; re.S lets '.' also match newlines,
# so one pattern can span a whole multi-line <li> block.
target = re.compile(r'<li>.*?<span class="title">(?P<name>.*?)</span>.*?'
                    r'<br>\s*(?P<year>[0-9]+)'
                    r'.*?<span class="rating_num" property="v:average">(?P<score>.*?)</span>'
                    r'.*?<span class="inq">(?P<quote>.*?)</span>'
                    r'.*?</li>', re.S)

# newline='' so csv.writer controls the line endings;
# utf-8 so the Chinese fields are written safely on any platform.
f = open("data.csv", mode="w", newline='', encoding="utf-8")
csvwriter = csv.writer(f)
title = ["电影名", "年份", "评分", "quote"]
csvwriter.writerow(title)

for i in range(10):  # 10 pages, 25 movies per page
    url = row_url + "?start=%d" % (i * 25)
    print(url)
    resp = requests.get(url, headers=headers)
    data = resp.text
    result = target.finditer(data)
    for it in result:
        # groupdict() preserves the order of the named groups
        csvwriter.writerow(it.groupdict().values())
    resp.close()
f.close()
1. Import the required Python libraries
import re
import requests
import csv
2. Preparation
Set up the Douban URL, the "User-Agent" GET request header (to get past anti-crawler checks), the pre-compiled regular expression, and the CSV file.
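One detail in the CSV setup is worth noting: csv.writer terminates each row with \r\n by default, which is why the file should be opened with newline='' — otherwise Python's text-mode newline translation would turn that into \r\r\n on Windows and the file would show blank lines between rows. A minimal sketch using an in-memory buffer:

```python
import csv
import io

# csv.writer ends each row with \r\n by default; opening the real file
# with newline='' prevents the OS from translating that again.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["电影名", "年份", "评分", "quote"])
writer.writerow(["肖申克的救赎", "1994", "9.7", "希望让人自由。"])
print(repr(buf.getvalue()))
```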
Viewing the page source shows that the data at "https://movie.douban.com/top250" is rendered directly into the HTML by the server:
<li>
<div class="item">
<div class="pic">
<em class="">1</em>
<a href="https://movie.douban.com/subject/1292052/">
<img width="100" alt="肖申克的救赎" src="https://img2.doubanio.com/view/photo/s_ratio_poster/public/p480747492.webp" class="">
</a>
</div>
<div class="info">
<div class="hd">
<a href="https://movie.douban.com/subject/1292052/" class="">
<span class="title">肖申克的救赎</span>
<span class="title"> / The Shawshank Redemption</span>
<span class="other"> / 月黑高飞(港) / 刺激1995(台)</span>
</a>
<span class="playable">[可播放]</span>
</div>
<div class="bd">
<p class="">
导演: 弗兰克·德拉邦特 Frank Darabont 主演: 蒂姆·罗宾斯 Tim Robbins /...<br>
1994 / 美国 / 犯罪 剧情
</p>
<div class="star">
<span class="rating5-t"></span>
<span class="rating_num" property="v:average">9.7</span>
<span property="v:best" content="10.0"></span>
<span>2804339人评价</span>
</div>
<p class="quote">
<span class="inq">希望让人自由。</span>
</p>
</div>
</div>
</div>
</li>
Locate the block for the No. 1 movie, The Shawshank Redemption, and write the regular expression around that structure and the fields to scrape (re.S makes . match newlines as well). Here I extract the movie name, year, score, and quote:
row_url = "https://movie.douban.com/top250"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
}
# Pre-compile the pattern; re.S lets '.' also match newlines.
target = re.compile(r'<li>.*?<span class="title">(?P<name>.*?)</span>.*?'
                    r'<br>\s*(?P<year>[0-9]+)'
                    r'.*?<span class="rating_num" property="v:average">(?P<score>.*?)</span>'
                    r'.*?<span class="inq">(?P<quote>.*?)</span>'
                    r'.*?</li>', re.S)
# newline='' so csv.writer controls line endings; utf-8 for the Chinese fields.
f = open("data.csv", mode="w", newline='', encoding="utf-8")
csvwriter = csv.writer(f)
title = ["电影名", "年份", "评分", "quote"]
csvwriter.writerow(title)
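The regex approach can be checked offline against a trimmed copy of the page source above (the pattern is reproduced here so the example runs on its own; the year group is anchored to the digits after the <br> so stray indentation whitespace is not captured):

```python
import re

# Trimmed version of the <li> block from the page source above.
snippet = '''<li>
<div class="info">
<span class="title">肖申克的救赎</span>
<span class="title"> / The Shawshank Redemption</span>
<p class="">
导演: 弗兰克·德拉邦特 Frank Darabont 主演: 蒂姆·罗宾斯 Tim Robbins /...<br>
1994 / 美国 / 犯罪 剧情
</p>
<span class="rating_num" property="v:average">9.7</span>
<span class="inq">希望让人自由。</span>
</li>'''

# re.S lets '.' match newlines, so one pattern spans the whole <li> block.
target = re.compile(r'<li>.*?<span class="title">(?P<name>.*?)</span>.*?'
                    r'<br>\s*(?P<year>[0-9]+)'
                    r'.*?<span class="rating_num" property="v:average">(?P<score>.*?)</span>'
                    r'.*?<span class="inq">(?P<quote>.*?)</span>'
                    r'.*?</li>', re.S)

rows = [it.groupdict() for it in target.finditer(snippet)]
print(rows)
```

Because the title group is lazy, only the first span class="title" (the Chinese title) is captured; the English alias in the second title span is skipped over by the next .*?.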
3. Paginate through the chart and write the scraped results to the CSV file
for i in range(10):  # 10 pages, 25 movies per page
    url = row_url + "?start=%d" % (i * 25)
    print(url)
    resp = requests.get(url, headers=headers)
    data = resp.text
    result = target.finditer(data)
    for it in result:
        # groupdict() preserves the order of the named groups
        csvwriter.writerow(it.groupdict().values())
    resp.close()
f.close()
Check the final result:
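The finished file can be read back with the same csv module (a minimal sketch; the two sample rows written here stand in for the crawler's real output, and the file is assumed to be UTF-8 as written above):

```python
import csv

# Sample rows standing in for the crawler's output (assumed values).
with open("data.csv", mode="w", newline='', encoding="utf-8") as f:
    w = csv.writer(f)
    w.writerow(["电影名", "年份", "评分", "quote"])
    w.writerow(["肖申克的救赎", "1994", "9.7", "希望让人自由。"])

# Read the file back the same way the crawler wrote it.
with open("data.csv", newline='', encoding="utf-8") as f:
    rows = list(csv.reader(f))

for row in rows:
    print(row)
```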