Python web scraper: extracting Douban Top250 movie information

The idea is simple: first fetch the raw list pages, then extract the fields we want with three different methods: regular expressions, BeautifulSoup, and XPath. For now we only scrape each film's title, director, rating, and tagline.

import re
import csv
import requests
from lxml import etree
from bs4 import BeautifulSoup
from urllib.parse import urlencode

# Base URL and pagination parameters: each page lists 25 films, offset by 'start'
root = 'https://movie.douban.com/top250'
para = {'start': 0, 'filter': ''}
# Douban rejects requests that lack a browser-like User-Agent
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)'
                         ' Chrome/92.0.4515.107 Safari/537.36 Edg/92.0.902.55'}

# Method 1: regular expressions
# Named groups capture each field; re.S lets '.' also match newlines in the HTML
writedata = []
pattern = re.compile(r'<li>.*?<div class="info">.*?<span class="title">(?P<name>.*?)</span>'
                     r'.*?<p class="">(?P<director>.*?)&nbsp'
                     r'.*?<span class="rating_num" property="v:average">(?P<score>.*?)</span>'
                     r'.*?<span class="inq">(?P<quote>.*?)</span>', re.S)
for page in range(10):
    para['start'] = page * 25  # pages advance in steps of 25 films
    url = root + '?' + urlencode(para)
    resp = requests.get(url, headers=headers)
    for m in pattern.finditer(resp.text):
        writedata.append([m.group('name'), m.group('director').strip(),
                          m.group('score'), m.group('quote')])
    resp.close()
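
To see what the named groups actually capture, the compiled pattern can be tried against a trimmed-down fragment shaped like Douban's markup (the snippet below is illustrative, not real page source):

# Illustrative fragment mimicking one entry of the list page
sample = ('<li><div class="info"><span class="title">肖申克的救赎</span>'
          '<p class="">导演: 弗兰克·德拉邦特&nbsp;&nbsp;&nbsp;主演: 蒂姆·罗宾斯</p>'
          '<span class="rating_num" property="v:average">9.7</span>'
          '<span class="inq">希望让人自由。</span></li>')
m = pattern.search(sample)
print(m.group('name'), m.group('score'))  # -> 肖申克的救赎 9.7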

# Method 2: BeautifulSoup
writedata = []
for page in range(10):
    para['start'] = page * 25
    url = root + '?' + urlencode(para)
    resp = requests.get(url, headers=headers)
    # Build the BeautifulSoup document tree
    bs = BeautifulSoup(resp.text, 'html.parser')
    # Each film sits inside a <div class="info"> block
    items = bs.find_all(name='div', attrs={'class': 'info'})
    for item in items:
        name = item.find(name='span', class_='title').text
        director = item.find(name='p', class_='').text.strip()
        score = item.find(name='span', class_='rating_num').text
        # Some films have no tagline, so guard against a missing tag
        quote_tag = item.find(name='span', class_='inq')
        quote = quote_tag.text if quote_tag else ''
        writedata.append([name, director, score, quote])
    resp.close()
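
The same lookups can also be written with CSS selectors via select() / select_one(), which is often more compact than find_all; a minimal sketch of the inner loop in that style (same bs object and fields as above):

for item in bs.select('div.info'):
    name = item.select_one('span.title').text
    # the first <p> under the info block holds the director line
    director = item.select_one('p').text.strip()
    score = item.select_one('span.rating_num').text
    quote_tag = item.select_one('span.inq')
    quote = quote_tag.text if quote_tag else ''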

# Method 3: XPath
writedata = []
for page in range(10):
    para['start'] = page * 25
    url = root + '?' + urlencode(para)
    resp = requests.get(url, headers=headers)
    tree = etree.HTML(resp.text)
    for j in range(1, 26):  # 25 <li> entries per page
        name = tree.xpath(f'//*[@id="content"]/div/div[1]/ol/li[{j}]/div/div[2]/div[1]/a/span[1]/text()')[0]
        # One XPath query cannot isolate the director cleanly, so refine it with a regex
        message = tree.xpath(f'//*[@id="content"]/div/div[1]/ol/li[{j}]/div/div[2]/div[2]/p[1]/text()[1]')[0].strip()
        director = re.search(r'导演: (?P<director>.*?) ', message, re.S).group('director')
        score = tree.xpath(f'//*[@id="content"]/div/div[1]/ol/li[{j}]/div/div[2]/div[2]/div/span[2]/text()')[0]
        quote = tree.xpath(f'//*[@id="content"]/div/div[1]/ol/li[{j}]/div/div[2]/div[2]/p[2]/span/text()')
        # Some films have no tagline, so the query may return an empty list
        quote = quote[0] if quote else ''
        writedata.append([name, director, score, quote])
    resp.close()
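
The absolute paths copied from Chrome DevTools break as soon as the page layout shifts; a more robust sketch queries the <li> nodes once and uses relative paths from each (the class names below are assumptions based on the same page structure):

for li in tree.xpath('//ol[@class="grid_view"]/li'):
    name = li.xpath('.//span[@class="title"]/text()')[0]
    score = li.xpath('.//span[@class="rating_num"]/text()')[0]
    quote = li.xpath('.//span[@class="inq"]/text()')
    quote = quote[0] if quote else ''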

# Save the results to CSV; the with-block closes the file automatically
with open('films.csv', 'w', newline='', encoding='utf-8') as f:
    csvwriter = csv.writer(f)
    csvwriter.writerow(['name', 'director', 'score', 'quote'])
    csvwriter.writerows(writedata)
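
As a quick sanity check, the file can be read back and the first few rows printed (a minimal sketch reusing the csv module already imported):

with open('films.csv', encoding='utf-8') as f:
    for row in list(csv.reader(f))[:4]:
        print(row)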

Comparing the three methods: XPath is the easiest to write, since after locating an element in Chrome you can right-click and copy its path directly, though the extracted text may still need a regex pass afterwards; regular expressions are the most laborious to write, but they run the fastest.
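
To put a rough number on that speed claim, the three parsers can be timed on a single cached page (a sketch only; the repeat count and absolute times will vary by machine):

import time

html = requests.get(root, headers=headers).text  # fetch one page once

def timed(label, fn, repeats=100):
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    print(f'{label}: {time.perf_counter() - start:.3f}s for {repeats} parses')

timed('regex', lambda: list(pattern.finditer(html)))
timed('bs4', lambda: BeautifulSoup(html, 'html.parser').find_all('div', class_='info'))
timed('xpath', lambda: etree.HTML(html).xpath('//div[@class="info"]'))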
