Movie Ranking Chart (requests + bs4 & scrapy)

I. requests + bs4


Note: in the PyCharm terminal, run pip install bs4 to install the package before importing the module.

import requests, time, csv
from bs4 import BeautifulSoup

# Browser-style User-Agent so Douban serves the normal pages
header = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) '
                        'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 '
                        'Safari/537.36 Core/1.77.119.400 QQBrowser/10.9.4817.400'}

# Ten list pages, 25 movies per page: start=0, 25, ..., 225
urls = ['https://movie.douban.com/top250?start={}&filter='.format(i * 25) for i in range(10)]

movie_directory = []

for url in urls:
    res = requests.get(url, headers=header)
    soup = BeautifulSoup(res.text, 'html.parser')
    # Each movie's title block sits in a <div class="hd">
    items = soup.find_all('div', class_='hd')
    for i in items:
        tag = i.find('a')
        name = tag.find(class_='title').text
        link = tag['href']
        print(name, link)
        movie_directory.append([name, link])

    time.sleep(1.5)  # pause between pages to stay polite

print('Scraping finished')

with open('douban_top250.csv', 'w', newline='', encoding='utf-8') as wb:
    csv_writer = csv.writer(wb)
    csv_writer.writerow(['Title', 'URL'])
    for row in movie_directory:
        csv_writer.writerow(row)
print('CSV written')
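
As a quick sanity check, the file can be read back with the same csv module; a minimal sketch, assuming the CSV above was written as douban_top250.csv:

import csv

with open('douban_top250.csv', encoding='utf-8') as f:
    reader = csv.reader(f)
    print(next(reader))          # header row: ['Title', 'URL']
    for row in list(reader)[:5]:
        print(row)               # first five movies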

II. scrapy


Note: in the PyCharm terminal, run pip install scrapy to install the package before importing the module.

           import scrapy

Create the project with scrapy startproject <project-name>:

           scrapy startproject scrapypython

Change into the project directory, then create the spider file with scrapy genspider <spider-name> <domain>:

           cd scrapypython
           scrapy genspider douban movie.douban.com

Run the crawl with scrapy crawl <spider-name>:

           scrapy crawl douban
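
After the first two commands, the generated project has roughly the layout below; the three files edited in the rest of this post live inside it:

scrapypython/
├── scrapy.cfg              # deploy configuration
└── scrapypython/
    ├── __init__.py
    ├── items.py            # scraped fields (section 2)
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py         # crawl settings (section 3)
    └── spiders/
        ├── __init__.py
        └── douban.py       # the spider (section 1)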


1. douban.py (the spider file)

import scrapy
from scrapypython.items import Movie

# All ten list pages of the Top 250, 25 movies per page
LIST_URL = ['https://movie.douban.com/top250?start={}&filter='.format(num * 25) for num in range(10)]


class DoubanSpider(scrapy.Spider):
    name = 'douban'
    allowed_domains = ['movie.douban.com']
    start_urls = ['https://movie.douban.com/top250']

    def parse_movie(self, response):
        # Each movie on a list page sits in a <div class="item">
        for item in response.css('div.item'):
            movie = Movie()
            movie['rank'] = item.css('div.pic em::text').get()
            movie['name'] = item.css('div.info>div.hd>a span.title::text').get()
            movie['link'] = item.css('div.hd>a::attr(href)').get()
            movie['score'] = item.css('div.star>span.rating_num::text').get()
            movie['quote'] = item.css('div.bd>p.quote span.inq::text').get()
            yield movie

    def parse(self, response):
        # The first response only fans out requests for the ten list pages
        for url in LIST_URL:
            yield scrapy.Request(url, self.parse_movie)
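
A slightly more compact variant (a sketch, not part of the original post) puts all ten list pages straight into start_urls and extracts in parse itself; it also avoids fetching the first page twice (once as the start URL and once as start=0):

import scrapy
from scrapypython.items import Movie


class DoubanSpider(scrapy.Spider):
    name = 'douban'
    allowed_domains = ['movie.douban.com']
    # All ten list pages, 25 movies per page
    start_urls = ['https://movie.douban.com/top250?start={}&filter='.format(num * 25)
                  for num in range(10)]

    def parse(self, response):
        # Same extraction as parse_movie above
        for item in response.css('div.item'):
            movie = Movie()
            movie['rank'] = item.css('div.pic em::text').get()
            movie['name'] = item.css('div.info>div.hd>a span.title::text').get()
            movie['link'] = item.css('div.hd>a::attr(href)').get()
            movie['score'] = item.css('div.star>span.rating_num::text').get()
            movie['quote'] = item.css('div.bd>p.quote span.inq::text').get()
            yield movie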

2. items.py (the scraped fields)

import scrapy

class Movie(scrapy.Item):
    rank = scrapy.Field()   # rank
    name = scrapy.Field()   # title
    link = scrapy.Field()   # detail-page link
    score = scrapy.Field()  # rating
    quote = scrapy.Field()  # one-line quote
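
A scrapy.Item behaves like a dict restricted to its declared fields; a quick illustration with hypothetical values:

movie = Movie(rank='1', name='The Shawshank Redemption')
movie['score'] = '9.7'    # item assignment works like a dict
print(movie['name'])      # The Shawshank Redemption
print(dict(movie))        # plain-dict view of the populated fields
# movie['year'] = '1994' would raise KeyError: 'year' is not a declared Field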

3. settings.py (crawl settings)

BOT_NAME = 'scrapypython'

SPIDER_MODULES = ['scrapypython.spiders']
NEWSPIDER_MODULE = 'scrapypython.spiders'

ROBOTSTXT_OBEY = True

# Export every scraped item to a CSV file
FEED_URI = 'douban.csv'
FEED_FORMAT = 'csv'
FEED_EXPORT_ENCODING = 'utf-8'

# Browser-style headers so Douban serves the normal pages
DEFAULT_REQUEST_HEADERS = {
    'authority': 'movie.douban.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 '
                  'Safari/537.36 Core/1.94.169.400 QQBrowser/11.0.5130.400'
}

# Let AutoThrottle pace request concurrency automatically
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_TARGET_CONCURRENCY = 10
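
Note: on Scrapy 2.1 and later, FEED_URI and FEED_FORMAT are deprecated in favor of the single FEEDS setting; the equivalent configuration would be:

FEEDS = {
    'douban.csv': {
        'format': 'csv',
        'encoding': 'utf-8',
    },
}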

 
