Python Web Scraping for Beginners, Part 5: Getting Started with Scrapy (scraping movie data from 电影天堂)

Getting started with Scrapy

  1. Install Scrapy
  2. Create a Scrapy project
  3. Scrape movie listings from 电影天堂
  4. Write the data into MongoDB
  5. View the stored data with Robo 3T

Installing Scrapy

Install it with the pip install scrapy command.
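If the installation succeeded, running scrapy version in a terminal prints the installed version.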

Creating a Scrapy project

1. The basic Scrapy workflow

2. cd into the folder where the project should be created and run scrapy startproject <project name>, then create the spider (the commands and the generated layout are sketched after step 3).

3. Once the project has been generated, open it in PyCharm; it contains the following files:

movie.py: the spider code
entrypoint.py: entry point for running/debugging the spider from the IDE
items.py: defines the fields to collect
pipelines.py: stores the data
settings.py: project settings
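For reference, the commands and the resulting layout look roughly like this (a sketch; the exact files can vary slightly between Scrapy versions, and entrypoint.py is added by hand):

scrapy startproject Movie_Bana
cd Movie_Bana
scrapy genspider movie dy2018.com

Movie_Bana/
    scrapy.cfg
    Movie_Bana/
        __init__.py
        entrypoint.py        # added manually, runs the spider from the IDE
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            movie.py         # the spider (generated by genspider or created by hand)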

Scraping movie data from 电影天堂

1. Inspect the page source for the relevant information

For the movie categories, the URL pattern is obvious (only the trailing number changes), e.g. https://www.dy2018.com/3/

Within a category, the URL of each subsequent page is just as easy to spot, e.g. https://www.dy2018.com/6/index_2.html

On each listing page, the movie name, release date, rating, genre, director and so on are all visible in the source, so they can be located with XPath and then cleaned up during extraction. A short sketch of how these URLs are assembled follows.
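To make the URL pattern concrete, here is a minimal standalone sketch of how the category and page URLs are assembled (the category range and the 10-page limit are assumptions that simply mirror the spider code below):

base_url = 'https://www.dy2018.com/'

# category pages: only the trailing number changes, e.g. https://www.dy2018.com/3/
category_urls = [base_url + str(i) + '/' for i in range(21)]

# inside a category, page 1 is index.html and later pages are index_2.html, index_3.html, ...
def page_urls(category_url, pages=10):
    urls = [category_url + 'index.html']
    urls += [category_url + 'index_' + str(n) + '.html' for n in range(2, pages + 1)]
    return urls

print(page_urls('https://www.dy2018.com/6/')[:2])
# ['https://www.dy2018.com/6/index.html', 'https://www.dy2018.com/6/index_2.html']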

2. Source code:

Create a new entrypoint.py:

from scrapy.cmdline import execute

# Run this file from the IDE to launch the crawler
execute(['scrapy', 'crawl', 'movie'])
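Running entrypoint.py from the IDE is equivalent to running scrapy crawl movie in a terminal at the project root, which makes it convenient to set breakpoints in PyCharm.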

items.py

import scrapy

class MovieBanaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    # rating
    score = scrapy.Field()
    # genre
    type = scrapy.Field()
    # movie title
    name = scrapy.Field()
    # release date
    date = scrapy.Field()
    # director
    director = scrapy.Field()

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for Movie_Bana project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'Movie_Bana'

SPIDER_MODULES = ['Movie_Bana.spiders']
NEWSPIDER_MODULE = 'Movie_Bana.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'Movie_Bana (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Movie_Bana.middlewares.MovieBanaSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'Movie_Bana.middlewares.MovieBanaDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
# Pipeline priority: values range from 0 to 1000; the lower the number, the higher the priority (it runs earlier)
ITEM_PIPELINES = {
    'Movie_Bana.pipelines.MovieBanaPipeline': 1,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings

# With the settings below Scrapy caches responses: repeated requests are served from the local cache
# instead of hitting the site again, which speeds up local debugging and reduces load on the server
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0
HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_IGNORE_HTTP_CODES = []
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MongoDB configuration: host / port / database name / collection name
MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'movies'
MONGODB_DOCNAME = 'movie_collection'
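Before the first crawl, the MongoDB settings above can be sanity-checked with a short standalone script (a minimal sketch that assumes a local MongoDB instance is already running on the default port):

import pymongo

# same values as configured in settings.py
client = pymongo.MongoClient(host='127.0.0.1', port=27017)
collection = client['movies']['movie_collection']

client.admin.command('ping')            # raises an error if MongoDB is unreachable
print(collection.count_documents({}))   # 0 before the first crawl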

movie.py

# -*- coding: utf-8 -*-
import scrapy
import requests
from lxml import etree
from scrapy.http import Request  # Request class, used when new URLs need to be followed
from Movie_Bana.items import MovieBanaItem   # the item fields defined above


class MovieSpider(scrapy.Spider):
    name = 'movie'  # spider name
    allowed_domains = ['dy2018.com']   # only URLs under these domains are followed; anything else is ignored
    start_url = 'https://www.dy2018.com/'
    end_url='.html'

    # build the URL of every movie category
    def start_requests(self):
        for i in range(21):
            url=self.start_url+str(i)+'/'
            # follow the URL with the imported Request class; the response is passed to self.parse as the callback
            yield Request(url, self.parse)
            # yield Request schedules a new request; the given callback receives the downloaded response as its argument

    # build the URL of every listing page within a category; parse() receives the responses from the requests above
    def parse(self, response):
        '''
        # xml = etree.HTML(response.text)
        # get the category name
        types = xml.xpath('//div[@class="title_all"]/h1/font/text()')[0]
        # split on '>' and strip whitespace to extract the name
        movie_type = types.split('>')[1].strip()
        print(movie_type)
        # get the total number of pages in this category
        # max_num = xml.xpath('//div[@class="x"]/p/select//option//text()')
        # num = len(max_num)
        '''
        # there are many pages, so only 10 are used for testing
        num=10
        for i in range(1,int(num)+1):
            if i == 1:  # first page
                url = response.url + 'index' + self.end_url
                yield Request(url,self.get_data)
            else:
                url = response.url + 'index_' + str(i) + self.end_url
                yield Request(url, self.get_data)

                # yield Request(url, self.get_data, meta={'type': movie_type})
                # the meta dict is Scrapy's way of passing extra data along with a request to the next callback

    # extract and clean the data
    def get_data(self,response):
        xml = etree.HTML(response.text)
        item = MovieBanaItem()
        # name
        names= xml.xpath('//div[@class="co_content8"]/ul//table//tr[2]//td[2]/b/a[2]/text()')
        name_list=[]
        for name in names:
            name_list.append('《'+name.split('《')[1].split('》')[0]+'》')
        item["name"]=name_list
        # extra data passed via meta can be read with response.meta['type']
        # item["type"]=str(response.meta['type'])
        # type
        types=xml.xpath('//div[@class="co_content8"]/ul//table//tr[4]//td/p[2]/text()')
        type_list=[]
        for i in types:
            type_list.append((i.replace("\r\n◎类型:","").strip().split("◎")[0]).replace("\r\n","").strip())
        item["type"]=type_list
        # the returned item is later handed to the pipelines for processing
        # date
        dates=xml.xpath('//div[@class="co_content8"]/ul//table//tr[3]//td[2]/font[1]/text()')
        dates_list=[]
        for date in dates:
            dates_list.append(date.split(":")[1].strip())
        item["date"]=dates_list
        # score
        scores=xml.xpath('//div[@class="co_content8"]/ul//table//tr[3]//td[2]/font[2]/text()')
        scores_list=[]
        for score in scores:
            scores_list.append(score.split(": ")[1].strip())
        item["score"]=scores_list
        # director
        directors=xml.xpath('//div[@class="co_content8"]/ul//table//tr[4]//td/p[1]/text()')
        directors_list=[]
        for director in directors:
            directors_list.append(director.split('◎')[3].split(":")[1].replace("\r\n",""))
        item["director"]=directors_list
        return item
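Note that each field of the item holds a list with one entry per movie on the listing page (name, type, date, score and director are parallel lists), so every MongoDB document written by the pipeline corresponds to one listing page rather than one movie. If one document per movie is preferred, the lists could be zipped together before insertion.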


pipelines.py

from Movie_Bana.items import MovieBanaItem
from scrapy.utils.project import get_project_settings  # read the values from settings.py
import pymongo

class MovieBanaPipeline(object):
    def __init__(self):
        settings=get_project_settings()
        host=settings['MONGODB_HOST']
        port=settings['MONGODB_PORT']
        dbName = settings['MONGODB_DBNAME']
        # connect to MongoDB
        client = pymongo.MongoClient(host=host, port=port)
        # select (create) the database
        db = client[dbName]
        # select (create) the collection
        self.collection = db[settings['MONGODB_DOCNAME']]

    def process_item(self, item, spider):
        if isinstance(item, MovieBanaItem):
            bookInfo = dict(item)
            self.collection.insert_one(bookInfo)
            print(self.collection)
            return item  # remember to return the item so any later pipelines can keep processing it

Viewing the stored data with Robo 3T
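Robo 3T is a GUI client for browsing MongoDB. If it is not at hand, the same check can be done from Python (a minimal sketch, reusing the connection values from settings.py):

import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
collection = client['movies']['movie_collection']

# print a few of the stored documents
for doc in collection.find().limit(3):
    print(doc.get('name'), doc.get('score'))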

 

 

 

Reference: https://cuiqingcai.com/3472.html
