Scrapy Framework in Detail (Part 2): A Beginner Hands-On Project, Scraping Douban Movies

Target data requirements:

  1. Information for the 250 movies in Douban's Top 250
  2. Each movie record contains: title, director info (may include cast info), and rating
  3. Save the movie data directly to a local file
  4. Save the movie data through an item pipeline

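The four files shown below are assumed to live in a standard Scrapy project skeleton (project name db, spider name db250, matching the settings file further down); the spider file name db250.py is only an assumption, being the default that scrapy genspider would produce:

# Assumed project layout (file names other than settings.py are an assumption):
# db/
# ├── scrapy.cfg
# └── db/
#     ├── items.py
#     ├── pipelines.py
#     ├── settings.py
#     └── spiders/
#         └── db250.py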
Spider file

# -*- coding: utf-8 -*-
import json

import scrapy

from ..items import DbItem  # DbItem works like a dict restricted to its declared fields


class Db250Spider(scrapy.Spider):  # inherits from the base Spider class
    name = 'db250'  # spider name: required and must be unique
    # allowed_domains = ['movie.douban.com']  # allowed domains: optional; if omitted, any domain is allowed
    start_urls = ['https://movie.douban.com/top250']  # initial URLs: required
    page_num = 0

    def parse(self, response):  # parse callback: handles the response data
        node_list = response.xpath('//div[@class="info"]')
        # open in append mode so later pages do not overwrite earlier ones
        # (delete film.txt before re-running to start fresh)
        with open("film.txt", "a", encoding="utf-8") as f:
            for node in node_list:
                # movie title; extract() converts the selectors to plain strings
                film_name = node.xpath("./div/a/span/text()").extract()[0]
                # director info
                director_name = node.xpath("./div/p/text()").extract()[0].strip()
                # rating
                score = node.xpath('./div/div/span[@property="v:average"]/text()').extract()[0]

                # storage without the pipeline: write JSON lines directly
                item = {}
                item["film_name"] = film_name
                item["director_name"] = director_name
                item["score"] = score
                content = json.dumps(item, ensure_ascii=False)
                f.write(content + "\n")

                # storage via the pipeline: yield a DbItem (used like a dict)
                item_pipe = DbItem()
                item_pipe['film_name'] = film_name
                item_pipe['director_name'] = director_name
                item_pipe['score'] = score
                yield item_pipe

        # build and send the request for the next page
        self.page_num += 1
        if self.page_num == 3:  # stop after 3 pages for this demo (use 10 to fetch all 250 movies)
            return
        page_url = "https://movie.douban.com/top250?start={}&filter=".format(self.page_num * 25)
        yield scrapy.Request(page_url)
        
# Page URL pattern:
"https://movie.douban.com/top250?start=25&filter="
"https://movie.douban.com/top250?start=50&filter="
"https://movie.douban.com/top250?start=75&filter="

Items file

import scrapy

class DbItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    film_name = scrapy.Field()
    director_name = scrapy.Field()
    score = scrapy.Field()
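For reference, a scrapy.Item subclass such as DbItem behaves like a dictionary that only accepts the declared fields; a minimal sketch, assuming the project package is named db as in the settings file below:

from db.items import DbItem  # assumes the project package is "db"

item = DbItem()
item["film_name"] = "肖申克的救赎"
item["score"] = "9.7"
print(dict(item))  # {'film_name': '肖申克的救赎', 'score': '9.7'}

# Keys that were not declared with scrapy.Field() are rejected:
# item["year"] = "1994"  # would raise a KeyError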

Pipelines file

import json


class DbPipeline(object):

    def open_spider(self, spider):
        # called once when the spider is opened
        self.f = open("film_pipe.txt", "w", encoding="utf-8")

    def process_item(self, item, spider):
        json_data = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.f.write(json_data)
        return item

    def close_spider(self, spider):
        # called once when the spider is closed
        self.f.close()  # close the output file
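As a side note, recent Scrapy versions (2.1 and later) can write JSON lines without a hand-written pipeline through the FEEDS setting; a minimal sketch, assuming an output file named films.jsonl, would go into settings.py:

# Alternative to DbPipeline: let Scrapy's built-in feed exporter write JSON lines.
FEEDS = {
    "films.jsonl": {
        "format": "jsonlines",  # one JSON object per line
        "encoding": "utf8",
    },
}

The explicit pipeline is kept in this tutorial because it demonstrates the open_spider / process_item / close_spider lifecycle.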

Settings file

# -*- coding: utf-8 -*-

# Scrapy settings for db project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'db'

SPIDER_MODULES = ['db.spiders']
NEWSPIDER_MODULE = 'db.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'db (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
  'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36',
}



# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'db.middlewares.DbSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'db.middlewares.DbDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'db.pipelines.DbPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Project notes

  1. In the settings file, the project defaults to ROBOTSTXT_OBEY = True, i.e. it obeys the robots.txt protocol and therefore cannot scrape the data; change it to ROBOTSTXT_OBEY = False.
  2. In settings, some sites only return data when a User-Agent header is added (so the request looks like a normal browser client).
  3. In settings, the item pipeline must be enabled (ITEM_PIPELINES) for data to be passed on to the pipelines file.
  4. In items, the corresponding fields must be declared before an Item object can carry data (much like MySQL, where columns must be defined before rows can be inserted).
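With the four files in place, the spider is normally started from the project root with "scrapy crawl db250". It can also be launched from a small Python script; a minimal sketch, assuming it is saved as a hypothetical run.py next to scrapy.cfg:

# run.py (hypothetical helper script placed next to scrapy.cfg)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # loads the project's settings.py
process.crawl("db250")                            # spider name defined in the spider file
process.start()                                   # blocks until the crawl finishes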

That wraps up today's short exercise. I have also prepared a set of beginner-level study materials covering:

  • 1. Crawler basics (including the crawler workflow and the HTTP workflow)
  • 2. Reverse engineering
  • 3. Reverse-engineering algorithms
  • 4. Asynchronous crawlers
  • 5. Android reverse engineering

The materials are still being updated and are currently all free; if you need them, feel free to take them by adding my assistant and mentioning "CSDN小帅".