Python + Scrapy Crawler Framework: Storing Data with Pipelines

In the previous two sections we crawled the 360 Images site, but we still need to download and store the pictures. How should that be done?

Three cases are covered below: 1. download the images and store them in a MongoDB database; 2. download the images and store them in a MySQL database; 3. download the images to a local folder.

Without further ado, straight to the code:

1. Define the storage fields in the item

# items.py
import scrapy


class Bole_mode(scrapy.Item):
    collection = "images"      # collection: name of the MongoDB collection
    table = "images"           # table: name of the MySQL table
    id    = scrapy.Field()     # id
    url   = scrapy.Field()     # image URL
    title = scrapy.Field()     # title
    thumb = scrapy.Field()     # thumbnail URL
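Both database pipelines in step 4 call dict(item), so it helps to see that a scrapy.Item behaves like a dictionary restricted to its declared fields, while collection and table stay available as plain class attributes. A minimal sketch, assuming it is run from the project root so the bole package is importable; the values are placeholders, not real crawl data:

# item_demo.py -- quick check of how Bole_mode behaves as a dict-like container
from bole.items import Bole_mode

item = Bole_mode()
item["id"] = "1001"                               # placeholder values
item["url"] = "http://example.com/demo.jpg"
item["title"] = "demo"
item["thumb"] = "http://example.com/demo_thumb.jpg"

print(dict(item))        # {'id': '1001', 'url': ..., 'title': ..., 'thumb': ...}
print(item.collection)   # "images" -> MongoDB collection used by MongoPipeline
print(item.table)        # "images" -> MySQL table used by MysqlPipeline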

2. Configure the settings file with the database information

# -*- coding: utf-8 -*-

# Scrapy settings for bole project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'bole'

SPIDER_MODULES = ['bole.spiders']
NEWSPIDER_MODULE = 'bole.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'bole (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'bole.middlewares.BoleSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': None,
#    'bole.middlewares.ProxyMiddleware': 125,
#    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': None
# }

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "bole.pipelines.ImagePipeline": 300,
    "bole.pipelines.MongoPipeline": 301,
    "bole.pipelines.MysqlPipeline": 302,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# Maximum number of pages to crawl
MAX_PAGE = 50

# MongoDB configuration
MONGODB_URL = "localhost"
MONGODB_DB = "Images360"

# MySQL configuration
MYSQL_HOST = "localhost"
MYSQL_DATABASE = "images360"
MYSQL_PORT = 3306
MYSQL_USER = "root"
MYSQL_PASSWORD = "123456"

# Local storage configuration
IMAGES_STORE = r"D:\spider\bole\image"

3. No changes are made to the middlewares here

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals


class BoleSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class BoleDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

4. Store the crawled data with pipelines: MongoDB storage, MySQL storage, and local folder storage

# -*- coding: utf-8 -*-

# ========================== MongoDB ===========================
import pymongo


class MongoPipeline(object):
    def __init__(self, mongodb_url, mongodb_DB):
        self.mongodb_url = mongodb_url
        self.mongodb_DB = mongodb_DB

    @classmethod
    # Read MONGODB_URL and MONGODB_DB from the settings file
    def from_crawler(cls, crawler):
        return cls(
            mongodb_url=crawler.settings.get("MONGODB_URL"),
            mongodb_DB=crawler.settings.get("MONGODB_DB")
        )

    # Connect to MongoDB when the spider opens
    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongodb_url)
        self.db = self.client[self.mongodb_DB]

    def process_item(self, item, spider):
        table_name = item.collection
        self.db[table_name].insert_one(dict(item))   # insert_one replaces the deprecated insert()
        return item

    # Close the MongoDB connection when the spider closes
    def close_spider(self, spider):
        self.client.close()


# ============================ MySQL ===========================
import pymysql


class MysqlPipeline():
    def __init__(self, host, database, port, user, password):
        self.host = host
        self.database = database
        self.port = port
        self.user = user
        self.password = password

    @classmethod
    # Read the MySQL parameters from the settings file
    def from_crawler(cls, crawler):
        return cls(
            host=crawler.settings.get("MYSQL_HOST"),
            database=crawler.settings.get("MYSQL_DATABASE"),
            port=crawler.settings.get("MYSQL_PORT"),
            user=crawler.settings.get("MYSQL_USER"),
            password=crawler.settings.get("MYSQL_PASSWORD")
        )

    # Connect to MySQL when the spider opens
    def open_spider(self, spider):
        self.db = pymysql.connect(host=self.host, database=self.database, user=self.user,
                                  password=self.password, port=self.port, charset="utf8")
        self.cursor = self.db.cursor()

    def process_item(self, item, spider):
        data = dict(item)
        keys = ",".join(data.keys())             # column names
        values = ",".join(["%s"] * len(data))    # value placeholders
        sql = "insert into %s(%s) values(%s)" % (item.table, keys, values)
        self.cursor.execute(sql, tuple(data.values()))
        self.db.commit()
        return item

    # Close the MySQL connection when the spider closes
    def close_spider(self, spider):
        self.db.close()


# ============================ Local files ===========================
import scrapy
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline


class ImagePipeline(ImagesPipeline):
    # The url field of the item is a single URL rather than a list,
    # so the following methods are overridden.
    def file_path(self, request, response=None, info=None):
        url = request.url
        file_name = url.split("/")[-1]    # use the last part of the URL as the file name
        return file_name

    # results holds the download results for the item's images; it is a list of
    # (success, info) tuples covering both successful and failed downloads.
    def item_completed(self, results, item, info):
        # collect the paths of the successfully downloaded images
        image_paths = [x["path"] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Image download failed!")
        return item

    def get_media_requests(self, item, info):
        # take the url field of the item and queue it for downloading
        yield scrapy.Request(item["url"])
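MysqlPipeline builds its INSERT statement dynamically from the item's keys, so it keeps working if more fields are added to Bole_mode. A small standalone sketch of that string construction, using a hypothetical data dict in place of a real item:

# sql_demo.py -- how the dynamic INSERT in MysqlPipeline.process_item is assembled
data = {"id": "1001", "url": "http://example.com/a.jpg",
        "title": "demo", "thumb": "http://example.com/a_thumb.jpg"}   # hypothetical item data

keys = ",".join(data.keys())             # "id,url,title,thumb"
values = ",".join(["%s"] * len(data))    # "%s,%s,%s,%s"
sql = "insert into %s(%s) values(%s)" % ("images", keys, values)

print(sql)                    # insert into images(id,url,title,thumb) values(%s,%s,%s,%s)
print(tuple(data.values()))   # the parameters passed to cursor.execute, escaped by pymysql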

5. Finally, the spider that does the actual crawling

import scrapy
import json
import sys
sys.path.append(r'D:\spider')    # add the project's parent directory so the bole package can be imported
from bole.items import Bole_mode


class BoleSpider(scrapy.Spider):
    name = 'boleSpider'

    def start_requests(self):
        url = "https://image.so.com/zj?ch=photography&sn={}&listtype=new&temp=1"
        page = self.settings.get("MAX_PAGE")
        for i in range(int(page) + 1):
            yield scrapy.Request(url=url.format(i * 30))

    def parse(self, response):
        photo_list = json.loads(response.text)
        for image in photo_list.get("list"):
            item = Bole_mode()    # create a fresh item per image so queued items are not overwritten
            item["id"] = image["id"]
            item["url"] = image["qhimg_url"]
            item["title"] = image["group_title"]
            item["thumb"] = image["qhimg_thumb_url"]
            yield item
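With the item, settings, pipelines and spider in place, the crawl can be started with scrapy crawl boleSpider from the project directory, or from a small Python launcher. A sketch of the latter; the run.py file name and its location next to scrapy.cfg are assumptions:

# run.py -- place next to scrapy.cfg and start with: python run.py
from scrapy.cmdline import execute

# Equivalent to typing "scrapy crawl boleSpider" in the project directory
execute(["scrapy", "crawl", "boleSpider"])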

6. And last of all, a look at the crawl results (only MySQL and local storage are shown; MongoDB was not started)

(1) MySQL storage

[Screenshot: rows stored in the MySQL images table]

(2) Local storage

[Screenshot: image files downloaded into the IMAGES_STORE folder]