Crawling VIP.com with Scrapy + Selenium + PhantomJS: saving results as JSON, writing to MySQL, and downloading images (Part 1)

Tags: python

Create a new project from the terminal: scrapy startproject myvipspider

Change into the myvipspider project directory and generate a spider: scrapy genspider weipin "vip.com"
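
The genspider command creates a spider skeleton roughly like the one below; the actual crawling logic goes into this file (weipin.py, covered in Part 2):

# -*- coding: utf-8 -*-
import scrapy


class WeipinSpider(scrapy.Spider):
    name = 'weipin'
    allowed_domains = ['vip.com']
    start_urls = ['http://vip.com/']

    def parse(self, response):
        pass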

The project now contains the following files. (Note: the settings shown below come from a project module named weipinhui, so substitute your own project name in the module paths if yours differs.)
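
For reference, scrapy startproject generates the standard layout below (weipin.py appears under spiders/ after the genspider command):

myvipspider/
├── scrapy.cfg
└── myvipspider/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── weipin.py
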
settings.py configuration:

# -*- coding: utf-8 -*-

# Scrapy settings for weipinhui project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'weipinhui'

SPIDER_MODULES = ['weipinhui.spiders']
NEWSPIDER_MODULE = 'weipinhui.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'weipinhui.middlewares.WeipinhuiSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   'weipinhui.middlewares.WeipinhuiDownloaderMiddleware': 543,
   'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'weipinhui.pipelines.WeipinhuiPipeline': 300,
   'weipinhui.pipelines.MysqlPipeline': 299,
}

DB_HOST = "127.0.0.1"
DB_PORT = 3306
DB_USER = "root"
DB_PWD = 'root'
DB_NAME = 'weipin'
DB_CHARSET = "utf8"




# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
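
The DOWNLOADER_MIDDLEWARES setting above enables a custom weipinhui.middlewares.WeipinhuiDownloaderMiddleware; this is where Selenium and PhantomJS come in, and its code is not listed in this part. Below is a minimal sketch, assuming the middleware renders each request with PhantomJS (which must be on the PATH) and hands the rendered HTML back to Scrapy:

# middlewares.py -- a minimal sketch, not the original code.
from scrapy.http import HtmlResponse
from selenium import webdriver


class WeipinhuiDownloaderMiddleware(object):

    def process_request(self, request, spider):
        # Assumption: render every request with PhantomJS; a real
        # implementation might only render selected pages.
        driver = webdriver.PhantomJS()
        try:
            driver.get(request.url)
            body = driver.page_source
        finally:
            driver.quit()
        # Returning a response from process_request short-circuits
        # Scrapy's default downloader, so the spider parses the
        # JavaScript-rendered source instead of the raw HTML.
        return HtmlResponse(url=request.url, body=body,
                            encoding='utf-8', request=request)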


The items.py file is as follows:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class WeipinhuiItem(scrapy.Item):
    # define the fields for your item here like:
    brand = scrapy.Field()      # brand
    title = scrapy.Field()      # product title
    old_price = scrapy.Field()  # original price
    new_price = scrapy.Field()  # current price
    discount = scrapy.Field()   # discount
    img_url = scrapy.Field()    # image URL
    url = scrapy.Field()        # product link

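ITEM_PIPELINES enables two pipelines whose code is not listed in this part: WeipinhuiPipeline (saving items as JSON, per the title) and MysqlPipeline (writing to MySQL via the DB_* settings). Here is a minimal sketch of what pipelines.py might look like, assuming pymysql is installed and using a hypothetical goods table with one column per item field:

# pipelines.py -- a minimal sketch, not the original code.
import json

import pymysql


class WeipinhuiPipeline(object):
    """Append each scraped item to a JSON-lines file."""

    def open_spider(self, spider):
        self.file = open('weipin.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.file.close()


class MysqlPipeline(object):
    """Insert each scraped item into MySQL using the DB_* settings."""

    def open_spider(self, spider):
        s = spider.settings
        self.conn = pymysql.connect(host=s.get('DB_HOST'),
                                    port=s.getint('DB_PORT'),
                                    user=s.get('DB_USER'),
                                    password=s.get('DB_PWD'),
                                    db=s.get('DB_NAME'),
                                    charset=s.get('DB_CHARSET'))
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # Hypothetical "goods" table; create it to match the item fields.
        sql = ('INSERT INTO goods (brand, title, old_price, new_price, '
               'discount, img_url, url) VALUES (%s, %s, %s, %s, %s, %s, %s)')
        self.cursor.execute(sql, (item.get('brand'), item.get('title'),
                                  item.get('old_price'), item.get('new_price'),
                                  item.get('discount'), item.get('img_url'),
                                  item.get('url')))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()

Because MysqlPipeline is registered with the lower priority value (299), each item passes through it before WeipinhuiPipeline (300).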

Continued in Part 2, which walks through the weipin.py spider code.
