Packaging a Scrapy project into an .exe with PyInstaller on Windows 10

Refer to the official documentation: https://doc.scrapy.org/en/latest/topics/practices.html. Studying the docs carefully is the right approach.

1. Install pyinstaller

2. Install pywin32

3. Install any other modules your project needs
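
For reference, steps 1-3 can all be done with pip (package names as published on PyPI; swap in whatever third-party libraries your own spiders use, openpyxl being the one this project needs):

pip install pyinstaller
pip install pywin32
pip install openpyxl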

4. Set up the spider project itself, following this blog post: https://blog.csdn.net/la_vie_est_belle/article/details/79017358

  4.1 Create s_spider.py in the same directory as scrapy.cfg

  4.2 Add the following code to it

# -*- coding: utf-8 -*-
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# These imports are required: they force PyInstaller to bundle modules
# that Scrapy loads dynamically and would otherwise be missed
import urllib.robotparser  # 'import robotparser' on Python 2

import scrapy.spiderloader
import scrapy.statscollectors
import scrapy.logformatter
import scrapy.dupefilters
import scrapy.squeues

import scrapy.extensions.spiderstate
import scrapy.extensions.corestats
import scrapy.extensions.telnet
import scrapy.extensions.logstats
import scrapy.extensions.memusage
import scrapy.extensions.memdebug
import scrapy.extensions.feedexport
import scrapy.extensions.closespider
import scrapy.extensions.debug
import scrapy.extensions.httpcache
import scrapy.extensions.statsmailer
import scrapy.extensions.throttle

import scrapy.core.scheduler
import scrapy.core.engine
import scrapy.core.scraper
import scrapy.core.spidermw
import scrapy.core.downloader

import scrapy.downloadermiddlewares.stats
import scrapy.downloadermiddlewares.httpcache
import scrapy.downloadermiddlewares.cookies
import scrapy.downloadermiddlewares.useragent
import scrapy.downloadermiddlewares.httpproxy
import scrapy.downloadermiddlewares.ajaxcrawl
import scrapy.downloadermiddlewares.chunked
import scrapy.downloadermiddlewares.decompression
import scrapy.downloadermiddlewares.defaultheaders
import scrapy.downloadermiddlewares.downloadtimeout
import scrapy.downloadermiddlewares.httpauth
import scrapy.downloadermiddlewares.httpcompression
import scrapy.downloadermiddlewares.redirect
import scrapy.downloadermiddlewares.retry
import scrapy.downloadermiddlewares.robotstxt

import scrapy.spidermiddlewares.depth
import scrapy.spidermiddlewares.httperror
import scrapy.spidermiddlewares.offsite
import scrapy.spidermiddlewares.referer
import scrapy.spidermiddlewares.urllength

import scrapy.pipelines

import scrapy.core.downloader.handlers.http
import scrapy.core.downloader.contextfactory

# Modules used by this particular project
# import scrapy.pipelines.images  # uncomment if the project uses the images pipeline
import openpyxl  # this project uses the openpyxl library

process = CrawlerProcess(get_project_settings())

# 'sk' is the name of one of the spiders of the project.
process.crawl('sk')
process.start()  # the script will block here until the crawling is finished
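
Before packaging, it is worth a quick sanity check that the launcher actually starts the spider when run from the project root:

python s_spider.py

If the crawl completes here, any failure after packaging is a bundling problem rather than a Scrapy problem.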

  4.3 In the directory containing s_spider.py, shift+right-click, choose "Open command window here", and run: pyinstaller s_spider.py. This generates dist, build (can be deleted) and s_spider.spec (can be deleted).
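
As an alternative to the long import list in s_spider.py, the hidden modules can be declared in the generated .spec file instead; a minimal sketch, assuming PyInstaller's standard spec layout (only the relevant argument is shown):

# s_spider.spec (excerpt)
from PyInstaller.utils.hooks import collect_submodules

a = Analysis(
    ['s_spider.py'],
    hiddenimports=collect_submodules('scrapy'),  # bundle every scrapy submodule
    # ... remaining Analysis arguments left as generated ...
)

Rebuild with pyinstaller s_spider.spec to pick up the change.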

  4.4 In the directory containing s_spider.exe, create a folder named scrapy, then copy the VERSION and mime.types files from your installed scrapy package into that new scrapy folder.
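
If you are not sure where the installed scrapy package lives, a one-liner like this prints the directory that holds VERSION and mime.types (true at least for the Scrapy versions this write-up targets):

import os
import scrapy

# scrapy.__file__ points at scrapy/__init__.py; its parent directory
# contains the package data files VERSION and mime.types
print(os.path.dirname(scrapy.__file__))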

  4.5 Repackage if necessary, then run the .exe. Note that the executable still depends on the spider project files; it cannot run on its own.

Reposted from: https://www.cnblogs.com/songxiangyangKing/p/8810151.html
