Installing Python 3 and the Scrapy Crawler on CentOS 7

This article describes how to install Python 3 and its dependencies on CentOS 7, then walks through installing the Scrapy crawler framework and verifying that the installation works. First, the development tools and required libraries are installed with yum; next, the Python 3 source is downloaded, compiled, and linked into the system path. Finally, Scrapy is installed with pip3 and verified from the Python 3 interpreter.

1. Install Python 3 (keeping Python 2)

(1) Prepare for building from source

[root@hadron ~]# yum -y groupinstall "Development tools"

[root@hadron ~]# yum -y install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel

If the command above reports an error, re-run it with --skip-broken as the message suggests:

[root@hadron ~]# yum -y install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel --skip-broken
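Note: depending on which dependency wheels pip can fetch later, building Scrapy's cffi/cryptography dependencies from source may also need libffi-devel. This package is not in the original list, so treat it as an optional precaution:

[root@hadron ~]# yum -y install libffi-devel   # only needed if cffi/cryptography have to compile from source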

(2) Download the Python 3.5 source code

[root@hadron ~]# wget https://www.python.org/ftp/python/3.5.5/Python-3.5.5.tgz

Then extract the archive and change into the source directory:

[root@hadron ~]# tar -zxvf Python-3.5.5.tgz

[root@hadron ~]# cd Python-3.5.5/

(3) Configure, build, and install

[root@hadron Python-3.5.5]# ./configure --prefix=/usr/local/python3

[root@hadron Python-3.5.5]# make && make install

(4) Create symlinks

[root@hadron ~]# ln -s /usr/local/python3/bin/python3 /usr/bin/python3

[root@hadron ~]# ln -s /usr/local/python3/bin/pip3 /usr/bin/pip3
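As an alternative to creating symlinks in /usr/bin (a sketch, not taken from the original walkthrough), you can put the new interpreter's bin directory on the PATH, which also exposes any console scripts that pip3 installs later:

[root@hadron ~]# echo 'export PATH=/usr/local/python3/bin:$PATH' > /etc/profile.d/python3.sh   # assumes the --prefix used above
[root@hadron ~]# source /etc/profile.d/python3.sh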

(5) Verify

[root@hadron ~]# python

Python 2.7.5 (default, Nov 6 2016, 00:28:07)

[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2

Type "help", "copyright", "credits" or "license" for more information.

>>> quit()

[root@hadron ~]# python3

Python 3.5.5 (default, Feb 27 2018, 09:28:49)

[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux

Type "help", "copyright", "credits" or "license" for more information.

>>> quit()

[root@hadron ~]#
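Optionally (these checks are not in the original log), confirm that pip3 also resolves to the new installation before using it:

[root@hadron ~]# python3 -V   # should report Python 3.5.5
[root@hadron ~]# pip3 -V      # should point into /usr/local/python3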

2. Install the Scrapy crawler

(1) Install Scrapy

[root@hadron ~]# pip3 install scrapy

Collecting scrapy

Downloading Scrapy-1.5.0-py2.py3-none-any.whl (251kB)

100% |████████████████████████████████| 256kB 1.1MB/s

Requirement already satisfied: lxml in /usr/local/python3/lib/python3.5/site-packages (from scrapy)

Collecting PyDispatcher>=2.0.5 (from scrapy)

Downloading PyDispatcher-2.0.5.tar.gz

....

....

Collecting pycparser (from cffi>=1.7; platform_python_implementation != "PyPy"->cryptography>=2.1.4->pyOpenSSL->scrapy)

Downloading pycparser-2.18.tar.gz (245kB)

100% |████████████████████████████████| 256kB 339kB/s

Installing collected packages: PyDispatcher, zope.interface, constantly, incremental, six, attrs, Automat, hyperlink, Twisted, cssselect, w3lib, parsel, asn1crypto, pycparser, cffi, cryptography, pyOpenSSL, pyasn1, pyasn1-modules, service-identity, queuelib, scrapy

Running setup.py install for PyDispatcher ... done

Running setup.py install for Twisted ... done

Running setup.py install for pycparser ... done

Successfully installed Automat-0.6.0 PyDispatcher-2.0.5 Twisted-17.9.0 asn1crypto-0.24.0 attrs-17.4.0 cffi-1.11.4 constantly-15.1.0 cryptography-2.1.4 cssselect-1.0.3 hyperlink-18.0.0 incremental-17.5.0 parsel-1.4.0 pyOpenSSL-17.5.0 pyasn1-0.4.2 pyasn1-modules-0.2.1 pycparser-2.18 queuelib-1.4.2 scrapy-1.5.0 service-identity-17.0.0 six-1.11.0 w3lib-1.19.0 zope.interface-4.4.3

[root@hadron ~]#
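If you would rather not install Scrapy into the system-wide site-packages, a hedged alternative is a virtual environment (the environment name scrapyenv below is just an example):

[root@hadron ~]# python3 -m venv ~/scrapyenv          # create an isolated environment
[root@hadron ~]# source ~/scrapyenv/bin/activate      # activate it in this shell
(scrapyenv) [root@hadron ~]# pip install scrapy       # installs Scrapy inside the venv only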

(2) Verify Scrapy in the Python 3 shell

[root@hadron ~]# python3

Python 3.5.5 (default, Feb 27 2018, 09:28:49)

[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux

Type "help", "copyright", "credits" or "license" for more information.

>>> import scrapy

>>> scrapy.version_info

(1, 5, 0)

>>>

(3) Create a symlink for scrapy

[root@hadron ~]# ln -s /usr/local/python3/bin/scrapy /usr/bin/scrapy
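To double-check that the symlink resolves correctly (an extra step, not in the original article):

[root@hadron ~]# which scrapy     # expect /usr/bin/scrapy
[root@hadron ~]# scrapy version   # expect Scrapy 1.5.0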

(4) Verify scrapy from the shell

[root@hadron ~]# scrapy

Scrapy 1.5.0 - no active project

Usage:

scrapy <command> [options] [args]

Available commands:

bench Run quick benchmark test

fetch Fetch a URL using the Scrapy downloader

genspider Generate new spider using pre-defined templates

runspider Run a self-contained spider (without creating a project)

settings Get settings values

shell Interactive scraping console

startproject Create new project

version Print Scrapy version

view Open URL in browser, as seen by Scrapy

[ more ] More commands available when run from project directory

Use "scrapy -h" to see more info about a command

[root@hadron ~]#
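With the installation verified, a minimal quick-start looks like the sketch below; the project name tutorial and spider name example are placeholders, not part of the original article:

[root@hadron ~]# scrapy startproject tutorial                   # create a new Scrapy project
[root@hadron ~]# cd tutorial
[root@hadron tutorial]# scrapy genspider example example.com    # generate a spider from the default template
[root@hadron tutorial]# scrapy crawl example                    # run the generated spider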

