Installing Scrapy on Python 2.7: a Scrapy 1.0 Crawler Example

Scrapy is a fast, high-level screen-scraping and web-crawling framework written in Python, used to crawl websites and extract structured data from their pages. It has a wide range of uses, including data mining, monitoring, and automated testing.

The appeal of Scrapy is that it is a framework: anyone can adapt it to their own needs. It also provides base classes for several kinds of spiders, such as BaseSpider and sitemap spiders, and recent versions add support for crawling web 2.0 sites.

This walkthrough uses Python 2.7.11:

https://www.python.org/ftp/python/2.7.11/Python-2.7.11.tgz

Extract the source:

#tar -xvf Python-2.7.11.tgz

cd Python-2.7.11

The changes below are only needed when multiple Python versions will coexist on the system.

Error:

exceptions.ImportError: No module named _sqlite3

Download pysqlite:

https://pypi.python.org/pypi/pysqlite/

tar -zxvf pysqlite-2.8.1.tar.gz

cd pysqlite-2.8.1

python setup.py install

First, edit the setup.py file in the Python-2.7.11 source directory:

Add '/usr/local/lib/sqlite3/include' at the end of the sqlite_inc_paths list:

sqlite_inc_paths = [ '/usr/include',
                     '/usr/include/sqlite',
                     '/usr/include/sqlite3',
                     '/usr/local/include',
                     '/usr/local/include/sqlite',
                     '/usr/local/include/sqlite3',
                     '/usr/local/lib/sqlite3/include',
                     ]

Build and install:

#./configure

#make all

#make install

#make clean

#make distclean

Check the version of the new interpreter:

#/usr/local/bin/python2.7 -V
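You can also confirm that the _sqlite3 fix took effect; a quick check:

#/usr/local/bin/python2.7 -c "import sqlite3; print(sqlite3.sqlite_version)"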

Create a symlink so that the system default python points to python2.7:

#mv /usr/bin/python /usr/bin/python2.6.6

#ln -s /usr/local/bin/python2.7 /usr/bin/python

Verify the Python version again:

#python -V

After pointing the system python symlink at Python 2.7, yum no longer works, because yum is not compatible with Python 2.7. We therefore pin yum to the old interpreter:

#vi /usr/bin/yum

Change the shebang at the top of the file from

#!/usr/bin/python

to

#!/usr/bin/python2.6.6
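Afterwards yum should work again; a quick sanity check:

#yum --version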

Download pip and run the installer:

https://pip.pypa.io/en/latest/installing/

python get-pip.py

[root@testserver4 ~]# python get-pip.py
Collecting pip
  Downloading pip-7.1.2-py2.py3-none-any.whl (1.1MB)
    100% |████████████████████████████████| 1.1MB 350kB/s
Collecting wheel
  Downloading wheel-0.26.0-py2.py3-none-any.whl (63kB)
    100% |████████████████████████████████| 65kB 5.0MB/s
Collecting argparse (from wheel)
  Downloading argparse-1.4.0-py2.py3-none-any.whl
Installing collected packages: pip, argparse, wheel
Successfully installed argparse-1.4.0 pip-7.1.2 wheel-0.26.0
/tmp/tmpcBdh5G/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.

Error:

zipimport.ZipImportError: can't decompress data; zlib not available

Fix: rebuild the Python source package, this time with zlib enabled:

tar zxvf Python-2.7.11.tgz

cd Python-2.7.11

./configure

vi Modules/Setup

Around line 454, find the commented-out zlib line

#zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz

and remove the comment marker:

zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz

make

make install
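A quick check that zlib is now compiled in:

python -c "import zlib; print(zlib.ZLIB_VERSION)"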

Error: ImportError: cannot import name HTTPSHandler

Fix:

yum install -y openssl openssl-devel

then rebuild Python and reinstall urllib3:

pip install urllib3
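To confirm that the HTTPSHandler import now succeeds:

python -c "from urllib2 import HTTPSHandler"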

Install Scrapy with pip:

pip install Scrapy

On success, pip reports:

Successfully installed Scrapy-1.0.3 characteristic-14.3.0 lxml-3.5.0 pyasn1-modules-0.0.8 service-identity-14.0.0
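A quick check that the command-line tool works (it should report Scrapy 1.0.3):

scrapy version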

To uninstall a package:

pip uninstall Scrapy

To list installed packages:

pip list

To upgrade pip itself:

pip install -U pip

The InsecurePlatformWarning may appear again:

/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.

First install python-devel, libffi-devel, and openssl-devel:

yum install python-devel libffi-devel openssl-devel

then install pyopenssl, ndg-httpsclient, and pyasn1:

pip install pyopenssl ndg-httpsclient pyasn1

Error:

Collecting Twisted>=10.0.0 (from Scrapy)
  Could not find a version that satisfies the requirement Twisted>=10.0.0 (from Scrapy) (from versions: )
  Some externally hosted files were ignored as access to them may be unreliable (use --allow-external Twisted to allow).
No matching distribution found for Twisted>=10.0.0 (from Scrapy)

Fix: install Twisted manually:

wget http://twistedmatrix.com/Releases/Twisted/15.5/Twisted-15.5.0.tar.bz2

After the download finishes, go to the download directory and extract the archive:

[root@codebreaker ~]#tar -jvxf Twisted-15.5.0.tar.bz2

Enter the extracted directory:

[root@codebreaker ~]#cd Twisted-15.5.0

Run the install:

[root@codebreaker Twisted-15.5.0]#python setup.py install

After installation, start python and check that Twisted imports:

[root@codebreaker Twisted-15.5.0]# python

>>> import twisted

If no error occurs, Twisted was installed successfully.
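You can also print the version directly:

python -c "import twisted; print twisted.version"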

Error:

error: command 'gcc' failed with exit status 1

Fix: install the development packages (the globs match the -devel packages on a yum-based system):

yum install python python-dev* python-lxml* libxml2-dev* libxslt-dev*

Create a project:

scrapy startproject itzhaopin

The project path is /root/Twisted-15.5.0/itzhaopin/itzhaopin, with this layout:

.
├── itzhaopin
│   ├── itzhaopin
│   │   ├── __init__.py
│   │   ├── items.py
│   │   ├── pipelines.py
│   │   ├── settings.py
│   │   └── spiders
│   │       └── __init__.py
│   └── scrapy.cfg

scrapy.cfg: the project configuration file

items.py: defines the data structures to be extracted

pipelines.py: pipeline definitions, used for further processing of the extracted items, such as saving them

settings.py: the crawler's settings file

spiders: the directory that holds the spiders

Example code (where details below differ, the source repository is authoritative):

https://github.com/maxliaops/scrapy-itzhaopin

Defining the Item

In items.py, define the data we want to scrape:

from scrapy.item import Item, Field

class TencentItem(Item):
    name = Field()            # job title
    catalog = Field()         # job category
    workLocation = Field()    # work location
    recruitNumber = Field()   # number of openings
    detailLink = Field()      # link to the job detail page
    publishTime = Field()     # publish date
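Items behave like dicts, which is what the pipeline below relies on when it calls dict(item). A quick interactive sketch with made-up sample values (field values are lists because Scrapy's extract() returns lists):

>>> from itzhaopin.items import TencentItem
>>> item = TencentItem(name=[u'engineer'], workLocation=[u'Shenzhen'])
>>> item['recruitNumber'] = [u'1']
>>> dict(item)
{'workLocation': [u'Shenzhen'], 'name': [u'engineer'], 'recruitNumber': [u'1']}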

Implementing the Spider

The spider is a Python class that inherits from scrapy.contrib.spiders.CrawlSpider. It has three required members:

name: the identifier of this spider

start_urls: a list of URLs from which the spider starts crawling

parse(): the method called once a page from start_urls has been downloaded; it parses the page content and returns either the next pages to crawl or a list of items

So create a new file under the spiders directory: vi spiders/tencent_spider.py

import re
import json

from scrapy.selector import Selector
try:
    from scrapy.spider import Spider
except:
    # fall back for older Scrapy versions
    from scrapy.spider import BaseSpider as Spider
from scrapy.utils.response import get_base_url
from scrapy.utils.url import urljoin_rfc
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor as sle

from itzhaopin.items import *
from itzhaopin.misc.log import *

class TencentSpider(CrawlSpider):
    name = "tencent"
    allowed_domains = ["tencent.com"]
    start_urls = [
        "http://hr.tencent.com/position.php"
    ]
    rules = [  # rules that define which URLs to crawl
        Rule(sle(allow=("/position.php\?&start=\d{,4}#a")), follow=True, callback='parse_item')
    ]

    def parse_item(self, response):  # extract the data into Items, mainly via XPath and CSS selectors
        items = []
        sel = Selector(response)
        base_url = get_base_url(response)

        sites_even = sel.css('table.tablelist tr.even')
        for site in sites_even:
            item = TencentItem()
            item['name'] = site.css('.l.square a').xpath('text()').extract()
            relative_url = site.css('.l.square a').xpath('@href').extract()[0]
            item['detailLink'] = urljoin_rfc(base_url, relative_url)
            item['catalog'] = site.css('tr > td:nth-child(2)::text').extract()
            item['workLocation'] = site.css('tr > td:nth-child(4)::text').extract()
            item['recruitNumber'] = site.css('tr > td:nth-child(3)::text').extract()
            item['publishTime'] = site.css('tr > td:nth-child(5)::text').extract()
            items.append(item)
            #print repr(item).decode("unicode-escape") + '\n'

        sites_odd = sel.css('table.tablelist tr.odd')
        for site in sites_odd:
            item = TencentItem()
            item['name'] = site.css('.l.square a').xpath('text()').extract()
            relative_url = site.css('.l.square a').xpath('@href').extract()[0]
            item['detailLink'] = urljoin_rfc(base_url, relative_url)
            item['catalog'] = site.css('tr > td:nth-child(2)::text').extract()
            item['workLocation'] = site.css('tr > td:nth-child(4)::text').extract()
            item['recruitNumber'] = site.css('tr > td:nth-child(3)::text').extract()
            item['publishTime'] = site.css('tr > td:nth-child(5)::text').extract()
            items.append(item)
            #print repr(item).decode("unicode-escape") + '\n'

        info('parsed ' + str(response))
        return items

    def _process_request(self, request):
        info('process ' + str(request))
        return request
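Before running the full crawl, the selectors can be tried out interactively in the Scrapy shell (assuming hr.tencent.com is reachable and still serves the same markup):

scrapy shell "http://hr.tencent.com/position.php"
>>> response.css('table.tablelist tr.even').css('.l.square a').xpath('text()').extract()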

Implementing the Pipeline

The pipeline performs the save step for the item list returned by the spider; it can write to a file, a database, and so on.

A pipeline only needs to implement one method, process_item. Here we save the items to a JSON-formatted file:

vi pipelines.py

from scrapy import signals
import json
import codecs

class JsonWithEncodingTencentPipeline(object):

    def __init__(self):
        self.file = codecs.open('tencent.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # one JSON object per line, keeping non-ASCII characters readable
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    def spider_closed(self, spider):
        self.file.close()
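Note that Scrapy only calls a pipeline method automatically if it is named close_spider; a method named spider_closed, as above, must be connected to the spider_closed signal or it will never run. A minimal sketch of wiring it up inside the same class, using the from_crawler hook of the Scrapy 1.0 pipeline API:

    @classmethod
    def from_crawler(cls, crawler):
        # build the pipeline and hook its spider_closed method
        # onto Scrapy's spider_closed signal
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_closed, signal=signals.spider_closed)
        return pipeline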

Settings:

vi settings.py

# Scrapy settings for itzhaopin project
#
# For simplicity, this file contains only the most important settings by
# default. All the other settings are documented here:
#
# http://doc.scrapy.org/en/latest/topics/settings.html
#

BOT_NAME = 'itzhaopin'

SPIDER_MODULES = ['itzhaopin.spiders']
NEWSPIDER_MODULE = 'itzhaopin.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'itzhaopin (+http://www.yourdomain.com)'

ITEM_PIPELINES = {
    'itzhaopin.pipelines.JsonWithEncodingTencentPipeline': 300,
}

LOG_LEVEL = 'INFO'
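The number 300 is the pipeline's priority; pipelines run in ascending order of this value, which ranges from 0 to 1000. From the project root you can check that the settings are picked up:

scrapy settings --get BOT_NAME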

Create the misc directory, which will hold a small logging helper:

#vi log.py

from scrapy import log

def warn(msg):
    log.msg(str(msg), level=log.WARNING)

def info(msg):
    log.msg(str(msg), level=log.INFO)

def debug(msg):
    log.msg(str(msg), level=log.DEBUG)

#vi __init__.py (this file is required; its content is left empty)

With that, a basic crawler is complete. Start the spider with the following command:

#scrapy crawl tencent

When the crawl finishes, a file named tencent.json will have been generated in the current directory, holding the job postings in JSON format.

A sample of its contents:

{"recruitNumber": ["1"], "name": ["SD5-资深手游策划(深圳)"], "detailLink": "http://hr.tencent.com/position_detail.php?id=15626&keywords=&tid=0&lid=0", "publishTime": ["2014-04-25"], "catalog": ["产品/项目类"], "workLocation": ["深圳"]}

{"recruitNumber": ["1"], "name": ["TEG13-后台开发工程师(深圳)"], "detailLink": "http://hr.tencent.com/position_detail.php?id=15666&keywords=&tid=0&lid=0", "publishTime": ["2014-04-25"], "catalog": ["技术类"], "workLocation": ["深圳"]}

References:

http://scrapy.org/

http://blog.csdn.net/HanTangSongMing/article/details/24454453

http://blog.siliconstraits.vn/building-web-crawler-scrapy/

http://blog.csdn.net/olanlanxiari/article/details/48086917
