A while ago I needed to crawl some data for analysis, so I wrote a crawler in Python. The specific issues I ran into are recorded below.
(1) Writing the code directly
A crawler first sends an HTTP request; if the response is an HTML page, it is usually parsed into a DOM tree, and the data is then extracted from the relevant tags.
requests and BeautifulSoup4
from bs4 import BeautifulSoup
# Create a soup from an HTML string
soup = BeautifulSoup(html, 'html.parser')
# Create a soup from a local HTML file
soup = BeautifulSoup(open('index.html'), 'html.parser')
print soup.prettify()
e.g. r = requests.get('http://www.163.com') sends a GET request to the NetEase homepage and returns a Response object r; r.text holds the retrieved page source as a string.
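Putting requests and BeautifulSoup together, a minimal fetch-and-parse sketch using the same NetEase URL:
import requests
from bs4 import BeautifulSoup

# Fetch the page, parse the source into a DOM tree, then pull data out by tag
r = requests.get('http://www.163.com')
soup = BeautifulSoup(r.text, 'html.parser')
print soup.title.string          # the page title
for a in soup.find_all('a'):     # every link target on the page
    print a.get('href')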
Crawling data in a loop
import time  # used for the create_time timestamp

def parse(self, response):
    items = []
    validurls = []
    newurls = response.xpath(
        "//div[@id='selectedgenre']/ul[@class='list paginate']/li/a[@class='paginate-more']/@href").extract()
    for url in newurls:
        validurls.append(url)
    # Crawl in a loop: re-queue each pagination URL with this same parse callback
    items.extend([self.make_requests_from_url(url).replace(callback=self.parse)
                  for url in list(set(validurls))])
    iTunes = ItunesItem()
    iTunes['url'] = response.xpath(
        "//div[@id='selectedcontent']/div[@class='column first']/ul/li/a/@href").extract()
    iTunes['title'] = response.xpath(
        "//div[@id='selectedcontent']/div[@class='column first']/ul/li/a/text()").extract()
    iTunes['create_time'] = int(time.time())
    items.append(iTunes)
    return items
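parse() fills an ItunesItem whose definition is not shown above; a minimal sketch matching the fields it uses (such a class would normally live in the project's items.py):
from scrapy.item import Item, Field

class ItunesItem(Item):
    # Fields inferred from the parse() method above; a sketch, not the original definition
    url = Field()
    title = Field()
    create_time = Field()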
Processing pages recursively
def get_page_html(begin_url, depth, ignore_outer, main_site_domain):
    # If external sites are excluded, filter them out
    if ignore_outer:
        if main_site_domain not in begin_url:
            return
    if depth == 1:
        urls = get_url_of_page(begin_url, True)
        img.extend(urls)
    else:
        urls = get_url_of_page(begin_url)
        if urls:
            for url in urls:
                get_page_html(url, depth - 1, ignore_outer, main_site_domain)
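get_page_html depends on a get_url_of_page helper and a module-level img list, neither of which is shown; a minimal sketch, assuming the helper's second argument switches it from collecting links to collecting image addresses:
import requests
from bs4 import BeautifulSoup

img = []  # image URLs collected at the final depth (assumed module-level list)

def get_url_of_page(url, want_img=False):
    # Sketch: return <img src> values when want_img is set, otherwise all <a href> links
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    if want_img:
        return [t.get('src') for t in soup.find_all('img') if t.get('src')]
    return [t.get('href') for t in soup.find_all('a') if t.get('href')]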
Parsing with HtmlXPathSelector
from scrapy.selector import HtmlXPathSelector

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    items = []
    newurls = hxs.select('//a/@href').extract()
    validurls = []
    for url in newurls:
        # Check whether the URL is valid; see the is_valid_url sketch after this block
        if is_valid_url(url):
            validurls.append(url)
    items.extend([self.make_requests_from_url(url).replace(callback=self.parse)
                  for url in validurls])
    sites = hxs.select('//ul/li')
    for site in sites:
        item = DmozItem()  # the project's item class, defined in items.py
        item['title'] = site.select('a/text()').extract()
        item['link'] = site.select('a/@href').extract()
        item['desc'] = site.select('text()').extract()
        items.append(item)
    return items
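The is_valid_url helper replaces the bare placeholder condition in the original code; a minimal sketch, assuming the target is the dmoz.org domain suggested by DmozItem:
from urlparse import urlparse  # Python 2; urllib.parse on Python 3

def is_valid_url(url, allowed_domain='dmoz.org'):
    # Sketch: keep only absolute http(s) URLs under the assumed target domain
    parsed = urlparse(url)
    return parsed.scheme in ('http', 'https') and parsed.netloc.endswith(allowed_domain)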
Error
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-78: ordinal not in range(128)
This comes from Python 2's default ASCII codec; the usual workaround is to switch the default encoding:
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
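Changing the default encoding is a global workaround; a narrower Python 2 alternative is to encode explicitly wherever text leaves the program:
# -*- coding: utf-8 -*-
title = u'网易新闻'             # example unicode value with non-ASCII characters
print title.encode('utf-8')    # encode explicitly instead of relying on the ascii default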
(2) Using the pyspider framework
Creating a virtual environment
sudo pip install pyspider --upgrade --ignore-installed six
pip install pyspider --upgrade --ignore-installed six --user
The second form (installing with --user) does not raise an error.
sudo chown -R $USER /Library/Python/2.7/site-packages
After that another error appeared, so I tried granting permissions further up the tree:
chown: /System/Library/Frameworks/Python.framework/Versions/2.7: Operation not permitted
The workaround is to disable SIP (System Integrity Protection):
Reboot the Mac while holding Command+R (until the Apple logo appears) to enter Recovery Mode
In the top-left menu, open Utilities -> Terminal
Type csrutil disable and press Enter
Reboot the Mac
To re-enable SIP, repeat the steps above with csrutil enable instead.
Check SIP's status again; with it disabled, installing ipython and gevent no longer triggers the permission errors about being unable to write.
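The status check itself is a single command from a normal terminal:
csrutil status
# prints "System Integrity Protection status: disabled." once SIP is off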
Installation
sudo pip install virtualenv                 # install the virtualenv tool
virtualenv ENV                              # create a virtual environment directory
source /Users/<username>/ENV/bin/activate   # activate the virtual environment
scrapy startproject MyDemo                  # create the project
New Scrapy project 'MyDemo', using template directory '/Users/<username>/ENV/lib/python2.7/site-packages/scrapy/templates/project', created in:
    /Users/<username>/MyDemo
You can start your first spider with:
cd MyDemo
scrapy genspider example example.com
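For reference, the spider that scrapy genspider example example.com generates looks roughly like this (the template varies slightly across Scrapy versions):
import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    def parse(self, response):
        pass  # extraction logic goes here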
Run the spider with scrapy crawl MyDemo.
Now pyspider can be installed.
After installation, running pyspider all reports:
ImportError: pycurl: libcurl link-time version (7.43.0) is older than compile-time version (7.49.1)
The fix is to reinstall pycurl with PYCURL_SSL_LIBRARY set to match libcurl's SSL backend (bash and fish syntax shown):
pip uninstall pycurl
export PYCURL_SSL_LIBRARY=nss         # bash syntax (nss backend)
set -x PYCURL_SSL_LIBRARY openssl     # fish syntax (openssl backend)
pip install pycurl
On CentOS 7, building pycurl also needs the Python headers from python-devel:
rpm -ivh python-devel-2.7.5-16.el7.x86_64.rpm
python-devel RPMs can be found at http://rpmfind.net/linux/rpm2html/search.php?query=python-devel
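Once pycurl is rebuilt, the mismatch can be confirmed gone by printing its version string, which includes the linked libcurl version:
python -c "import pycurl; print pycurl.version"
# e.g. PycURL/7.43.0 libcurl/7.49.1 OpenSSL/1.0.2 ... (exact values depend on the build)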