XPath: a compact query language for locating information in XML documents; it works on HTML too.
It is better suited to this than regular expressions, because it can drill down through the document level by level.
Install: pip install lxml
Usage:
XPath matches information through elements and attributes:
from lxml import etree
selector = etree.HTML(html_source)   # html_source: the page's HTML as a string
result = selector.xpath('XPath expression here')
Regular expression: "the building I want has a 20-story building on its left and a 30-story one on its right."
XPath: "I want Haidian District -> 21 Jiaoda East Road -> Building 7."
HTML is a tree: expand it level by level, locate level by level, down to the exact node you want.
html -> body -> div -> ul[@id="useful"] -> li
Chrome (right-click an element -> Copy XPath): //*[@id="useful"]/li[1]
Drop the [1] index to match every li: //*[@id="useful"]/li
from lxml import etree
selector = etree.HTML(html)
# extract text
content = selector.xpath('//div/ul[@id="useful"]/li/text()')
for each in content:
    print each
# extract an attribute
link = selector.xpath('//a/@href')
for each in link:
    print each
Extracting special content with XPath:
1. Attributes that start with the same characters: starts-with(@attribute, 'shared prefix')
<div id="test-1">heh heh</div>
<div id="test-2">ha ha</div>
<div id="testi">hee hee</div>
Instead of writing three separate XPath queries, one starts-with expression extracts all three (demonstrated in the sketch after this list).
2. Tags nested inside tags: string(.)
<div id="class3">Hey beautiful,
    <font color="red">what is your WeChat ID?</font>
</div>
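A minimal sketch (Python 2, matching the full program at the end of these notes) exercising both tricks on the sample markup above:
from lxml import etree

html = '''
<div id="test-1">heh heh</div>
<div id="test-2">ha ha</div>
<div id="testi">hee hee</div>
<div id="class3">Hey beautiful,
    <font color="red">what is your WeChat ID?</font>
</div>
'''
selector = etree.HTML(html)

# 1. one starts-with query matches all three ids
for each in selector.xpath('//div[starts-with(@id,"test")]/text()'):
    print each

# 2. string(.) flattens the node's own text plus its children's
node = selector.xpath('//div[@id="class3"]')[0]
print node.xpath('string(.)')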
Knowledge points used:
- XPath
- eval(), which turns a string into a dict
Note: if the string contains words like 'null', 'false', 'not', or 'no', eval raises NameError: name 'null' is not defined. Some people suggest swapping eval for ast.literal_eval; that write-up looked good, but I have not tried it myself. I simply substituted away words like null with replace().
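For illustration, a minimal sketch of that replace() workaround; the data-field string here is made up:
data = '{"content":{"date":"2015-01-11 16:54"},"is_vip":false}'
# eval(data) would raise NameError: name 'false' is not defined
safe = data.replace('false', '0').replace('null', '0')
print eval(safe)['content']['date']   # 2015-01-11 16:54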
Multithreaded programming
from multiprocessing.dummy import Pool as ThreadPool
pool = ThreadPool(4)             # multiprocessing.dummy gives a thread pool with the Pool API
results = pool.map(bdtb, urls)   # apply bdtb to every url, in parallel
pool.close()
pool.join()
Encoding issues
- [bug] UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 108
  [fix] Wherever a str is implicitly converted to unicode, decode it as utf8 (or gb18030) instead of ascii (sketch after this list).
- [bug] SyntaxError: Non-ASCII character '\xe5' in file
  [fix] Add one of the following declarations as the first line of the source file:
  # -*- coding: UTF-8 -*-
  or
  # coding=utf-8
(I also found a very informative article on encodings.)
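A minimal sketch of that first fix, assuming the bytes are UTF-8:
s = '\xe5\x93\x88'     # the UTF-8 bytes of one Chinese character
u = s.decode('utf8')   # decode explicitly instead of letting Python try ascii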
Paging with re.sub from the re module:
urls = []
for i in range(1, 20):
    # note: re.sub's fourth positional argument is count, not flags,
    # so passing re.S there was a bug; it is unnecessary here anyway
    newurl = re.sub(r'pn=\d+', 'pn=%s' % i, _url)
    urls.append(newurl)
Data persistence
f = open('data.txt', 'a')   # open once in append mode
f.close()                   # close when all writes are done
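A fuller sketch (my suggestion, not from the original notes): codecs.open encodes unicode as it writes, which avoids the sys.setdefaultencoding hack used in the full program below:
import codecs
f = codecs.open('data.txt', 'a', encoding='utf-8')
f.writelines(u'one record\n')   # unicode in, UTF-8 bytes on disk
f.close()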
Code
# -*- coding: UTF-8 -*-
import requests
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
from lxml import etree
from multiprocessing.dummy import Pool as ThreadPool
import re
def bdtb(url):
    # un-escape the HTML-encoded data-field JSON and strip the words eval
    # cannot handle (see the eval note above)
    html = requests.get(url).text.replace('&quot;', '"').replace('&nbsp;', '') \
        .replace('null', '0').replace('false', '0').strip()
    selector = etree.HTML(html)
    time = selector.xpath('//*[@id="j_p_postlist"]/div')                      # one div per post; its data-field holds the date
    names = selector.xpath('//*[@id="j_p_postlist"]/div/div[2]/ul/li[3]/a')   # poster name links
    content = selector.xpath('//*[starts-with(@id,"post_content")]/text()')  # post bodies
    item = {}
    for i in range(0, 20):
        item['name'] = names[i].text
        # data-field is HTML-escaped JSON; eval works once the JS literals are replaced
        item['time'] = eval(time[i].attrib['data-field'])['content']['date']
        item['content'] = content[i].strip()
        print 'Poster: %s  Time: %s \nContent: %s\n-------------------------------------\n' \
              % (item['name'], item['time'], item['content'])
        towrite(item)
def towrite(item):
    f.writelines(u'Poster: %s' % item['name'])
    f.writelines(u'Time: %s' % str(item['time']))
    f.writelines(u'Content:\n%s\n--------------------------------------\n' % item['content'])
pool = ThreadPool(2)   # set this to the number of CPU cores
f = open('data.txt','a')
_url = 'http://tieba.baidu.com/p/3522395718?pn=1'
urls = []
for i in range(1, 20):
    newurl = re.sub(r'pn=\d+', 'pn=%s' % i, _url)   # count/flags bug fixed, as noted above
    urls.append(newurl)
results = pool.map(bdtb,urls)
pool.close()
pool.join()
f.close()