3. XPath Syntax and the lxml Module

XPath

XPath (XML Path Language) is a language for finding information in XML and HTML documents; it can be used to traverse the elements and attributes of XML and HTML documents.

XPath Development Tools

Chrome extension: XPath Helper.

Installation:
  1. Open the 插件伴侣 extension-installer tool and select the plugin
  2. Choose to extract the plugin contents to the desktop; a new folder will appear there
  3. Move that folder to wherever you want to keep it
  4. Open Chrome, go to the Extensions page, enable Developer mode, click "Load unpacked", and select the folder

Firefox extension: Try XPath.

XPath Nodes

In XPath there are seven kinds of nodes: element, attribute, text, namespace, processing-instruction, comment, and document (root) nodes. An XML document is treated as a node tree; the root of the tree is called the document node or root node.
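As a quick illustration, here is a small sketch (the document and values are made up) that selects three of these node kinds with lxml, the library introduced later in this article:

from lxml import etree

# A tiny document containing several of the seven node kinds
# (element, attribute, text, and comment); the data is made up
doc = etree.fromstring(
    '<bookstore><!-- a comment node -->'
    '<book lang="en">Harry Potter</book></bookstore>'
)
print(doc.xpath('//book/@lang'))   # attribute node -> ['en']
print(doc.xpath('//book/text()'))  # text node -> ['Harry Potter']
print(doc.xpath('//comment()'))    # comment node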

XPath Syntax

Expression | Description
nodename | Selects all child nodes of the named node
/ | At the start of a path, selects from the root node; otherwise selects a direct child of the current node
// | Selects matching nodes anywhere in the document, regardless of position
@ | Selects a node's attribute; written inside square brackets, e.g. //book[@price]
/bookstore/book[1] | Selects the first book element that is a child of bookstore
/bookstore/book[last()] | Selects the last book element that is a child of bookstore
/bookstore/book[last()-1] | Selects the second-to-last book element that is a child of bookstore
/bookstore/book[position()<3] | Selects the first two book elements that are children of bookstore
//title[@lang] | Selects all title elements that have an attribute named lang
//title[@lang='eng'] | Selects all title elements whose lang attribute value is 'eng'
/bookstore/book[price>35.00] | Selects all book elements under bookstore whose price child is greater than 35.00
/bookstore/book[price>35]/title | Selects the title elements of the book elements under bookstore whose price child is greater than 35
/bookstore/* | Selects all child elements of bookstore
//book[@*] | Selects all book elements that have at least one attribute
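To make these expressions concrete, here is a small lxml sketch run against a made-up bookstore document (the titles and prices are illustrative only):

from lxml import etree

# Made-up bookstore document shaped like the examples in the table
xml = '''
<bookstore>
    <book><title lang="eng">Harry Potter</title><price>29.99</price></book>
    <book><title lang="eng">Learning XML</title><price>39.95</price></book>
    <book><title lang="zh">XPath Basics</title><price>19.00</price></book>
</bookstore>
'''
root = etree.fromstring(xml)
print(root.xpath('/bookstore/book[1]/title/text()'))         # -> ['Harry Potter']
print(root.xpath('/bookstore/book[last()]/title/text()'))    # -> ['XPath Basics']
print(root.xpath('/bookstore/book[price>35]/title/text()'))  # -> ['Learning XML']
print(root.xpath("//title[@lang='eng']/text()"))             # -> ['Harry Potter', 'Learning XML']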

Points to note (all three are demonstrated in the sketch after this list):

  1. The difference between / and //: / selects only direct children, while // selects all descendants. In practice // is used more often, though it depends on the situation.

  2. contains: when an attribute contains several values, the contains function can be used, for example:

    //title[contains(@lang,'en')]

  3. Predicate indexes start at 1, not 0.
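
A minimal sketch of all three points, using a made-up fragment:

from lxml import etree

# Made-up fragment: <span> is a grandchild of <div>, not a direct child
html = etree.HTML('<div><p><span class="item one">a</span></p><p>b</p></div>')
print(html.xpath('//div/span'))                       # 1. / matches children only -> []
print(html.xpath('//div//span'))                      # 1. // matches all descendants
print(html.xpath("//span[contains(@class, 'one')]"))  # 2. contains() on a multi-valued attribute
print(html.xpath('//p[1]//text()'))                   # 3. predicates are 1-based -> ['a']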

lxml Library

lxml is an HTML/XML parser; its main function is parsing and extracting HTML/XML data.

Official lxml documentation: http://lxml.de/index.html

lxml is built on C libraries and can be installed with pip: pip install lxml

Common etree methods:

etree.HTML() parses a string into an HTML document

etree.tostring() serializes a document back to a string, returned as a bytes object

etree.parse() reads a file and parses it into a document tree

xpath() (a method on the objects returned by etree.HTML() and etree.parse()) evaluates an XPath expression and returns a list

Basic usage:
from lxml import etree

text = '''
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html">third item</a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a>
     </ul>
 </div>
'''
# Parse the string into an HTML document; etree.HTML() repairs the
# markup, auto-completing the unclosed </li> above and wrapping the
# fragment in <html>/<body> tags
html = etree.HTML(text)
print(html)
# Serialize the document back to a string (tostring() returns bytes)
result = etree.tostring(html).decode('utf-8')
print(result)
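
As a side note, etree.tostring() also accepts keyword arguments that avoid the manual decode step; a minimal sketch (both arguments are part of lxml's documented tostring() API):

from lxml import etree

html = etree.HTML('<div><span>hello</span></div>')
# pretty_print=True adds indentation; encoding='unicode' makes
# tostring() return a str directly instead of bytes
print(etree.tostring(html, pretty_print=True, encoding='unicode'))
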
Reading HTML from a file:
from lxml import etree


# Read and parse the file
html = etree.parse('hello.html')

result = etree.tostring(html).decode('utf-8')
print(result)
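
One caveat: etree.parse() uses an XML parser by default, so the file must be well-formed. For messy real-world HTML you can pass an explicit HTML parser; a minimal sketch:

from lxml import etree

# HTMLParser tolerates and repairs non-well-formed markup
parser = etree.HTMLParser()
html = etree.parse('hello.html', parser)
print(etree.tostring(html).decode('utf-8'))
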
Using XPath syntax in lxml

Contents of hello.html:

<!-- hello.html -->
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>

XPath syntax practice:

from lxml import etree
html = etree.parse('hello.html')
# 1. Get all li tags:
# result = html.xpath('//li')
# print(result)
# for i in result:
#     print(etree.tostring(i))

# 2. Get the class attribute values of all li elements:
# result = html.xpath('//li/@class')
# print(result)

# 3. Get the a tags under li whose href is "www.baidu.com" (returns an empty list here, since hello.html has no such link):
# result = html.xpath('//li/a[@href="www.baidu.com"]')
# print(result)

# 4. Get all span tags under li tags:
# result = html.xpath('//li//span')
# print(result)

# 5. Get all class attributes inside the a tags under li tags:
# result = html.xpath('//li/a//@class')
# print(result)

# 6. Get the href attribute value of the a in the last li:
# result = html.xpath('//li[last()]/a/@href')
# print(result)

# 7. Get the content of the second-to-last li element (via the element's .text):
# result = html.xpath('//li[last()-1]/a')
# print(result)
# print(result[0].text)

# 8. A second way: use text() in the XPath expression itself:
result = html.xpath('//li[last()-1]/a/text()')
print(result)

Example

Scraping used-car listings from Guazi (guazi.com)

import requests
from lxml import etree

headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
'Cookie': 'uuid=02656d12-f65b-4048-a5ae-0a06a8056137; ganji_uuid=4811484271022069787669; antipas=98n3f9i321ts73A9uK0129LR4; clueSourceCode=10103000312%2300; user_city_id=204; sessionid=65045cf7-2c95-40d3-8fee-0223b6c02746; lg=1; _gl_tracker=%7B%22ca_source%22%3A%22-%22%2C%22ca_name%22%3A%22-%22%2C%22ca_kw%22%3A%22-%22%2C%22ca_id%22%3A%22-%22%2C%22ca_s%22%3A%22self%22%2C%22ca_n%22%3A%22-%22%2C%22ca_i%22%3A%22-%22%2C%22sid%22%3A83834068159%7D; cainfo=%7B%22ca_s%22%3A%22pz_baidu%22%2C%22ca_n%22%3A%22tbmkbturl%22%2C%22ca_medium%22%3A%22-%22%2C%22ca_term%22%3A%22-%22%2C%22ca_content%22%3A%22%22%2C%22ca_campaign%22%3A%22%22%2C%22ca_kw%22%3A%22%25e7%2593%259c%25e5%25ad%2590%22%2C%22keyword%22%3A%22-%22%2C%22ca_keywordid%22%3A%22-%22%2C%22scode%22%3A%2210103000312%22%2C%22ca_transid%22%3A%22%22%2C%22platform%22%3A%221%22%2C%22version%22%3A1%2C%22ca_i%22%3A%22-%22%2C%22ca_b%22%3A%22-%22%2C%22ca_a%22%3A%22-%22%2C%22display_finance_flag%22%3A%22-%22%2C%22client_ab%22%3A%22-%22%2C%22guid%22%3A%2202656d12-f65b-4048-a5ae-0a06a8056137%22%2C%22sessionid%22%3A%2265045cf7-2c95-40d3-8fee-0223b6c02746%22%7D; preTime=%7B%22last%22%3A1555049972%2C%22this%22%3A1552292773%2C%22pre%22%3A1552292773%7D; cityDomain=wh'
}
# Get the detail-page URLs from one listing page
def get_detail_urls(url):
    resp = requests.get(url, headers=headers)
    text = resp.content.decode('utf-8')
    html = etree.HTML(text)
    ul = html.xpath('//ul[@class="carlist clearfix js-top"]')[0]
    # print(ul)
    lis = ul.xpath('./li')
    detail_urls = []
    for li in lis:
        detail_url = li.xpath('./a/@href')
        detail_url = 'https://www.guazi.com' + detail_url[0]
        # print(detail_url)
        detail_urls.append(detail_url)
    return detail_urls

# Parse the content of a detail page
def parse_detail_page(url):
    resp = requests.get(url, headers=headers)
    text = resp.content.decode('utf-8')
    html = etree.HTML(text)
    title = html.xpath('//div[@class="product-textbox"]/h2/text()')[0]
    # Remove embedded line breaks and surrounding whitespace from the title
    title = title.replace('\r\n', '').strip()
    # print(title)
    info = html.xpath('//div[@class="product-textbox"]/ul/li/span/text()')
    # print(info)
    infos = {}
    cardtime = info[0]
    km = info[1]
    displacement = info[2]
    speedbox = info[3]

    infos['title'] = title
    infos['cardtime'] = cardtime
    infos['km'] = km
    infos['displacement'] = displacement
    infos['speedbox'] = speedbox
    return infos

# Save one record as a comma-separated line
def save_data(infos, f):
    f.write('{},{},{},{},{}\n'.format(infos['title'], infos['cardtime'], infos['km'], infos['displacement'], infos['speedbox']))


def main():
    # Listing-page URL template; o{} is the page number
    base_url = 'https://www.guazi.com/cs/buy/o{}/'
    with open('guazi_cs.csv', 'a', encoding='utf-8') as f:

        for x in range(1,6):
            url = base_url.format(x)
            # Get the detail-page URLs
            detail_urls = get_detail_urls(url)
            # Parse each detail page and save the record
            for detail_url in detail_urls:
                infos = parse_detail_page(detail_url)
                save_data(infos,f)

if __name__ == '__main__':
    main()
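
One limitation worth noting: save_data() joins fields with bare commas, so a title that itself contains a comma would corrupt its row. A more robust variant (a sketch using Python's standard csv module, not part of the original code) is shown below; when using csv.writer, the file should also be opened with newline=''.

import csv

# Hypothetical replacement for save_data(): csv.writer quotes any field
# that contains commas or quotes, keeping the rows parseable
def save_data(infos, f):
    writer = csv.writer(f)
    writer.writerow([infos['title'], infos['cardtime'], infos['km'],
                     infos['displacement'], infos['speedbox']])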