Comparing the BeautifulSoup, pyquery, and XPath parsing libraries

As the saying goes, a good memory is no match for a worn pen. Scattered bits of knowledge feel chaotic and yield little unless they are summarized and organized into a system, so here I once again compare the common usage of three parsing libraries.

Main references:
BeautifulSoup official documentation https://www.crummy.com/software/BeautifulSoup/bs4/doc/
pyquery official documentation https://pythonhosted.org/pyquery/index.html
XPath tutorial on W3Schools https://www.w3schools.com/xml/xpath_intro.asp

1. The BeautifulSoup library

When analyzing a web page, the usual approach is to locate the target nodes and then extract their data.
The following HTML text is used to illustrate the similarities and differences of the three parsing methods:

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""


Instantiation

from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'lxml')

As the official docs put it: "First, the document is converted to Unicode, and HTML entities are converted to Unicode characters."

"lxml"是解析方式,常用的有
来源于官方文档
Locating a node: soup.<tag>
For example, soup.title returns <title>The Dormouse's story</title>
If no parser is passed, BeautifulSoup defaults to the best one available, but it is better to pass one explicitly!

UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
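To avoid that warning, name the parser explicitly. A short sketch of the soup.<tag> style of node location described above, using the stdlib 'html.parser' so no extra dependency is needed:

```python
from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
"""

# Passing "html.parser" explicitly silences the "No parser was
# explicitly specified" warning and pins the behavior across systems.
soup = BeautifulSoup(html_doc, "html.parser")

print(soup.title)         # the whole <title> tag
print(soup.title.string)  # just its text: The Dormouse's story
print(soup.p["class"])    # attribute access returns ['title']
```

soup.<tag> returns the first matching tag in the document; .string gives its text, and subscripting reads an attribute.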
