BeautifulSoup

Constructing a BeautifulSoup object
>>> import requests
>>> r = requests.get(url)        # url is the address of the demo page whose HTML is shown below
>>> r.text
'<html><head><title>This is a python demo page</title></head>\r\n<body>\r\n<p class="title"><b>The demo python introduces several python courses.</b></p>\r\n<p class="course">Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:\r\n<a href="http://www.icourse163.org/course/BIT-268001" class="py1" id="link1">Basic Python</a> and <a href="http://www.icourse163.org/course/BIT-1001870001" class="py2" id="link2">Advanced Python</a>.</p>\r\n</body></html>'
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(r.text,'html.parser')
>>> print(soup.prettify())       # soup.prettify() adds line breaks and indentation to the HTML, making it easier to read
<html>
 <head>
  <title>
   This is a python demo page
  </title>
 </head>
 <body>
  <p class="title">
   <b>
    The demo python introduces several python courses.
   </b>
  </p>
  <p class="course">
   Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:
   <a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">
    Basic Python
   </a>
   and
   <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">
    Advanced Python
   </a>
   .
  </p>
 </body>
</html>
>>>  
Parser                Usage                               Requirement
bs4's HTML parser     BeautifulSoup(mk, 'html.parser')    install the bs4 library
lxml's HTML parser    BeautifulSoup(mk, 'lxml')           pip install lxml
lxml's XML parser     BeautifulSoup(mk, 'xml')            pip install lxml
html5lib parser       BeautifulSoup(mk, 'html5lib')       pip install html5lib
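As a quick sketch of the table above: the markup and variable names below are made up for illustration, only 'html.parser' ships with Python itself, and the other parsers work only if lxml and html5lib are actually installed.

from bs4 import BeautifulSoup

markup = "<p class='title'><b>demo</b></p>"        # hypothetical snippet, just for comparison

soup_std  = BeautifulSoup(markup, 'html.parser')   # bundled parser, no extra install needed
soup_lxml = BeautifulSoup(markup, 'lxml')          # requires: pip install lxml
soup_xml  = BeautifulSoup(markup, 'xml')           # lxml's XML parser, also installed via lxml
soup_h5   = BeautifulSoup(markup, 'html5lib')      # requires: pip install html5lib

print(soup_std.p.b.string)   # 'demo' -- the parsers differ mainly in speed and tolerance of broken HTML
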
Basic elements of the BeautifulSoup class

<p class="title"> ... </p>

Element            Description
Tag                A tag, the basic unit of information; opened with <> and closed with </>
Name               The tag's name; the name of <p>...</p> is 'p'; accessed as <tag>.name
Attributes         The tag's attributes, organized as a dictionary; accessed as <tag>.attrs
NavigableString    The non-attribute string inside a tag; accessed as <tag>.string
Comment            The comment portion of a string inside a tag; a special Comment type (a subclass of NavigableString)
>>> tag=soup.a                         #Tag
>>> tag
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>
>>> Name=soup.a.name                   #Name
>>> Name
'a'
>>> soup.a.parent.name
'p'
>>> soup.a.parent.parent.name
'body'
>>> Attributes=tag.attrs               #Attributes
>>> Attributes
{'href': 'http://www.icourse163.org/course/BIT-268001', 'class': ['py1'], 'id': 'link1'}
>>> tag.attrs['class']
['py1']
>>> tag.attrs['href']
'http://www.icourse163.org/course/BIT-268001'
>>> type(tag.attrs)
<class 'dict'>
>>> type(tag)
<class 'bs4.element.Tag'>
>>> NavigableString=soup.a.string      #NavigableString
>>> NavigableString
'Basic Python'
>>> soup.p
<p class="title"><b>The demo python introduces several python courses.</b></p>
>>> soup.p.string
'The demo python introduces several python courses.'
>>> type(soup.p.string)
<class 'bs4.element.NavigableString'>
>>> newsoup=BeautifulSoup("<b><!--This is a comment--></b><p>This is not a comment</p>","html.parser")
>>> newsoup.b.string
'This is a comment'
>>> type(newsoup.b.string)            #Comment
<class 'bs4.element.Comment'>
>>> newsoup.p.string
'This is not a comment'
>>> type(newsoup.p.string)
<class 'bs4.element.NavigableString'>
Traversing BeautifulSoup nodes

Attribute             Description
.contents             A list of the tag's child nodes
.children             An iterator over child nodes, similar to .contents, for use in loops
.descendants          An iterator over all descendant nodes, for use in loops
.parent               The tag of the parent node
.parents              An iterator over ancestor nodes, for use in loops
.next_sibling         The next sibling node in HTML document order
.next_siblings        An iterator over the following sibling nodes in document order
.previous_sibling     The previous sibling node in document order
.previous_siblings    An iterator over the preceding sibling nodes in document order
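A minimal sketch of these attributes, assuming soup is the object built from the demo page above:

# Downward traversal
print(soup.head.contents)                 # list of the <head> tag's children
for child in soup.body.children:          # iterator over the direct children of <body>
    print(child.name)
for node in soup.body.descendants:        # iterator over every descendant of <body>
    print(node.name)                      # NavigableString nodes have name == None

# Upward traversal
print(soup.a.parent.name)                 # 'p'
for parent in soup.a.parents:             # p, body, html, then the document itself
    print(parent.name)

# Sideways traversal
print(repr(soup.a.next_sibling))          # the string ' and ' that follows the first <a>
print(soup.a.next_sibling.next_sibling)   # the second <a> tag
for sibling in soup.a.next_siblings:      # everything after the first <a> inside its parent
    print(repr(sibling))
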
Searching BeautifulSoup nodes: <>.find_all()

.find_all(name, attrs, recursive, string, **kwargs)
name: a tag name or list of names; attrs: an attribute value to filter on; recursive: whether to search all descendants or only direct children (default True, i.e. all descendants); string: a non-attribute string inside tags; **kwargs: extra keyword arguments such as id=..., treated as attribute filters
.find_all(name)

>>> soup.find_all('a')
[<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>, <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>]
>>> soup.find_all(['a','b'])
[<b>The demo python introduces several python courses.</b>, <a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>, <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>]

.find_all(True) returns all descendant tags

>>> for tag in soup.find_all(True):
	print(tag.name)
html
head
title
body
p
b
p
a
a

<tag>(...) is equivalent to <tag>.find_all(...)
soup(...) is equivalent to soup.find_all(...)
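A sketch of the remaining find_all() parameters, again assuming the demo soup from above:

soup.find_all('a', 'py1')                      # attrs: <a> tags whose class attribute contains 'py1'
soup.find_all('p', attrs={'class': 'course'})  # attrs as an explicit dictionary
soup.find_all('a', recursive=False)            # [] -- no <a> tag is a direct child of the document root
soup.find_all(string='Basic Python')           # ['Basic Python'] -- matches tag text, not tag names
soup.find_all(id='link2')                      # **kwargs: any other keyword acts as an attribute filter
soup.find_all(class_='py2')                    # 'class' is a Python keyword, so bs4 accepts class_ instead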

Other BeautifulSoup search methods

Method                        Description
<>.find()                     Returns the first result of .find_all()
<>.find_parents()             Searches the ancestor nodes; returns a list type
<>.find_parent()              Returns the first result of .find_parents()
<>.find_next_siblings()       Searches the following sibling nodes; returns a list type
<>.find_next_sibling()        Returns the first result of .find_next_siblings()
<>.find_previous_siblings()   Searches the preceding sibling nodes; returns a list type
<>.find_previous_sibling()    Returns the first result of .find_previous_siblings()
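A brief sketch of these variants, assuming the same demo soup; they take the same arguments as .find_all() but change the search direction or return only the first match:

first_a = soup.find('a')                        # first matching tag only, or None if nothing matches
print(first_a.find_parent('p')['class'])        # nearest <p> ancestor -> ['course']
print(first_a.find_parents('body'))             # matching ancestors, returned as a list-like result set
print(first_a.find_next_sibling('a'))           # the second <a> tag
second_a = soup.find_all('a')[1]
print(second_a.find_previous_sibling('a'))      # back to the first <a>
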
Combining BeautifulSoup with regular expressions
>>> soup.find_all(id='link1')
[<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>]
>>> soup.find_all(id='link')
[]
>>> import re
>>> soup.find_all(id=re.compile("link"))
[<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>, <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>]
>>> 
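Compiled patterns work with any find_all() filter, not just id=; a small sketch on the same demo soup:

soup.find_all('a', href=re.compile('course/BIT'))   # <a> tags whose href matches the pattern
soup.find_all(class_=re.compile('py'))              # class filter; the trailing underscore avoids the Python keyword
soup.find_all(string=re.compile('Python'))          # NavigableStrings containing 'Python'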
