BeautifulSoup

Part 1: BeautifulSoup: Usage
Beautiful Soup handles navigating, searching, and modifying a parse tree, which makes it a convenient tool for extracting data while learning web scraping.
1. Installation
1.1 Install beautifulsoup4
pip install beautifulsoup4
1.2 Install lxml
pip install lxml
2. Basic usage
The following HTML snippet (borrowed from the study material) will serve as the running example:
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
Parsing it with BeautifulSoup produces a BeautifulSoup object, which can print the document in a standard indented format:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'lxml')
# pretty-print the HTML
print(soup.prettify())
The formatted output:
<html>
 <head>
  <title>
   The Dormouse's story
  </title>
 </head>
 <body>
  <p class="title">
   <b>
    The Dormouse's story
   </b>
  </p>
  <p class="story">
   Once upon a time there were three little sisters; and their names were
   <a class="sister" href="http://example.com/elsie" id="link1">
    Elsie
   </a>
   ,
   <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>
   and
   <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>
   ;
   and they lived at the bottom of a well.
  </p>
  <p class="story">
   ...
  </p>
 </body>
</html>
A few simple ways to navigate the parsed structure:
soup.title # the <title> tag # <title>The Dormouse's story</title>
soup.title.name # the tag's name # 'title'
soup.title.string # the text inside <title> # "The Dormouse's story"
soup.title.parent # the parent tag
soup.title.parent.name # the parent tag's name # 'head'
soup.p # the first <p> tag # <p class="title"><b>The Dormouse's story</b></p>
soup.p['class'] # the <p> tag's class attribute # ['title'] (multi-valued attributes are returned as lists)
soup.a # the first <a> tag
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
soup.find_all('a') # every <a> tag
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
soup.find(id="link3") # the tag whose id is "link3"
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
Extracting every link from the document:
for i in soup.find_all('a'): # iterate over the <a> tags
    print(i.get('href')) # print each href value
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie
Extracting all of the document's text content:
print(soup.get_text())
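The shortcuts above can be collected into one runnable sketch (using the stdlib html.parser here so no extra parser install is needed; 'lxml' behaves the same on this document):

```python
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'html.parser')

print(soup.title.string)       # The Dormouse's story
print(soup.title.parent.name)  # head
print(soup.p['class'])         # ['title'] -- class values come back as a list
links = [a.get('href') for a in soup.find_all('a')]
print(links)                   # the three example.com URLs in document order
```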
3. Kinds of objects
Beautiful Soup converts an HTML document into a tree of Python objects. Every node is one of four types: Tag, NavigableString, BeautifulSoup, or Comment.
3.1 Tag
A Tag object corresponds to a tag in the HTML document.
from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'lxml')
a = soup.b
print(a) # <b>The Dormouse's story</b>
print(type(a)) # <class 'bs4.element.Tag'>
Attribute-style access retrieves the tag you need:
from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'lxml')
a = soup.head
b = soup.title
print(a) # <head><title>The Dormouse's story</title></head>
print(b) # <title>The Dormouse's story</title>
Chaining tag names drills down to a nested tag quickly:
soup.body.b # <b>The Dormouse's story</b>
Note: when the document contains several tags with the same name, dotted access returns only the first one:
soup.a # <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
Use find_all() to retrieve every tag with a given name:
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
3.2 NavigableString (strings)
tag.string returns the text contained inside a tag as a NavigableString:
from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'lxml')
a = soup.p.string
b = type(soup.p.string)
print(a) # The Dormouse's story
print(b) # <class 'bs4.element.NavigableString'>
3.3 BeautifulSoup
The BeautifulSoup object represents the document as a whole. Like a Tag, it has a type, a name, and attributes:
from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'lxml')
a = type(soup.name) # type
b = soup.name # name
c = soup.attrs # attributes
print(a) # <class 'str'>
print(b) # [document]
print(c) # {}
3.4 Comment
When the string inside a tag is an HTML comment, it becomes a Comment object:
html_doc='<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>'
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.a.string) # Elsie
print(type(soup.a.string)) # <class 'bs4.element.Comment'>
The content of the <a> tag is actually a comment, but .string returns it with the comment markers stripped, so check the type before treating the value as visible text.
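A minimal sketch of telling comments apart from ordinary text, since both come back through .string (Comment and NavigableString are importable directly from bs4):

```python
from bs4 import BeautifulSoup, Comment, NavigableString

markup = '<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>'
soup = BeautifulSoup(markup, 'html.parser')

value = soup.a.string
# Comment subclasses NavigableString, so test for Comment first.
if isinstance(value, Comment):
    kind = 'comment'
elif isinstance(value, NavigableString):
    kind = 'text'
print(kind)  # comment
```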
Part 2: BeautifulSoup: Traversing the tree
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'lxml')
This example document demonstrates how to move from one part of the tree to another.
1. Child nodes
Beautiful Soup provides many attributes for navigating and iterating over a tag's children.
1.1 .contents and .children
A tag's .contents attribute returns its direct children as a list:
soup = BeautifulSoup(html_doc, 'lxml')
head_tag = soup.head # <head><title>The Dormouse's story</title></head>
b = head_tag.contents # [<title>The Dormouse's story</title>]
title_tag = head_tag.contents[0] # <title>The Dormouse's story</title>
d = title_tag.contents # ["The Dormouse's story"]
print(head_tag)
print(b)
print(title_tag)
print(d)
Strings have no .contents attribute, because a string cannot contain child nodes:
text = title_tag.contents[0]
text.contents
# AttributeError: 'NavigableString' object has no attribute 'contents'
A tag's .children generator iterates over its direct children:
print(title_tag.children) # <list_iterator object at 0x101b78860>
print(type(title_tag.children)) # <class 'list_iterator'>
for child in title_tag.children:
    print(child) # The Dormouse's story
1.2 .descendants
The .contents and .children attributes cover only a tag's direct children. For example, <head> has a single direct child, <title>:
head_tag.contents # [<title>The Dormouse's story</title>]
But <title> itself has a child node: the string "The Dormouse's story". That string is therefore also a descendant of <head>. The .descendants attribute iterates recursively over all of a tag's descendants:
for child in head_tag.descendants:
    print(child)
# <title>The Dormouse's story</title>
# The Dormouse's story
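The difference is easy to see by counting; a sketch comparing .children and .descendants on the <head> tag:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(
    "<html><head><title>The Dormouse's story</title></head></html>",
    'html.parser')
head = soup.head

children = list(head.children)        # direct children only
descendants = list(head.descendants)  # children, their children, and so on

print(len(children))     # 1 -- just the <title> tag
print(len(descendants))  # 2 -- <title> plus the string inside it
```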
2. Node content
2.1 .string
print(soup.head.string) # The Dormouse's story
If a tag has more than one child, .string cannot decide which child's text to return, so it yields None:
print(soup.html.string) # None
2.2 .text
If a tag has multiple children, .text returns all of the text inside it, concatenated:
print(soup.html.text)
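A small sketch of the contrast on a tag with several children:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p><b>One</b> and <i>Two</i></p>', 'html.parser')

# <p> has three children (<b>, a text node, <i>), so .string gives up:
print(soup.p.string)  # None
# .text (equivalent to .get_text()) concatenates every descendant string:
print(soup.p.text)    # One and Two
```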
3. Multiple strings: the .strings and .stripped_strings attributes
3.1 .strings
The .strings generator yields every string in the document, for example:
for string in soup.strings:
    print(repr(string))
'''
'\n'
"The Dormouse's story"
'\n'
'\n'
"The Dormouse's story"
'\n'
'Once upon a time there were three little sisters; and their names were\n'
'Elsie'
',\n'
'Lacie'
' and\n'
'Tillie'
';\nand they lived at the bottom of a well.'
'\n'
'...'
'\n'
'''
3.2 .stripped_strings
The output above is full of extra whitespace and blank lines; .stripped_strings removes them:
for string in soup.stripped_strings:
    print(repr(string))
'''
"The Dormouse's story"
"The Dormouse's story"
'Once upon a time there were three little sisters; and their names were'
'Elsie'
','
'Lacie'
'and'
'Tillie'
';\nand they lived at the bottom of a well.'
'...'
'''
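A common follow-up is joining the stripped strings back into clean text; a sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>  Hello \n <b> world </b>\n</p>', 'html.parser')

# .stripped_strings skips whitespace-only strings and trims the rest,
# so joining the pieces yields normalized text.
clean = ' '.join(soup.stripped_strings)
print(clean)  # Hello world
```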
4. Parent nodes
4.1 .parent
The .parent attribute returns an element's parent node.
from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'lxml')
a = soup.title
b = a.parent # the parent node
print(a) # <title>The Dormouse's story</title>
print(b) # <head><title>The Dormouse's story</title></head>
The parent of a top-level tag such as <html> is the BeautifulSoup object itself:
html_tag = soup.html
type(html_tag.parent)
# <class 'bs4.BeautifulSoup'>
4.2 .parents
The .parents generator iterates over all of an element's ancestors:
from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'lxml')
a = soup.a
for i in a.parents:
    print(i.name) # p, body, html, [document]
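The ancestor walk above can build the full path from a tag to the document root; a sketch on a cut-down version of the example document:

```python
from bs4 import BeautifulSoup

html_doc = ('<html><body><p class="story">'
            '<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>'
            '</p></body></html>')
soup = BeautifulSoup(html_doc, 'html.parser')

# Collect each ancestor's name, from the immediate parent outward.
path = [parent.name for parent in soup.a.parents]
print(path)  # ['p', 'body', 'html', '[document]']
```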
Part 3: BeautifulSoup: Searching
1. find_all()
find_all( name , attrs , recursive , string , **kwargs )
find_all() searches all tag descendants of the current tag and collects every one that matches the filter conditions:
soup.find_all("title") # [<title>The Dormouse's story</title>]
soup.find_all("p", "title") # [<p class="title"><b>The Dormouse's story</b></p>]
soup.find_all("a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
soup.find_all(id="link2")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
import re
# fuzzy match: any string containing "sisters"
soup.find(string=re.compile("sisters"))
# 'Once upon a time there were three little sisters; and their names were\n'
1.1 The name parameter
The name parameter matches every tag whose name is name:
soup.find_all("title") # [<title>The Dormouse's story</title>]
(1) Passing a string
soup.find_all('b') # [<b>The Dormouse's story</b>]
(2) Passing a regular expression
import re
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
# body
# b
(3) Passing a list
soup.find_all(["a", "b"])
# [<b>The Dormouse's story</b>,
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
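(4) Passing a function. find_all() also accepts a callable that receives each tag and returns True for the ones to keep; a sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(
    '<p class="title"><b>Bold</b></p>'
    '<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>',
    'html.parser')

def has_class_but_no_id(tag):
    # Keep tags that define class but not id.
    return tag.has_attr('class') and not tag.has_attr('id')

matches = [t.name for t in soup.find_all(has_class_but_no_id)]
print(matches)  # ['p'] -- <b> has no class, <a> has an id
```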
1.2 Keyword arguments
soup.find_all(id='link2')
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
import re
# links whose href contains "elsie"
print(soup.find_all(href=re.compile("elsie")))
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
# strings that start with "The"
print(soup.find_all(text=re.compile("^The")))
# ["The Dormouse's story", "The Dormouse's story"]
# tags whose class attribute matches "st"
print(soup.find_all(class_=re.compile("st")))
Several keyword arguments can be combined to filter on multiple attributes at once:
soup.find_all(href=re.compile("elsie"), id='link1')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
Filtering by class: because class is a reserved word in Python, the argument is spelled class_ with a trailing underscore:
print(soup.find_all("a", class_="sister"))
'''
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
]
'''
Attributes that cannot be expressed as keyword arguments (such as data-* attributes) can be searched through find_all()'s attrs parameter, which takes a dictionary:
data_soup = BeautifulSoup('<div data-foo="value">foo!</div>', 'lxml')
data_soup.find_all(attrs={"data-foo": "value"}) # [<div data-foo="value">foo!</div>]
1.3 The text parameter (newer Beautiful Soup releases call this parameter string)
import re
print(soup.find_all(text="Elsie")) # ['Elsie']
print(soup.find_all(text=["Tillie", "Elsie", "Lacie"])) # ['Elsie', 'Lacie', 'Tillie']
# any string containing "Dormouse"
print(soup.find_all(text=re.compile("Dormouse"))) # ["The Dormouse's story", "The Dormouse's story"]
1.4 The limit parameter
find_all() returns every match, which can be slow when the document tree is large. If you do not need all of the results, the limit parameter caps how many are returned:
print(soup.find_all("a",limit=2))
print(soup.find_all("a")[0:2])
'''
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
'''
2. find()
find(name , attrs , recursive , string , **kwargs )
find_all() returns every tag in the document that matches the conditions; when only a single result is needed, use find():
soup.find_all('title', limit=1) # [<title>The Dormouse's story</title>]
soup.find('title') # <title>The Dormouse's story</title>
The only difference is that find_all() returns a list (here containing a single element), while find() returns the matching tag directly. find() calls can be chained:
soup.head.title # <title>The Dormouse's story</title>
soup.find("head").find("title") # <title>The Dormouse's story</title>
3. find_parents() and find_parent()
These mirror find_all() and find(), but search upward through an element's ancestors rather than downward through its descendants:
a_string = soup.find(text="Lacie")
print(a_string) # Lacie
print(a_string.find_parent())
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
print(a_string.find_parents())
print(a_string.find_parent("p"))
'''
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
and they lived at the bottom of a well.
</p>
'''
Part 4: BeautifulSoup: CSS selectors
1. Search by tag name
print(soup.select("title")) #[<title>The Dormouse's story</title>]
print(soup.select("b")) #[<b>The Dormouse's story</b>]
2. Search by class name
print(soup.select(".sister"))
'''
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
'''
3. Search by id
print(soup.select("#link1"))
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
4. Combined selectors
Selectors combine exactly as they do in a stylesheet. For example, to find the element with id link2 inside a <p> tag, separate the two with a space (the descendant combinator):
print(soup.select("p #link2"))
#[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
Searching direct children uses the > combinator:
print(soup.select("p > #link2"))
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
An element that carries both a given class and a given id:
a_string = soup.select(".story#test")
An element with several classes:
a_string = soup.select(".story.test")
Several classes combined with an id:
a_string = soup.select(".story.test#book")
5. Search by attribute
Attribute filters are written in square brackets. The bracketed attribute belongs to the same node as the tag name, so no space may separate them, or the selector will not match:
print(soup.select("a[href='http://example.com/tillie']"))
#[<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
select() always returns a list; iterate over it and call get_text() to extract each element's text:
for title in soup.select('a'):
    print(title.get_text())
'''
Elsie
Lacie
Tillie
'''
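When only the first match is needed, select_one() returns it directly instead of a one-element list; a sketch combining it with the combinators above:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(
    '<p class="story">'
    '<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>'
    '</p>', 'html.parser')

# select() would return [<a ...>]; select_one() skips the [0] indexing
# and returns None when nothing matches.
first = soup.select_one('p > a.sister#link2')
print(first.get_text())  # Lacie
```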