I. Document Parsers
The first argument to the BeautifulSoup constructor should be the document to be parsed, given either as a string or as an open file handle; the second argument tells Beautiful Soup how to parse that document. Both HTML and XML markup are supported, and the parser can be lxml, html5lib, or Python's built-in html.parser; different parsers may build different trees from the same document, especially when the markup is invalid.
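A minimal sketch of how parsers can disagree on invalid markup (html.parser ships with Python; the lxml branch only runs if lxml is installed):

```python
from bs4 import BeautifulSoup, FeatureNotFound

# An invalid fragment: an <a> tag "closed" by a stray </p> end tag.
fragment = "<a></p>"

# Python's built-in parser simply ignores the stray end tag.
print(BeautifulSoup(fragment, "html.parser"))  # <a></a>

# lxml also drops it, but wraps the result in <html><body>.
try:
    print(BeautifulSoup(fragment, "lxml"))  # <html><body><a></a></body></html>
except FeatureNotFound:
    print("lxml is not installed")
```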
II. Encoding
1. Every HTML or XML document is written in some specific encoding, such as ASCII or UTF-8, but after Beautiful Soup parses it the document is converted to Unicode, because Beautiful Soup runs automatic encoding detection on the input.
from bs4 import BeautifulSoup
# Bytes whose \xc3\xa9 sequence is the UTF-8 encoding of "é"
markup = b"<h1>Sacr\xc3\xa9 bleu!</h1>"
soup = BeautifulSoup(markup,'lxml')
print(soup.h1)
print(soup.h1.string)
<h1>Sacré bleu!</h1>
Sacré bleu!
2. Automatic encoding detection sometimes guesses wrong. If you know the document's actual encoding, pass it in with the from_encoding argument.
markup = b"<h1>\xed\xe5\xec\xf9</h1>"
soup = BeautifulSoup(markup,'lxml')
print(soup.h1)
print(soup.original_encoding)
<h1>νεμω</h1>
ISO-8859-7
soup = BeautifulSoup(markup,'lxml',from_encoding="iso-8859-8")
print(soup.h1)
print(soup.original_encoding)
<h1>םולש</h1>
iso-8859-8
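If the correct encoding is unknown but the detector's guess is known to be wrong, Beautiful Soup (4.4 and later) also accepts an exclude_encodings argument that rules candidate guesses out. A sketch, with the caveat that the replacement guess depends on which charset detector (if any, e.g. chardet) is installed:

```python
from bs4 import BeautifulSoup

markup = b"<h1>\xed\xe5\xec\xf9</h1>"

# Rule out the wrong guess so the detector must pick something else.
soup = BeautifulSoup(markup, "html.parser",
                     exclude_encodings=["ISO-8859-7"])
print(soup.original_encoding)
```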
3. Documents are written out in UTF-8 by default (note how the <meta> charset below gets rewritten). For a different output encoding, pass it to prettify() or encode().
markup = b'''
<html>
<head>
<meta content="text/html; charset=ISO-Latin-1" http-equiv="Content-type" />
</head>
<body>
<p>Sacr\xe9 bleu!</p>
</body>
</html>
'''
soup = BeautifulSoup(markup,'lxml')
print(soup.prettify())
<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-type"/>
</head>
<body>
<p>
Sacré bleu!
</p>
</body>
</html>
Specifying a different output encoding:
print(soup.prettify("latin-1"))
print(soup.encode("latin-1"))
b'<html>\n <head>\n <meta content="text/html; charset=latin-1" http-equiv="Content-type"/>\n </head>\n <body>\n <p>\n Sacr\xe9 bleu!\n </p>\n </body>\n</html>\n'
b'<html>\n<head>\n<meta content="text/html; charset=latin-1" http-equiv="Content-type"/>\n</head>\n<body>\n<p>Sacr\xe9 bleu!</p>\n</body>\n</html>\n'
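When the target encoding can't represent a character, encode() does not raise an error: by default it substitutes an XML numeric character reference. A small sketch using the built-in html.parser:

```python
from bs4 import BeautifulSoup

snowman = "<b>\N{SNOWMAN}</b>"  # U+2603, not representable in ASCII
soup = BeautifulSoup(snowman, "html.parser")

print(soup.encode("utf-8"))   # b'<b>\xe2\x98\x83</b>'
print(soup.encode("ascii"))   # b'<b>&#9731;</b>'
```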
III. Parsing Only Part of a Document
Sometimes we only want to look at certain tags, yet we would normally have to parse the entire document first, wasting both memory and time. Instead, define a SoupStrainer that describes the parts you want and pass it to the BeautifulSoup constructor as the parse_only argument. (This feature does not work with the html5lib parser, which always parses the whole document.)
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import SoupStrainer
# Only <a> tags
only_a_tags = SoupStrainer("a")
print(BeautifulSoup(html_doc, "html.parser", parse_only=only_a_tags).prettify())
<a class="sister" href="http://example.com/elsie" id="link1">
Elsie
</a>
<a class="sister" href="http://example.com/lacie" id="link2">
Lacie
</a>
<a class="sister" href="http://example.com/tillie" id="link3">
Tillie
</a>
# Only the tag with id="link2"
only_tags_with_id_link2 = SoupStrainer(id="link2")
print(BeautifulSoup(html_doc, "html.parser", parse_only=only_tags_with_id_link2).prettify())
<a class="sister" href="http://example.com/lacie" id="link2">
Lacie
</a>
# Only strings shorter than 10 characters
def is_short_string(string):
    return len(string) < 10
only_short_strings = SoupStrainer(text=is_short_string)
print(BeautifulSoup(html_doc, "html.parser", parse_only=only_short_strings).prettify())
Elsie
,
Lacie
and
Tillie
...
A SoupStrainer can also be passed directly to a search method such as find_all():
soup = BeautifulSoup(html_doc,'lxml')
soup.find_all(only_short_strings)
['\n', '\n', 'Elsie', ',\n', 'Lacie', ' and\n', 'Tillie', '\n', '...', '\n']
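Like find_all(), a SoupStrainer can combine a tag name with attribute criteria. A small self-contained sketch (with its own toy markup rather than the html_doc above):

```python
from bs4 import BeautifulSoup, SoupStrainer

markup = """
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
<b class="sister">not a link</b>
"""

# Keep only <a> tags whose class attribute is "sister".
only_sister_links = SoupStrainer("a", attrs={"class": "sister"})
soup = BeautifulSoup(markup, "html.parser", parse_only=only_sister_links)
print(len(soup.find_all("a")))  # 2
```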