A Detailed Guide to the BeautifulSoup Library

BeautifulSoup is a flexible and convenient web-page parsing library. It is efficient, supports multiple parsers, and lets you extract information from web pages without writing regular expressions.

Parsers at a Glance

- Python standard library: BeautifulSoup(markup, "html.parser")
  Pros: built into Python, moderate speed, decent error tolerance
  Cons: poor error tolerance in versions before Python 2.7.3 / 3.2.2
- lxml HTML parser: BeautifulSoup(markup, "lxml")
  Pros: fast, good error tolerance
  Cons: requires the lxml C library
- lxml XML parser: BeautifulSoup(markup, "xml")
  Pros: fast, the only parser that supports XML
  Cons: requires the lxml C library
- html5lib: BeautifulSoup(markup, "html5lib")
  Pros: best error tolerance, parses documents the way a browser does, produces HTML5-style output
  Cons: very slow, external Python dependency

Note: the scripts in this tutorial need two packages installed (as far as known): bs4 and lxml. lxml is never imported directly, but it is still required as the parser backend.
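As a quick sanity check, the parser is simply named by the second argument to the BeautifulSoup constructor; a minimal sketch using the built-in html.parser, so that nothing beyond bs4 itself needs to be installed:

```python
from bs4 import BeautifulSoup

# html.parser ships with Python, so only bs4 must be pip-installed;
# swapping the string for "lxml" or "html5lib" switches the backend.
soup = BeautifulSoup("<p>Hello</p>", "html.parser")
print(soup.p.string)
```

For well-formed markup like this, all parsers give the same result; they differ mainly in speed and in how they repair broken HTML.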

Basic Usage

from bs4 import BeautifulSoup
html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!--Elsie--></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html,"lxml")  # lxml is used throughout the demos; it is the most common choice
print(soup.prettify())  # pretty-print the tree; incomplete markup was already repaired during parsing
print(soup.title.string)  # print the title text, i.e. The Dormouse's story

Tag Selectors

from bs4 import BeautifulSoup
html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!--Elsie--></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html,"lxml")
print(soup.title)   # print the specified tag
print(soup.head)
print(soup.p)   # if there are multiple matches, only the first is returned. (1) selecting elements
print(soup.title.name)   # (2) getting the tag name
print(soup.p.attrs["name"])   # (3) getting an attribute
print(soup.p["name"])   # (3) getting an attribute, shorthand form
print(soup.p.string)  # (4) getting the text content
print(soup.head.title.string)  # (5) nested selection: a returned tag can be queried further

Output

<title>The Dormouse's story</title>
<head><title>The Dormouse's story</title></head>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
title
dromouse
dromouse
The Dormouse's story
The Dormouse's story

Children and Descendants

from bs4 import BeautifulSoup
html = '''
<html>
 <head>
  <title>
   The Dormouse's story
  </title>
 </head>
 <body>
  <p class="story">
   Once upon a time there were three little sisters; and their names were
   <a class="sister" href="http://example.com/elsie" id="link1">
    <span>Elsie</span>
   </a>
   ,
   <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>
   and
   <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>
   ;
and they lived at the bottom of a well.
  </p>
  <p class="story">
   ...
  </p>
 </body>
</html>
'''
soup = BeautifulSoup(html,"lxml")
print(soup.p.contents)  # children of the first <p> only, returned as a list; each element is a bs4 node type

Output

['\n   Once upon a time there were three little sisters; and their names were\n   ', <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>, '\n   ,\n   ', <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>, '\n   and\n   ', <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>, '\n   ;\nand they lived at the bottom of a well.\n  ']
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,"lxml")  # the html variable reuses the markup from the previous example, as do the examples below
print(soup.p.children)
for i, child in enumerate(soup.p.children):
    # soup.p.children is an iterator, so it must be looped over to read the content
    print(i, child)

Output

<list_iterator object at 0x10a20b6a0>
0 
   Once upon a time there were three little sisters; and their names were
   
1 <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
2 
   ,
   
3 <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>
4 
   and
   
5 <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>
6 
   ;
and they lived at the bottom of a well.
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,"lxml")
print(soup.p.descendants)
for i, child in enumerate(soup.p.descendants):
    # soup.p.descendants is also an iterator; it yields every nested node individually
    print(i, child)

Output

<generator object Tag.descendants at 0x109615c78>
0 
   Once upon a time there were three little sisters; and their names were
   
1 <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
2 

3 <span>Elsie</span>
4 Elsie
5 

6 
   ,
   
7 <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>
8 
    Lacie
   
9 
   and
   
10 <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>
11 
    Tillie
   
12 
   ;
and they lived at the bottom of a well.

Parents and Ancestors

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,"lxml")
print(soup.a.parent)  # starting from the first <a> tag, print its parent tag in full

Output

<p class="story">
   Once upon a time there were three little sisters; and their names were
   <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
   ,
   <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>
   and
   <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>
   ;
and they lived at the bottom of a well.
  </p>
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,"lxml")
print(list(enumerate(soup.a.parents)))
# starting from the first <a> tag, print every ancestor in full
# the last two entries (the <html> tag and the document object) usually print the same content
# note the difference: parent is a single node, parents iterates over all ancestors

Output omitted
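Since the full dump is long, a compact alternative is to print only each ancestor's tag name. A small sketch with simplified made-up markup (html.parser is used here just to keep the example self-contained):

```python
from bs4 import BeautifulSoup

html = '<html><body><p class="story"><a id="link1">Elsie</a></p></body></html>'
soup = BeautifulSoup(html, "html.parser")
# parents walks upward: <p>, <body>, <html>, then the document object itself
names = [parent.name for parent in soup.a.parents]
print(names)
```

The final entry, "[document]", is the name of the BeautifulSoup object that wraps the whole tree.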

Siblings

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,"lxml")
print(list(enumerate(soup.a.next_siblings)))  # siblings that come after this node
print(list(enumerate(soup.a.previous_siblings)))  # siblings that come before this node

Output

[(0, '\n   ,\n   '), (1, <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>), (2, '\n   and\n   '), (3, <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>), (4, '\n   ;\nand they lived at the bottom of a well.\n  ')]
[(0, '\n   Once upon a time there were three little sisters; and their names were\n   ')]

Standard Selectors

find_all

find_all(name, attrs, recursive, text, **kwargs) searches the whole document and returns a list of every node matching the given tag name, attributes, or text content.
name:

from bs4 import BeautifulSoup
html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
soup = BeautifulSoup(html,"lxml")
print(soup.find_all('ul'))
print(type(soup.find_all('ul')[0]))

Output

[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>, <ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
<class 'bs4.element.Tag'>   # each result is a Tag, so queries can be nested, as the next example shows
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,"lxml")
for ul in soup.find_all('ul'):  # nested search
    print(ul.find_all('li'))

Output

[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]

attrs:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,"lxml")
print(soup.find_all(attrs={'id':'list-1'}))
# attributes are passed as a dict; the whole matching tag is printed
# common attributes such as id can also be passed directly as keyword arguments
print(soup.find_all(id='list-1'))
print(soup.find_all(attrs={'name':'elements'}))  # no tag here has this attribute, so the result is empty

Output

[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[]
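One gotcha worth noting: class is a reserved word in Python, so it cannot be used as a keyword argument directly. bs4 accepts class_ for this purpose, while the attrs dict can still use the plain string. A minimal sketch on made-up markup:

```python
from bs4 import BeautifulSoup

html = '<ul><li class="element">Foo</li><li class="element">Bar</li></ul>'
soup = BeautifulSoup(html, "html.parser")
# class_ avoids the clash with the Python keyword "class"
by_keyword = soup.find_all(class_="element")
by_dict = soup.find_all(attrs={"class": "element"})
print(by_keyword)
print(by_dict)
```

Both calls find the same two <li> tags.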

text
Matches nodes by their text content; newer bs4 versions also accept this filter under the name string.

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,"lxml")
print(soup.find_all(text="Foo"))

Output

['Foo', 'Foo']
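text= also accepts a compiled regular expression, which enables partial matches instead of exact ones; a small sketch (the markup here is made up for the demo):

```python
import re
from bs4 import BeautifulSoup

html = '<li>Foo</li><li>Bar</li><li>Foobar</li>'
soup = BeautifulSoup(html, "html.parser")
# a regex matches any text node containing "Foo", not just the exact string
matches = soup.find_all(text=re.compile("Foo"))
print(matches)
```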

find(name, attrs, recursive, text, **kwargs)
find takes exactly the same arguments as find_all, but returns a single element instead of a list, equivalent to the first item of the find_all result (or None when nothing matches).
Methods that return all matching nodes:
find_parents(), find_next_siblings(), find_previous_siblings(), find_all_next(), find_all_previous()
Methods that return the first matching node:
find_parent(), find_next_sibling(), find_previous_sibling(), find_next(), find_previous()
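These methods take the same filters as find and find_all; a brief sketch of find_next_sibling and find_next_siblings on made-up markup:

```python
from bs4 import BeautifulSoup

html = ('<p><a id="link1">Elsie</a>, <a id="link2">Lacie</a> and '
        '<a id="link3">Tillie</a></p>')
soup = BeautifulSoup(html, "html.parser")
first = soup.find(id="link1")
# the same filters as find/find_all, restricted to siblings after this node
print(first.find_next_sibling("a"))
print(len(first.find_next_siblings("a")))
```

Note that the intervening text nodes (", " and " and ") are skipped because the filter asks for <a> tags.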

CSS Selectors

Passing a CSS selector string to select() performs the query directly.

from bs4 import BeautifulSoup
soup = BeautifulSoup(html,"lxml")
print(soup.select('.panel .panel-heading'))
# a leading "." selects by class; the space is a descendant combinator:
# this finds .panel-heading elements nested inside a .panel element
print(soup.select('ul li'))  # plain tag names need no prefix
print(soup.select('#list-2 .element'))  # "#" selects by id
print(type(soup.select('ul')[0]))
soup = BeautifulSoup(html,"lxml")
for ul in soup.select('ul'):  # nested selection works here too
    print(ul.select('li'))

Output

[<div class="panel-heading">
<h4>Hello</h4>
</div>]
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>, <li class="element">Foo</li>, <li class="element">Bar</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]
<class 'bs4.element.Tag'>
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]
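Alongside select(), bs4 also provides select_one(), which returns only the first match (or None), mirroring the find/find_all split; a minimal sketch:

```python
from bs4 import BeautifulSoup

html = ('<ul class="list" id="list-1"><li class="element">Foo</li>'
        '<li class="element">Bar</li></ul>')
soup = BeautifulSoup(html, "html.parser")
# select_one stops at the first element matching the CSS selector
print(soup.select_one('#list-1 .element'))
```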

Getting attributes:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "lxml")
print(soup.select('ul'))
for ul in soup.select('ul'):
    print(ul['id'])
    print(ul.attrs['id'])  # both forms retrieve the attribute

Output

[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>, <ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
list-1
list-1
list-2
list-2

Getting text content

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "lxml")
print(soup.select('ul'))
for li in soup.select('li'):
    print(li.get_text())  # get the text inside the tag

Output

[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>, <ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
Foo
Bar
Jay
Foo
Bar
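get_text() also takes optional arguments that are handy for cleanup: separator joins the pieces of text, and strip=True trims surrounding whitespace from each piece; a small sketch:

```python
from bs4 import BeautifulSoup

html = '<ul><li> Foo </li><li> Bar </li></ul>'
soup = BeautifulSoup(html, "html.parser")
# strip=True trims each text fragment; separator joins the fragments
print(soup.ul.get_text(separator=",", strip=True))
```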