The Beautiful Soup Library

Beautiful Soup is a Python library for parsing HTML and XML; it makes it easy to extract data from web pages.
https://beautifulsoup.readthedocs.io/zh_CN/v4.4.0/

Initialization

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
# or load from a local file:
soup = BeautifulSoup(open('./bsTest01.html'), 'lxml')

Parsers

(Image: beautifulsoup1.jpg — a comparison table of the parsers supported by Beautiful Soup.)
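The parser is chosen by the second argument to the BeautifulSoup constructor. A minimal sketch (assuming lxml and html5lib have been installed with pip) showing how the different parsers repair the same broken markup:

from bs4 import BeautifulSoup

broken_html = "<p class='title'><b>The Dormouse's story"  # deliberately unclosed tags

# html.parser: Python's built-in parser, no extra dependency
print(BeautifulSoup(broken_html, 'html.parser').prettify())

# lxml: fast and lenient; requires `pip install lxml`
print(BeautifulSoup(broken_html, 'lxml').prettify())

# html5lib: parses the way a browser does, most tolerant but slowest; requires `pip install html5lib`
print(BeautifulSoup(broken_html, 'html5lib').prettify())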

1. Node Selectors

from bs4 import BeautifulSoup

## 1. Node selectors
html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="linkl"><!-- Elsie --></a>,
<a href="http://example.com/elsie" class="sister" id="link2">Lacie</a> and 
<a href="http://example.com/elsie" class="sister" id="link3">Tillie</a>;
and they live at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html,'lxml')
# Select elements
print(soup.title)
print(type(soup.title))
print(soup.title.string)
print(soup.head)
print(soup.p)
--------------
<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story
<head><title>The Dormouse's story</title></head>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>

The selections above return bs4.element.Tag objects; the following subsections cover the useful attributes this type exposes.

Extracting information

Getting the node name

print(soup.title.name)
--------------------------
title

Getting node attributes: .attrs returns all of a node's attributes as a dict

print(soup.p.attrs)
print(soup.p.attrs['name'])
# Alternative: since attrs returns a dict, you can index the tag by key directly.
print(soup.p['name'])
print(soup.p['class'])
--------------------------
{'class': ['title'], 'name': 'dromouse'}
dromouse
dromouse
['title']

Note the type of each result: multi-valued attributes such as class come back as a list, while single-valued attributes such as name come back as a plain string.
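A minimal sketch of handling the two cases, reusing the soup object from the example above:

# class is a multi-valued HTML attribute, so Beautiful Soup returns it as a list
value = soup.p['class']
if isinstance(value, list):
    value = ' '.join(value)   # ['title'] -> 'title'
print(value)

# name is single-valued, so it comes back as a plain string
print(type(soup.p['name']))   # <class 'str'>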

Getting the content

print(soup.p.string)
--------------------------
The Dormouse's story

Nested selection

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
"""
soup = BeautifulSoup(html,'lxml')
print(soup.head.title)
print(type(soup.head.title))
print(soup.head.title.string)
------------------------------------
<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story

Each step of a nested selection again returns a bs4.element.Tag, so you can keep calling node selections on the result. For example, after selecting the head node, you can call .title on it to select the title element inside it.

Associated selection (related nodes)

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="linkl"><span>Elsie<span></a>,
<a href="http://example.com/elsie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/elsie" class="sister" id="link3">Tillie</a>;
and they live at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html,'lxml')

Child and descendant nodes. Note: .contents returns the direct child nodes as a list.

print(soup.p.contents)
---------------------------------
['Once upon a time there were three little sisters; and their names were\n', <a class="sister" href="http://example.com/elsie" id="linkl"><span>Elsie<span></span></span></a>, ',\n', <a class="sister" href="http://example.com/elsie" id="link2">Lacie</a>, ' and\n', <a class="sister" href="http://example.com/elsie" id="link3">Tillie</a>, ';\nand they live at the bottom of a well.']

.children also returns the direct children, but as an iterator, so print its items with a for loop.

print(soup.p.children,type(soup.p.children))
for i,child in enumerate(soup.p.children): # enumerate() wraps an iterable so the loop yields (index, item) pairs
    print(i,child)
--------------------
<list_iterator object at 0x0000000002FD0128> <class 'list_iterator'>
0 Once upon a time there were three little sisters; and their names were

1 <a class="sister" href="http://example.com/elsie" id="linkl"><span>Elsie<span></span></span></a>
2 ,

3 <a class="sister" href="http://example.com/elsie" id="link2">Lacie</a>
4  and

5 <a class="sister" href="http://example.com/elsie" id="link3">Tillie</a>
6 ;
and they live at the bottom of a well.

.descendants yields all descendant nodes (children, grandchildren, and so on) as a generator; iterate over it with a for loop.

print(soup.p.descendants,type(soup.p.descendants))
for i,child in enumerate(soup.p.descendants):
    print(i,child)
------------------------
<generator object descendants at 0x000000000067F570> <class 'generator'>
0 Once upon a time there were three little sisters; and their names were

1 <a class="sister" href="http://example.com/elsie" id="linkl"><span>Elsie<span></span></span></a>
2 <span>Elsie<span></span></span>
3 Elsie
4 <span></span>
5 ,

6 <a class="sister" href="http://example.com/elsie" id="link2">Lacie</a>
7 Lacie
8  and

9 <a class="sister" href="http://example.com/elsie" id="link3">Tillie</a>
10 Tillie
11 ;
and they live at the bottom of a well.

Parent and ancestor nodes

The .parent attribute returns the direct parent node.

print(soup.a.parent,type(soup.a.parent))
----------------
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="linkl"><span>Elsie<span></span></span></a>,
<a class="sister" href="http://example.com/elsie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/elsie" id="link3">Tillie</a>;
and they live at the bottom of a well.</p> <class 'bs4.element.Tag'>

Ancestor nodes: .parents returns all ancestors.

print(soup.a.parents,type(soup.a.parents))
print(list(enumerate(soup.a.parents)),'\n') # convert to a list
for i,parent in enumerate(soup.a.parents):  # iterate over the results
    print(i,parent)
---------------------
<generator object parents at 0x000000000090F4C0> <class 'generator'>
(output omitted)

Note: the result is a generator.
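A minimal sketch, using the same soup object as above, that prints just each ancestor's name rather than its full markup; the chain runs from the enclosing p up to the BeautifulSoup document object itself:

for i, parent in enumerate(soup.a.parents):
    print(i, parent.name)   # expected: 0 p, 1 body, 2 html, 3 [document]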

Sibling nodes

html = '''
<html>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="linkl"><span>Elsie<span></a> Hello
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they live at the bottom of a well.
</p>
'''
soup = BeautifulSoup(html,'lxml')

Next sibling: next_sibling

print('Next Sibling:',soup.a.next_sibling)
-------------------------------------------------
Next Sibling:  Hello

Previous sibling: previous_sibling

print('prev Sibling:',soup.a.previous_sibling)
----------------------------------------------------
prev Sibling: Once upon a time there were three little sisters; and their names were

All following siblings: next_siblings

print('Next Sibling:',list(enumerate(soup.a.next_siblings)))
-----------------------------------
Next Sibling: [(0, ' Hello\n'), (1, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>), (2, ' and\n'), (3, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>), (4, ';\nand they live at the bottom of a well.\n')]

All preceding siblings: previous_siblings

print('prev Sibling:',list(enumerate(soup.a.previous_siblings)))
--------------------------------
prev Sibling: [(0, 'Once upon a time there were three little sisters; and their names were\n')]

Extracting information from related nodes

html = '''
<html>
<body>
<p class="story">
    Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="linkl">Bob</a><a href="http://example.com/lacie" 
class="sister" id="link2">Lacie</a> 
</p>
'''
soup = BeautifulSoup(html,'lxml')
print(type(soup.a.next_sibling),soup.a.next_sibling)
print(soup.a.next_sibling.string)
print(type(soup.a.parents))
print(list(soup.a.parents)[0])
print(list(soup.a.parents)[0].attrs['class'])
-------------------------------------------
<class 'bs4.element.Tag'> <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
Lacie
<class 'generator'>
<p class="story">
    Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="linkl">Bob</a><a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
</p>
['story']

2. Method Selectors

find_all()

name: search by node name

from bs4 import BeautifulSoup

html = """
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div>
        <div class="panel-body">
            <ul class="list" id="list-1">
                <li class="element">Foo</li>
                <li class="element">Bar</li>
                <li class="element">Jay</li>
            </ul>
            <ul class="list list-small" id="list-2">
                <li class="element">Foo</li>
                <li class="element">Bar</li>
            </ul>
        </div>
    </div>
</div>
"""
soup = BeautifulSoup(html,'lxml')
#find_all()
print(soup.find_all(name='ul'))
print(type(soup.find_all(name='ul')[0]))
-------------
[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>, <ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
<class 'bs4.element.Tag'>
find_all() can also be called on the Tag objects it returns, for example to get the li nodes inside each ul:

for ul in soup.find_all(name='ul'):
    print(ul.find_all(name='li'))
    for li in ul.find_all(name='li'):
        # print(type(li))
        print(li.string)
---------------------------
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
Foo
Bar
Jay
[<li class="element">Foo</li>, <li class="element">Bar</li>]
Foo
Bar

attrs: search by attributes

html = """
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div>
        <div class="panel-body">
            <ul class="list" id="list-1" name="elements">
                <li class="element">Foo</li>
                <li class="element">Bar</li>
                <li class="element">Jay</li>
            </ul>
            <ul class="list list-small" id="list-2">
                <li class="element">Foo</li>
                <li class="element">Bar</li>
            </ul>
        </div>
    </div>
</div>
"""
soup = BeautifulSoup(html,'lxml')
### attrs: search by attributes
print(soup.find_all(attrs={"id":"list-1"}))
print(soup.find_all(attrs={'name':'elements'}))
# id and class can be passed directly as keyword arguments (use class_ because class is a Python keyword)
print(soup.find_all(id='list-1'))
print("---"*10)
print(soup.find_all(class_='element'))
------------------------
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
------------------------------
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>, <li class="element">Foo</li>, <li class="element">Bar</li>]

find_parents() returns all ancestor nodes.
find_parent() returns the direct parent node.
find_next_siblings() returns all following sibling nodes.
find_next_sibling() returns the first following sibling node.
find_previous_siblings() returns all preceding sibling nodes.
find_previous_sibling() returns the first preceding sibling node.
find_all_next() returns all matching nodes after the current node.
find_next() returns the first matching node after the current node.
find_all_previous() returns all matching nodes before the current node.
find_previous() returns the first matching node before the current node.
A short usage sketch of a few of these methods is shown below.
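A minimal sketch of a few of these methods, using a simplified version of the three-sisters HTML from the earlier examples (ids and URLs adjusted for clarity):

from bs4 import BeautifulSoup

html = '''
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they live at the bottom of a well.</p>
'''
soup = BeautifulSoup(html, 'lxml')

first_a = soup.a
print(first_a.find_parent('p')['class'])     # ['story'] - nearest enclosing <p>
print(first_a.find_next_sibling('a').string) # Lacie - the next <a> sibling
print(len(first_a.find_all_next('a')))       # 2 - all <a> tags after the first one
print(soup.find(id='link3').find_previous_sibling('a').string)  # Lacie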

3. CSS Selectors

html = """
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div>
        <div class="panel-body">
            <ul class="list" id="list-1">
                <li class="element">Foo</li>
                <li class="element">Bar</li>
                <li class="element">Jay</li>
            </ul>
            <ul class="list list-small" id="list-2">
                <li class="element">Foo</li>
                <li class="element">Bar</li>
            </ul>
        </div>
    </div>
</div>
"""
soup = BeautifulSoup(html,'lxml')
print(soup.select('.panel .panel-heading'))
print(soup.select('ul li'))
print(soup.select('#list-2 .element'))
print(type(soup.select('ul')[0]))
-------------------------
[<div class="panel-heading">
<h4>Hello</h4>
</div>]
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>, <li class="element">Foo</li>, <li class="element">Bar</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]
<class 'bs4.element.Tag'>

Nested selection

for ul in soup.select('ul'):
    print(ul)
--------------------
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>

Getting attributes

for ul in soup.select('ul'):
    print(ul['id'])
-----------
list-1
list-2

Getting text

for li in soup.select('li'):
    print('get text:',li.get_text())
    print('string:',li.string)
----------
get text: Foo
string: Foo
get text: Bar
string: Bar
get text: Jay
string: Jay
get text: Foo
string: Foo
get text: Bar
string: Bar
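One difference worth noting between the two text accessors above: .string only returns a value when a tag has a single child; if the tag contains several children it returns None, whereas get_text() concatenates all descendant text. A minimal sketch:

from bs4 import BeautifulSoup

html = '<p class="story">Hello <b>world</b></p>'
soup = BeautifulSoup(html, 'lxml')

print(soup.p.get_text())   # Hello world - all descendant text concatenated
print(soup.p.string)       # None - <p> has more than one child
print(soup.b.string)       # world - <b> has exactly one text child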