Python Web Scraping 4: Data Parsing with bs4

Table of Contents

I. Introduction to bs4

1. Basic concepts

2. Documentation and installation

II. Using bs4

1. Quick start

2. bs4 object types

III. Traversing the tree: child nodes

1. contents, children, descendants

2. string, strings, stripped_strings

IV. Traversing the tree: parent nodes

1. parent and parents

V. Traversing the tree: sibling nodes

VI. Searching the tree

VII. find_all() and find()

VIII. The select() method

IX. Modifying the tree


I. Introduction to bs4

1. Basic concepts

Beautiful Soup is a library for extracting data from HTML and XML documents.

2. Documentation and installation

(1) Official Chinese documentation: https://beautifulsoup.readthedocs.io/zh_CN/v4.4.0/

(2) Installation:

pip install lxml
pip install bs4
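A quick sketch to confirm the installation works (the "lxml" parser name assumes the lxml package above was installed; Python's built-in "html.parser" needs no extra install):

```python
from bs4 import BeautifulSoup

# A tiny fragment, parsed with both parsers to confirm they are available.
snippet = "<p class='demo'>hello</p>"

soup_lxml = BeautifulSoup(snippet, "lxml")            # requires lxml
soup_builtin = BeautifulSoup(snippet, "html.parser")  # ships with Python

print(soup_lxml.p.string)       # hello
print(soup_builtin.p["class"])  # ['demo']
```

If lxml is not installed, the first call raises bs4.FeatureNotFound naming the missing parser.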

II. Using bs4

1. Quick start

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""
# soup = BeautifulSoup(html_doc, features="lxml")
soup = BeautifulSoup(html_doc, "lxml")
# prettify() outputs the parsed document with standard indentation
# print(soup.prettify())
# print(soup.title)   # <title>The Dormouse's story</title>
# string gets the text content
# print(soup.title.string)    # The Dormouse's story
# name gets the tag name
# print(soup.title.name)  # title


# Task: get all the p tags
# print(soup.p)   # gets only the first one
# tag = soup.find_all('p')
# # print(soup.find_all('p'))   # gets all p tags, returned as a list
# print(len(tag))
# for i in tag:
#     print(i)

# Get the href attribute of the a tags
tag = soup.find_all('a')
for i in tag:
    # print(i.get('href'))   # 1. get() takes the attribute name as a string

    # print(i.attrs)  # 2. attrs returns all the attributes in a dict
    # print(i.attrs['href'])

    print(i['href'])  # 3. index the tag directly with the attribute name

2. bs4 object types

(1) Tag: a tag;

(2) NavigableString: a navigable string (the text inside a tag);

(3) BeautifulSoup: the parsed document itself;

(4) Comment: a comment.

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<span><!--This is a comment--></span>

<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc, 'lxml')
# print(type(soup.title))  #<class 'bs4.element.Tag'>
# print(type(soup.title.string))    #<class 'bs4.element.NavigableString'>
# print(type(soup))       #<class 'bs4.BeautifulSoup'>

print(type(soup.span.string))       #<class 'bs4.element.Comment'>

III. Traversing the tree: child nodes

Working with bs4 falls into three areas: traversing the tree, searching it, and modifying it.

1. contents, children, descendants

(1) contents returns a list of all direct child nodes;

(2) children returns an iterator over the direct child nodes;

(3) descendants returns a generator over all descendants, recursively.

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<span><!--This is a comment--></span>

<p class="story">...</p>
"""
'''
Traversing child nodes
contents returns a list of all direct child nodes
children returns an iterator over the direct child nodes
descendants returns a generator over all descendants
'''
soup = BeautifulSoup(html_doc, 'lxml')
heads = soup.head
# print(heads.contents)

# print(heads.children)
# for head in heads.children:
#     print(head)

# html = soup.html
# # for i in html.contents:
# #     print(i)
# for i in html.descendants:     # all descendants, recursively
#     print(i)

2. string      strings      stripped_strings

(1) string gets the text content of a tag;

(2) strings returns a generator, used to get the text of multiple tags;

(3) stripped_strings is like strings, but strips extra whitespace;

(When convenient, these generators can be cast to a list with list().)

'''
string gets the text content of a tag
strings returns a generator used to get the text of multiple tags
stripped_strings is like strings but strips extra whitespace
'''
# title_tag = soup.title
# print(title_tag.string)

# html_tag = soup.html
# print(html_tag.strings)
# for i in html_tag.strings:
#     print(i)

# html_tag = soup.html
# print(html_tag.stripped_strings)
# for i in html_tag.stripped_strings:
#     print(i)

IV. Traversing the tree: parent nodes

1. parent and parents

(1) parent returns the direct parent node;

(2) parents returns all ancestors as a generator.

'''
Traversing parent nodes
parent returns the direct parent node
parents returns all ancestors as a generator
'''
# t = soup.title
# print(t.parent)
#
# html = soup.html
# print(html.parent)
# print(type(html.parent))  #<class 'bs4.BeautifulSoup'>
#
# print(t.parents)
# for i in t.parents:
#     print(i)
#     print("*"*50)
 

V. Traversing the tree: sibling nodes

(1) next_sibling: the next sibling node;

(2) previous_sibling: the previous sibling node;

(3) next_siblings: all following siblings, as a generator;

(4) previous_siblings: all preceding siblings, as a generator.

'''
Traversing sibling nodes
next_sibling: the next sibling node
previous_sibling: the previous sibling node
next_siblings: all following siblings
previous_siblings: all preceding siblings
'''
html2 = """ 
<html>
    <head>
        <title>The Dormouse's story</title> 
    </head> 
    <body>
        <p class="story">p1</p>
        <span class="story">span</span>
        <p class="story">p2</p> 
    </body> 
</html> 
"""
# soup2 = BeautifulSoup(html2, 'lxml')
# s = soup2.span
# print(s.previous_sibling)  # whitespace and newlines also count as nodes, so this prints a blank line
# for i in s.previous_siblings:
#     print(i)

VI. Searching the tree

(1) String filters;

(2) Regular-expression filters: compile a pattern with re.compile() and pass it to find() or find_all() to search with a regex filter;

(3) List filters;

(4) The True filter, which matches every tag.

from bs4 import BeautifulSoup
# find()
# find_all()
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<span><!--This is a comment--></span>

<p class="story">...</p>
"""
# String filter
soup = BeautifulSoup(html_doc, 'lxml')
# find()
# returns the first matching result
# result = soup.find('a')
# print(result)

# find_all() returns all matches as a list
# result = soup.find_all('a')
# print(result)

# List filter
# result = soup.find_all(['title','b'])
# print(result)
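The regex and True filters listed above are not exercised in the snippet; a minimal sketch (the short html_doc here is a trimmed stand-in for the quick-start document):

```python
import re
from bs4 import BeautifulSoup

html_doc = """
<html><body>
<p class="title"><b>The Dormouse's story</b></p>
<a class="sister" id="link1">Elsie</a>
<a class="sister" id="link2">Lacie</a>
</body></html>
"""
# html.parser adds no extra tags, keeping the output predictable; "lxml" works too
soup = BeautifulSoup(html_doc, "html.parser")

# Regex filter: re.compile('^b') matches every tag whose name starts with "b"
b_names = [tag.name for tag in soup.find_all(re.compile("^b"))]
print(b_names)    # ['body', 'b']

# True filter: True matches every tag in the document
all_names = [tag.name for tag in soup.find_all(True)]
print(all_names)  # ['html', 'body', 'p', 'b', 'a', 'a']
```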

VII. find_all() and find()

1. find_all() returns all matching tags as a list;

2. find() returns the first match;

3. find_all() parameters:

def find_all(self, name=None, attrs={}, recursive=True, text=None,
                 limit=None, **kwargs):

(1) name: the tag name;

(2) attrs: the tag's attributes;

(3) recursive: whether to search recursively;

(4) text: the text content;

(5) limit: the maximum number of results;

(6) kwargs: keyword arguments (e.g. id=..., class_=...).

html = """
<table class="tablelist" cellpadding="0" cellspacing="0">
    <tbody>
        <tr class="h">
            <td class="l" width="374">职位名称</td>
            <td>职位类别</td>
            <td>人数</td>
            <td>地点</td>
            <td>发布时间</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=33824&keywords=python&tid=87&lid=2218">22989-金融云区块链高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=29938&keywords=python&tid=87&lid=2218">22989-金融云高级后台开发</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31236&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐运营开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31235&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34531&keywords=python&tid=87&lid=2218">TEG03-高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34532&keywords=python&tid=87&lid=2218">TEG03-高级图像算法研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31648&keywords=python&tid=87&lid=2218">TEG11-高级AI开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>4</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32218&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32217&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a id="test" class="test" target='_blank' href="position_detail.php?id=34511&keywords=python&tid=87&lid=2218">SNG11-高级业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
    </tbody>
</table>
"""
soup2 = BeautifulSoup(html, 'lxml')

# Get all the tr tags
# trs = soup2.find_all('tr')
# for t in trs:
#     print(t)
#     print('*'*50)

# Get the third tr tag: index the returned list (0-based)
# trs = soup2.find_all('tr')
# print(trs[2])

# Locate tags by attribute value: find the tr tags with class="even"
# trs = soup2.find_all('tr', attrs={'class': 'even'})
# for t in trs:
#     print(t)
#     print('*'*50)

# Note: class needs a trailing underscore (class_) because class is a Python keyword
# trs = soup2.find_all('tr', class_="even")
# for t in trs:
#     print(t)
#     print('*'*50)


# Get the tr tags with class="even" and id="test"; limit=N caps the number of results
# trs = soup2.find_all('tr', class_="even", id="test")
# for t in trs:
#     print(t)
#     print('*'*50)

# Get attribute values
# a_tags = soup2.find_all('a')
# for i in a_tags:
#     print(i['href'])

# Get the job titles
trs = soup2.find_all('tr')[1:]
for t in trs:
    # print(t)
    # print('*'*50)
    tds = t.find_all('td')[0]
    print(tds.string)
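Of the parameters above, only name, attrs and the keyword form were exercised; recursive, limit and text can be sketched like this (a minimal made-up fragment, not the job table):

```python
from bs4 import BeautifulSoup

html = """
<div id="outer">
  <p>top</p>
  <div id="inner"><p>nested</p></div>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
outer = soup.find("div", id="outer")

# recursive=False searches only direct children, so the nested <p> is skipped
print(len(outer.find_all("p", recursive=False)))  # 1
print(len(outer.find_all("p")))                   # 2

# limit caps the number of results
print(len(soup.find_all("p", limit=1)))           # 1

# text matches strings instead of tags and returns NavigableString objects
print(soup.find_all(text="nested"))               # ['nested']
```

In newer bs4 releases the text argument is also accepted under the name string; both filter text nodes rather than tags.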

VIII. The select() method

Data can also be extracted with CSS selectors; this just requires knowing basic CSS selector syntax.

CSS selector reference: https://www.w3school.com.cn/cssref/css_selectors.asp

from bs4 import BeautifulSoup
# select_one()  --> find()
# select() -->  find_all()

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc,'lxml')
# select_one() is equivalent to find(): returns only the first match
# a_tag = soup.select_one('a')
# print(a_tag)
# select() returns all matches
# a_tag = soup.select('a')
# print(a_tag)


# Get the tags with class="sister"
# CSS: all elements with class="sister"  -->  .sister
# tags = soup.select('.sister')
# print(tags)

# Get the tag with id="link2"
# CSS: the element with id="link2"  -->  #link2
# tags = soup.select('#link2')
# print(tags)

# # Get the text content
# b_tag = soup.select('b')[0]
# # print(b_tag.string)
# print(b_tag.get_text())


html = """
<table class="tablelist" cellpadding="0" cellspacing="0">
    <tbody>
        <tr class="h">
            <td class="l" width="374">职位名称</td>
            <td>职位类别</td>
            <td>人数</td>
            <td>地点</td>
            <td>发布时间</td>
        </tr>
        <tr class="even" id="test">
            <td class="l square"><a target="_blank" href="position_detail.php?id=33824&keywords=python&tid=87&lid=2218">22989-金融云区块链高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=29938&keywords=python&tid=87&lid=2218">22989-金融云高级后台开发</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even" id="test">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31236&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐运营开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31235&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34531&keywords=python&tid=87&lid=2218">TEG03-高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34532&keywords=python&tid=87&lid=2218">TEG03-高级图像算法研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31648&keywords=python&tid=87&lid=2218">TEG11-高级AI开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>4</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32218&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32217&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a id="test" class="test" target='_blank' href="position_detail.php?id=34511&keywords=python&tid=87&lid=2218">SNG11-高级业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
    </tbody>
</table>
"""

soup2 = BeautifulSoup(html, 'lxml')
# 1. Get all the tr tags
# trs = soup2.select('tr')
# print(trs)

# 2. Get the third tr tag
# trs = soup2.select('tr')[2]
# print(trs)

# 3. Get all the tags with class="even"
# trs = soup2.select('.even')
# trs = soup2.select('tr[class="even"]')
# print(trs)

# 4. Get the href attribute of the a tags
# a_tags = soup2.select('a')
# # print(a_tags)
# for i in a_tags:
#     print(i['href'])


# 5. Get the job titles
trs = soup2.select('tr')[1:]
for i in trs:
    # tds = i.select('td')[0]
    # print(tds.string)
    # Using tree traversal instead:
    # tds = i.contents
    # print(tds[1].string)
    # Cast the stripped_strings generator to a list, then index it
    print(list(i.stripped_strings)[0])
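select() also accepts CSS combinators and attribute selectors beyond the class and id forms used above; a small sketch against a simplified two-row table (the markup is made up for the example):

```python
from bs4 import BeautifulSoup

html = """
<table>
<tr class="even"><td><a href="detail.php?id=1">Job A</a></td><td>tech</td></tr>
<tr class="odd"><td><a href="detail.php?id=2">Job B</a></td><td>tech</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# Descendant combinator: <a> anywhere inside a tr with class "even"
print([a.string for a in soup.select("tr.even a")])   # ['Job A']

# Child combinator: <td> that is a direct child of a <tr>
print(len(soup.select("tr > td")))                    # 4

# Attribute selector: href values starting with "detail"
print(len(soup.select('a[href^="detail"]')))          # 2
```

Full CSS selector support comes from the soupsieve package, which ships with bs4 4.7 and later.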

IX. Modifying the tree

(1) Modify a tag's name and attributes;

(2) Modify string: assigning to the string property replaces the tag's current content;

(3) append() adds content to a tag, like Python's list.append() method;

(4) decompose() removes a node from the tree entirely, useful for deleting unneeded parts of a document.

from bs4 import BeautifulSoup


'''
Modify a tag's name and attributes
Modify string: assigning to it replaces the current content
append() adds content to a tag, like list.append()
decompose() removes a node from the tree entirely
'''

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc,'lxml')
# p_tag = soup.p
# print(p_tag)
# # Modify the tag name
# p_tag.name = 'new_p'
# print(p_tag)
# # Modify an attribute
# p_tag['class'] = 'newclass'
# print(p_tag)

# Modify the text content
# p_tag.string = 'new_string'
# p_tag.append('new')  # append content
# print(p_tag)

# Delete a node
html = soup.html
print(html)
title = soup.title   # find the node to delete
title.decompose()
print("*"*50)
print(html)
