BeautifulSoup Study Notes 4


The previous set of notes covered the name, attrs, recursive, string, and limit arguments of the find_all() method in detail.

The methods below, such as find() and find_parents(), take the same search arguments as find_all(); they differ only in which part of the tree they search.

1 find()

find_all() and find() work their way down the tree, looking at a tag's descendants.

Signature: find(name, attrs, recursive, string, **kwargs)

>>> html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(html_doc,"html.parser")
>>> soup.find('Alice')  # no match: find() returns None
>>> 
>>> soup.find_all('Alice') # no match: find_all() returns an empty list
[]
>>> 
>>> soup('Alice')  # calling the soup object directly defaults to find_all()
[]
>>> soup('title')
[<title>The Dormouse's story</title>]
>>> soup.title(text=True)  # calling a tag is likewise shorthand for find_all()
["The Dormouse's story"]
>>> soup.title.find_all(text=True)  # equivalent to the line above
["The Dormouse's story"]
>>> 
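
The practical difference between the two: find_all() always returns a list, while find() returns the tag itself (or None when nothing matches). A quick check, continuing the same session as above, where find() behaves like find_all() with limit=1 except for the return type:

>>> soup.find_all('title', limit=1)
[<title>The Dormouse's story</title>]
>>> soup.find('title')
<title>The Dormouse's story</title>
>>> 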

2 find_parents() and find_parent()

find_parents() and find_parent() work their way up the tree, looking at a tag’s (or a string’s) parents.

Signature: find_parents(name, attrs, string, limit, **kwargs)
Signature: find_parent(name, attrs, string, **kwargs)

>>> a_string = soup.find(text="Lacie") # start from a leaf node (a string) of the document
>>> a_string
'Lacie'
>>>
>>> a_string.find_parents('a')  # direct parent
[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
>>> a_string.find_parents('p') # indirect parent
[<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>]
>>> 
>>> a_string.find_parents('p', class_='sister')  # no <p> ancestor has class "sister"
[]
>>> a_string.find_parents(class_="sister")
[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
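
find_parent() returns just the nearest matching ancestor rather than a list. Continuing with the same a_string:

>>> a_string.find_parent('p')
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
>>> 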

3 find_next_siblings() and find_next_sibling()

Iterate over the rest of an element's siblings in the tree, i.e. the siblings that come after it.
The find_next_siblings() method returns all the siblings that match, and find_next_sibling() only returns the first one.

Signature: find_next_siblings(name, attrs, string, limit, **kwargs)
Signature: find_next_sibling(name, attrs, string, **kwargs)

>>> first_link = soup.a
>>> first_link
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
>>> first_link.find_next_siblings('a')
[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
>>> 
>>> first_story_paragraph = soup.find("p","story")  # the second positional argument matches by CSS class
>>> first_story_paragraph
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
>>>
>>> first_story_paragraph.find_next_siblings('p')
[<p class="story">...</p>]
>>> 
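The singular form stops at the first matching sibling:

>>> first_link.find_next_sibling('a')
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
>>> 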

4 find_previous_siblings() and find_previous_sibling()

Iterate over an element's siblings that precede it in the tree, i.e. the siblings that come before it.
The find_previous_siblings() method returns all the siblings that match, and find_previous_sibling() only returns the first one.

Signature: find_previous_siblings(name, attrs, string, limit, **kwargs)
Signature: find_previous_sibling(name, attrs, string, **kwargs)

>>> second_link = soup.find('a',id='link2')
>>> second_link.find_previous_siblings('a')
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
>>> 
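
The singular form returns only the nearest preceding sibling that matches:

>>> second_link.find_previous_sibling('a')
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
>>> 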

5 find_all_next() and find_next()

Iterate over whatever tags and strings come after it in the document.
The find_all_next() method returns all matches, and find_next() only returns the first match.

Signature: find_all_next(name, attrs, string, limit, **kwargs)
Signature: find_next(name, attrs, string, **kwargs)

>>> first_link = soup.find('a')
>>> first_link
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
>>> first_link.find_all_next()
[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>, <p class="story">...</p>]
>>> first_link.find_all_next(text=True)
['Elsie', ',\n', 'Lacie', ' and\n', 'Tillie', ';\nand they lived at the bottom of a well.', '\n', '...', '\n']
>>> first_link.find_all_next('p')
[<p class="story">...</p>]
>>> 
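find_next() returns only the first tag or string that follows in document order. Note that an element's own children come first among its "next" elements, so the first string after first_link is the link's own text:

>>> first_link.find_next('p')
<p class="story">...</p>
>>> first_link.find_next(text=True)
'Elsie'
>>> 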

6 find_all_previous() and find_previous()

Iterate over the tags and strings that came before it in the document.
The find_all_previous() method returns all matches, and find_previous() only returns the first match.

Signature: find_all_previous(name, attrs, string, limit, **kwargs)
Signature: find_previous(name, attrs, string, **kwargs)

>>> first_link = soup.find('a')
>>> first_link
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
>>>
>>> first_link.find_all_previous('p')
[<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>, <p class="title"><b>The Dormouse's story</b></p>]
>>>
>>> first_link.find_all_previous('title')
[<title>The Dormouse's story</title>]
>>>
>>> first_link.find_all_previous(text=True)
['Once upon a time there were three little sisters; and their names were\n', '\n', "The Dormouse's story", '\n', '\n', "The Dormouse's story", '\n']
>>>
>>> first_link.find_previous()
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
>>> 
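The fact that find_all_previous('p') found the paragraph that contains first_link demonstrates the document-order semantics: that <p> starts before the link it contains. The singular form behaves the same way, returning only the nearest preceding match:

>>> first_link.find_previous('title')
<title>The Dormouse's story</title>
>>> 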