A Detailed Guide to Web Crawling with BeautifulSoup
Follow the WeChat official account: 夜寒信息
Dedicated to providing every user with free, high-quality technical help and resources. Thanks for your support!
The BeautifulSoup library parses, traverses, and maintains the "tag tree". It turns HTML, XML, and similar documents into a structure that is convenient for people to work with. Its usage is introduced below.
(Figure: example diagram illustrating the structure of an HTML file.)
We import the library with from bs4 import BeautifulSoup.
Example statements for parsing HTML:
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup("<html>data</html>", "html.parser")
>>> soup2 = BeautifulSoup(open("D://demo.html"), "html.parser")
Besides the built-in HTML parser ("html.parser"), other parsers can be used: "lxml" and "xml" (both require pip install lxml) and "html5lib" (requires pip install html5lib).
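As a minimal sketch, the parser is chosen by the second argument to the BeautifulSoup constructor; the commented-out lines show the third-party alternatives, each of which needs a separate pip install first:

```python
from bs4 import BeautifulSoup

html = "<html><body><p>data</p></body></html>"

# Built-in parser: nothing extra to install
soup = BeautifulSoup(html, "html.parser")

# Third-party alternatives (uncomment after installing the package):
# soup = BeautifulSoup(html, "lxml")      # fast; pip install lxml
# soup = BeautifulSoup(html, "xml")       # lxml's XML mode; pip install lxml
# soup = BeautifulSoup(html, "html5lib")  # browser-like leniency; pip install html5lib

print(soup.p.string)  # data
```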
Basic elements of the BeautifulSoup class:
Tag
A tag, the most basic unit of information, delimited by <> and </>
Name
The name of a tag; the name of <p>…</p> is 'p'. Access: <tag>.name
Attributes
The attributes of a tag, organized as a dictionary. Access: <tag>.attrs
NavigableString
The non-attribute string inside a tag, i.e. the string between <>…</>. Access: <tag>.string
Comment
The comment portion of a tag's string, a special type of NavigableString
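Because a Comment prints just like ordinary text, checking the type is the usual way to tell the two apart. A minimal sketch:

```python
from bs4 import BeautifulSoup
from bs4.element import Comment

soup = BeautifulSoup(
    "<b><!--This is a comment--></b><p>This is not a comment</p>",
    "html.parser")

# Both .string values print as plain text, so inspect the type instead
print(type(soup.b.string))  # <class 'bs4.element.Comment'>
print(type(soup.p.string))  # <class 'bs4.element.NavigableString'>
```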
Using BeautifulSoup
First, here is a URL; the rest of this tutorial revolves around it:
https://python123.io/ws/demo.html
All of the following operations are run in Python's IDLE; try typing them yourself to help understanding and memory.
First we need to define demo, which all later operations will use without being redefined or re-explained:
>>> import requests
>>> r = requests.get("https://python123.io/ws/demo.html")
>>> demo = r.text
First let's parse this demo and take a look:
soup = BeautifulSoup(demo, "html.parser")
print(soup.prettify())
We then see the returned result:
<html>
<head>
<title>
This is a python demo page
</title>
</head>
<body>
<p class="title">
<b>
The demo python introduces several python courses.
</b>
</p>
<p class="course">
Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">
Basic Python
</a>
and
<a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">
Advanced Python
</a>
.
</p>
</body>
</html>
Formatted like this, the code is much easier to read.
In summary, using the BeautifulSoup library takes just two lines of code:
from bs4 import BeautifulSoup
soup = BeautifulSoup('<p>data</p>', 'html.parser')
where <p>data</p> is the HTML content.
We can access the content of an individual tag:
>>> soup.title
<title>This is a python demo page</title>
>>> tag = soup.a
>>> tag
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>
If multiple tags share the same name, this form returns only the first one by default.
We can likewise get a tag's name:
>>> soup.a.name
'a'
>>> soup.title.name
'title'
Next, let's get a tag's attributes:
>>> tag = soup.a
>>> tag.attrs
{'href': 'http://www.icourse163.org/course/BIT-268001', 'class': ['py1'], 'id': 'link1'}
We can see that it is a dictionary,
so we can also look up the value of a single attribute:
>>> tag.attrs['class']
['py1']
>>> tag.attrs['href']
'http://www.icourse163.org/course/BIT-268001'
We can also check the types:
>>> type(tag.attrs)
<class 'dict'>
>>> type(tag)
<class 'bs4.element.Tag'>
When a tag has no attributes, an empty dictionary is returned.
We can also view the string between a tag's angle brackets:
>>> soup.a
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>
>>> soup.a.string
'Basic Python'
>>> soup.p
<p class="title"><b>The demo python introduces several python courses.</b></p>
>>> soup.p.string
'The demo python introduces several python courses.'
>>> type(soup.p.string)
<class 'bs4.element.NavigableString'>
Traversal with BeautifulSoup
1. Downward traversal of the tag tree
.contents
A list of child nodes; all of <tag>'s children are stored in the list
.children
An iterator over child nodes, similar to .contents, for looping over children
.descendants
An iterator over descendant nodes, containing all descendants, for looping over the subtree
An example:
>>> soup.head
<head><title>This is a python demo page</title></head>
>>> soup.head.contents
[<title>This is a python demo page</title>]
>>> soup.body.contents
['\n', <p class="title"><b>The demo python introduces several python courses.</b></p>, '\n', <p class="course">Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a> and <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>.</p>, '\n']
>>> len(soup.body.contents)
5
>>> soup.body.contents[1]
<p class="title"><b>The demo python introduces several python courses.</b></p>
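The difference between .children and .descendants can be seen with a small self-contained document (the HTML here is a trimmed-down stand-in for the demo page):

```python
from bs4 import BeautifulSoup
from bs4.element import Tag

html = ("<html><head><title>demo</title></head>"
        "<body><p class='title'><b>bold text</b></p></body></html>")
soup = BeautifulSoup(html, "html.parser")

# .children yields only direct children; .descendants walks the whole subtree.
# Filter to Tag instances to skip the NavigableStrings in between.
children = [node.name for node in soup.body.children if isinstance(node, Tag)]
descendants = [node.name for node in soup.body.descendants if isinstance(node, Tag)]

print(children)     # ['p']
print(descendants)  # ['p', 'b']
```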
2. Upward traversal of the tag tree:
.parent
The node's parent node
.parents
An iterator over the node's ancestor tags, for looping over ancestors
An example:
>>> soup.title.parent
<head><title>This is a python demo page</title></head>
>>> soup.html.parent
<html><head><title>This is a python demo page</title></head>
<body>
<p class="title"><b>The demo python introduces several python courses.</b></p>
<p class="course">Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a> and <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>.</p>
</body></html>
>>> soup.parent
>>>
As shown above, the parent of the <html> tag is the document object itself (the soup), which is why the whole document is printed; soup has no parent, so soup.parent returns None.
3. Sideways (sibling) traversal of the tag tree
.next_sibling
Returns the next sibling node in HTML document order
.previous_sibling
Returns the previous sibling node in HTML document order
.next_siblings
An iterator over all subsequent siblings in HTML document order
.previous_siblings
An iterator over all preceding siblings in HTML document order
An example:
>>> soup.a.next_sibling
' and '
>>> soup.a.next_sibling.next_sibling
<a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>
>>> soup.a.previous_sibling
'Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:\r\n'
>>> soup.a.previous_sibling.previous_sibling
>>>
Sibling traversal may also return String values, not only tags. We can also see that there is no node before the node before <a>, so None is returned.
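Since siblings mix tags with strings, a common pattern is to filter by type when looping. A minimal sketch, using a simplified stand-in for the demo page:

```python
from bs4 import BeautifulSoup
from bs4.element import Tag

html = ('<p>courses: <a id="link1">Basic Python</a> and '
        '<a id="link2">Advanced Python</a>.</p>')
soup = BeautifulSoup(html, "html.parser")

# next_siblings yields NavigableStrings (' and ', '.') as well as tags,
# so keep only the Tag instances
tag_siblings = [sib for sib in soup.a.next_siblings if isinstance(sib, Tag)]
print([t['id'] for t in tag_siblings])  # ['link2']
```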
Extracting information
<>.find_all(name, attrs, recursive, string, **kwargs)
Returns a list; it lets us search the soup for the information we want
name
A string matched against tag names
attrs
A string matched against tag attribute values; specific attributes can be targeted
recursive
Whether to search all descendants; defaults to True
string
A string matched against the text between <>…</>
An example:
>>> soup.find_all('a')
[<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>, <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>]
>>> soup.find_all(['a','b'])
[<b>The demo python introduces several python courses.</b>, <a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>, <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>]
>>> soup.find_all(True)
[<html><head><title>This is a python demo page</title></head>
<body>
<p class="title"><b>The demo python introduces several python courses.</b></p>
<p class="course">Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a> and <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>.</p>
</body></html>, <head><title>This is a python demo page</title></head>, <title>This is a python demo page</title>, <body>
<p class="title"><b>The demo python introduces several python courses.</b></p>
<p class="course">Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a> and <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>.</p>
</body>, <p class="title"><b>The demo python introduces several python courses.</b></p>, <b>The demo python introduces several python courses.</b>, <p class="course">Python is a wonderful general-purpose programming language. You can learn Python from novice to professional by tracking the following courses:
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a> and <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>.</p>, <a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>, <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>]
If the argument is True, all tags are returned.
We can also constrain the search by attribute:
>>> soup.find_all(id='link1')
[<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>]
>>> soup.find_all(id='link')
[]
If no matching tag exists, an empty list is returned.
By default the search covers all descendants of the current node; setting recursive to False searches only direct children:
>>> soup.find_all('a')
[<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>, <a class="py2" href="http://www.icourse163.org/course/BIT-1001870001" id="link2">Advanced Python</a>]
>>> soup.find_all('a', recursive=False)
[]
An empty list is returned, which shows that none of soup's direct children is an <a> tag.
We can also search the string content:
>>> soup.find_all(string = 'Basic Python')
['Basic Python']
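Note that the string argument matches the full text exactly; to match a substring, the standard approach is to pass a compiled regular expression instead. A minimal sketch, using a simplified stand-in for the demo page:

```python
import re
from bs4 import BeautifulSoup

html = ('<a id="link1">Basic Python</a>'
        '<a id="link2">Advanced Python</a>')
soup = BeautifulSoup(html, "html.parser")

# Exact match: 'Python' alone matches no complete string
print(soup.find_all(string="Python"))              # []
# Regex: any string containing 'Python' matches
print(soup.find_all(string=re.compile("Python")))  # ['Basic Python', 'Advanced Python']
```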
As a shorthand, we can use <tag>(…) in place of <tag>.find_all(…), and soup(…) in place of soup.find_all(…).
There are also related extension methods that take the same parameters as find_all(): find() (returns only the first match), find_parents()/find_parent(), find_next_siblings()/find_next_sibling(), and find_previous_siblings()/find_previous_sibling().
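A minimal sketch of a few of these extension methods, on a small stand-alone document:

```python
from bs4 import BeautifulSoup

html = ('<body><p class="title"><b>bold</b></p>'
        '<a id="link1">Basic Python</a>'
        '<a id="link2">Advanced Python</a></body>')
soup = BeautifulSoup(html, "html.parser")

print(soup.find('a'))                 # first match only, not a list
print(soup.b.find_parent('p'))        # nearest <p> ancestor of <b>
print(soup.a.find_next_sibling('a'))  # next <a> at the same level
```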
Finally, a crawler example:
# Targeted crawler: Chinese university rankings
import requests
from bs4 import BeautifulSoup
import bs4

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""

def fillUnivList(ulist, html):
    soup = BeautifulSoup(html, "html.parser")
    for tr in soup.find('tbody').children:
        if isinstance(tr, bs4.element.Tag):  # skip NavigableStrings between rows
            tds = tr('td')                   # shorthand for tr.find_all('td')
            ulist.append([tds[0].string, tds[1].string, tds[3].string])

def printUnivList(ulist, num):
    # chr(12288) is the full-width space, used to pad Chinese text for alignment
    tplt = "{0:^10}\t{1:{3}^10}\t{2:^10}"
    print(tplt.format("排名", "学校名称", "总分", chr(12288)))  # rank / name / score
    for i in range(num):
        u = ulist[i]
        print(tplt.format(u[0], u[1], u[2], chr(12288)))

def main():
    uinfo = []
    url = 'http://www.zuihaodaxue.com/zuihaodaxuepaiming2019.html'
    html = getHTMLText(url)
    fillUnivList(uinfo, html)
    printUnivList(uinfo, 20)  # top 20 universities

main()