This article walks through Cui Qingcai's (崔庆才) tutorial on the BeautifulSoup library for Python web scraping, covering its basic principles and key usage points.
The article is about 1,200 words with an estimated reading time of 10 minutes, and it pairs theory with hands-on examples.
If you find the formatting hard to read or are viewing on a computer, click "Read the original" to jump to the CSDN page.
Contents:
1. What is BeautifulSoup
2. Installation
3. BeautifulSoup usage in detail
4. Summary
1. What is BeautifulSoup
BeautifulSoup is a flexible and convenient web page parsing library. It is efficient, supports multiple parsers, and lets you extract information from a page without writing regular expressions.
2. Installation
pip install beautifulsoup4
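The snippets below pass 'lxml' as the parser. lxml is a separate package and is not pulled in by beautifulsoup4, so install it as well if you have not already:
pip install lxml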
3. BeautifulSoup usage in detail
Parsers
BeautifulSoup delegates the actual parsing to one of several backends, chosen by the second argument of the constructor:
'html.parser': the Python standard-library parser; no extra dependency, moderate speed and tolerance
'lxml': the lxml HTML parser; very fast and fault-tolerant (recommended)
'xml': lxml's XML mode; the only XML parser, also fast
'html5lib': parses pages the way a browser does; the most tolerant, but slow
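As a minimal sketch (not from the original article), parsing the same broken fragment with two backends shows that the parser choice only changes how the markup is repaired, not the BeautifulSoup API:
from bs4 import BeautifulSoup

broken = '<ul><li>Foo<li>Bar'                 # unclosed tags on purpose
print(BeautifulSoup(broken, 'html.parser'))   # <ul><li>Foo</li><li>Bar</li></ul>
print(BeautifulSoup(broken, 'lxml'))          # lxml additionally wraps the fragment in <html><body>...</body></html>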
Basic usage
html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
print(soup.prettify())    # pretty-prints the document; the parser has already repaired the missing closing tags
print(soup.title.string)  # text of the <title> tag
Tag selectors
# Selecting elements (reusing the html string defined above)
soup = BeautifulSoup(html, 'lxml')
print(soup.title)            # <title>The Dormouse's story</title>
print(type(soup.title))      # <class 'bs4.element.Tag'>
print(soup.head)             # <head><title>The Dormouse's story</title></head>
print(soup.p)                # when several <p> tags exist, only the first one is returned

# Getting the tag name
print(soup.title.name)       # title

# Getting attributes
print(soup.p.attrs['name'])  # dromouse
print(soup.p['name'])        # dromouse

# Getting the text content
print(soup.p.string)         # The Dormouse's story
Nested selection
# Nested selection (reusing the soup from above)
# head contains title, so attribute access can be chained: head -> title
print(soup.head.title.string)   # The Dormouse's story

# Children and descendants
html2 = '''
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
'''
soup2 = BeautifulSoup(html2, 'lxml')
print(soup2.p.contents)        # direct children, returned as a list
print(soup2.p.children)        # unlike contents, children is an iterator, so loop to read it
for i, child in enumerate(soup2.p.children):
    print(i, child)
print(soup2.p.descendants)     # all descendants (children, grandchildren, ...), also an iterator
for i, child in enumerate(soup2.p.descendants):
    print(i, child)

# Parent and ancestor nodes
print(soup2.a.parent)                    # the direct parent
print(list(enumerate(soup2.a.parents)))  # every ancestor, up to the document root

# Sibling nodes (nodes at the same level)
print(list(enumerate(soup2.a.next_siblings)))      # siblings after this node
print(list(enumerate(soup2.a.previous_siblings)))  # siblings before this node
Standard selectors
find_all(name, attrs, recursive, text, **kwargs)  # searches the document by tag name, attributes, or text content (the recursive argument is shown in a short sketch after the find() example below)
# name
from bs4 import BeautifulSoup

html = '''
<div class="panel">
  <div class="panel-heading" name="elements">
    <h4>Hello</h4>
  </div>
  <div class="panel-body">
    <ul class="list" id="list-1">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
      <li class="element">Jay</li>
    </ul>
    <ul class="list list-small" id="list-2">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
    </ul>
  </div>
</div>
'''
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all('ul'))            # returns a list of Tag objects
print(type(soup.find_all('ul')[0]))
for ul in soup.find_all('ul'):
    print(ul.find_all('li'))          # searches can be nested level by level

# attrs
print(soup.find_all(attrs={'id': 'list-1'}))
print(soup.find_all(attrs={'name': 'elements'}))
print(soup.find_all(class_="element"))   # class is a Python keyword, so the argument is class_

# text
print(soup.find_all(text='Foo'))         # matches by text content and returns the matching strings, not the tags
# find(name, attrs, recursive, text, **kwargs) returns the first match; find_all returns all matches
# (still reusing the panel soup from above)
print(soup.find('ul'))
print(type(soup.find('ul')))
print(soup.find('page'))   # None -- find returns None when nothing matches
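The recursive argument listed in the signature above is not used in the article's examples; as a small sketch (reusing the panel soup), recursive=False restricts the search to direct children only:
print(soup.find('div').find_all('li'))                   # all five <li> descendants of the outer <div>
print(soup.find('div').find_all('li', recursive=False))  # [] -- the <li> tags are not direct children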
find_parents() returns all ancestors; find_parent() returns the direct parent
find_next_siblings() returns all following siblings; find_next_sibling() returns the first following sibling
find_previous_siblings() returns all preceding siblings; find_previous_sibling() returns the first preceding sibling
find_all_next() returns all matching nodes after the current node; find_next() returns the first such node
find_all_previous() returns all matching nodes before the current node; find_previous() returns the first such node
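A minimal sketch (reusing the panel soup from above; not part of the original article) exercising a few of these methods:
first_li = soup.find('li')               # <li class="element">Foo</li>
print(first_li.find_next_sibling())      # the <li> right after it: Bar
print(first_li.find_next_siblings())     # the remaining <li> tags in the same <ul>: Bar, Jay
print(first_li.find_parent('ul'))        # the enclosing <ul id="list-1">
print(first_li.find_all_next('li'))      # every <li> that appears later in the document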
CSS selectors
Pass a CSS selector string directly to select() to pick out elements.
# CSS selectors (reusing the panel soup from above)
print(soup.select('.panel .panel-heading'))  # class selectors are prefixed with "."
print(soup.select('ul li'))                  # tag names are used as-is
print(soup.select('#list-2 .element'))       # id selectors are prefixed with "#"
print(type(soup.select('ul')[0]))
for ul in soup.select('ul'):                 # equivalent to print(soup.select('ul li')) in one go
    print(ul.select('li'))
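Not covered in the original article, but worth knowing: select_one() is the single-result counterpart of select() and returns only the first match:
print(soup.select_one('#list-1 .element'))              # <li class="element">Foo</li>
print(soup.select_one('.panel-heading h4').get_text())  # Hello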
Getting attributes
# Reusing the panel soup from above
for ul in soup.select('ul'):
    print(ul['id'])         # index the tag directly with []
    print(ul.attrs['id'])   # or go through the attrs dictionary

Getting text
# Reusing the panel soup from above
for li in soup.select('li'):
    print(li.get_text())    # the text inside each <li>
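As a small aside beyond the article's example, get_text() also accepts a separator and a strip flag, which keeps the combined text of a container readable:
print(soup.find('div', class_='panel-body').get_text(separator=' | ', strip=True))
# Foo | Bar | Jay | Foo | Bar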
4. Summary
The 'lxml' parser is recommended; fall back to html.parser when necessary.
Tag selectors (attribute-style access) offer only limited filtering, but they are fast.
Use find() and find_all() when you need to match a single result or multiple results.
If you are comfortable with CSS selectors, select() is a good choice.
Remember the common ways of getting attribute values and text.