Web Scraping: Basic Usage of BS4

I. Introduction to BeautifulSoup

1. An efficient web-page parsing library that can extract data from HTML and XML files

2. Supports several parsers, for example lxml for HTML, lxml's XML mode for XML, and html5lib for HTML5

3. A very powerful tool and a great asset for web scraping

4. Flexible, convenient, and fast, with the freedom to pick the parser that fits

5. With it you can extract information from a page conveniently without writing regular expressions

II. Installation

pip install beautifulsoup4

Beautiful Soup is a Python library for extracting data from HTML and XML files.

pip install lxml

lxml is a fast, flexible parsing library for XML and HTML (Python bindings to the C libraries libxml2 and libxslt).
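As a quick smoke test of the install, the snippet below parses a tiny document. It uses Python's built-in 'html.parser' so it runs even before lxml is installed; once lxml is there, passing 'lxml' instead gives a faster, more lenient parser:

```python
from bs4 import BeautifulSoup

# A tiny parse to confirm the install works.
# 'html.parser' ships with Python; swap in 'lxml' once it is installed.
snippet = "<html><body><p>hello</p></body></html>"
soup = BeautifulSoup(snippet, "html.parser")
print(soup.p.string)  # -> hello
```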

III. Basic Usage

1. Tag selectors

1) Selecting by tag name

.string -- gets the text content

Code:

h = """
<html>
    <head>
        <title>The Dormouse's story</title>
    </head>
    <body>
    <p class="title" name="dromouse"><b><span>The Dormouse's story</span></b></p>
    <p class="story">Once upon a time there were three little sisters; and their names were
    <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
    and they lived at the bottom of a well.</p>
    <p class="story">...</p>
    </body>
</html>
"""

# 1. Import the package
from bs4 import BeautifulSoup
# 2. Instantiate the object
soup = BeautifulSoup(h, 'lxml')  # arg 1: content to parse; arg 2: parser

# Selecting by tag name returns the tag itself plus everything inside it
print(soup.head)  # the entire head tag and its contents
print(soup.p)  # only the first match is returned

# .string is a property that extracts the text
print(soup.title.string)

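One caveat worth noting: .string only returns text when the tag contains a single text node (possibly nested through single children). With mixed content it returns None, and .get_text() is the safer choice. A minimal sketch:

```python
from bs4 import BeautifulSoup

# .string is None for tags with mixed children; .get_text() always
# concatenates every text node inside the tag.
doc = "<p>Hello <b>world</b></p>"
soup = BeautifulSoup(doc, "html.parser")
print(soup.p.string)      # None -- <p> holds both text and a <b> child
print(soup.p.get_text())  # -> Hello world
```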
2) Getting the tag name

.name -- gets the tag's own name

Sample code:

html = """
<html>
    <head>
        <title>The Dormouse's story</title>
    </head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
</body>
</html>
"""
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')

print(soup.title.name)  # the tag's own name  --> title
print(soup.p.name)  # --> p

3) Getting attribute values

.attrs[] -- gets an attribute's value by its name

Sample code:

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title asdas" name="abc" id = "qwe"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/123" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
</body>
</html>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')

print(soup.p.attrs['name'])  # value of the p tag's name attribute
print(soup.a.attrs['href'])  # value of the first a tag's href attribute

# Second way
print(soup.p['id'])
print(soup.p['class'])  # class is returned as a list
print(soup.a['href'])   # again, only the first match
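Both forms raise a KeyError when the attribute is missing; Tag.get() returns None (or a chosen default) instead, which is handy on messy pages:

```python
from bs4 import BeautifulSoup

doc = '<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>'
soup = BeautifulSoup(doc, "html.parser")

# .get() never raises for a missing attribute.
print(soup.a.get("href"))        # -> http://example.com/elsie
print(soup.a.get("title"))       # -> None (attribute absent)
print(soup.a.get("title", "-"))  # -> "-" (explicit default)
```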

2. Standard selectors

find_all( name , attrs , recursive , text , **kwargs )

1. Finding by tag name with find_all()

html='''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo-2</li>
            <li class="element">Bar-2</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')

print(soup.find_all('ul'))  # all ul tags together with their contents

2. Extracting content with get_text()

for ul in soup.find_all('ul'):
    print(ul.get_text())
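get_text() also takes a separator and a strip flag, which helps when the raw text is full of stray whitespace:

```python
from bs4 import BeautifulSoup

doc = "<ul><li> Foo </li><li> Bar </li></ul>"
soup = BeautifulSoup(doc, "html.parser")

# separator joins the text nodes; strip=True trims each one first.
print(soup.ul.get_text())                 # whitespace preserved
print(soup.ul.get_text(",", strip=True))  # -> Foo,Bar
```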

3. Finding by attributes with find_all()

html='''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1" name="elements">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')

# Special case: looking up by class
# print(soup.find_all(class='element'))  # wrong: class is reserved, SyntaxError
a = soup.find_all(class_='element')  # class is a Python keyword, so bs4 uses class_
print(a)


# Recommended approach: specify both the tag and its attributes
print(soup.find_all('li', {'class': 'element'}))
print('----'*10)
print(soup.find_all('ul', {'id': 'list-1'}))
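The attrs dict is equivalent to passing the attribute as a keyword argument; the sketch below shows both spellings side by side (class needs the class_ form because class is a keyword):

```python
from bs4 import BeautifulSoup

doc = '<ul id="list-1"><li class="element">Foo</li></ul>'
soup = BeautifulSoup(doc, "html.parser")

# Equivalent attribute filters:
print(soup.find_all('ul', {'id': 'list-1'}))  # attrs dict
print(soup.find_all('ul', id='list-1'))       # keyword argument
print(soup.find_all('li', class_='element'))  # class_ for the class attr
```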

4. Selecting by text with text=

html='''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')

# Syntax: text='the exact text to find'
print(soup.find_all(text='Foo'))  # handy for counting occurrences

print(len(soup.find_all(text='Foo')))  # count the matches
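text= only matches the exact string. Passing a compiled regular expression matches partial text as well (newer bs4 versions also accept string= as an alias for text=):

```python
import re
from bs4 import BeautifulSoup

doc = "<ul><li>Foo</li><li>Foo-2</li><li>Bar</li></ul>"
soup = BeautifulSoup(doc, "html.parser")

print(soup.find_all(text="Foo"))              # exact match: ['Foo']
print(soup.find_all(text=re.compile("Foo")))  # partial: ['Foo', 'Foo-2']
```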

find( name , attrs , recursive , text , **kwargs )

1. find() returns a single element; find_all() returns all of them

html='''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find('ul'))  # only the first match is returned
# print('---------'*5)
print(soup.find('page'))  # returns None if the tag does not exist
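Because find() returns None on a miss, guard before chaining attribute access, otherwise the code raises AttributeError; a small sketch:

```python
from bs4 import BeautifulSoup

doc = "<div><p>hello</p></div>"
soup = BeautifulSoup(doc, "html.parser")

tag = soup.find("page")  # no <page> tag -> None
if tag is not None:
    print(tag.string)
else:
    print("tag not found")  # this branch is taken here
```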

3. CSS selectors

Passing a CSS selector straight to select() performs the selection

Notes:

1. Tag names take no prefix; class names are prefixed with . and ids with #

2. The method is soup.select(), and it returns a list

3. Separate nested conditions with spaces; filtering proceeds strictly from the outer levels inward

Sample code:

html='''
<div class="pan">q321312321</div>
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')

# Find by tag: tag names take no prefix; nested conditions are separated by spaces
# print(soup.select('ul li'))  
# print("----"*10)

# Class names are prefixed with .
# print(soup.select('.panel-heading'))

# Chain a tag with its classes directly (no spaces) to require them on one element
# print(soup.select('ul.list')) 
# print(soup.select('ul.list.list-small')) 


# Note: selector types can be mixed!
# For example, combining an id with a class:
a = soup.select('#list-1 .element')  # select() returns every element that matches
print(a)

for i in a:
    print(i.string)
    
b = soup.select('#list-2')  # again, select() returns all matches as a list
print(b)  
for i in b:
    print(i.get_text())
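select() always returns a list; its companion select_one() returns just the first match (or None), mirroring the find_all()/find() pair:

```python
from bs4 import BeautifulSoup

doc = '<ul id="list-1"><li class="element">Foo</li><li class="element">Bar</li></ul>'
soup = BeautifulSoup(doc, "html.parser")

print(soup.select("#list-1 .element"))      # list of both <li> tags
print(soup.select_one("#list-1 .element"))  # first <li> only
```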

1) Getting attribute values

Two ways to write it:

1. ul['id']

2. ul.attrs['id']

html='''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
s = soup.select('#list-2')  # demo
print(s)

for ul in soup.select('#list-2'):
    print(ul)
    print(ul['id'])
    print(ul['class']) 

#     print(ul.attrs['id'])
#     print(ul.attrs['class']) 

4. Hierarchical selectors

1. Descendant selector
html='''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
s = soup.select('div li')  # demo: matches li at any depth under div
print(s)
2. Child selector
html='''
<div class="panel">
    <div class="panel-heading">
        <h4>Hello</h4>
    </div>
    <div class="panel-body">
        <ul class="list" id="list-1">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
            <li class="element">Jay</li>
            <a href="" class="a1">幸福之家</a>
        </ul>
        <ul class="list list-small" id="list-2">
            <li class="element">Foo</li>
            <li class="element">Bar</li>
        </ul>
    </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
s = soup.select('div > ul > li')  # demo: > matches direct children only
print(s)
# Find all a tags and all li tags in one call
print(soup.select('a,li'))
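Beyond descendant, child, and grouping selectors, select() also understands CSS attribute selectors, which is useful when an id or class follows a pattern:

```python
from bs4 import BeautifulSoup

doc = '''<div>
<ul id="list-1"><li>Foo</li></ul>
<ul id="list-2"><li>Bar</li></ul>
</div>'''
soup = BeautifulSoup(doc, "html.parser")

print(soup.select('ul[id="list-2"]'))  # exact attribute value
print(soup.select('ul[id^="list"]'))   # id starting with "list"
```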
