[Web Scraping] Scraping Web Audio with Python

Scraping audio from a web page with Python, using the Dream of the Red Chamber radio drama as an example.

Environment and tools:
	Windows 10
	Python 3.8

The steps are as follows:

Step 1

Use the requests library to send a GET request to the target URL, spoofing a browser with a User-Agent header, and retrieve the page content.

book_url = "https://www.ximalaya.com/album/22088719"
headers = {"User-Agent": random.choice(user_agent)}  # random.choice returns a single string (random.choices would return a list)
url_get_ximalaya = requests.get(url=book_url, headers=headers)
url_get_ximalaya_webcode = url_get_ximalaya.text

Step 2

Use regular expressions to extract the track IDs and episode titles from the page content; these are the key pieces needed to build each audio data-package URL. Two patterns are used: one covers episodes 1-29, the other episodes 21-120. The patterns are:

data_id_name_code_page_1 = re.findall(r'"trackId":(\d+),"isPaid":false,"tag":0,"title":"(.*?)"', url_get_ximalaya_webcode)  # episodes 1-29
data_id_name_code_page_2 = re.findall(r'"trackId":(\d+),"trackName":"(.*?)"', url_get_ximalaya_webcode)  # episodes 21-120
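
A quick illustration of what the first pattern captures, run against a hypothetical fragment of the album page's embedded JSON (the real page markup may differ):

```python
import re

# Hypothetical fragment of the page source, for illustration only
sample = '"trackId":12345,"isPaid":false,"tag":0,"title":"Episode 1"'

# re.findall returns one (trackId, title) tuple per match
pairs = re.findall(r'"trackId":(\d+),"isPaid":false,"tag":0,"title":"(.*?)"', sample)
print(pairs)  # [('12345', 'Episode 1')]
```

Because the pattern contains two capture groups, re.findall returns a list of 2-tuples, which is why the loops below unpack each element into an ID and a name.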

Step 3

Call download_1() and download_2() to download episodes 1-29 and 21-120 respectively. Each function builds the data-package URL for a track ID, fetches it with requests.get(), extracts the audio download link from the response, and finally downloads and saves the file with urllib.request.urlopen().

def download_1():
    for data_id_1, data_name_1 in data_id_name_code_page_1:
        # Build the data-package URL from the track ID
        audio_DATA = f"https://www.ximalaya.com/revision/play/v1/audio?id={data_id_1}&ptype=1"
        time.sleep(0.1)
        print("Downloading ---> %s" % data_name_1)
        audio_DATA_get = requests.get(url=audio_DATA, headers=headers)
        audio_DATA_get_text = audio_DATA_get.text
        # Extract the audio download link
        audio_DATA_download_url = re.findall(r'"src":"(.*?)"', audio_DATA_get_text)
        print(audio_DATA_download_url[0])
        download_data_url = audio_DATA_download_url[0]
        try:
            open_download_data_url = urllib.request.urlopen(download_data_url)
        except urllib.error.URLError:
            print(download_data_url, "---->ERROR!")
            continue  # skip this episode instead of crashing on the next line
        read_download_data_url = open_download_data_url.read()
        with open("%s.mp3" % data_name_1, "wb") as writes:
            writes.write(read_download_data_url)
download_1()

def download_2():
    for data_id_2, data_name_2 in data_id_name_code_page_2:
        audio_DATA = f"https://www.ximalaya.com/revision/play/v1/audio?id={data_id_2}&ptype=1"
        time.sleep(0.1)
        print("Downloading ---> %s" % data_name_2)
        audio_DATA_get = requests.get(url=audio_DATA, headers=headers)
        audio_DATA_get_text = audio_DATA_get.text
        audio_DATA_download_url = re.findall(r'"src":"(.*?)"', audio_DATA_get_text)
        print(audio_DATA_download_url)
        download_data_url = audio_DATA_download_url[0]
        try:
            open_download_data_url = urllib.request.urlopen(download_data_url)
        except urllib.error.URLError:
            print(download_data_url, "---->ERROR!")
            continue  # skip this episode on failure
        read_download_data_url = open_download_data_url.read()
        with open("%s.mp3" % data_name_2, "wb") as writes:
            writes.write(read_download_data_url)
download_2()
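
Note that the two episode ranges overlap (episodes 21-29 appear in both lists), so those episodes would be fetched and written twice. If that matters, the matched (trackId, title) pairs can be merged and de-duplicated by track ID before downloading; a minimal sketch, using hypothetical IDs for illustration:

```python
def merge_track_lists(*track_lists):
    """Merge (track_id, title) lists, keeping the first entry per track_id."""
    seen = {}
    for tracks in track_lists:
        for track_id, title in tracks:
            if track_id not in seen:
                seen[track_id] = title
    # Sort numerically by track ID for a stable download order
    return sorted(seen.items(), key=lambda item: int(item[0]))

# Hypothetical (trackId, title) pairs for illustration only:
page_1 = [("101", "Episode 1"), ("121", "Episode 21")]
page_2 = [("121", "Episode 21"), ("220", "Episode 120")]
print(merge_track_lists(page_1, page_2))
# [('101', 'Episode 1'), ('121', 'Episode 21'), ('220', 'Episode 120')]
```

A single loop over the merged list could then replace the two near-identical download functions.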

Step 4

That covers the main implementation; a final print() call reports that all downloads are complete.

Full code

import random
import time
import requests
import urllib.request
import re
book_url = "https://www.ximalaya.com/album/22088719"

user_agent = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:54.0) Gecko/20100101 Firefox/54.0",
    "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.3",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.101 Safari/537.36 Edg/91.0.864.54"
] 

headers = {"User-Agent": random.choice(user_agent)}  # pick a random User-Agent to reduce the chance of being blocked
url_get_ximalaya = requests.get(url=book_url, headers=headers)
url_get_ximalaya_webcode = url_get_ximalaya.text

def with_url_get_ximalaya_webcode():
    # Save the page source to a file for inspection/debugging
    with open("url_get_ximalaya_webcode.txt", "a", encoding="utf-8") as w:
        w.write(url_get_ximalaya_webcode)
with_url_get_ximalaya_webcode()

data_id_name_code_page_1 = re.findall(r'"trackId":(\d+),"isPaid":false,"tag":0,"title":"(.*?)"', url_get_ximalaya_webcode)  # episodes 1-29
data_id_name_code_page_2 = re.findall(r'"trackId":(\d+),"trackName":"(.*?)"', url_get_ximalaya_webcode)  # episodes 21-120
print("Loading episode list...")
time.sleep(2)

def download_1():
    for data_id_1, data_name_1 in data_id_name_code_page_1:
        # Build the data-package URL from the track ID
        audio_DATA = f"https://www.ximalaya.com/revision/play/v1/audio?id={data_id_1}&ptype=1"
        time.sleep(0.1)
        print("Downloading ---> %s" % data_name_1)
        audio_DATA_get = requests.get(url=audio_DATA, headers=headers)
        audio_DATA_get_text = audio_DATA_get.text
        # Extract the audio download link
        audio_DATA_download_url = re.findall(r'"src":"(.*?)"', audio_DATA_get_text)
        print(audio_DATA_download_url[0])
        download_data_url = audio_DATA_download_url[0]
        try:
            open_download_data_url = urllib.request.urlopen(download_data_url)
        except urllib.error.URLError:
            print(download_data_url, "---->ERROR!")
            continue  # skip this episode instead of crashing on the next line
        read_download_data_url = open_download_data_url.read()
        with open("%s.mp3" % data_name_1, "wb") as writes:
            writes.write(read_download_data_url)
download_1()

def download_2():
    for data_id_2, data_name_2 in data_id_name_code_page_2:
        audio_DATA = f"https://www.ximalaya.com/revision/play/v1/audio?id={data_id_2}&ptype=1"
        time.sleep(0.1)
        print("Downloading ---> %s" % data_name_2)
        audio_DATA_get = requests.get(url=audio_DATA, headers=headers)
        audio_DATA_get_text = audio_DATA_get.text
        audio_DATA_download_url = re.findall(r'"src":"(.*?)"', audio_DATA_get_text)
        print(audio_DATA_download_url)
        download_data_url = audio_DATA_download_url[0]
        try:
            open_download_data_url = urllib.request.urlopen(download_data_url)
        except urllib.error.URLError:
            print(download_data_url, "---->ERROR!")
            continue  # skip this episode on failure
        read_download_data_url = open_download_data_url.read()
        with open("%s.mp3" % data_name_2, "wb") as writes:
            writes.write(read_download_data_url)
download_2()
print("Download complete!")

Result

[Screenshot of the downloaded MP3 files omitted.]

Notes

Randomized User-Agent: some sites monitor the User-Agent header, and if a large volume of requests arrives with the same one, they treat it as crawler traffic and tighten their anti-scraping measures. Choosing a random User-Agent per request simulates different users and devices.
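
A minimal sketch of per-request rotation: build a fresh headers dict for every request instead of choosing one string once at startup. The strings below are ordinary desktop User-Agents, listed purely for illustration.

```python
import random

# Candidate User-Agent strings (illustrative; extend as needed)
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.101 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:54.0) Gecko/20100101 Firefox/54.0",
]

def random_headers():
    """Return a headers dict with a randomly chosen User-Agent."""
    return {"User-Agent": random.choice(USER_AGENTS)}

print(random_headers())
```

Calling random_headers() inside each loop iteration, rather than reusing one headers dict for the whole run, varies the header across requests.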
