This Young Man Has No Martial Ethics: Scraping 40k Danmaku from Ma Baoguo's Bilibili Videos with Python


"Moonlight like frost, a fine breeze like water, a clear scene without limit"

Today I (Wenyuan) saw a Bilibili danmaku analysis on another public account, and it piqued my interest. Overall there are three steps:

  • Find the videos with the highest play counts in the Ma Baoguo channel on Bilibili

  • Scrape each video's danmaku

  • Persist the danmaku, then build word clouds

First, here is the URL of the Bilibili channel:

url='https://api.bilibili.com/x/web-interface/web/channel/multiple/list?channel_id=3503796&sort_type=hot&page_size=30'

This is an API endpoint Bilibili exposes, which is very convenient. Scrape the corresponding videos with it (remember to add your own Cookies, or you will only scrape what a logged-out user sees):

import os
import requests
import json
import pandas as pd
import re
import time
import random
from concurrent.futures import ThreadPoolExecutor
import datetime
import jieba
from stylecloud import gen_stylecloud
###  Install the third-party libraries first:
###  pip install -i https://pypi.tuna.tsinghua.edu.cn/simple stylecloud jieba

def get_data(url, headers):
    data_m = pd.DataFrame(columns=['id','name','view_count','like_count','duration','author_name','author_id','bvid'])
    html = requests.get(url,headers=headers).content
    data = json.loads(html.decode('utf-8'))
    offset = data['data']['offset']
    print(offset)
#     print(data)
    for j in range(1,31):
        data_m = data_m.append({'id':data['data']['list'][j]['id'],'name':data['data']['list'][j]['name'],
                            'view_count':data['data']['list'][j]['view_count'],'like_count':data['data']['list'][j]['like_count'],
                            'duration':data['data']['list'][j]['duration'],'author_name':data['data']['list'][j]['author_name'],
                            'author_id':data['data']['list'][j]['author_id'],'bvid':data['data']['list'][j]['bvid']},ignore_index=True)
    print(data_m)
    return(offset,data_m)
url='https://api.bilibili.com/x/web-interface/web/channel/multiple/list?channel_id=3503796&sort_type=hot&page_size=30'
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36'
}
offset,data_m=get_data(url,headers)
html = requests.get(url=url,headers=headers).content
data = json.loads(html.decode('utf-8'))
# print(data['data']['list'][0])
# len(data['data']['list'][0][ 'items'])
for j in range(10):
    data_m = data_m.append({'id':data['data']['list'][0]['items'][j]['id'],'name':data['data']['list'][0]['items'][j]['name'],
                        'view_count':data['data']['list'][0]['items'][j]['view_count'],'like_count':data['data']['list'][0]['items'][j]['like_count'],
                        'duration':data['data']['list'][0]['items'][j]['duration'],'author_name':data['data']['list'][0]['items'][j]['author_name'],
                        'author_id':data['data']['list'][0]['items'][j]['author_id'],'bvid':data['data']['list'][0]['items'][j]['bvid']},ignore_index=True)
data_m.to_csv("mabaoguo_data.csv")
data = pd.read_csv("mabaoguo_data.csv")

At this point, we have the metadata for 40 videos from this endpoint, like so:
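To sanity-check the CSV before moving on, sorting by play count is enough to confirm which videos lead. A small sketch with made-up names and numbers standing in for mabaoguo_data.csv:

```python
import pandas as pd

# A tiny stand-in for mabaoguo_data.csv: same key columns, fabricated values.
df = pd.DataFrame(
    {"name": ["video A", "video B", "video C"],
     "view_count": [120_000, 980_000, 450_000],
     "bvid": ["BV1xx", "BV1yy", "BV1zz"]}
)
# Sort descending by play count, as the "hot" channel listing does.
top = df.sort_values("view_count", ascending=False)
print(top.iloc[0]["bvid"])  # BV1yy
```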

The problem now is scraping each video's danmaku, given what we already have: the bvid and the id (formerly called the av number). I opened one of Mr. Ma's videos, pressed F12, and saw that the danmaku are dynamic data loaded via Ajax requests, so I searched the network log for the danmaku history.

So we can't scrape the page behind the bv or av number directly; we have to find the corresponding danmaku endpoint, like this:

https://api.bilibili.com/x/v2/dm/history?type=1&oid=257737956&date=2020-11-20

The Request URL shows that oid and date are the two parameters that uniquely identify a danmaku history request.
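The history endpoint responds with XML containing one `<d>` tag per comment, so the parsing step can be checked offline. A minimal sketch; sample_xml is a fabricated fragment in the documented format, not real API output:

```python
import re

# Each comment sits in a <d> tag; the p attribute packs metadata
# (appearance time, mode, font size, color, timestamp, ...).
sample_xml = (
    '<i><chatserver>chat.bilibili.com</chatserver>'
    '<d p="12.3,1,25,16777215,1605830000,0,abc123,41234567890">耗子尾汁</d>'
    '<d p="45.6,1,25,16777215,1605830001,0,def456,41234567891">不讲武德</d></i>'
)

# Same regex the scraper below uses: grab only the comment text.
comments = re.findall(r'<d p=".*?">(.*?)</d>', sample_xml)
print(comments)  # ['耗子尾汁', '不讲武德']
```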

After some digging, I found:

'''
Four endpoints
Get an uploader's profile info:
from the uploader's UID, get the video count, then compute the number of pages: num/30+1
https://api.bilibili.com/x/space/navnum?mid=161419374&jsonp=jsonp
This endpoint also shows the video count, but its main use is the collection of video metadata, including each video's aid:
https://api.bilibili.com/x/space/arc/search?mid=161419374&ps=30&tid=0&pn=1&order=pubdate&jsonp=jsonp

Get a video's details, mainly the cid; the oid in the danmaku endpoint is this cid:
https://api.bilibili.com/x/web-interface/view?aid={}
{"code":0,"message":"0","ttl":1,"data":{"bvid":"","aid":84828732,"videos":1,"tid":76,"tname":"美食圈","copyright":1,"pic":"http://i1.hdslb.com/bfs/archive/bce895aa633adf97076206ad70205d356e324d86.jpg","title":"山药二牛和老板娘一起过年,做一桌简单好吃的年夜饭,这才是过年该有的样子","pubdate":1579862416,"ctime":1579862416,"desc":"过年了,山药,二牛,老板娘在家做了一桌美味的年夜饭,看着就让人流口水","state":0,"attribute":16768,"duration":295,"rights":{"bp":0,"elec":0,"download":1,"movie":0,"pay":0,"hd5":1,"no_reprint":1,"autoplay":1,"ugc_pay":0,"is_cooperation":0,"ugc_pay_preview":0,"no_background":0},"owner":{"mid":161419374,"name":"山药视频","face":"http://i0.hdslb.com/bfs/face/357b015de3b9f4c04527d4fefb844460397ac8b0.jpg"},"stat":{"aid":84828732,"view":266347,"danmaku":973,"reply":289,"favorite":555,"coin":3037,"share":169,"now_rank":0,"his_rank":0,"like":15336,"dislike":0,"evaluation":""},"dynamic":"#生活##美食圈##美食#","cid":145067294,"dimension":{"width":3840,"height":1772,"rotate":0},"no_cache":false,"pages":[{"cid":145067294,"page":1,"from":"vupload","part":"年夜饭","duration":295,"vid":"","weblink":"","dimension":{"width":3840,"height":1772,"rotate":0}}],"subtitle":{"allow_submit":false,"list":[]}}}
Get a video's danmaku:
https://api.bilibili.com/x/v1/dm/list.so?oid={}
'''
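One aside on the page-count note above: num/30+1 over-counts by a page whenever num is an exact multiple of 30. Ceiling division sidesteps that edge case:

```python
# num/30+1 gives one page too many when num is a multiple of 30;
# ceiling division handles both cases correctly.
def page_count(num_videos, page_size=30):
    return -(-num_videos // page_size)  # ceiling division

print(page_count(61))  # 3
print(page_count(60))  # 2 (num/30+1 would give 3)
```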

So all we need is to map each aid (av number) to its cid.

data_m.to_csv("mabaoguo_data.csv")
data = pd.read_csv("mabaoguo_data.csv")
av=list(data['id'])
cid_list=[]
url='https://api.bilibili.com/x/web-interface/view?aid='
for aid in av:
    new_url=url+str(aid)
#     print(new_url)
    response=requests.get(url=new_url, headers=headers)
    response.encoding = 'utf-8'
    dd_cid=json.loads(response.text)
    cid_list.append(dd_cid['data']['cid'])
# persist cid_list
with open('./cid.txt', 'a', encoding='utf-8') as f:
    for i in cid_list:
        f.write(str(i) + '\n')

Then scrape the corresponding danmaku. Unfortunately I haven't learned multithreading or concurrency yet; I'll try that later:
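For reference, the threaded version hinted at in the commented-out code below would need to map over (cid, date) pairs, not just dates, since executor.map passes a single argument per call. A minimal sketch; fetch_one is a hypothetical stand-in for the real request function:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Stand-in for Grab_barrage: takes one bundled task, returns an identifier.
# The real function would issue the HTTP request for that (cid, date) pair.
def fetch_one(task):
    cid, date = task
    return f"{cid}:{date}"

cid_list = ["257737956", "257737957"]      # assumed sample cids
date_list = ["2020-11-20", "2020-11-21"]   # assumed sample dates

# product() yields every (cid, date) combination; map preserves order.
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(fetch_one, product(cid_list, date_list)))
print(len(results))  # 4
```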

###  Start scraping the danmaku
user_agent = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
    ]
################### Important: my skills are limited, so copy in your own Cookies below, or this probably won't work
def Grab_barrage(date, cid):
    # Spoofed request headers
    headers = {
        "sec-fetch-dest": "empty",
        "sec-fetch-mode": "cors",
        "sec-fetch-site": "same-site",
        "origin": "https://www.bilibili.com",
#         "referer": "https://www.bilibili.com/video/BV1Z5411Y7or?from=search&seid=8575656932289970537",
        "cookie": "_uuid=5C5451FE-0BA0-F0B8-6D57-470F0A5C7A9F54503infoc; buvid3=29E2E716-A1BE-4E30-9510-8408A190D1A4155824infoc; rpdid=|(kkJYJlul0J'ulmkmJll~R; DedeUserID=203569824; DedeUserID__ckMd5=768e78e113e17e12; blackside_state=1; CURRENT_FNVAL=80; LIVE_BUVID=AUTO8316003360968431; SESSDATA=f90036c8%2C1619597259%2C3d076*a1; bili_jct=d997e210fa600567c1aee3f2d8d51dee; CURRENT_QUALITY=64; sid=9bbb25d1; PVID=1; bfe_id=61a513175dc1ae8854a560f6b82b37af",
        "user-agent": random.choice(user_agent),
    }
    # Parameters needed to build the request URL
    params = {
        'type': 1,
        'oid': cid,
        'date': date
    }
    # Send the request and get the response
    response = requests.get(url, params=params, headers=headers)
    # print(response.encoding)   reset the encoding
    response.encoding = 'utf-8'
    # print(response.text)
    # Extract the danmaku text with a regex
    comment = re.findall('<d p=".*?">(.*?)</d>', response.text)
    # Write each danmaku line to a txt file
    with open('mabaoguo.txt', 'a+',encoding='utf-8') as f:
        for con in comment:
            f.write(con + '\n')
    time.sleep(random.randint(1, 3))   # sleep between requests to stay polite

### !!! The call site !!!
url = "https://api.bilibili.com/x/v2/dm/history"
start = '20201120'
end = '20201124'
# Generate the date sequence
date_list = [x for x in pd.date_range(start, end).strftime('%Y-%m-%d')]
count = 0
start_time = datetime.datetime.now()
###  There was a multithreaded version originally, but I couldn't get it working again
# Scrape with multiple threads to improve efficiency:
#     with ThreadPoolExecutor(max_workers=4) as executor:
#         executor.map(Grab_barrage, date_list)
for cid in cid_list:
    for date in date_list:
        Grab_barrage(date, str(cid))
delta = (datetime.datetime.now() - start_time).total_seconds()
print(f'Elapsed: {delta}s')

#### Show the number of comments
with open('mabaoguo.txt','rb') as f:
    data = f.readlines()
    print(f'Danmaku scraped: {len(data)} lines')

It worked out fine: roughly 40k danmaku scraped.
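Before rendering the clouds, a quick frequency check is useful. Identical danmaku repeat heavily, so even a plain line-level count with no jieba segmentation surfaces the memes; the lines list below is a tiny made-up stand-in for mabaoguo.txt:

```python
from collections import Counter

# Fabricated sample lines in place of reading mabaoguo.txt.
lines = ["耗子尾汁", "不讲武德", "耗子尾汁", "不讲武德", "耗子尾汁"]

# Count exact-duplicate comments and show the most frequent ones.
top = Counter(lines).most_common(2)
print(top)  # [('耗子尾汁', 3), ('不讲武德', 2)]
```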

Finally, build the word clouds:

####  Build the word clouds
def make_cloud(file_name, palette, icon_name, output_name, gradient=None):
    with open(file_name, 'r', encoding='utf8') as f:
        word_list = jieba.cut(f.read())
        result = " ".join(word_list)  # join the segmented words with spaces
        # Build the Chinese word cloud; a Chinese font is required, otherwise the glyphs break
        gen_stylecloud(text=result,
                       font_path='C:\\Windows\\Fonts\\simhei.ttf',
                       # background_color='black',
                       palette=palette,
                       icon_name=icon_name,
                       gradient=gradient,
                       output_name=output_name)

file_name = 'mabaoguo.txt'
make_cloud(file_name, 'cartocolors.diverging.Fall_4', 'fas fa-plane', 't4.png')
make_cloud(file_name, 'cartocolors.diverging.TealRose_2', 'fas fa-user-graduate', 't7.png', gradient='horizontal')
make_cloud(file_name, 'cartocolors.diverging.TealRose_2', 'fas fa-car', 't6.png', gradient='horizontal')
print("==================== end ====================")


The whole process was quite a bit of work. As a schoolmate of Mr. Ma's, I still think he's not bad: at least he brought everyone some laughs. There's no need to keep milking him, though; have a laugh and move on. For the source code, click "Read the original"; if you enjoyed this, remember to like it, tap "Looking", and leave a Star.

END

Think about what you're doing right now.

Author: 不爱跑马的影迷不是好程序猿

If you liked this, please follow and give it a like 👇👇👇

Gone are the good old days when we were

close friends with a shared love for future.
