Python crawler: fetching the comments of a given Sina Weibo post

This article shows how to crawl the comment data of a specified post from the Weibo web interface.
First, log in to the Weibo web interface and find a post you are interested in:
[screenshot: the target Weibo post]
Open its comment page, right-click and choose Inspect, switch to the Network tab, and press Ctrl+R to reload the page:
[screenshot: the DevTools Network panel with the comment request]
From the captured request, copy this page's Cookie:
[screenshot: the Cookie in the request headers]
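Before writing the full crawler, you can sanity-check the Cookie and the comment endpoint with a single request. This is a minimal sketch: the post id 4517608383498080 is the one used in the full script below, and the placeholder Cookie must be replaced with the value you just copied.

import json
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0',
    # paste the Cookie copied from the Network panel here
    'Cookie': '<your cookie>',
}
url = 'https://weibo.com/aj/v6/comment/big?ajwvr=6&id=4517608383498080&from=singleWeiBo&page=1'
resp = json.loads(requests.get(url, headers=headers).text)
# With a valid Cookie the endpoint returns JSON whose data['html'] field
# contains the rendered comment list as an HTML fragment
print(len(resp['data']['html']))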
The code below collects each comment's nickname, timestamp, and content:

import json
import time

import requests
import xlwt
from lxml import etree

workbook = xlwt.Workbook(encoding='utf-8')
sheet = workbook.add_sheet('sheet', cell_overwrite_ok=True)
# column headers: nickname, post time, comment text
sheet.write(0, 0, 'nick')
sheet.write(0, 1, 'time')
sheet.write(0, 2, 'content')

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0',
    'Cookie': 'SINAGLOBAL=5322597451823.386.1554213722659; Ugrow-G0=589da022062e21d675f389ce54f2eae7; login_sid_t=535c06faa28c0a73bbf2a70054bed5ac; cross_origin_proto=SSL; YF-V5-G0=bae6287b9457a76192e7de61c8d66c9d; WBStorage=42212210b087ca50|undefined; _s_tentry=passport.weibo.com; Apache=3011672908696.3213.1592668545629; ULV=1592668545635:44:6:1:3011672908696.3213.1592668545629:1591590712267; crossidccode=CODE-yf-1JMFR8-29rJK3-ng3qQtt3hYUdGQeb030fb; ALF=1624204599; SSOLoginState=1592668599; SCF=ApjScoaMbsXtNFObav_TZqQn86gd4_VisrebpOwKJO9-7nKNzPWApotfh41gp7QvIRfB-WzENTDQdqTziGo26tk.; SUB=_2A25z6kHoDeRhGeNJ61MZ8ijPwjmIHXVQnjQgrDV8PUNbmtANLRPtkW9NSBjGUQ-3h0MfrgBtUEtVUHAeybQTIcZ9; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9WhQl.su1CrnzMojsR4pBc225JpX5KzhUgL.Fo-Neh2Reoq01K-2dJLoIEnLxK-LBo5L12qLxKML1hqL122LxKqL1KnL1-qLxK-LB.2LBKU9C-_l; SUHB=0Jvg9O4IYZXCjE; wvr=6; UOR=www.psjia.com,widget.weibo.com,graph.qq.com; webim_unReadCount=%7B%22time%22%3A1592668759286%2C%22dm_pub_total%22%3A0%2C%22chat_group_client%22%3A0%2C%22chat_group_notice%22%3A0%2C%22allcountNum%22%3A43%2C%22msgbox%22%3A0%7D; YF-Page-G0=580fe01acc9791e17cca20c5fa377d00|1592668778|1592668627',
}

def get_furl():
    # First page of the comment API; the id identifies the target Weibo post
    url1 = 'https://weibo.com/aj/v6/comment/big?ajwvr=6&id=4517608383498080&from=singleWeiBo&page=1'
    txt = requests.get(url1, headers=headers).text
    cnt = 1
    while True:
        time.sleep(2)  # throttle requests a little
        # The endpoint returns JSON; the rendered comment list is in data['html']
        html = etree.HTML(json.loads(txt)['data']['html'])
        # Each comment sits in its own div.list_con node
        uls = html.xpath('//div[@class="list_con"]')
        for ul in uls:
            user = ul.xpath('./div[@class="WB_text"]/a/text()')[0]
            comment = ul.xpath('./div[@class="WB_text"]/text()')[1]
            # Drop the full-width colon ":" that separates the nickname from the text
            comment = comment.split(':', maxsplit=1)[-1]
            tim = ul.xpath('./div[contains(@class,"WB_func")]/div[contains(@class,"WB_from")]/text()')[0]
            # Commenter's profile link (collected but not written to the sheet)
            user_url = 'https:' + ul.xpath('./div[@class="WB_text"]/a/@href')[0]
            print(user)
            sheet.write(cnt, 0, user)
            print(comment)
            sheet.write(cnt, 2, comment)
            print(tim)
            sheet.write(cnt, 1, tim)
            cnt += 1

        # The "loading" element (or, failing that, the last "view more" link)
        # carries the query string for the next page in its action-data attribute
        try:
            net_url = html.xpath('//div[@node-type="comment_loading"]/@action-data')[0]
        except IndexError:
            try:
                net_url = html.xpath('//a/@action-data')[-1]
            except IndexError:
                # No next page left: stop paginating
                print(cnt)
                break
            print(net_url)
        url1 = 'https://weibo.com/aj/v6/comment/big?ajwvr=6&' + net_url + '&from=singleWeiBo&__rnd=1592668779880'
        txt = requests.get(url1, headers=headers).text
        print(url1)

    # xlwt can only write the legacy .xls format, so save with an .xls extension
    workbook.save('liziqi.xls')


if __name__ == '__main__':
    get_furl()
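A note on pagination: the action-data attribute pulled from the "load more" element is itself a URL query string, which the script splices directly into the next request URL; the __rnd parameter appears to be a millisecond-timestamp cache buster. A quick way to inspect such a value (the sample string below is illustrative, not an actual response):

from urllib.parse import parse_qs

# Illustrative action-data value; the exact keys Weibo returns vary
net_url = 'id=4517608383498080&page=2'
print(parse_qs(net_url))  # {'id': ['4517608383498080'], 'page': ['2']}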

Result screenshot:
[screenshot: the scraped comments in the saved spreadsheet]
To crawl a different Weibo post, replace the Cookie in headers, the id value in the initial url1, and the __rnd value in the next-page URL. All three are obtained from the Network panel as described above.
A limitation of this code is that it cannot crawl replies to comments; it only collects the comments shown on the page itself.
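Also note that xlwt only writes the legacy .xls format, which is why the script saves liziqi.xls. If you need a real .xlsx file, openpyxl is an alternative; below is a minimal sketch (assuming openpyxl is installed, with rows standing for the scraped (nick, time, content) tuples):

from openpyxl import Workbook

def save_xlsx(rows, path='liziqi.xlsx'):
    # rows: iterable of (nick, time, content) tuples collected by the crawler
    wb = Workbook()
    ws = wb.active
    ws.append(['nick', 'time', 'content'])
    for nick, tim, content in rows:
        ws.append([nick, tim, content])
    wb.save(path)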
