[Easy to Learn] Scraping the Answers and Vote Counts Under a Zhihu Hot-List Topic

The COVID-19 epidemic in India has flared up recently, and several cities in China have detected the Indian variant. So I went browsing the Zhihu topics about the Indian epidemic.

[How do you view global confirmed COVID-19 cases exceeding 150 million, with India reporting over 300,000 new cases a day for 9 consecutive days — what is the outlook for the international epidemic?]
The latest WHO figures released on the 30th put cumulative confirmed COVID-19 cases worldwide at 150,110,310. Data released by India's Ministry of Health on the 30th showed 386,452 new confirmed cases over the previous day, for a cumulative total of 18,762,976, plus 3,498 new deaths, for a cumulative total of 208,330. India's daily new confirmed cases have now exceeded 300,000 for 9 consecutive days.

Below, we scrape the answer data under this question.

First, analyze the page
Open the Preview tab of the first request after refreshing the page: it contains none of the data we need, so we can guess the data is loaded via XHR. Click the XHR filter in DevTools and we quickly find the answers request — that is the URL we will be requesting.
The request takes these parameters:
include: a fixed value
limit: 5 — the number of answers returned per request
offset: an offset that increases by 5 on each request (this is what you vary to scrape multiple pages)
platform: a fixed value
sort_by: a fixed value
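Since each request returns `limit` answers, the sequence of offsets is just an arithmetic progression, and the first few pages can be generated with `range`:

```python
# offsets for the first four pages, stepping by the page size (limit = 5)
limit = 5
offsets = list(range(0, 4 * limit, limit))
print(offsets)  # [0, 5, 10, 15]
```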

data = {
    'include': 'data[*].is_normal,admin_closed_comment,reward_info,is_collapsed,annotation_action,annotation_detail,collapse_reason,is_sticky,collapsed_by,suggest_edit,comment_count,can_comment,content,editable_content,attachment,voteup_count,reshipment_settings,comment_permission,created_time,updated_time,review_info,relevant_info,question,excerpt,is_labeled,paid_info,paid_info_content,relationship.is_authorized,is_author,voting,is_thanked,is_nothelp,is_recognized;data[*].mark_infos[*].url;data[*].author.follower_count,badge[*].topics;data[*].settings.table_of_content.enabled',
    'limit': '5',
    'offset': str(item),
    'platform': 'desktop',
    'sort_by': 'default',
}

Knowing this, the parameters are easy to construct:

url='https://www.zhihu.com/api/v4/questions/457368252/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cattachment%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Cis_labeled%2Cpaid_info%2Cpaid_info_content%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_recognized%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics%3Bdata%5B%2A%5D.settings.table_of_content.enabled&limit=5&offset='+str(item)+'&platform=desktop&sort_by=default'
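Hand-assembling the percent-encoded URL works, but the standard library can do the encoding for you. A minimal sketch — the `include` value is shortened here for readability; the real one is the long string above:

```python
from urllib.parse import urlencode

base = 'https://www.zhihu.com/api/v4/questions/457368252/answers'
params = {
    'include': 'data[*].is_normal,comment_count,content,voteup_count',  # shortened for illustration
    'limit': '5',
    'offset': '10',
    'platform': 'desktop',
    'sort_by': 'default',
}
# urlencode percent-encodes the brackets and asterisks for us
url = base + '?' + urlencode(params)
print(url)
```

`requests.get` also accepts a `params=` keyword that does the same encoding internally, so you never have to build the query string by hand.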

Import the required libraries

import requests
import csv
import time
import json
from pyquery import PyQuery as pq

Build the request headers and send the request

for item in range(235, 380, 5):
    url = 'https://www.zhihu.com/api/v4/questions/457368252/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cattachment%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Cis_labeled%2Cpaid_info%2Cpaid_info_content%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_recognized%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics%3Bdata%5B%2A%5D.settings.table_of_content.enabled&limit=5&offset='+str(item)+'&platform=desktop&sort_by=default'
    headers = {
        'user-agent': '自己的',  # your own User-Agent
        'cookie': '自己的',      # your own cookie
        'referer': 'https://www.zhihu.com/question/457368252'
    }

    response = requests.get(url=url, headers=headers)
    html = response.text
    print(html)
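Printing `html` is handy for debugging, but note that when the cookie is missing or expired, Zhihu returns an error JSON with no `data` key, and the parsing step below would crash. A small defensive helper can fail loudly instead (the name `parse_answers` is mine, not from the original code):

```python
import json

def parse_answers(payload):
    # return the answer list, or raise early if the API rejected the request
    data = json.loads(payload).get('data')
    if data is None:
        raise ValueError('unexpected response: ' + payload[:100])
    return data

print(parse_answers('{"data": [{"content": "<p>hi</p>"}]}'))
```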

Parse the data
Here we use the pyquery library to handle the content field: it strips the HTML tags and leaves the plain text of each answer.
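If you'd rather not depend on pyquery just for tag stripping, the same effect can be had with the standard library's `html.parser` — an alternative sketch, not what the original code uses:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    # collect only the text nodes, discarding all tags
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_tags(html_fragment):
    parser = TextExtractor()
    parser.feed(html_fragment)
    return ''.join(parser.parts)

print(strip_tags('<p>hello <b>world</b></p>'))  # hello world
```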

json_data = json.loads(html)
data = json_data.get('data')
for i in range(len(data)):
    try:
        Comment = data[i].get('content')
        doc = pq(Comment)     # strip the HTML tags
        comment = doc.text()  # keep only the answer text
        Author = data[i].get('author')
        author = Author.get('name')
        gender = Author.get('gender')
        if str(gender) == '1':
            gender = '男'    # male
        elif str(gender) == '0':
            gender = '女'    # female
        else:
            gender = '未知'  # unknown
        voteup_count = data[i].get('voteup_count')
        comment_count = data[i].get('comment_count')
    except Exception:
        continue
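The if/elif chain above maps Zhihu's numeric gender codes (1 = male, 0 = female, anything else = unknown) to labels; a dict lookup expresses the same mapping more compactly (a sketch — `gender_label` is a name I've introduced):

```python
GENDER = {1: '男', 0: '女'}  # 1 = male, 0 = female

def gender_label(code):
    # any other value (e.g. -1 or None) is treated as unknown
    return GENDER.get(code, '未知')

print(gender_label(1), gender_label(0), gender_label(-1))
```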

Running the script prints each record — author name, gender, upvote count, comment count, and answer text — and writes it to the CSV.


The full source code is as follows

import requests
import csv
import time
import json
from pyquery import PyQuery as pq

# output files: a CSV with structured fields plus a plain-text dump
f = open('D:\\知乎话题评论.csv', mode='a', encoding='utf-8', newline='')
f2 = open('D:\\知乎话题评论.txt', mode='a', encoding='utf-8')
# fields: username, gender, upvotes, comment count, answer text
csv_test = csv.DictWriter(f, fieldnames=['用户名', '性别', '点赞数', '评论数', '评论'])
csv_test.writeheader()

def main():
    for item in range(235, 380, 5):  # offset advances by 5 per page
        url='https://www.zhihu.com/api/v4/questions/457368252/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cattachment%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Cis_labeled%2Cpaid_info%2Cpaid_info_content%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_recognized%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics%3Bdata%5B%2A%5D.settings.table_of_content.enabled&limit=5&offset='+str(item)+'&platform=desktop&sort_by=default'
        headers = {
            'user-agent': '自己的',  # your own User-Agent
            'cookie': '自己的',      # your own cookie
            'referer': 'https://www.zhihu.com/question/457368252'
        }

        response = requests.get(url=url, headers=headers)
        html = response.text
        print(html)
        json_data = json.loads(html)
        data = json_data.get('data')
        for i in range(len(data)):
            try:
                Comment = data[i].get('content')
                doc = pq(Comment)     # strip HTML tags, keep the text
                comment = doc.text()
                Author = data[i].get('author')
                author = Author.get('name')
                gender = Author.get('gender')
                if str(gender) == '1':
                    gender = '男'    # male
                elif str(gender) == '0':
                    gender = '女'    # female
                else:
                    gender = '未知'  # unknown
                voteup_count = data[i].get('voteup_count')
                comment_count = data[i].get('comment_count')
                dit = {
                    '用户名': author,
                    '性别': gender,
                    '点赞数': voteup_count,
                    '评论数': comment_count,
                    '评论': comment
                }
                csv_test.writerow(dit)
                add = '第' + str(i + item) + '条回答: '  # "answer #N: "
                f2.write(add)
                f2.write(comment)
                f2.write('\n')
                print(author, ' ', gender, ' ', voteup_count, ' ', comment_count, ' ', comment)
            except Exception:  # skip malformed entries instead of crashing
                continue
        time.sleep(0.5)        # throttle requests to be polite
        print('已爬取第%d条数据' % item)  # progress: scraped up to offset `item`

main()
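One refinement worth considering: the hard-coded `range(235, 380, 5)` assumes you already know how many answers exist. Each response from Zhihu's v4 answer API also carries a paging object; if it includes an `is_end` flag (an assumption worth verifying against your own responses), you can page until the server says stop. A sketch with a hypothetical `fetch_page` callable standing in for the HTTP request:

```python
def iter_pages(fetch_page, limit=5):
    # fetch_page(offset) is a hypothetical callable returning the decoded
    # JSON dict; we stop when the response reports paging.is_end
    offset = 0
    while True:
        page = fetch_page(offset)
        yield from page.get('data', [])
        if page.get('paging', {}).get('is_end', True):
            break
        offset += limit

# demo with canned responses instead of real HTTP calls
fake = {
    0: {'data': ['a', 'b'], 'paging': {'is_end': False}},
    5: {'data': ['c'], 'paging': {'is_end': True}},
}
print(list(iter_pages(fake.__getitem__)))  # ['a', 'b', 'c']
```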

In the next post we will analyze this data.

If you found this useful, a like, favorite, and follow would be much appreciated!
