Purpose: learning notes
2. First, let's try to scrape the comments under one article. Searching the response turns up no match for the comments, which means they are loaded dynamically.
3. Now clear the request list, collapse the comment section, and open it again.
4. After doing that, select the XHR filter; we can see that opening the comments fires 3 requests.
5. Click the request whose URL contains comments; searching its response does match the comments, and the payload is JSON, so this is definitely the comment request.
The request URL is shown in the screenshot above; for now let's not worry about how it is composed and keep going.
6. Next, open json.cn, copy the JSON from the response, and paste it in.
7. Analyzing the JSON, each object holds all the information for one comment, e.g. the commenter and the comment body; we need to write code to pull those fields out.
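To make the structure concrete, here is a trimmed sketch of what the response looks like, with field names taken from the code below (the real payload contains many more keys, so treat this as an illustration, not the full schema):

response_sketch = {
    "paging": {"is_end": False},  # becomes True once the last page is reached
    "data": [                     # one object per root comment
        {
            "id": 123456789,
            "content": "<p>comment text</p>",  # delivered as an HTML fragment
            "author": {"member": {"name": "some user"}},
        },
    ],
}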
8. Now we have the request URL:
url=https://www.zhihu.com/api/v4/articles/258812959/root_comments?order=normal&limit=20&offset=20&status=open
The request method is GET.
We can start writing code to extract the information.
Code:
import requests
import json

# Comment API for this article; limit=20 comments per page, offset is the start position
url = 'https://www.zhihu.com/api/v4/articles/258812959/root_comments?order=normal&limit=20&offset=20&status=open'
headers = {
    'User-Agent': "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36",
    "referer": "https://www.zhihu.com/"
}
res = requests.get(url, headers=headers).content.decode('utf-8')
jsonfile = json.loads(res)
next_page = jsonfile['paging']['is_end']  # True when this is the last page
print(next_page)
for data in jsonfile['data']:  # one object per comment
    comment_id = data['id']    # 'id' would shadow the builtin, so use comment_id
    content = data['content']
    author = data['author']['member']['name']
    print(comment_id, content, author)
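As an aside, requests can decode a JSON body itself, so the decode/loads pair above can be collapsed into one call:

jsonfile = requests.get(url, headers=headers).json()  # same result as json.loads(...decode('utf-8'))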
Printed output:
9. At this point we have printed the first page of comments for the first topic on Zhihu's front page. Next, let's think about how to crawl all the comments under this topic.
10. Click page 2 and grab its request URL:
url=https://www.zhihu.com/api/v4/answers/1307614528/root_comments?order=normal&limit=20&offset=20&status=open
Compare it with the page-1 URL:
url1=https://www.zhihu.com/api/v4/answers/1307614528/root_comments?order=normal&limit=20&offset=0&status=open
We can see that offset changed from 0 to 20, and checking further pages shows that offset grows by 20 with each page.
So when does adding 20 end? Jump to the last page and inspect its JSON: there, is_end is True. So we can run a while loop and break as soon as is_end == True.
11. Code:
import requests
import json
import re

headers = {
    'User-Agent': "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36",
    "referer": "https://www.zhihu.com/"
}
comp = re.compile(r'</?\w+[^>]*>')  # matches HTML tags so we can strip them from the comment body
i = 0
while True:
    url = 'https://www.zhihu.com/api/v4/articles/258812959/root_comments?order=normal&limit=20&offset={}&status=open'.format(i)
    i += 20
    res = requests.get(url=url, headers=headers).content.decode('utf-8')
    jsonfile = json.loads(res)
    next_page = jsonfile['paging']['is_end']  # True on the last page
    print(next_page)
    for data in jsonfile['data']:
        content = comp.sub("", data['content'])  # drop the HTML tags
        author = data['author']['member']['name']
        print("Nickname---" + author, "Comment: " + content)
    if next_page:  # stop once the last page has been printed
        break
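A quick check of the tag-stripping regex: </?\w+[^>]*> matches an opening or closing HTML tag, so substituting it with the empty string leaves only the text. A tiny self-contained test:

import re
comp = re.compile(r'</?\w+[^>]*>')
print(comp.sub("", '<p>nice <b>article</b>!</p>'))  # prints: nice article!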
12. Comparing these request URLs, only the leading string of digits differs (plus the articles/answers path segment, depending on the content type), so that number is what selects the topic.
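To make that concrete, here is a small helper (a sketch; the function name and structure are mine, not from the original post) that assembles the comment-API URL for either content type, matching the two patterns seen above:

def root_comments_url(kind, content_id, offset=0, limit=20):
    # kind is 'articles' or 'answers', matching the two URL patterns above
    return ('https://www.zhihu.com/api/v4/{}/{}/root_comments'
            '?order=normal&limit={}&offset={}&status=open').format(kind, content_id, limit, offset)

print(root_comments_url('articles', 258812959))
print(root_comments_url('answers', 1307614528, offset=20))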
13. Next, we work from the topic list to find the link numbers. The relevant information can be matched directly in the homepage response, so I went ahead and wrote code to extract it.
Code:
import requests
from lxml import etree

headers = {
    'User-Agent': "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36",
    "cookie": '''_zap=3935ec64-2d91-4666-903c-a641b2510b18; d_c0="AOBcn_xhAxKPToowOy9HNL3DgDozDHDt63I=|1602244658"; capsion_ticket="2|1:0|10:1604449780|14:capsion_ticket|44:YjAwODdlMjcxNzY4NGYwODlmNjgxMzYyNWFkZDJlYTI=|6312ca79725710f1810a97a1fe3c4bbd6d16d02f769891626a67124cba7dd1f9"; z_c0="2|1:0|10:1604449808|4:z_c0|92:Mi4xVnlXQUNBQUFBQUFBNEZ5Zl9HRURFaVlBQUFCZ0FsVk5FRVNQWUFDOUlsV1pKa2hZUTdvc1U5Z1cxbTluajk5UW5n|dc09f94f0b3e78d3d80f6d18da39109a38b3357154e275642bec5e4afa4c825b"; tst=r; q_c1=097e8b52467b4017a4f27f26dd8622c2|1604625864000|1604625864000; _xsrf=88ef577a-c34c-49a1-8a13-faf8cd85c55a; KLBRSID=4843ceb2c0de43091e0ff7c22eadca8c|1605003647|1604996383''',
    "referer": "https://www.zhihu.com/"
}
url1 = 'https://www.zhihu.com/'
res = requests.get(url1, headers=headers).text
html = etree.HTML(res)
# each matching div is one recommended topic card on the homepage
divs = html.xpath('''//div[@class="Card TopstoryItem TopstoryItem--old TopstoryItem-isRecommend"]''')
for div in divs:
    title = div.xpath('.//h2//a[@target="_blank"]/text()')[0]
    link = div.xpath('.//h2//a[@target="_blank"]/@href')[0]
    link_num = link.split('/')[-1]  # the trailing number in the link identifies the topic
    print(link_num)
Output:
14. As the output above shows, link_num is exactly the number that selects the topic.
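One caveat before assembling the full script: xpath(...)[0] raises IndexError for any card that lacks a matching h2 link, and divs comes back empty when the cookie is missing or stale. A hedged drop-in replacement for the loop above:

for div in divs:
    titles = div.xpath('.//h2//a[@target="_blank"]/text()')
    links = div.xpath('.//h2//a[@target="_blank"]/@href')
    if not titles or not links:  # skip cards without a title/link instead of crashing
        continue
    print(links[0].split('/')[-1])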
With that noted, the full code:
import requests, json
from lxml import etree

headers = {
    'User-Agent': "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36",
    "cookie": '''_zap=3935ec64-2d91-4666-903c-a641b2510b18; d_c0="AOBcn_xhAxKPToowOy9HNL3DgDozDHDt63I=|1602244658"; capsion_ticket="2|1:0|10:1604449780|14:capsion_ticket|44:YjAwODdlMjcxNzY4NGYwODlmNjgxMzYyNWFkZDJlYTI=|6312ca79725710f1810a97a1fe3c4bbd6d16d02f769891626a67124cba7dd1f9"; z_c0="2|1:0|10:1604449808|4:z_c0|92:Mi4xVnlXQUNBQUFBQUFBNEZ5Zl9HRURFaVlBQUFCZ0FsVk5FRVNQWUFDOUlsV1pKa2hZUTdvc1U5Z1cxbTluajk5UW5n|dc09f94f0b3e78d3d80f6d18da39109a38b3357154e275642bec5e4afa4c825b"; tst=r; q_c1=097e8b52467b4017a4f27f26dd8622c2|1604625864000|1604625864000; _xsrf=88ef577a-c34c-49a1-8a13-faf8cd85c55a; KLBRSID=4843ceb2c0de43091e0ff7c22eadca8c|1605003647|1604996383''',
    "referer": "https://www.zhihu.com/"
}
url1 = 'https://www.zhihu.com/'
res = requests.get(url1, headers=headers).text
html = etree.HTML(res)
divs = html.xpath('''//div[@class="Card TopstoryItem TopstoryItem--old TopstoryItem-isRecommend"]''')
for div in divs:
    title = div.xpath('.//h2//a[@target="_blank"]/text()')[0]
    link = div.xpath('.//h2//a[@target="_blank"]/@href')[0]
    link_num = link.split('/')[-1]  # topic id taken from the homepage link
    i = 0
    print(f'......... Title: {title} .........')
    while True:  # page through this topic's comments, 20 per request
        url2 = 'https://www.zhihu.com/api/v4/answers/{}/root_comments?order=normal&limit=20&offset={}&status=open'.format(link_num, i)
        i += 20
        print(f'Printing page {i // 20} ...')
        res = requests.get(url2, headers=headers).content.decode('utf-8')
        jsonfile = json.loads(res)
        next_page = jsonfile['paging']['is_end']  # True on the last page
        for data in jsonfile['data']:
            comment_id = data['id']
            content = data['content']  # still contains HTML tags; the regex from step 11 could strip them
            author = data['author']['member']['name']
            print(f'{author} commented: {content}')
        if next_page:
            break
Screenshot of the output:
This completes crawling every comment of every topic on the first page.
Writing up to here, I realized that Zhihu's topic feed is also loaded dynamically, so there is no page-turning involved; much of the data is delivered as JSON, and the crawl only works when a cookie is sent along.
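Incidentally, requests also accepts the cookie as a structured argument rather than a raw header string; a minimal sketch (with the values elided):

import requests
cookies = {'_zap': '...', 'z_c0': '...'}  # values elided; copy your own from DevTools
res = requests.get('https://www.zhihu.com/',
                   headers={'User-Agent': 'Mozilla/5.0'},
                   cookies=cookies)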
Finally: the code is far from polished, but it at least strengthened my understanding of crawlers; several spots still need exception handling, sketched below.
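As a sketch of that exception handling (the structure is mine, not from the original), each request and JSON parse can be wrapped so that one failed topic does not kill the whole crawl:

import requests

def fetch_comments_page(url, headers):
    """Fetch one comment page; return the parsed JSON, or None on any failure."""
    try:
        res = requests.get(url, headers=headers, timeout=10)
        res.raise_for_status()  # turn HTTP errors (403, 404, ...) into exceptions
        return res.json()
    except (requests.RequestException, ValueError) as e:  # network/HTTP errors, bad JSON
        print('request failed:', e)
        return None

Field lookups such as data['author']['member']['name'] could be guarded the same way with dict.get, so a missing key skips one comment instead of stopping the script.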