"How did you get started writing Python crawlers?" I looked this question up on Zhihu and read the experiences shared by all kinds of experts. I learned a lot from them, and it gave me an urge to scrape these answers so I could savor them slowly later.
First, open https://www.zhihu.com/question/21358581 in a browser; on that page you can see the answers from these experts.
The usual approach:
- use the requests package to download the HTML,
- then parse the HTML,
- and finally store the data (see the generic sketch just below).
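Before turning to Zhihu specifically, here is a minimal sketch of those three steps on an ordinary static page. It is only an illustration, not part of the final crawler: the example URL, the BeautifulSoup parser, and the output file name page.txt are all placeholders, and it assumes the bs4 package is installed.

import requests
from bs4 import BeautifulSoup   # one possible HTML parser

# 1. download the HTML
res = requests.get('https://example.com')          # placeholder URL
res.encoding = 'utf-8'

# 2. parse the HTML (here we simply take the visible text)
soup = BeautifulSoup(res.text, 'html.parser')
text = soup.get_text('\n')

# 3. store the data
with open('page.txt', 'w', encoding='utf-8') as f:
    f.write(text)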
Following this approach, let's write some code for the Zhihu page and see what we get.
import requests

headers = {
    'Cookie': 'paste your own Zhihu cookie here',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0'
}
res = requests.get('https://www.zhihu.com/question/21358581', headers=headers)
print(res.text)
Printing the downloaded HTML shows that it does not contain all of the answers; in my case it stops at the end of the first answer.
Browsing the page, we notice that Zhihu keeps loading more answers as you scroll down. In other words, this is a dynamically loaded page. Press F12, open the Network tab, and under XHR you can see several JSON responses with fairly large payloads.
Their contents are exactly the answers we are after.
Scrolling to the bottom of one of these JSON responses, we find a URL that points to the next batch of answers. So the plan is to parse the JSON to get both the answer content and the URL of the next page.
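Before writing the full crawler, you can request one of those XHR URLs directly and poke at the JSON to confirm its shape. Here is a quick sketch; the API URL and Cookie are placeholders you fill in yourself, and the data and paging.next fields are what the response contained in my case.

import requests

api_url = 'paste one of the XHR request URLs copied from DevTools here'
headers = {
    'Cookie': 'paste your own Zhihu cookie here',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0'
}

page = requests.get(api_url, headers=headers).json()
print(list(page.keys()))        # expect something like ['data', 'paging']
print(len(page['data']))        # number of answers in this page (limit=3)
print(page['paging']['next'])   # URL of the next page of answers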
Now it's time to write the real code:
import requests
import json
import re


def get_html(url, headers):
    # download one page of the answers API and return the raw JSON text
    res = requests.get(url, headers=headers, timeout=1)
    res.encoding = 'utf-8'
    return res.text


def get_text(data):
    # keep only the runs of CJK characters in the answer's HTML, one run per line
    pattern = '[\u2E80-\u9FFF]+'
    return '\n'.join(re.findall(pattern, data))


def get_img_url(data):
    # pull the jpg/gif image URLs out of the answer's HTML
    pattern = r'src="(http.*?(?:jpg|gif))"'
    return set(re.findall(pattern, data))


def down_img(urls, name):
    # save each image into the 照片 (photos) folder, numbered per author;
    # the folder must already exist
    for i, url in enumerate(urls):
        img = requests.get(url).content
        ext = '.jpg' if url[-3:] == 'jpg' else '.gif'
        with open('照片\\' + name + str(i) + ext, 'wb') as pic:
            pic.write(img)


def down_txt(txt, name):
    # save the extracted text into the 文章 (articles) folder, one file per author;
    # the folder must already exist
    with open('文章\\' + name + '.txt', 'w', encoding='utf-8') as f:
        f.write(txt)


def start(url):
    headers = {
        'Cookie': 'paste your own Zhihu cookie here',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0'
    }
    urls = set()        # pages waiting to be fetched
    old_urls = set()    # pages already fetched
    urls.add(url)
    while True:
        k = 1           # flag: 1 = page parsed cleanly, 0 = last page reached
        if len(urls) != 0:
            url = urls.pop()
            old_urls.add(url)
            page = json.loads(get_html(url, headers))
            for i in range(3):      # limit=3 answers per page, see the URL below
                try:
                    content = page['data'][i]['content']
                    name = page['data'][i]['author']['name']
                    down_img(get_img_url(content), name)
                    down_txt(get_text(content), name)
                except (IndexError, KeyError):
                    print('finished: this page has fewer answers than expected')
                    k = 0
                    break
            json_url = page['paging']['next']
            if json_url not in old_urls and k:
                urls.add(json_url)
                print(urls)
        else:
            print('finished: no more pages to fetch')
            break


if __name__ == '__main__':
    start('https://www.zhihu.com/api/v4/questions/21358581/answers?include=data%5B*%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%2Cis_recognized%2Cpaid_info%2Cpaid_info_content%3Bdata%5B*%5D.mark_infos%5B*%5D.url%3Bdata%5B*%5D.author.follower_count%2Cbadge%5B*%5D.topics&offset=16&limit=3&sort_by=default&platform=desktop')
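One caveat about get_text(): because it keeps only CJK characters, any punctuation, digits, or English words in an answer are dropped. If you want the full plain text, an HTML parser can strip the tags instead. A possible drop-in variation (just a sketch, assuming the bs4 package is installed; it is not what the code above uses):

from bs4 import BeautifulSoup

def get_text(data):
    # strip the HTML tags from an answer and keep all of its text,
    # not only the CJK characters
    return BeautifulSoup(data, 'html.parser').get_text('\n')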
The results of the crawl: