1. Installing the module
!pip install requests
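A quick way to confirm the install worked is to import the module and print its version:

```python
import requests

# if the import succeeds, the module is installed; __version__ shows which release
print(requests.__version__)
```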
2. Using the requests module's get function
Taking Baidu as an example:
import requests
r = requests.get('https://www.baidu.com/')  # send a GET request to the URL
print(type(r))
print(r.status_code)
print(type(r.text))
print(r.text)  # the Baidu page source
print(r.cookies)
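With Baidu you may find that r.text prints as mojibake: the server doesn't declare a charset, so requests falls back to ISO-8859-1. Setting r.encoding before reading r.text fixes it (r.apparent_encoding can also guess the charset from the body). A minimal sketch, using a Response object built by hand purely for illustration; a real one comes from requests.get:

```python
from requests.models import Response

# hand-built Response, illustration only; pretend this is the body Baidu sent
resp = Response()
resp._content = '百度一下，你就知道'.encode('utf-8')
resp.encoding = 'ISO-8859-1'  # requests' fallback guess when no charset header is present
print(resp.text)              # mojibake
resp.encoding = 'utf-8'       # tell requests the real encoding
print(resp.text)              # 百度一下，你就知道
```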
Taking CSDN as an example:
import requests
r = requests.get('https://www.csdn.net/')
print(r.status_code)
print(r.text)
print(r.cookies)
The return information of a GET request:
import requests
r = requests.get('http://httpbin.org/get')
print(r.text)
If you want to send extra parameters, you can either write them into the request URL directly or pass them via the params argument. Taking name and age as an example:
Writing the URL directly:
r = requests.get('http://httpbin.org/get?name=MUMU&age=18')
Using the params argument:
import requests
data = {
    'name': 'MUMU',
    'age': 18
}
r = requests.get('http://httpbin.org/get', params=data)
print(r.text)
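You can check what URL the params dict produces without actually sending anything, using requests' own Request/PreparedRequest machinery:

```python
import requests

data = {
    'name': 'MUMU',
    'age': 18
}
# prepare() builds the final URL (query string included) without sending the request
req = requests.Request('GET', 'http://httpbin.org/get', params=data).prepare()
print(req.url)  # http://httpbin.org/get?name=MUMU&age=18
```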
Calling the json method converts a JSON-formatted response string into a dictionary. The code is as follows:
import requests
r = requests.get("http://httpbin.org/get")
print(type(r.text))
print(r.json())
print(type(r.json()))
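Under the hood, r.json() essentially runs json.loads on the response body. A small offline sketch, using a hypothetical fragment of the JSON that httpbin returns:

```python
import json

# a hypothetical fragment of httpbin's JSON response body
sample = '{"args": {"name": "MUMU"}, "url": "http://httpbin.org/get"}'

parsed = json.loads(sample)    # what r.json() does with the response text
print(type(parsed))            # <class 'dict'>
print(parsed['args']['name'])  # MUMU
```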
Getting cookies:
import requests
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36'
}  # request headers
url = 'https://www.csdn.net/?spm=1011.2124.3001.5359'
r = requests.get(url=url, headers=headers)
print(r.cookies)  # print the cookies directly
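r.cookies is a RequestsCookieJar, which you can iterate or flatten into a plain dict. An offline sketch with a hand-filled jar (the cookie name and value here are made up):

```python
from requests.cookies import RequestsCookieJar

jar = RequestsCookieJar()
jar.set('sessionid', 'abc123', domain='www.csdn.net', path='/')  # hypothetical cookie

# iterate name/value pairs, or convert the whole jar into a dict
for name, value in jar.items():
    print(name, value)
print(dict(jar))
```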
Adding request headers: pass header information through the headers parameter. Without it, some sites will not serve the request normally and the result will show a 403. Zhihu is used as the example below:
import requests
r = requests.get("https://www.zhihu.com/explore")
print(r.text)
Adding a request header:
import requests
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36'}
r = requests.get("https://www.zhihu.com/explore", headers=headers)
print(r.text)
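The 403 shows up in r.status_code, and r.raise_for_status() turns any 4xx/5xx code into an exception, which is handy in scripts. A sketch with a hand-built Response, illustration only; a real one comes from requests.get:

```python
from requests.models import Response
from requests.exceptions import HTTPError

# hand-built Response standing in for a blocked request
resp = Response()
resp.status_code = 403
resp.reason = "Forbidden"
resp.url = "https://www.zhihu.com/explore"

try:
    resp.raise_for_status()  # raises HTTPError for any 4xx/5xx status
except HTTPError as e:
    print(e)
```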
Fetching a page with cookies:
import requests
headers = {
    'Cookie': '_zap=f4cf1039-988d-4506-86b0-4a66e741c6b1; d_c0="AGDcaFGHGRKPTutiDmNxGnxfi7VhsfQ0wI8=|1603730839"; _xsrf=01xnSvUI1MkWP715R02yeXnThs2EHIXu; Hm_lvt_98beee57fd2ef70ccdd5ca52b9740c49=1610911317,1611507538,1611565882,1611566000; SESSIONID=EQPbneOhTXEKEWzoKhctFGCvXtNsbB6hgyaptDJMHfy; JOID=UFoUAUOmDkyYr9xFaaZkkCC9KVZ441wf8Mu5CQL4VgrQ4IE_BWQiVfil30VgxKKpzSBYFUbBpzXzd2z2Km1WeDs=; osd=WloUBkysDkyfoNZFaaFrmiC9Llly41wY_8G5CQX3XArQ5441BWQlWvKl30JvzqKpyi9SFUbGqD_zd2v5IG1WfzQ=; Hm_lpvt_98beee57fd2ef70ccdd5ca52b9740c49=1611673785; capsion_ticket="2|1:0|10:1611673806|14:capsion_ticket|44:N2ExMGExOTQ3YWIwNGE1YzliMTc1Mzk0ZmEwMjAyYTE=|5aecaa59c17c237af06b47a7b1402eb5b996139c8a6e1d15490899fab3c17108"; KLBRSID=031b5396d5ab406499e2ac6fe1bb1a43|1611673848|1611672766; z_c0="2|1:0|10:1611673849|4:z_c0|92:Mi4xUkFJd0lnQUFBQUFBWU54b1VZY1pFaVlBQUFCZ0FsVk4tWDc5WUFCQmZYWFB4ZkM5Z3l6ZlRNSENUUHVhR0lmYy1B|6d89241fc554ad378bce7f27715f2a4cc63cf87028c2da1e4104423b99ee14ee"; unlock_ticket="APBUrbfKXhImAAAAYAJVTQE4EGCaxoSZiXGfIktWFZReL6J3wOaKOQ=="',
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
    'Host': 'www.zhihu.com',
}
url = 'https://www.zhihu.com'
r = requests.get(url=url, headers=headers)
print(r.text)
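For logged-in scraping, a requests.Session is usually nicer than pasting a Cookie header by hand: it keeps cookies and default headers across all requests made through it. A minimal offline sketch (the header and cookie values here are placeholders):

```python
import requests

s = requests.Session()
s.headers.update({'User-Agent': 'Mozilla/5.0'})  # sent with every request from this session
s.cookies.set('example', 'value')                # cookies persist across session requests

print(s.headers['User-Agent'])
print(s.cookies.get('example'))
```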
3. Scraping your own avatar
Taking my own CSDN avatar as an example:
import requests
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.44'
}
url = 'https://avatar.csdnimg.cn/E/F/D/1_qq_54715996_1621566095.jpg'
r = requests.get(url=url, headers=headers)
with open('photo.jpg', 'wb') as f:
    f.write(r.content)
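For large files it's better not to hold the whole body in memory: pass stream=True to requests.get and write the download chunk by chunk with iter_content. The sketch below fills in a Response object's content by hand, purely so the chunked-write loop can run without a network; in real use the response comes from requests.get(url, stream=True):

```python
from requests.models import Response

# hand-built Response standing in for requests.get(url, stream=True); illustration only
resp = Response()
resp.status_code = 200
resp._content = b'\xff\xd8\xff\xe0' + b'x' * 100  # fake JPEG bytes
_ = resp.content  # reading .content lets iter_content() replay the cached bytes

with open('photo_stream.jpg', 'wb') as f:
    for chunk in resp.iter_content(chunk_size=32):  # write 32 bytes at a time
        f.write(chunk)
```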
4. POST requests
Submitting a form:
import requests
data = {'name': 'germey', 'age': '22'}
r = requests.post("http://httpbin.org/post", data=data)
print(r.text)
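You can inspect what data= actually puts on the wire without sending anything: the form dict is URL-encoded into the request body, and the Content-Type header is set to application/x-www-form-urlencoded:

```python
import requests

data = {'name': 'germey', 'age': '22'}
# prepare() shows the encoded body and headers without sending the request
req = requests.Request('POST', 'http://httpbin.org/post', data=data).prepare()

print(req.body)                     # name=germey&age=22
print(req.headers['Content-Type'])  # application/x-www-form-urlencoded
```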
Adding request headers:
import requests
host = "http://httpbin.org/"
endpoint = "post"
url = ''.join([host, endpoint])
headers = {"User-Agent": "test request headers"}
r = requests.post(url, headers=headers)
print(r.text)
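If the server expects JSON rather than form data, pass json= instead of data=; requests serializes the dict and sets Content-Type for you. An offline check via prepare() (the payload here is made up):

```python
import json
import requests

req = requests.Request(
    'POST', 'http://httpbin.org/post',
    headers={'User-Agent': 'test request headers'},
    json={'name': 'MUMU'},  # serialized into a JSON body automatically
).prepare()

print(req.headers['Content-Type'])  # application/json
print(json.loads(req.body))         # {'name': 'MUMU'}
```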
File upload:
First, create a file named myfile.txt:
import requests
host = "http://httpbin.org/"
endpoint = "post"
url = ''.join([host, endpoint])
# simple upload
files = {
    'file': open('myfile.txt', 'rb')
}
r = requests.post(url, files=files)
print(r.text)
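httpbin echoes the upload back in the files field of its response. If you want to control the uploaded filename and MIME type, pass a (filename, fileobj, content_type) tuple; the offline check below shows that files= switches the request to multipart/form-data:

```python
import requests

# create the file to upload first
with open('myfile.txt', 'w') as f:
    f.write('hello requests')

with open('myfile.txt', 'rb') as fh:
    req = requests.Request(
        'POST', 'http://httpbin.org/post',
        # the tuple form sets the uploaded filename and MIME type explicitly
        files={'file': ('myfile.txt', fh, 'text/plain')},
    ).prepare()

print(req.headers['Content-Type'])  # multipart/form-data; boundary=...
```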
Day two of learning web scraping with 川川. I had a full schedule of classes today, so this post went up a bit late. I've learned something new both days, and my life feels much fuller. Thanks again to 川川; I hope I can keep this up.