Requests: "HTTP for Humans"
Requests is written in Python on top of urllib, but it is far more convenient than urllib, saves a great deal of work, and fully covers everyday HTTP testing needs.
Chinese documentation: http://docs.python-requests.org/zh_CN/latest/index.html
GitHub repository: http://github.com/requests/requests
1. Sending a GET request
import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
}
data = {'xx': 'xx'}
rq = requests.get('https://xxx/', headers=headers, params=data)
# Attributes
# Inspecting the response body
print(rq.text)  # body decoded to str (unicode); if it comes out garbled, decode manually: rq.content.decode('utf-8')
# print(rq.content)  # raw bytes exactly as received from the network, with no decoding applied -- strings on disk and on the wire are always bytes
# print(rq.url)
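To see how `params` is encoded into the query string without sending anything over the network, the request can be prepared and inspected (the httpbin.org URL below is only illustrative; `PreparedRequest` shows the URL Requests would use):

```python
import requests

# Build the request without sending it, to inspect how params
# are appended to the URL as a query string.
req = requests.Request(
    'GET',
    'https://httpbin.org/get',            # illustrative URL
    params={'q': 'python', 'page': '1'},
).prepare()

print(req.url)  # https://httpbin.org/get?q=python&page=1
```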
2. Sending a POST request
rq = requests.post('https://xxx', data=data)
import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
}
url = 'xxx'
data = {'xxx': 'xxx'}  # form fields go in a dict, not a set
rq = requests.post(url, headers=headers, data=data)
print(rq.text)
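A detail worth knowing alongside the example above: `data=` sends a form-encoded body, while `json=` serializes the dict to JSON and sets the `Content-Type` header automatically. A sketch using prepared requests (placeholder URL and values, nothing is sent):

```python
import requests

payload = {'username': 'xxx', 'password': 'xxx'}  # placeholder values

# data= produces a form-encoded body
form = requests.Request('POST', 'https://xxx', data=payload).prepare()
print(form.body)                     # username=xxx&password=xxx
print(form.headers['Content-Type'])  # application/x-www-form-urlencoded

# json= serializes the dict and sets the header for you
js = requests.Request('POST', 'https://xxx', json=payload).prepare()
print(js.body)                       # JSON-serialized payload
print(js.headers['Content-Type'])    # application/json
```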
3. Passing the proxies parameter to the request method
import requests
proxy = {'http': 'http://113.120.146.188:9999'}
url = 'http://httpbin.org/ip'
rq = requests.get(url,proxies=proxy)
print(rq.text)
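In practice you usually map both schemes to the proxy and add a timeout so a dead proxy fails fast. A sketch (the address is the sample proxy from the text and may well be unreachable, hence the error handling):

```python
import requests

# The same proxy is reused for both schemes here.
proxies = {
    'http': 'http://113.120.146.188:9999',
    'https': 'http://113.120.146.188:9999',
}

try:
    rq = requests.get('http://httpbin.org/ip', proxies=proxies, timeout=5)
    print(rq.text)  # the reported IP should be the proxy's, not yours
except requests.exceptions.RequestException as e:
    print('proxy request failed:', e)
```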
4. Handling cookies: the Cookie header
import requests
url = 'https://www.XXX.com/'
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36',
'Cookie': 'xxx'
}
rp = requests.get(url,headers=headers)
print(rp.text)
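Instead of pasting the Cookie header by hand, Requests also exposes the cookies a server sends back as `rp.cookies`, a `RequestsCookieJar` with dict-like access. A sketch building such a jar locally (names and values are made up):

```python
from requests.cookies import RequestsCookieJar

# rp.cookies from a real response is this same jar type; here we
# construct one by hand to show the dict-like access.
jar = RequestsCookieJar()
jar.set('sessionid', 'abc123', domain='www.example.com', path='/')

print(jar['sessionid'])   # abc123
print(jar.get_dict())     # {'sessionid': 'abc123'}
```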
Session: sharing cookies across requests
import requests
url = 'https://xxx'
data = {'username': 'xxx',
        'password': 'xxx'}
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
}
# log in; the session stores any cookies the server returns
session = requests.Session()
session.post(url, headers=headers, data=data)
url1 = 'https://xxx'
rp = session.get(url1)
print(rp.text)
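A session shares more than cookies: defaults set on the session, such as headers, are merged into every request it sends, so the User-Agent above only needs to be set once. A minimal sketch (the User-Agent string is a placeholder):

```python
import requests

session = requests.Session()
# Session-level defaults are merged into every request the session sends.
session.headers.update({'User-Agent': 'Mozilla/5.0 (example)'})

print(session.headers['User-Agent'])  # Mozilla/5.0 (example)
# session.get('https://xxx')  # would carry this header and any stored cookies
```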
5. Handling certificates: the verify parameter
import requests
url = 'https://xxx'
rp = requests.get(url, verify=False)  # skip TLS certificate verification
print(rp.content.decode('utf-8'))
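Note that `verify=False` makes urllib3 emit an `InsecureRequestWarning` on every request; if you accept the risk, the warning can be silenced explicitly:

```python
import urllib3

# verify=False triggers an InsecureRequestWarning per request;
# disable it explicitly if you accept the risk.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# rp = requests.get('https://xxx', verify=False)  # now warns no more
```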