requests.post
requests.post(url, data=None, json=None, **kwargs)
Parameter | Type | Description |
---|---|---|
url | string | The URL to request: the value of the form's action attribute, not the page the form lives on |
data | form data | Keys are the values of the name attributes of the form's input fields; values are what you would type into them. The field names can be found with Chrome's element inspector |
json | JSON-formatted data | Check whether the request carries Content-Type: application/json. The json parameter is essentially a shortcut for data: passing json.dumps(dict) as data produces the same body, and json= also sets the Content-Type header for you |
**kwargs | other keyword arguments | e.g. headers |
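The data/json equivalence described in the table can be checked locally, without hitting any server, by preparing the request and inspecting its body (the example.com URL is a placeholder):

```python
import json
import requests

# Prepare two POSTs locally (no network) to compare what would be sent.
payload = {'firstname': 'Ryan', 'lastname': 'Mitchell'}
url = 'http://example.com/processing.php'  # placeholder URL

via_json = requests.Request('POST', url, json=payload).prepare()
via_data = requests.Request('POST', url, data=json.dumps(payload)).prepare()

# Normalize: some requests versions encode the json= body to bytes.
body_json = via_json.body
if isinstance(body_json, bytes):
    body_json = body_json.decode('utf-8')

print(body_json == via_data.body)             # True: identical bodies
print(via_json.headers.get('Content-Type'))   # application/json, set automatically
print(via_data.headers.get('Content-Type'))   # not set when passing a string to data=
```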
1.Submitting a form
1.1.The HTML form
<form method="post" action="processing.php">
First name: <input type="text" name="firstname"><br>
Last name: <input type="text" name="lastname"><br>
<input type="submit" value="Submit">
</form>
1.2.The requests call
import requests
params = {'firstname': 'Ryan', 'lastname': 'Mitchell'}
r = requests.post("http://pythonscraping.com/files/processing.php", data=params)
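To see exactly what requests sends for a dict passed to data=, you can prepare the request locally; the body is URL-encoded the same way a browser encodes a form submission:

```python
import requests

# Prepare the POST locally (no network) and inspect the encoded body.
params = {'firstname': 'Ryan', 'lastname': 'Mitchell'}
prepared = requests.Request('POST', 'http://pythonscraping.com/files/processing.php',
                            data=params).prepare()

print(prepared.body)                     # firstname=Ryan&lastname=Mitchell
print(prepared.headers['Content-Type'])  # application/x-www-form-urlencoded
```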
2.Submitting a file
2.1.The HTML form
<form action="processing2.php" method="post" enctype="multipart/form-data">
Submit a jpg, png, or gif: <input type="file" name="uploadFile"><br>
<input type="submit" value="Upload File">
</form>
2.2.The requests call
import requests
files = {'uploadFile': open('../files/Python-logo.png', 'rb')} # the value is a file object opened in binary mode
r = requests.post("http://pythonscraping.com/pages/processing2.php",
files=files)
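The multipart body that requests builds from files= can also be inspected offline by preparing the request with in-memory bytes instead of a real file (the file content below is fake, and example.com is a placeholder):

```python
import requests

# Prepare the upload locally (no network) to inspect the
# multipart/form-data body built from the files= argument.
files = {'uploadFile': ('Python-logo.png', b'fake png bytes', 'image/png')}
prepared = requests.Request('POST', 'http://example.com/processing2.php',
                            files=files).prepare()

print(prepared.headers['Content-Type'])             # multipart/form-data; boundary=...
print(b'name="uploadFile"' in prepared.body)        # field name matches the form's name attribute
print(b'filename="Python-logo.png"' in prepared.body)
```

In real code, prefer opening the file in a with-block so it is closed even if the request raises.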
3.Handling cookies
3.1.Fetching and passing cookies manually
import requests
params = {'username': 'Ryan', 'password': 'password'}
r = requests.post("http://pythonscraping.com/pages/cookies/welcome.php", params) # the first request; the response sets the cookies
print("Cookie is set to:")
print(r.cookies.get_dict())
r = requests.get("http://pythonscraping.com/pages/cookies/profile.php",
                 cookies=r.cookies) # reuse the cookies obtained from the previous request
print(r.text)
3.2.Keeping cookies with a Session
import requests
session = requests.Session() # create a session: it stores cookies, headers, and other HTTP state across requests, so you don't have to pass cookies by hand each time
params = {'username': 'username', 'password': 'password'}
s = session.post("http://pythonscraping.com/pages/cookies/welcome.php", params)
print("Cookie is set to:")
print(s.cookies.get_dict())
print("Going to profile page...")
s = session.get("http://pythonscraping.com/pages/cookies/profile.php")
print(s.text)
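How the session attaches its stored cookies can be seen offline too: set a cookie on the jar by hand and prepare a request through the session (example.com is a placeholder domain):

```python
import requests

# No network: a Session keeps cookies in session.cookies and attaches
# the matching ones to every request it prepares.
session = requests.Session()
session.cookies.set('loggedin', '1', domain='example.com')

req = requests.Request('GET', 'http://example.com/profile.php')
prepared = session.prepare_request(req)
print(prepared.headers.get('Cookie'))  # loggedin=1
```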
References:
Web Scraping with Python (Ryan Mitchell)