Day 5 of web scraping study
Using requests:
Straight to the code, since I ran into no real problems; it is all very simple.
get:
from fake_useragent import UserAgent
import requests
url="https://www.baidu.com/s"
headers = {
"User-Agent":UserAgent().chrome
}
params= {
"wd":"百度"
}
response = requests.get(url,headers= headers,params=params)
print(response.text)
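As a quick offline check of what `params=` actually does, requests can build the request without sending it; `prepare()` shows the final URL with the query string merged in (the percent-encoded bytes are just the UTF-8 encoding of "百度"):

```python
import requests

# Build the GET request but don't send it, so we can inspect the final URL
req = requests.Request(
    "GET", "https://www.baidu.com/s", params={"wd": "百度"}
).prepare()
print(req.url)  # wd is percent-encoded into the query string
```

This is handy for debugging: you see exactly what URL would go over the wire before making any network call.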
post:
from fake_useragent import UserAgent
import requests
login_url = ""
headers = {
"User-Agent":UserAgent().chrome
}
params= {
"user":"",
"paswword":"."
}
response = requests.get(login_url,headers= headers,data=params)
print(response.text)
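To see how `data=` is form-encoded into the request body, the same `prepare()` trick works; the URL and credentials below are placeholders for illustration only:

```python
import requests

# data= form-encodes the fields into the body; prepare() lets us look without sending
form = requests.Request(
    "POST", "https://example.com/login",            # placeholder URL
    data={"user": "alice", "password": "secret"},   # placeholder credentials
).prepare()
print(form.body)                     # user=alice&password=secret
print(form.headers["Content-Type"])  # application/x-www-form-urlencoded
```

If a site expects a JSON body instead, pass `json=` rather than `data=` and requests will serialize the dict and set the Content-Type accordingly.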
proxy:
from fake_useragent import UserAgent
import requests
url="http://httpbin.org/get"
headers
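When many requests should go through the same proxy, the proxy can be set once on a Session instead of being passed to every call; a minimal sketch, with a placeholder proxy address:

```python
import requests

# A Session reuses settings (and the TCP connection) across requests
session = requests.Session()
# Placeholder address: any local HTTP proxy would work here
session.proxies = {"http": "http://127.0.0.1:8888"}
# Every http:// request made through this session now goes via the proxy
```

httpbin.org/get is useful for checking this: the "origin" field in its JSON response shows which IP the server actually saw.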