import requests

url = 'https://www.baidu.com/s?wd=123'
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'}
rep = requests.get(url, headers=head)
print(rep.text)
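A way to check what this GET request looks like before it goes out is to build a prepared request, which needs no network access. This is an inspection step of my own, not from the original post; the URL and header values are the ones used above.

```python
import requests

url = 'https://www.baidu.com/s?wd=123'
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'}

# prepare() assembles the exact request without sending it
prepared = requests.Request('GET', url, headers=head).prepare()
print(prepared.url)                    # full URL, including the wd=123 query string
print(prepared.headers['User-Agent'])  # the browser identifier that will be sent
```

This is handy while debugging anti-scraping issues: you can confirm the headers are exactly what you intended before sending anything.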
POST request with parameters
postData = {
    'username': 'Angela',
    'password': '123456'
}
response = requests.post(url, data=postData)
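When you pass a dict via `data=`, requests form-encodes it into the request body. A quick offline check with a prepared request shows the encoding (the login URL here is a made-up placeholder; no request is actually sent):

```python
import requests

postData = {
    'username': 'Angela',
    'password': '123456'
}

# prepare() shows the body and headers requests would send for this POST
prepared = requests.Request('POST', 'https://example.com/login', data=postData).prepare()
print(prepared.body)                     # username=Angela&password=123456
print(prepared.headers['Content-Type'])  # application/x-www-form-urlencoded
```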
Adding a User-Agent (browser identifier) to the headers
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'}
rep = requests.get(url, headers=head)
Other headers such as cookies and referer are added the same way; this was also covered in the earlier section on urllib and anti-scraping measures.
How to get cookies with the requests library
url='https://www.baidu.com/'
req=requests.get(url)
req.cookies
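`req.cookies` is a `RequestsCookieJar`. To see how the jar behaves without hitting the network, you can build one locally; the cookie name and value below are made up for illustration:

```python
import requests

# Build a jar by hand to demonstrate the RequestsCookieJar API
jar = requests.cookies.RequestsCookieJar()
jar.set('BDORZ', 'demo', domain='.baidu.com', path='/')  # made-up value

print(dict(jar))        # jars convert cleanly to plain dicts
print(jar.get('BDORZ')) # and support dict-like lookups
```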
# Assumption: get_cookies is a requests.Session(), so cookies persist across requests
get_cookies = requests.Session()
url = 'https://www.baidu.com/'
get_cookies.get(url, headers=head)
# Baidu has now returned cookie information, which is stored in the session
# The session reuses those cookies automatically on the next request
get_cookies.get(url + 's?wd=123', headers=head)
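The cookie reuse described above can be verified offline: a `Session` merges its stored cookies into every outgoing request's headers. In this sketch the cookie name and value are invented and no network traffic occurs; `prepare_request` just shows what the session would send.

```python
import requests

s = requests.Session()
# Pretend Baidu already set a cookie on an earlier request (made-up value)
s.cookies.set('BAIDUID', 'test123', domain='.baidu.com')

# prepare_request merges the session's cookies into the outgoing headers
prepared = s.prepare_request(requests.Request('GET', 'https://www.baidu.com/s?wd=123'))
print(prepared.headers.get('Cookie'))  # the stored cookie rides along automatically
```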
来源: https://www.cnblogs.com/lcyzblog/p/11269341.html