Web crawler:
A script that simulates a browser and automatically scrapes information from web pages
It mainly uses the browser's built-in packet-capture tool (DevTools), plus the requests module, the BeautifulSoup module, and the re module
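As a rough sketch of how these modules cooperate (the URL and the extracted tag here are just placeholders):

import re
import requests
from bs4 import BeautifulSoup

url = 'http://www.baidu.com'  # placeholder target page
response = requests.get(url)
response.encoding = 'utf-8'

# BeautifulSoup parses the HTML tree; here we read the <title> tag
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.title.string if soup.title else 'no title')

# re handles patterns that are awkward for a parser,
# e.g. collecting every http(s) link in the raw HTML
links = re.findall(r'https?://[^\s"\'<>]+', response.text)
print(len(links))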
I. Disguise
1. Why disguising is needed
import requests

url = 'http://www.baidu.com'
# A User-Agent string copied from a real browser
header = {'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Mobile Safari/537.36 Edg/94.0.992.38'}

response = requests.get(url)                   # no disguise
response1 = requests.get(url, headers=header)  # disguised as a browser

print(len(response.content.decode()))
print(len(response1.content.decode()))
The output shows that without the disguise we can only retrieve 2287 characters, while with the disguise we retrieve 295758 characters
2. Request headers (headers)
headers is a dict; in general it is built from two entries, Cookie and User-Agent
import requests

url = 'https://github.com/Khazing'
# Cookie and User-Agent copied from the browser's DevTools while logged in
header={'Cookie':'_octo=GH1.1.1409507418.1634466346; _device_id=657e29e120e5f4c50fd8f575dc1651eb; user_session=0cgOhLVHLt1AQzVFHGVYnBPb0yDinMqmr0PNWSNjZfSAY9ww; __Host-user_session_same_site=0cgOhLVHLt1AQzVFHGVYnBPb0yDinMqmr0PNWSNjZfSAY9ww; logged_in=yes; dotcom_user=Khazing; has_recent_activity=1; color_mode=%7B%22color_mode%22%3A%22auto%22%2C%22light_theme%22%3A%7B%22name%22%3A%22light%22%2C%22color_mode%22%3A%22light%22%7D%2C%22dark_theme%22%3A%7B%22name%22%3A%22dark%22%2C%22color_mode%22%3A%22dark%22%7D%7D; _gh_sess=RjgJAxoXBWueK4CHLzr7hmxOC%2BW0GSlqophwVtis534lsLS%2FN3PZ6eeBcmrcIstWJCxyKFCu51v3muGPySlsP%2BrCPuTi%2Bl%2BfbKVKWSA6UeyXWm3PnLnGo6hQz1GRf1MsZ5fGGb8%2BBRdQmM9NmBzq9dx0Y9PDwjO1j160tc9yrb2euaiP4B%2Bp%2BsuCo7X9MoId--bxhgtt5nehk5Qc%2FH--2qKYAo1TVPp0HgMBpQPuqw%3D%3D'
,'User-Agent':'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Mobile Safari/537.36 Edg/94.0.992.38'}
response1 = requests.get(url, headers=header)
User-Agent
Tells the server what kind of client (e.g. a PC or an Android device) issued the request
cookies
Keep the logged-in state, which makes it possible to crawl information tied to a specific user
Usage: first log in carrying the cookie, then switch to the session approach
session = requests.Session()
# requests made through the session reuse its cookies automatically
response = session.get(url, headers=header)
response = session.post(url, headers=header)
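A minimal sketch of this flow, assuming a hypothetical login endpoint and form field names (example.com, /login, username, and password are all placeholders, not from a real site):

import requests

session = requests.Session()
header = {'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Mobile Safari/537.36 Edg/94.0.992.38'}

# Hypothetical login endpoint and form fields, for illustration only
login_url = 'https://example.com/login'
session.post(login_url, headers=header,
             data={'username': 'me', 'password': 'secret'})

# The session now stores any cookies the server set at login,
# so user-specific pages can be fetched without re-sending a Cookie header
profile = session.get('https://example.com/profile', headers=header)
print(profile.status_code)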
3. The params argument
params is a dict; its entries are the query-string parameters of the request (the ones DevTools lists under the query string), as in the sketch below
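A short sketch of params in use, taking Baidu search and its wd keyword parameter as the example (the search term is a placeholder):

import requests

url = 'https://www.baidu.com/s'
# Reuse the User-Agent string shown earlier in these notes
header = {'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Mobile Safari/537.36 Edg/94.0.992.38'}
# wd is Baidu's search keyword parameter; requests URL-encodes the
# dict and appends it to the URL as ?wd=python
params = {'wd': 'python'}
response = requests.get(url, headers=header, params=params)
print(response.url)  # the final URL with the query string attached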
4. Proxies (proxies)
proxies routes requests through other IP addresses, so the server does not see one IP firing requests nonstop, judge it to be a crawler, and block it
Types of proxies and how to use them:
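requests takes proxies as a dict mapping the URL scheme to a proxy address; http and https proxies work out of the box, while socks5 addresses need the requests[socks] extra installed. A sketch, where the proxy address is a placeholder rather than a working proxy:

import requests

url = 'http://www.baidu.com'
# Placeholder proxy address; substitute a live proxy of your own
proxies = {
    'http': 'http://12.34.56.78:8888',   # used for http:// targets
    'https': 'http://12.34.56.78:8888',  # used for https:// targets
}
response = requests.get(url, proxies=proxies, timeout=5)
print(response.status_code)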