First, a normal scrape:
import requests
import time
from bs4 import BeautifulSoup
url='https://blog.csdn.net/Xiang_lhh/article/details/104940609'
resp=requests.get(url)
bs=BeautifulSoup(resp.text,'lxml')# parse the returned page with BeautifulSoup
result=bs.find('p')
print(result.text)
time.sleep(5)
# at this point the data comes back normally
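Before adding headers, you can make the check for a blocked request explicit. A minimal sketch; `needs_headers` is a hypothetical helper name, not part of the requests library:

```python
def needs_headers(status_code):
    # 418 ("I'm a teapot") and 403 (Forbidden) are the codes sites
    # typically return when they reject the default python-requests client.
    return status_code in (418, 403)

# resp = requests.get(url)              # plain request, no headers yet
# if needs_headers(resp.status_code):   # blocked -> retry with headers
#     resp = requests.get(url, headers=headers)
```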
When scraping some sites, the request comes back with status code 418 ("I'm a teapot"):
the site has detected the scraper and blocked it.
The fix is to pass a headers argument to requests.get() so the request looks like it came from a browser:
import requests
import time
from bs4 import BeautifulSoup
url='https://blog.csdn.net/Xiang_lhh/article/details/104940609'
headers={'User-Agent':'','Referer':''}
resp=requests.get(url,headers=headers)
bs=BeautifulSoup(resp.text,'lxml')# parse the returned page with BeautifulSoup
result=bs.find('p')
print(result.text)
time.sleep(5)
With headers added to the request, the scrape now succeeds.
You can find the header values in your browser's developer tools, under the Network tab.
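If you scrape several pages from the same site, it can be cleaner to set the headers once on a requests.Session, so every request sends them automatically. A sketch; the User-Agent and Referer values below are examples only, copy the real strings from your own browser's Network tab:

```python
import requests

session = requests.Session()
# Example values only -- click any request in the Network tab and copy
# the real strings from "Request Headers".
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/120.0 Safari/537.36',
    'Referer': 'https://blog.csdn.net/',
})
# Every request made through the session now carries these headers:
# resp = session.get('https://blog.csdn.net/Xiang_lhh/article/details/104940609')
```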