IP Addresses
QA:
1. Why do IPs get banned?
- To deter scraping, websites deploy anti-crawling mechanisms: a large volume of similar requests from the same IP address gets that IP blocked, and access is only restored after some time.
2. How do we deal with IP bans?
- There are several common tactics:
  - Modify the request headers to mimic a browser (rather than letting the code hit the site directly)
  - Use proxy IPs and rotate through them
  - Set a time interval between requests
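The three tactics above can be sketched together in one helper. This is a minimal illustration, not a production crawler: the proxy addresses, the User-Agent string, and the delay range are all placeholder assumptions you would substitute with your own.

```python
import random
import time
import requests

# Hypothetical proxy pool -- fill with proxies you have collected yourself.
PROXY_POOL = [
    'http://203.0.113.10:8080',
    'http://203.0.113.11:3128',
]

# A browser-like User-Agent so the request does not advertise itself as a script.
BROWSER_HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

def pick_proxies(pool):
    """Rotate: choose one proxy at random and map it to both schemes."""
    proxy = random.choice(pool)
    return {'http': proxy, 'https': proxy}

def polite_get(url, min_delay=1.0, max_delay=3.0):
    """Combine all three tactics: browser headers, rotated proxy, random pause."""
    time.sleep(random.uniform(min_delay, max_delay))
    return requests.get(url, headers=BROWSER_HEADERS,
                        proxies=pick_proxies(PROXY_POOL), timeout=10)
```

Randomizing both the proxy choice and the delay makes the traffic pattern less uniform, which is the point of all three countermeasures.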
3. How do we obtain proxy IP addresses?
- Scrape them from this site: https://www.xicidaili.com/
  Inspect the page -> hover to locate the element:
  The proxy IPs we want sit inside elements with class="odd". The code below collects them into the proxy_ip_list list.
Fetching the IPs
# Sample code
from bs4 import BeautifulSoup
import requests

# Fetch and decode the page
def open_proxy_url(url):
    user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        r = requests.get(url, headers=headers, timeout=20)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except requests.RequestException:
        print('Cannot access page ' + url)
# First attempt at collecting the IPs: select rows by the "odd" class.
# This misses rows, because the site only tags alternating <tr> elements
# with class="odd".
# def get_proxy_ip(response):
#     proxy_ip_list = []
#     soup = BeautifulSoup(response, 'html.parser')
#     proxy_ips = soup.select('.odd')  # select by class
#     for proxy_ip in proxy_ips:
#         ip = proxy_ip.select('td')[1].text
#         port = proxy_ip.select('td')[2].text
#         protocol = proxy_ip.select('td')[5].text
#         if protocol in ('HTTP', 'HTTPS'):
#             proxy_ip_list.append(f'{protocol}://{ip}:{port}')
#     return proxy_ip_list
# Improved version: grab every <tr> in the table via find_all('tr')
def get_proxy_ip(response):
    proxy_ip_list = []
    soup = BeautifulSoup(response, 'html.parser')
    proxy_ips = soup.find(id='ip_list').find_all('tr')
    for proxy_ip in proxy_ips:
        # Data rows have at least 8 cells; header rows are skipped
        if len(proxy_ip.select('td')) >= 8:
            ip = proxy_ip.select('td')[1].text
            port = proxy_ip.select('td')[2].text
            protocol = proxy_ip.select('td')[5].text
            if protocol in ('HTTP', 'HTTPS', 'http', 'https'):
                proxy_ip_list.append(f'{protocol}://{ip}:{port}')
    return proxy_ip_list
# Main program
if __name__ == '__main__':
    proxy_url = 'https://www.xicidaili.com/'
    text = open_proxy_url(proxy_url)
    proxy_ip_filename = 'proxy_ip.txt'
    if text is not None:
        proxy_ip_list = get_proxy_ip(text)
        print(proxy_ip_list)
        with open(proxy_ip_filename, 'w') as f:
            f.write(text)
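Free proxies harvested this way are often dead or slow, so a natural next step is to test each one before using it. The sketch below is an assumption about how you might do that: `httpbin.org/ip` is just one convenient echo endpoint, and `check_proxy` / `filter_working` are hypothetical helper names, not part of the code above.

```python
import requests

def check_proxy(proxy, test_url='http://httpbin.org/ip', timeout=5):
    """Return True if a request routed through `proxy` succeeds, else False."""
    # get_proxy_ip emits uppercase schemes like 'HTTP://1.2.3.4:8080';
    # normalize to lowercase for requests' proxies mapping.
    proxy = proxy.lower()
    proxies = {'http': proxy, 'https': proxy}
    try:
        r = requests.get(test_url, proxies=proxies, timeout=timeout)
        return r.status_code == 200
    except requests.RequestException:
        return False

def filter_working(proxy_list):
    """Keep only the proxies that currently answer."""
    return [p for p in proxy_list if check_proxy(p)]
```

Usage would be `filter_working(get_proxy_ip(text))`, run periodically, since free proxies go stale within hours.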