When crawling, we regularly run into the problem of our IP being banned. Sites protect themselves with anti-scraping mechanisms: when one IP address sends a large number of similar requests, the site blocks that IP, and access is only restored after some time.
1. How to deal with IP bans
There are several ways to handle this (a small sketch of the first and third follows the list):
- Modify the request headers so the request looks like it comes from a browser rather than straight from a script
- Use proxy IPs and rotate through them
- Space out requests with a time interval
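As a minimal sketch of the first and third ideas (proxies are developed in the rest of this post), the snippet below sends requests with a browser-like User-Agent and pauses between them; the target URL and the 2-second interval are placeholder assumptions, not values from this post:

import time
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.113 Safari/537.36'}

for page in range(1, 4):
    url = f'https://example.com/list?page={page}'  # hypothetical target
    r = requests.get(url, headers=headers, timeout=20)
    print(url, r.status_code)
    time.sleep(2)  # pause between requests so we do not hammer the site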
2. Getting proxy IP addresses
- From proxy-listing websites: pick whichever suits your needs
- Open the browser's inspector and hover over an entry to locate its element:
- The proxy IP addresses we want live inside tags with class = "odd". The code below collects them into the list proxy_ip_list.
from bs4 import BeautifulSoup
import requests
import time

def open_proxy_url(url):
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.113 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        r = requests.get(url, headers=headers, timeout=20)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except requests.RequestException:
        print('Failed to open ' + url)

def get_proxy_ip(response):
    proxy_ip_list = []
    soup = BeautifulSoup(response, 'html.parser')
    proxy_ips = soup.select('.odd')  # select the rows tagged with class="odd"
    for proxy_ip in proxy_ips:
        ip = proxy_ip.select('td')[1].text
        port = proxy_ip.select('td')[2].text
        protocol = proxy_ip.select('td')[5].text
        if protocol in ('HTTP', 'HTTPS'):
            proxy_ip_list.append(f'{protocol}://{ip}:{port}')
    return proxy_ip_list

if __name__ == '__main__':
    proxy_url = 'https://www.xicidaili.com/'
    text = open_proxy_url(proxy_url)
    proxy_ip_filename = 'proxy_ip.txt'
    with open(proxy_ip_filename, 'w') as f:
        f.write(text)
    text = open(proxy_ip_filename, 'r').read()
    proxy_ip_list = get_proxy_ip(text)
    print(proxy_ip_list)
['HTTPS://183.195.106.118:8118', 'HTTPS://223.68.190.130:8181', 'HTTPS://110.189.152.86:52277', 'HTTPS://27.184.157.205:8118', 'HTTP://202.107.233.123:8090', 'HTTP://211.159.219.225:8118', 'HTTPS://115.29.108.117:8118', 'HTTPS://183.250.255.86:63000', 'HTTP://111.222.141.127:8118', 'HTTP://117.94.213.119:8118', 'HTTPS://125.123.139.19:9000', 'HTTP://122.225.45.66:43391', 'HTTP://163.125.113.249:8088', 'HTTP://14.20.235.73:808', 'HTTP://123.163.24.113:3128', 'HTTPS://60.177.170.155:8118', 'HTTPS://125.123.142.64:9000', 'HTTP://118.24.1.252:1080', 'HTTP://58.249.55.222:9797', 'HTTP://211.147.226.4:8118', 'HTTPS://125.123.139.19:9000', 'HTTPS://183.195.106.118:8118', 'HTTPS://223.68.190.130:8181', 'HTTPS://27.184.157.205:8118', 'HTTPS://119.84.112.137:80', 'HTTPS://115.29.108.117:8118', 'HTTPS://125.123.16.197:9000', 'HTTPS://218.76.253.201:61408', 'HTTPS://218.203.132.117:808', 'HTTPS://221.193.94.18:8118', 'HTTP://122.225.45.66:43391', 'HTTP://121.237.149.218:3000', 'HTTP://115.219.168.69:8118', 'HTTP://202.107.233.123:8090', 'HTTP://222.190.125.5:8118', 'HTTP://101.132.190.101:80', 'HTTP://119.129.236.70:3128', 'HTTP://120.198.76.45:41443', 'HTTP://60.191.11.246:3128', 'HTTP://58.249.55.222:9797']
That gives us the data shown above. However, many entries are missing: inspecting the elements more closely, some rows do not have class = "odd" but …, and those rows were never captured.
Rows with class = "odd" are the odd-numbered rows, while the rows without it are the even-numbered ones.
Use bs4's find_all('tr') to grab every row instead:
def get_proxy_ip(response):
    proxy_ip_list = []
    soup = BeautifulSoup(response, 'html.parser')
    proxy_ips = soup.find(id='ip_list').find_all('tr')
    for proxy_ip in proxy_ips:
        if len(proxy_ip.select('td')) >= 8:
            ip = proxy_ip.select('td')[1].text
            port = proxy_ip.select('td')[2].text
            protocol = proxy_ip.select('td')[5].text
            if protocol in ('HTTP', 'HTTPS', 'http', 'https'):
                proxy_ip_list.append(f'{protocol}://{ip}:{port}')
    return proxy_ip_list
2.1 Using a proxy
- proxies takes the form of a dictionary:
- {'http': 'http://IP:port', 'https': 'https://IP:port'}
- Pass it straight into requests' get method:
- web_data = requests.get(url, headers=headers, proxies=proxies)
def open_url_using_proxy(url, proxy):
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.113 Safari/537.36'
    headers = {'User-Agent': user_agent}
    proxies = {}
    # register the proxy under the key matching its protocol
    if proxy.startswith(('HTTPS', 'https')):
        proxies['https'] = proxy
    else:
        proxies['http'] = proxy
    try:
        r = requests.get(url, headers=headers, proxies=proxies, timeout=10)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return (r.text, r.status_code)
    except requests.RequestException:
        print('Failed to open ' + url)
        print('Invalid proxy: ' + proxy)
        return False
2.2 Verifying that a proxy IP works
- Whether the proxy site is free or paid, the proxy IPs it lists are not guaranteed to work, so we should verify each one before adding it to our proxy pool. There are a few ways to do this:
- Visit a website through the proxy and check that the returned status code is 200
- Actually fetch a known page, extract something like its title, and check that the title matches what we expect
- Visit a site that reports the visitor's IP (a "what is my IP" style service) and see which IP address comes back
First, verify the return code:
def check_proxy_avaliability(proxy):
    url = 'http://www.baidu.com'
    result = open_url_using_proxy(url, proxy)
    if result:
        text, status_code = result
        if status_code == 200:
            print('Valid proxy: ' + proxy)
        else:
            print('Invalid proxy: ' + proxy)
2.3 Improvement: verify the page title
import re

def check_proxy_avaliability(proxy):
    url = 'http://www.baidu.com'
    result = open_url_using_proxy(url, proxy)
    VALID = False
    if result:
        text, status_code = result
        if status_code == 200:
            # extract the <title> tag and compare it with the expected Baidu title
            r_title = re.findall('<title>.*</title>', text)
            if r_title and r_title[0] == '<title>百度一下,你就知道</title>':
                VALID = True
    if VALID:
        print('Valid proxy: ' + proxy)
    else:
        print('Invalid proxy: ' + proxy)
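As a quick usage sketch (assuming open_proxy_url, get_proxy_ip, and check_proxy_avaliability from above live in the same script), the whole harvested list can be checked in one loop:

if __name__ == '__main__':
    proxy_url = 'https://www.xicidaili.com/'
    text = open_proxy_url(proxy_url)
    proxy_ip_list = get_proxy_ip(text)
    # try each candidate proxy and print whether it is usable
    for proxy in proxy_ip_list:
        check_proxy_avaliability(proxy)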
3. About HTTP and HTTPS proxies
- As seen above, proxies can hold two key-value pairs:
- {'http': 'http://IP:port', 'https': 'https://IP:port'}
- An HTTP proxy only proxies requests to HTTP sites and does nothing for HTTPS sites; in that case your own machine's IP is used. The reverse also holds.
- The verification site I used just now, https://jsonip.com, is an HTTPS site, so among the proxies detected as valid, an HTTPS proxy makes the site report the proxy's address,
- while with an HTTP proxy the request goes out from my own machine, and the site reports my public IP address.
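Here is a minimal sketch of that third check (the "what is my IP" approach), reusing open_url_using_proxy and the https://jsonip.com service mentioned above; the 'ip' field name is an assumption about jsonip.com's JSON response, and comparing the result against your own public IP is left to you:

import json

def check_proxy_ip_address(proxy):
    # Ask an IP-echo service which address it sees. If the proxy is really in
    # use, the reported IP should be the proxy's address rather than our own.
    url = 'https://jsonip.com'
    result = open_url_using_proxy(url, proxy)
    if result:
        text, status_code = result
        if status_code == 200:
            reported_ip = json.loads(text).get('ip')  # assumed field name in jsonip.com's response
            print('Proxy ' + proxy + ' reports IP: ' + str(reported_ip))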