Crawler Basics: IP Proxies (Task 03)

Why do IPs get blocked?

To prevent being scraped, websites run anti-crawler mechanisms: when they see a large volume of similar requests coming from the same IP address, they block that IP, and access is only restored after some time.

How to deal with IP blocking

There are a few common tactics (a combined sketch follows this list):

1. Modify the request headers so requests mimic a browser (instead of code hitting the site directly).
2. Use proxy IPs and rotate through them.
3. Set a time interval between requests.
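
As a minimal sketch of these tactics combined, the snippet below picks a proxy at random from a pool and sleeps between requests. The proxy addresses and delay range are placeholders, not values from this article:

import random
import time

import requests

# Hypothetical proxy pool; fill in addresses you have verified yourself.
PROXY_POOL = ['http://1.2.3.4:8080', 'http://5.6.7.8:3128']

def fetch_with_rotation(url):
    headers = {'User-Agent': 'Mozilla/5.0'}   # tactic 1: look like a browser
    proxy = random.choice(PROXY_POOL)         # tactic 2: rotate proxies
    proxies = {'http': proxy, 'https': proxy}
    time.sleep(random.uniform(1, 3))          # tactic 3: space out requests
    return requests.get(url, headers=headers, proxies=proxies, timeout=10)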

How to obtain proxy IP addresses

Get them from this site:

https://www.xicidaili.com/

Open the browser inspector (Inspect) and hover over an entry with the element picker: the proxy IP addresses we want are inside elements with class = "odd". The code below extracts them and stores them in the proxy_ip_list list.

# Example code
from bs4 import BeautifulSoup
import requests
import time

def open_proxy_url(url):
    user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        r = requests.get(url, headers=headers, timeout=20)
        r.raise_for_status()
        r.encoding = r.apparent_encoding       # guess the right text encoding
        return r.text
    except requests.RequestException:          # narrower than a bare except
        print('Cannot access page ' + url)


def get_proxy_ip(response):
    proxy_ip_list = []
    soup = BeautifulSoup(response, 'html.parser')
    proxy_ips = soup.select('.odd')  # select rows with class "odd"
    for proxy_ip in proxy_ips:
        ip = proxy_ip.select('td')[1].text
        port = proxy_ip.select('td')[2].text
        protocol = proxy_ip.select('td')[5].text
        if protocol in ('HTTP', 'HTTPS'):
            proxy_ip_list.append(f'{protocol}://{ip}:{port}')
    return proxy_ip_list

if __name__ == '__main__':
    proxy_url = 'https://www.xicidaili.com/'
    text = open_proxy_url(proxy_url)
    proxy_ip_filename = 'proxy_ip.txt'
    with open(proxy_ip_filename, 'w') as f:    # cache the page locally
        f.write(text)
    with open(proxy_ip_filename, 'r') as f:
        text = f.read()
    proxy_ip_list = get_proxy_ip(text)
    print(proxy_ip_list)

The output looks like this:

['HTTP://222.95.144.128:3000', 'HTTP://222.85.28.130:40505', 'HTTP://121.237.148.248:3000', 'HTTPS://121.237.149.109:3000', 'HTTP://222.95.144.195:3000', 'HTTP://121.237.149.88:3000', 'HTTP://121.237.148.192:3000', 'HTTPS://121.237.148.183:3000', 'HTTP://121.237.149.56:3000', 'HTTP://121.237.149.163:3000', 'HTTP://122.228.19.8:3389', 'HTTP://116.196.87.86:20183', 'HTTP://112.95.205.137:8888', 'HTTP://61.153.251.150:22222', 'HTTPS://113.140.1.82:53281', 'HTTP://218.27.136.169:8085', 'HTTP://211.147.226.4:8118', 'HTTP://118.114.166.99:8118', 'HTTPS://223.245.35.187:65309', 'HTTPS://163.125.67.66:9797', 'HTTPS://222.95.144.129:3000', 'HTTPS://222.95.144.119:3000', 'HTTPS://222.95.144.200:3000', 'HTTPS://121.237.149.26:3000', 'HTTPS://117.88.176.158:3000', 'HTTPS://122.228.19.9:3389', 'HTTPS://117.88.177.124:3000', 'HTTPS://119.41.206.207:53281', 'HTTPS://121.237.148.59:3000', 'HTTPS://119.254.94.93:46323', 'HTTP://222.95.144.128:3000', 'HTTP://122.228.19.8:3389', 'HTTP://59.38.62.148:9797', 'HTTP://121.237.148.115:3000', 'HTTP://121.237.149.16:3000', 'HTTP://112.247.169.33:8060', 'HTTP://116.196.87.86:20183', 'HTTP://117.88.177.34:3000', 'HTTP://121.237.149.160:3000', 'HTTP://121.237.149.73:3000']

After collecting the proxy IPs this way, we find a lot of data is missing. Inspecting the elements more closely, some rows are not class = "odd" but … instead, and those rows were never captured: rows with class = "odd" are the odd-numbered results, and rows without it are the even-numbered ones.

Use bs4's find_all('tr') to fetch every IP instead:

def get_proxy_ip(response):
    proxy_ip_list = []
    soup = BeautifulSoup(response, 'html.parser')
    proxy_ips = soup.find(id='ip_list').find_all('tr')
    for proxy_ip in proxy_ips:
        if len(proxy_ip.select('td')) >= 8:  # skip the header and malformed rows
            ip = proxy_ip.select('td')[1].text
            port = proxy_ip.select('td')[2].text
            protocol = proxy_ip.select('td')[5].text
            if protocol in ('HTTP', 'HTTPS', 'http', 'https'):
                proxy_ip_list.append(f'{protocol}://{ip}:{port}')
    return proxy_ip_list

Using a proxy

proxies is a dict of the form {'http': 'http://IP:port', 'https': 'https://IP:port'}. Pass it straight into requests' get method:

web_data = requests.get(url, headers=headers, proxies=proxies)

def open_url_using_proxy(url, proxy):
    user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
    headers = {'User-Agent': user_agent}
    proxies = {}
    if proxy.startswith('HTTPS'):
        proxies['https'] = proxy
    else:
        proxies['http'] = proxy
    try:
        r = requests.get(url, headers=headers, proxies=proxies, timeout=10)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return (r.text, r.status_code)
    except requests.RequestException:
        print('Cannot access page ' + url)
        return False

url = 'http://www.baidu.com'
text = open_url_using_proxy(url, proxy_ip_list[0])

Checking that a proxy IP is valid:

Whether a proxy site is free or paid, the proxies it hands out are not guaranteed to work, so we should verify each one before adding it to our proxy pool. A few ways to do that:

1. Request a site and check that the returned status code is 200.
2. Actually fetch a specific page and verify that its title matches what we expect.
3. Query a "what is my IP" style service and check which IP address it reports.

Checking the status code first:

def check_proxy_availability(proxy):
    url = 'http://www.baidu.com'
    result = open_url_using_proxy(url, proxy)
    if result:
        text, status_code = result
        if status_code == 200:
            print('Valid proxy IP: ' + proxy)
        else:
            print('Invalid proxy IP: ' + proxy)
    else:
        print('Invalid proxy IP: ' + proxy)

Improvement: verify the page title

def check_proxy_availability(proxy):
    url = 'http://www.baidu.com'
    result = open_url_using_proxy(url, proxy)
    VALID = False
    if result:
        text, status_code = result
        if status_code == 200:
            r_title = re.findall('<title>.*</title>', text)  # needs `import re`
            if r_title and r_title[0] == '<title>百度一下,你就知道</title>':
                VALID = True
    if VALID:
        print('Valid proxy IP: ' + proxy)
    else:
        print('Invalid proxy IP: ' + proxy)

Complete code

from bs4 import BeautifulSoup
import requests
import re
import json


def open_proxy_url(url):
    user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        r = requests.get(url, headers=headers, timeout=10)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except requests.RequestException:
        print('Cannot access page ' + url)


def get_proxy_ip(response):
    proxy_ip_list = []
    soup = BeautifulSoup(response, 'html.parser')
    proxy_ips = soup.find(id='ip_list').find_all('tr')
    for proxy_ip in proxy_ips:
        if len(proxy_ip.select('td')) >= 8:  # skip the header and malformed rows
            ip = proxy_ip.select('td')[1].text
            port = proxy_ip.select('td')[2].text
            protocol = proxy_ip.select('td')[5].text
            if protocol in ('HTTP', 'HTTPS', 'http', 'https'):
                proxy_ip_list.append(f'{protocol}://{ip}:{port}')
    return proxy_ip_list


def open_url_using_proxy(url, proxy):
    user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
    headers = {'User-Agent': user_agent}
    proxies = {}
    if proxy.startswith(('HTTPS', 'https')):
        proxies['https'] = proxy
    else:
        proxies['http'] = proxy

    try:
        r = requests.get(url, headers=headers, proxies=proxies, timeout=10)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return (r.text, r.status_code)
    except requests.RequestException:
        print('Cannot access page ' + url)
        print('Invalid proxy IP: ' + proxy)
        return False


def check_proxy_availability(proxy):
    url = 'http://www.baidu.com'
    result = open_url_using_proxy(url, proxy)
    VALID_PROXY = False
    if result:
        text, status_code = result
        if status_code == 200:
            r_title = re.findall('<title>.*</title>', text)
            if r_title and r_title[0] == '<title>百度一下,你就知道</title>':
                VALID_PROXY = True
        if VALID_PROXY:
            check_ip_url = 'https://jsonip.com/'
            result = open_url_using_proxy(check_ip_url, proxy)
            if not result:           # open_url_using_proxy returns False on failure
                return
            text, status_code = result

            print('Valid proxy IP: ' + proxy)
            with open('valid_proxy_ip.txt', 'a') as f:
                f.write(proxy + '\n')            # writelines() would not add newlines
            try:
                source_ip = json.loads(text).get('ip')
                print(f'Source IP address: {source_ip}')
                print('=' * 40)
            except (json.JSONDecodeError, AttributeError):
                print('Response is not JSON and cannot be parsed')
                print(text)
    else:
        print('Invalid proxy IP: ' + proxy)


if __name__ == '__main__':
    proxy_url = 'https://www.xicidaili.com/'
    proxy_ip_filename = 'proxy_ip.txt'
    text = open_proxy_url(proxy_url)
    with open(proxy_ip_filename, 'w') as f:    # cache the page locally
        f.write(text)
    with open(proxy_ip_filename, 'r') as f:
        text = f.read()
    proxy_ip_list = get_proxy_ip(text)
    for proxy in proxy_ip_list:
        check_proxy_availability(proxy)

About HTTP and HTTPS proxies

As shown above, proxies can hold two key-value pairs:
{'http': 'http://IP:port', 'https': 'https://IP:port'}
An HTTP proxy only proxies HTTP sites; it does nothing for HTTPS sites, which means those requests go out from your own IP, and vice versa.
The verification site I used, https://jsonip.com, is an HTTPS site. So among the proxies that tested as valid, an HTTPS proxy makes the site report the proxy's address, while with an HTTP proxy the request is made from my own machine and the site reports my public IP address.
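
To see this scheme-based selection directly, here is a minimal sketch (the proxy address is a placeholder, not a working proxy): requests picks the proxies entry whose key matches the URL's scheme, so an HTTPS URL with an http-only dict is fetched without any proxy at all.

import requests

proxies = {'http': 'http://1.2.3.4:8080'}  # hypothetical HTTP-only proxy

# The URL's scheme is https, but the dict has no 'https' entry, so requests
# connects directly and jsonip.com reports this machine's public IP.
r = requests.get('https://jsonip.com/', proxies=proxies, timeout=10)
print(r.text)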
