Scraping an IP proxy pool and verifying availability
As a Python crawling beginner, I got my IP banned while scraping Weibo. Looking into it led me to IP proxies, so I wrote a simple program that scrapes usable IPs from a free proxy site, using regular expressions with the requests library, plus multithreading for speed. I'm recording it here both for myself and in the hope of helping other beginners.
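Before the full program, here is the core mechanic in isolation: requests routes a request through a proxy when you pass a proxies dict mapping scheme to proxy address, and https://api.ipify.org echoes back the IP it sees. A minimal sketch (the proxy address below is a placeholder, not a working proxy):

import requests

proxies = {"https": "1.2.3.4:8080"}  # placeholder "ip:port", not a real proxy
resp = requests.get("https://api.ipify.org?format=json", proxies=proxies, timeout=10)
print(resp.text)  # if the proxy works, this echoes the proxy's IP rather than yours

The full program below builds on this: it scrapes candidate proxies with regular expressions and runs this check on each one in its own thread.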
import threading
import requests
import re

baseurl = "https://www.xicidaili.com/wn/"  # xicidaili HTTPS proxy list; append a page number
testurl = "https://api.ipify.org?format=json"  # echoes the caller's IP, used to test each proxy
path = 'F:\\image\\1.txt'  # where working proxies are saved
headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"
}
Https = []  # working https proxies collected across threads
def test(https):  # try one "ip:port" candidate against the echo service
    try:
        # testurl is https, so only the 'https' entry of the proxies dict is consulted
        proxy = {'https': https}
        response = requests.get(testurl, headers=headers, proxies=proxy, timeout=10)
        if response.status_code == 200:
            print(response.text)  # should print the proxy's IP, not ours
            Https.append(https)
    except Exception as e:
        print(e)
def wfile(datas):  # append the collected proxies to the output file
    with open(path, 'a') as file:  # the with-block closes the file automatically
        for data in datas:
            file.write(data + "\n")
def download(url, page):  # scrape one listing page and test every ip:port on it
    try:
        response = requests.get(url + str(page), headers=headers)
        if response.status_code == 200:
            datas = response.text
            ips = re.findall(r'<td>(\d+\.\d+\.\d+\.\d+)</td>', datas)
            ports = re.findall(r'<td>(\d{1,5})</td>', datas)  # ports are at most 5 digits
            for ip, port in zip(ips, ports):
                test(ip + ":" + port)
    except Exception as e:
        print(e)
threads = []
try:
    for i in range(1, 15):  # one thread per listing page
        t = threading.Thread(target=download, args=(baseurl, i))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    print("All download threads finished")
except Exception:
    print("Error: unable to start thread")
wfile(Https)
This code scrapes xicidaili; changing baseurl and testurl lets you harvest other kinds of proxies (for example, the site's plain-HTTP list).
Changing the range in the main loop changes how many pages are scraped.
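To actually use the harvested pool, read the saved file back and attach a proxy to each request. A minimal sketch under the same assumptions as above (same path, one "ip:port" per line; the target URL is only an example, not part of the original program):

import random
import requests

path = 'F:\\image\\1.txt'
with open(path) as file:
    pool = [line.strip() for line in file if line.strip()]

proxy = random.choice(pool)  # rotating proxies per request helps avoid bans
response = requests.get("https://httpbin.org/ip", proxies={'https': proxy}, timeout=10)
print(response.status_code, response.text)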