This script scrapes free proxy IPs and saves them to a proxy.txt file. It uses the BeautifulSoup library (bs4) plus the standard-library urllib2 and itertools modules, so it targets Python 2.
The code is as follows:
```python
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import itertools
import urllib2
from itertools import izip

f = open("proxy.txt", "w")

def download(url):
    # Fetch the page and parse it with BeautifulSoup
    html = urllib2.urlopen(url)
    soup = BeautifulSoup(html, "html.parser")
    # The site marks IP and port cells with data-title attributes
    iplist = soup.findAll("td", {"data-title": "IP"})
    portlist = soup.findAll("td", {"data-title": "PORT"})
    # Pair each IP with its port and write one "ip<TAB>port" line
    for ip, port in izip(iplist, portlist):
        f.write(ip.get_text() + "\t" + port.get_text() + "\n")

# itertools.count(1) yields 1, 2, 3, ... so this loop walks the paginated
# listing indefinitely; stop it with Ctrl+C when enough proxies are saved.
for page in itertools.count(1):
    url = 'http://www.kuaidaili.com/free/inha/%d/' % page
    print page
    download(url)
```
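Since urllib2 and itertools.izip exist only in Python 2, the core parsing step can also be sketched in Python 3 with nothing but the standard library. This is a minimal sketch, not the original script: the SAMPLE string below is hypothetical markup imitating the table layout the scraper targets (td cells carrying data-title="IP" / data-title="PORT"), and ProxyParser/extract_proxies are illustrative names.

```python
from html.parser import HTMLParser

# Hypothetical sample mimicking the proxy-list table structure.
SAMPLE = """
<table>
  <tr><td data-title="IP">1.2.3.4</td><td data-title="PORT">8080</td></tr>
  <tr><td data-title="IP">5.6.7.8</td><td data-title="PORT">3128</td></tr>
</table>
"""

class ProxyParser(HTMLParser):
    """Collects the text of <td> cells whose data-title is IP or PORT."""
    def __init__(self):
        super().__init__()
        self.ips, self.ports = [], []
        self._target = None  # list that should receive the next text chunk

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            title = dict(attrs).get("data-title")
            if title == "IP":
                self._target = self.ips
            elif title == "PORT":
                self._target = self.ports

    def handle_data(self, data):
        if self._target is not None and data.strip():
            self._target.append(data.strip())
            self._target = None

def extract_proxies(html):
    """Return (ip, port) pairs found in the given HTML string."""
    parser = ProxyParser()
    parser.feed(html)
    # zip() replaces Python 2's itertools.izip
    return list(zip(parser.ips, parser.ports))

print(extract_proxies(SAMPLE))
# prints [('1.2.3.4', '8080'), ('5.6.7.8', '3128')]
```

For a real fetch you would pass the page body from `urllib.request.urlopen(url).read().decode()` to `extract_proxies` instead of the sample string, and write the pairs out line by line as the original script does.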