Is there a good way to mass-download files using Python? The code below is fast enough for downloading about 100 files, but I need to download 300,000. Obviously they are all very small files (or I wouldn't be downloading 300,000 of them), so the real bottleneck seems to be this loop. Does anyone have any thoughts? Maybe using MPI or threads?

Do I just have to live with the bottleneck? Or is there a faster way, perhaps not even using Python? I've put two rough sketches of what I've been considering after the code.
(For completeness, I've included the full beginning of the code.)

from __future__ import division
import pandas as pd
import numpy as np
import urllib2
import os
import linecache
# we start with a huge file of URLs
data = pd.read_csv("edgar.csv")
datatemp2 = data[data['form'].str.contains("14A")]
datatemp3 = data[data['form'].str.contains("14C")]
# data2 is the cut-down file
data2 = datatemp2.append(datatemp3)
flist = np.array(data2['filename'])
print len(flist)
print flist
###below we have a script to download all of the files in the data2 database
###here you will need to create a new directory named edgar14A14C in your CWD
original = os.getcwd()  # remember where we started
os.chdir(os.path.join(os.getcwd(), 'edgar14A14C'))
for i in xrange(len(flist)):
    url = "ftp://ftp.sec.gov/" + str(flist[i])
    file_name = url.split('/')[-1]
    u = urllib2.urlopen(url)
    f = open(file_name, 'wb')
    f.write(u.read())
    f.close()
    print i
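
Here is a minimal sketch of the threaded approach I've been considering, using multiprocessing.dummy.Pool from the standard library (a thread pool with the multiprocessing API, which should suit I/O-bound work like this). The pool size of 20 and the fetch helper are my own guesses, not something I've tested at 300,000-file scale:

from multiprocessing.dummy import Pool  # threads, not processes

def fetch(filename):
    # download one file; same URL scheme as the loop above
    url = "ftp://ftp.sec.gov/" + str(filename)
    file_name = url.split('/')[-1]
    try:
        u = urllib2.urlopen(url)
        f = open(file_name, 'wb')
        f.write(u.read())
        f.close()
    except urllib2.URLError as e:
        print filename, e  # log the failure and keep going

pool = Pool(20)         # 20 concurrent downloads; the right number would need tuning
pool.map(fetch, flist)  # blocks until every file has been attempted
pool.close()
pool.join()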
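Another idea, since every file comes from the same host: reuse one FTP connection instead of letting urllib2 open a fresh connection for every file. A rough sketch with the standard-library ftplib, assuming the entries in flist are paths relative to the FTP root (which the URL construction above suggests):

from ftplib import FTP

ftp = FTP('ftp.sec.gov')
ftp.login()  # anonymous login
for fname in flist:
    fname = str(fname)
    local_name = fname.split('/')[-1]
    f = open(local_name, 'wb')
    ftp.retrbinary('RETR ' + fname, f.write)  # stream the remote file to disk
    f.close()
ftp.quit()

Would combining the two (a pool of threads, each holding its own persistent FTP connection) be worth the complexity, or is there a better tool for this entirely?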