If memory is a concern, and you know the size of the array ahead of time, you probably don't want to read the entire file in first. Something like this is probably more appropriate:
# allocate memory (np.empty would work too and be marginally faster,
# but probably not worth mentioning).
a=np.zeros((3000,300),dtype=np.float32)
with open(filename) as f:
    for i,line in enumerate(f):
        a[i,:]=map(np.float32,line.split())
After a few quick (and surprising) tests on my machine, it seems the map isn't even needed:
a=np.zeros((3000,300),dtype=np.float32)
with open(filename) as f:
    for i,line in enumerate(f):
        a[i,:]=line.split()
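This works because NumPy converts a list of numeric strings to floats itself when you assign it into a float array. A minimal demonstration of that conversion:

import numpy as np

a = np.zeros(3, dtype=np.float32)
a[:] = '1.5 2.5 3.5'.split()   # the strings are cast to float32 on assignment
# a is now array([1.5, 2.5, 3.5], dtype=float32)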
This may not be the fastest, but it's certainly the most memory-efficient way to do it.
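If you want to reuse this, the same pattern is easy to wrap in a small helper; a rough sketch, where load_rows, the argument names, and the default dtype are just placeholders of my own choosing:

import numpy as np

def load_rows(filename, n_rows, n_cols, dtype=np.float32):
    # Preallocate once, then fill one row per line so only a single
    # line of text is ever held in memory.
    a = np.zeros((n_rows, n_cols), dtype=dtype)
    with open(filename) as f:
        for i, line in enumerate(f):
            a[i, :] = line.split()
    return a

# e.g. a = load_rows('junk.txt', 3000, 300)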
Some tests:
import numpy as np

def func1(): #No map -- And pretty speedy :-).
    a=np.zeros((3000,300),dtype=np.float32)
    with open('junk.txt') as f:
        for i,line in enumerate(f):
            a[i,:]=line.split()

def func2():
    a=np.zeros((3000,300),dtype=np.float32)
    with open('junk.txt') as f:
        for i,line in enumerate(f):
            a[i,:]=map(np.float32,line.split())

def func3():
    a=np.zeros((3000,300),dtype=np.float32)
    with open('junk.txt') as f:
        for i,line in enumerate(f):
            a[i,:]=map(float,line.split())

import timeit
print timeit.timeit('func1()',setup='from __main__ import func1',number=3) #1.36s
print timeit.timeit('func2()',setup='from __main__ import func2',number=3) #11.53s
print timeit.timeit('func3()',setup='from __main__ import func3',number=3) #1.72s
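The timings assume a 3000x300 whitespace-delimited text file named junk.txt; if you want to reproduce them, a suitable file can be generated with something along these lines (the exact contents shouldn't matter much):

import numpy as np
np.savetxt('junk.txt', np.random.random((3000, 300)))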