When you call `pickle.dump`, the data is serialized into a binary format (serialization, not compression). Each call to `pickle.dump` writes one complete pickled object to the file, and each call to `pickle.load` reads exactly one object back. If you call `load` more times than you called `dump`, pickle raises `EOFError: Ran out of input` — it is not that the data can only be read once. I had wondered why pickle is so fast yet so few people seem to use it; this behavior is what tripped me up. Below is my test:

```python
import pickle

datalist = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 1, 'no'], [0, 1, 'no'], [0, 1, 'no']]
datadict = {0: [1, 2, 3, 4], 1: ('a', 'b'), 2: {'c': 'yes', 'd': 'no'}}

with open('data', 'wb') as indata:
    pickle.dump(datalist, indata, pickle.HIGHEST_PROTOCOL)
    # Original code dumped datalist twice; datadict was defined but never written.
    pickle.dump(datadict, indata, pickle.HIGHEST_PROTOCOL)

with open('data', 'rb') as outdata:
    data = pickle.load(outdata)  # first object: datalist
    print(data)
    data = pickle.load(outdata)  # second object: datadict
    print(data)
    data = pickle.load(outdata)  # only two objects were dumped, so this third
    print(data)                  # load raises EOFError: Ran out of input
```
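If you don't know in advance how many `dump` calls produced a file, a common pattern is to keep calling `pickle.load` until it raises `EOFError`. Here is a minimal sketch of that idea; the helper name `load_all` is my own, not part of the pickle API:

```python
import os
import pickle
import tempfile

def load_all(path):
    # Hypothetical helper: yield every pickled object stored
    # back-to-back in the file at `path`.
    with open(path, 'rb') as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:  # "Ran out of input": file is exhausted
                return

# Demo: write two objects, then read them all back without
# counting the dump calls.
path = os.path.join(tempfile.mkdtemp(), 'data')
with open(path, 'wb') as f:
    pickle.dump([1, 1, 'yes'], f, pickle.HIGHEST_PROTOCOL)
    pickle.dump({'c': 'yes'}, f, pickle.HIGHEST_PROTOCOL)

objects = list(load_all(path))
print(objects)  # [[1, 1, 'yes'], {'c': 'yes'}]
```

An alternative is simply to dump one container (a list of all your objects) and load it with a single call, which sidesteps the mismatch entirely.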