I have a fairly big CSV file (15 GB) and I need to read about 1 million random lines from it. As far as I can see - and have implemented - the csv utility in Python only allows iterating sequentially over the file.
It's very memory-consuming to read the whole file into memory in order to pick lines at random, and it's very time-consuming to go through the whole file discarding some values and choosing others. So, is there any way to choose a random line from the CSV file and read only that line?
I tried, without success (a csv.reader is an iterator, so it can't be indexed):
import csv

with open('linear_e_LAN2A_F_0_435keV.csv') as f:
    reader = csv.reader(f)
    print(reader[someRandomInteger])
A sample of the CSV file:
331.093,329.735
251.188,249.994
374.468,373.782
295.643,295.159
83.9058,0
380.709,116.221
352.238,351.891
183.809,182.615
257.277,201.302
61.4598,40.7106
Solution:
import random

filesize = 1500  # size of the really big file, in bytes
offset = random.randrange(filesize)

f = open('really_big_file')
f.seek(offset)              # go to a random position
f.readline()                # discard - bound to be a partial line
random_line = f.readline()  # bingo!

# extra logic to handle the last/first line edge cases
if len(random_line) == 0:   # we have hit the end
    f.seek(0)
    random_line = f.readline()  # so grab the first line instead
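Wrapped in a function, the same trick can be repeated to collect many samples without ever loading the file. A minimal runnable sketch (the small temporary file and its rows are just a stand-in for the real 15 GB CSV):

```python
import os
import random
import tempfile

def random_line(f, filesize):
    """Seek to a random byte offset and return the next complete line."""
    offset = random.randrange(filesize)
    f.seek(offset)
    f.readline()                # discard - bound to be a partial line
    line = f.readline()
    if len(line) == 0:          # we hit the end of the file
        f.seek(0)
        line = f.readline()     # so grab the first line instead
    return line

# demo on a small temporary file standing in for the big CSV
rows = ['331.093,329.735\n', '251.188,249.994\n', '374.468,373.782\n']
with tempfile.NamedTemporaryFile('w', suffix='.csv', delete=False) as tmp:
    tmp.writelines(rows)
    path = tmp.name

filesize = os.path.getsize(path)
with open(path) as f:
    samples = [random_line(f, filesize) for _ in range(5)]
print(samples)
os.remove(path)
```

Each call is O(1) in file size (one seek plus reading at most two lines), so drawing a million samples stays cheap.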
As @AndreBoos pointed out, this approach leads to biased selection: a line preceded by a long line is more likely to be picked than one preceded by a short line. If you know the minimum and maximum line lengths, you can remove this bias by doing the following:
Let's assume (in this case) we have min = 3 and max = 15.
1) Find the length (Lp) of the previous line.
If Lp = 3, the line is the most biased against, so we should keep it 100% of the time.
If Lp = 15, the line is the most biased towards; we should keep it only 20% of the time, since it is 5× as likely to be selected.
We accomplish this by randomly keeping the line X% of the time, where:
X = min / Lp
If we don't keep the line, we make another random pick until our dice roll comes good. :-)
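Here is one runnable sketch of that rejection-sampling idea. Note a small variation from the recipe above: instead of discarding a partial line and taking the *next* one, it scans back to the start of the line *containing* the random byte. That way a line's selection probability is proportional to its own length, so the acceptance test can use `min_len / len(line)` directly (the demo file, its rows, and `min_len = 10` are illustrative assumptions):

```python
import os
import random
import tempfile

def unbiased_random_line(f, filesize, min_len):
    """Return a uniformly random line via rejection sampling (sketch).

    We return the line containing a uniformly random byte, so a line's
    chance of being hit is proportional to its own length; accepting it
    with probability min_len / len(line) cancels that bias.  `f` must be
    opened in binary mode, and min_len must be a true lower bound on the
    line length in bytes, newline included.
    """
    while True:
        offset = random.randrange(filesize)
        # scan backwards to the start of the line containing `offset`
        pos = offset
        while pos > 0:
            f.seek(pos - 1)
            if f.read(1) == b'\n':
                break
            pos -= 1
        f.seek(pos)
        line = f.readline()
        # keep the line with probability min_len / len(line)
        if random.random() < min_len / len(line):
            return line

# demo on a small temporary file with lines of unequal length
lines = [b'331.093,329.735\n', b'83.9058,0\n', b'380.709,116.221\n']
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.writelines(lines)
    path = tmp.name

with open(path, 'rb') as f:
    size = os.path.getsize(path)
    picks = [unbiased_random_line(f, size, min_len=10) for _ in range(20)]
print(set(picks))
os.remove(path)
```

The byte-by-byte backward scan is deliberately simple; for real use you would read a fixed-size chunk before the offset and search it for the last newline instead.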