Is there a pythonic way to figure out which rows in a CSV file contain headers and values and which rows contain trash and then get the headers/values rows into data frames?
I'm relatively new to Python and have been using it to read multiple CSVs exported from a scientific instrument's datalog. For other CSV tasks I've always defaulted to the pandas library, but these exports can vary depending on the number of "tests" logged on each instrument.
The column headers and data structure are the same between instruments, but there is a "preamble" separating each test that can change. So I end up with backups that look something like this (this example has two tests, but there could potentially be any number of tests):
blah blah here's a test and
here's some information
you don't care about
even a little bit
header1, header2, header3
1, 2, 3
4, 5, 6
oh you have another test
here's some more garbage
that's different than the last one
this should make
life interesting
header1, header2, header3
7, 8, 9
10, 11, 12
13, 14, 15
If it were a fixed-length preamble each time I'd just use the skiprows parameter, but the preamble is of variable length and the number of rows in each test also varies.
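For reference, if there were only a single test behind a fixed four-line preamble like the first one above, something like this would be enough (the 4 is just the preamble length from the example):

import pandas as pd

# skip the four junk lines, then treat the next line as the header
df = pd.read_csv('my_file.csv', skiprows=4, header=0)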
My end goal is to be able to merge all the tests and end up with something like:
header1, header2, header3
1, 2, 3
4, 5, 6
7, 8, 9
10, 11, 12
13, 14, 15
Which I can then manipulate with pandas as usual.
I've tried the following to find the first row with my expected headers:
import csv
import pandas as pd
with open('my_file.csv', 'r', newline='') as input_file:
    for row_num, row in enumerate(csv.reader(input_file, delimiter=',')):
        # The csv module returns an empty list [] for blank lines,
        # so check len(row) > 0 to avoid errors when searching
        # for the header string below
        if len(row) > 0:
            # There's probably a better way to find it, but I just convert
            # the list to a string and search for the expected header
            if "['header1', 'header2', 'header3']" in str(row):
                header_row = row_num

df = pd.read_csv('my_file.csv', skiprows=header_row, header=0)
print(df)
This works if I only have one test, because then there is only one header row to find, but of course the header_row variable gets updated each additional time a header is found, so with the example above I end up with this output:
header1 header2 header3
0 7 8 9
1 10 11 12
2 13 14 15
I'm getting lost figuring out how to append each header/dataset block to a dataframe before moving on to search for the next one. It's also probably not very efficient, when dealing with a large number of files, to open each file once with the csv module and then again with pandas.
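The furthest I've gotten is a sketch along these lines (untested beyond the toy file above; the is_data helper is just my shorthand for "every field parses as a number"), which finds every header row, counts the data rows that follow it, and then lets pandas read each block before concatenating them:

import csv
import pandas as pd

EXPECTED = ['header1', 'header2', 'header3']

def is_data(row):
    # a data row is non-empty and every field parses as a number
    if not row:
        return False
    try:
        [float(x) for x in row]
        return True
    except ValueError:
        return False

# first pass: find each header row and count the data rows after it
blocks = []   # list of (header_row_index, number_of_data_rows)
with open('my_file.csv', 'r', newline='') as f:
    rows = list(csv.reader(f, skipinitialspace=True))
for i, row in enumerate(rows):
    if row == EXPECTED:
        n = 0
        while i + 1 + n < len(rows) and is_data(rows[i + 1 + n]):
            n += 1
        blocks.append((i, n))

# second pass: let pandas read each block, then stitch them together
frames = [pd.read_csv('my_file.csv', skiprows=start, nrows=n, header=0)
          for start, n in blocks]
df = pd.concat(frames, ignore_index=True)

But that reads each file twice and feels clunky, so I'm hoping there's a cleaner way.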
Solution
This program might help. It is essentially a generator wrapped around the csv.reader() object that filters the good data out of the file and drops the rest.
import pandas as pd
import csv
import io

def ignore_comments(fp, start_fn, end_fn, keep_initial):
    # Small state machine over the rows coming out of csv.reader:
    #   'start' - before the first header; drop everything until start_fn fires
    #   'keep'  - inside a data block; yield rows until end_fn fires
    #   'drop'  - inside a preamble; drop rows until start_fn fires again
    #             (the repeated header itself is not re-emitted)
    state = 'keep' if keep_initial else 'start'
    for line in fp:
        if state == 'start' and start_fn(line):
            state = 'keep'
            yield line
        elif state == 'keep':
            if end_fn(line):
                state = 'drop'
            else:
                yield line
        elif state == 'drop':
            if start_fn(line):
                state = 'keep'

if __name__ == "__main__":
    with open('x.in') as fp:
        reader = csv.reader(fp, skipinitialspace=True)
        good_rows = ignore_comments(
            reader,
            lambda row: row == ['header1', 'header2', 'header3'],  # start of a block
            lambda row: row == [],                                 # end of a block (blank line)
            False)
        # rebuild the surviving rows as CSV text and hand that to pandas
        buf = io.StringIO('\n'.join(','.join(row) for row in good_rows))
    df = pd.read_csv(buf)
    print(df)
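The start_fn/end_fn pair is what makes this generic: start_fn recognises the header row that begins a block worth keeping, and end_fn recognises where the block ends, here a blank line (which csv.reader returns as an empty list). When a later header row is hit while dropping lines, the generator switches back to keeping data but does not re-emit the duplicate header, which is what lets all the tests stack into one frame. Note that this assumes each test's data is followed by a blank line; if, as in the sample above, the next preamble starts immediately after the data, you would want to swap end_fn for something that fires on the first non-numeric row instead.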