I have a .csv file of contact information that I import as a pandas data frame.
>>> import pandas as pd
>>>
>>> df = pd.read_csv('data.csv')
>>> df.head()
  fName   lName                    email   title
0  John   Smith         jsmith@gmail.com     CEO
1   Joe   Schmo      jschmo@business.com  Bagger
2  Some  Person  some.person@hotmail.com   Clerk
After importing the data, I'd like to drop rows where one field contains one of several substrings in a list. For example:
to_drop = ['Clerk', 'Bagger']
for i in range(len(df)):
    for k in range(len(to_drop)):
        if to_drop[k] in df.title[i]:
            # some code to drop the rows from the data frame
df.to_csv("results.csv")
What is the preferred way to do this in Pandas? Should this even be a post-processing step, or is it preferred to filter this prior to writing to the data frame in the first place? My thought was that this would be easier to manipulate once in a data frame object.
Solution
Use isin and pass your list of terms to search for; you can then negate the boolean mask using ~, and this will filter out those rows:
In [6]:
to_drop = ['Clerk', 'Bagger']
df[~df['title'].isin(to_drop)]
Out[6]:
  fName  lName             email title
0  John  Smith  jsmith@gmail.com   CEO
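Note that isin compares whole values, not substrings, so it works here only because the titles equal the search terms exactly. A quick sketch of the difference (the 'Head Bagger' title is a hypothetical value, not from the question's data):

```python
import pandas as pd

s = pd.Series(['CEO', 'Bagger', 'Head Bagger'])
to_drop = ['Clerk', 'Bagger']

# isin keeps 'Head Bagger' because it is not an exact match for any term;
# str.contains drops it because it contains the substring 'Bagger'.
exact = list(s[~s.isin(to_drop)])                     # ['CEO', 'Head Bagger']
substr = list(s[~s.str.contains('|'.join(to_drop))])  # ['CEO']
```

If your real titles may contain the terms as substrings, prefer the str.contains approach below.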
Another method is to join the terms so they become a regex pattern and use the vectorised str.contains:
In [8]:
df[~df['title'].str.contains('|'.join(to_drop))]
Out[8]:
  fName  lName             email title
0  John  Smith  jsmith@gmail.com   CEO
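One caveat: str.contains treats the joined string as a regular expression, so terms containing regex metacharacters should be escaped first. A minimal sketch (the 'C++' title is a made-up example to show the problem):

```python
import re
import pandas as pd

titles = pd.Series(['CEO', 'C++ Dev', 'Clerk'])
to_drop = ['C++', 'Clerk']

# re.escape turns 'C++' into 'C\+\+' so the '+' is matched literally
# instead of being read as a regex quantifier.
pattern = '|'.join(re.escape(term) for term in to_drop)
kept = list(titles[~titles.str.contains(pattern)])  # ['CEO']
```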
IMO it will be easier and probably faster to perform the filtering as a post-processing step, because if you decide to filter whilst reading then you are iteratively growing the dataframe, which is not efficient.
Alternatively, you can read the csv in chunks, filter out the rows you don't want, and append the kept chunks to your output csv.
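A sketch of the chunked approach, using the filenames from the question (the chunksize of 2 is tiny purely for illustration; something like 100_000 is more typical for large files):

```python
import pandas as pd

# Recreate the question's sample data so the sketch runs end to end.
sample = pd.DataFrame({
    'fName': ['John', 'Joe', 'Some'],
    'lName': ['Smith', 'Schmo', 'Person'],
    'email': ['jsmith@gmail.com', 'jschmo@business.com',
              'some.person@hotmail.com'],
    'title': ['CEO', 'Bagger', 'Clerk'],
})
sample.to_csv('data.csv', index=False)

to_drop = ['Clerk', 'Bagger']
pattern = '|'.join(to_drop)

# Stream the csv in chunks, keep only rows whose title does not match,
# and append each filtered chunk to the output file.
first = True
for chunk in pd.read_csv('data.csv', chunksize=2):
    kept = chunk[~chunk['title'].str.contains(pattern)]
    # Write the header only once, then append without it.
    kept.to_csv('results.csv', mode='w' if first else 'a',
                header=first, index=False)
    first = False
```

This keeps memory bounded by the chunk size rather than the whole file, at the cost of a little bookkeeping for the header.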