I would like to do the following:
If two rows have exactly the same values in three columns ("ID", "symbol", and "date") and each has either "X" or "T" in a fourth column ("message"), then remove both rows. However, if two rows match on those same three columns but have a value other than "X" or "T" in the "message" column, leave them intact.
Here is an example of my data frame:
import pandas as pd

df = pd.DataFrame({"ID": ["AA-1", "AA-1", "C-0", "BB-2", "BB-2"], "symbol": ["A", "A", "C", "B", "B"], "date": ["06/24/2014", "06/24/2014", "06/20/2013", "06/25/2014", "06/25/2015"], "message": ["T", "X", "T", "", ""]})
Note that the first two rows have the same values in the columns "ID", "symbol", and "date", and have "T" and "X" in the column "message". I would like to remove these two rows.
However, the last two rows have matching values in the columns "ID" and "symbol" but a blank (different from "X" or "T") in the column "message", so they should be left intact.
I need to apply this to a large dataset with several million rows, and everything I have tried so far consumes all of my memory.
Thank you, I appreciate any help.
Solution
This might work for you:
vals = ['X', 'T']
mask = df.message.isin(vals)
dup = df.duplicated(subset=['ID', 'date', 'symbol'], keep=False)
pd.concat([df[~mask], df[mask & ~dup]])
     ID        date message symbol
3  BB-2  06/25/2014              B
4  BB-2  06/25/2015              B
2   C-0  06/20/2013       T      C
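The same rows can also be selected with a single boolean mask, which avoids the concat and keeps the original row order. A minimal, self-contained sketch (the names mask and dup are just illustrative):

```python
import pandas as pd

df = pd.DataFrame({"ID": ["AA-1", "AA-1", "C-0", "BB-2", "BB-2"],
                   "symbol": ["A", "A", "C", "B", "B"],
                   "date": ["06/24/2014", "06/24/2014", "06/20/2013",
                            "06/25/2014", "06/25/2015"],
                   "message": ["T", "X", "T", "", ""]})

vals = ["X", "T"]
# Rows whose message is "X" or "T"
mask = df["message"].isin(vals)
# Rows that share ID/date/symbol with at least one other row
dup = df.duplicated(subset=["ID", "date", "symbol"], keep=False)
# Drop only the rows that are both duplicated and flagged "X"/"T"
result = df[~(mask & dup)]
print(result)
```

Both isin and duplicated are vectorized, so this form should also behave well on a frame with millions of rows.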
It's reasonably fast:
%%timeit
pd.concat([df[~df.message.isin(['X', 'T'])], df[df.message.isin(['X', 'T'])].loc[~df.duplicated(subset=['ID', 'date', 'symbol'], keep=False), :]])
100 loops, best of 3: 1.99 ms per loop
%%timeit
df.groupby(['ID', 'date', 'symbol']).filter(lambda x: len(x) < 2 or not x.message.isin(['T', 'X']).all())
100 loops, best of 3: 2.71 ms per loop
The groupby alternative was also giving indexing errors in its original form.
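On a strict reading of the question, a pair should be removed only when every row in the matching group has "X" or "T" in "message"; on the sample data this gives the same result, since the only duplicated group here is entirely "X"/"T". A hedged sketch of that variant, using only grouped/vectorized operations (the names keys, dup, and all_xt are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"ID": ["AA-1", "AA-1", "C-0", "BB-2", "BB-2"],
                   "symbol": ["A", "A", "C", "B", "B"],
                   "date": ["06/24/2014", "06/24/2014", "06/20/2013",
                            "06/25/2014", "06/25/2015"],
                   "message": ["T", "X", "T", "", ""]})

keys = ["ID", "symbol", "date"]
# True for rows belonging to a key combination that occurs more than once
dup = df.duplicated(subset=keys, keep=False)
# True for rows whose whole group consists of "X"/"T" messages
all_xt = (df["message"].isin(["X", "T"])
            .groupby([df[k] for k in keys])
            .transform(lambda s: s.all()))
# Remove a group only when it is duplicated AND entirely "X"/"T"
result = df[~(dup & all_xt)]
print(result)
```

The difference from the accepted approach shows up only when a duplicated group mixes "X"/"T" with other messages: the accepted version removes just the "X"/"T" rows of that group, while this variant keeps the whole group.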