I have a CSV file, which I read like this:
import pandas as pd
data = pd.read_csv('my_data.csv', sep=',')
data.head()
It has output like:
id city department sms category
01 khi revenue NaN 0
02 lhr revenue good 1
03 lhr revenue NaN 0
I want to remove all rows where the sms column is empty/NaN. What is an efficient way to do it?
Solution
Use dropna with the subset parameter to specify which column(s) to check for NaNs:
data = data.dropna(subset=['sms'])
print(data)
id city department sms category
1 2 lhr revenue good 1
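For a self-contained sketch, the DataFrame below rebuilds the sample data from the question, so the dropna call can be run end to end:

```python
import numpy as np
import pandas as pd

# Rebuild the sample frame from the question
data = pd.DataFrame({
    'id': [1, 2, 3],
    'city': ['khi', 'lhr', 'lhr'],
    'department': ['revenue'] * 3,
    'sms': [np.nan, 'good', np.nan],
    'category': [0, 1, 0],
})

# Keep only rows where 'sms' is not NaN
data = data.dropna(subset=['sms'])
print(data)
```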
Another solution uses boolean indexing with notnull:
data = data[data['sms'].notnull()]
print(data)
id city department sms category
1 2 lhr revenue good 1
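notnull is an alias of notna; both return a boolean mask that can then be used to index the frame. A quick sketch of the mask itself:

```python
import numpy as np
import pandas as pd

# The 'sms' column from the sample data
s = pd.Series([np.nan, 'good', np.nan], name='sms')

mask = s.notnull()      # same result as s.notna()
print(mask.tolist())    # [False, True, False]
```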
An alternative with query:
print(data.query("sms == sms"))
id city department sms category
1 2 lhr revenue good 1
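The query trick works because NaN compares unequal to itself, so `sms == sms` is False exactly on the missing rows. A small check:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'sms': [np.nan, 'good', np.nan]})

# NaN != NaN, so this comparison keeps only non-NaN rows
kept = df.query("sms == sms")
print(kept)
```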
Timings
#[300000 rows x 5 columns]
data = pd.concat([data]*100000).reset_index(drop=True)
In [123]: %timeit (data.dropna(subset=['sms']))
100 loops, best of 3: 19.5 ms per loop
In [124]: %timeit (data[data['sms'].notnull()])
100 loops, best of 3: 13.8 ms per loop
In [125]: %timeit (data.query("sms == sms"))
10 loops, best of 3: 23.6 ms per loop
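One caveat on the "empty/NaN" wording: dropna only removes actual NaN values, not empty strings. If some cells contain '' rather than NaN, a sketch that converts them first (assuming empty strings should count as missing):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'sms': ['', 'good', np.nan]})

# Treat empty strings as missing before dropping
df['sms'] = df['sms'].replace('', np.nan)
df = df.dropna(subset=['sms'])
print(df)
```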