Python for Data Analysis, 2nd Edition: Reproduction Notes (Part 6)

Chapter 7: Data Cleaning and Preparation

7.1 Handling Missing Data

pandas uses the floating-point value NaN (Not a Number) to represent missing data.
Python's built-in None value is also treated as NA in object arrays.
The main NA handling methods are:
dropna     filter axis labels based on whether values for each label have missing data
fillna     fill in missing data with some value or an interpolation method such as 'ffill' or 'bfill'
isnull     return boolean values indicating which values are missing/NA
notnull    negation of isnull

import pandas as pd
import numpy as np
string_data = pd.Series(['aardvark', 'artichoke', np.nan, 'avocado'])
string_data
0     aardvark
1    artichoke
2          NaN
3      avocado
dtype: object
string_data.isnull()
0    False
1    False
2     True
3    False
dtype: bool
string_data[0] = None
string_data.isnull()
0     True
1    False
2     True
3    False
dtype: bool
Filtering Out Missing Data
from numpy import nan as NA
data = pd.Series([1, NA, 3.5, NA, 7])
data.dropna()

0    1.0
2    3.5
4    7.0
dtype: float64
# Equivalent to:
data[data.notnull()]
0    1.0
2    3.5
4    7.0
dtype: float64
# For DataFrame objects, dropna by default drops any row containing a missing value.
data = pd.DataFrame([[1., 6.5, 3.], [1., NA, NA],
                     [NA, NA, NA], [NA, 6.5, 3.]])
cleaned = data.dropna()
print(data)
print('\n')
print('\n')
print(cleaned)
     0    1    2
0  1.0  6.5  3.0
1  1.0  NaN  NaN
2  NaN  NaN  NaN
3  NaN  6.5  3.0




     0    1    2
0  1.0  6.5  3.0
# Passing how='all' drops only the rows that are all NA:
print(data.dropna(how='all'))
     0    1    2
0  1.0  6.5  3.0
1  1.0  NaN  NaN
3  NaN  6.5  3.0
# To drop columns in the same way, pass axis=1:
data[4] = NA
print(data)
print('\n')
print(data.dropna(axis=1, how='all'))
     0    1    2   4
0  1.0  6.5  3.0 NaN
1  1.0  NaN  NaN NaN
2  NaN  NaN  NaN NaN
3  NaN  6.5  3.0 NaN


     0    1    2
0  1.0  6.5  3.0
1  1.0  NaN  NaN
2  NaN  NaN  NaN
3  NaN  6.5  3.0
# To keep only rows containing at least a certain number of observations, use the thresh argument:
df = pd.DataFrame(np.random.randn(7, 3))
df.iloc[:4, 1] = NA
df.iloc[:2, 2] = NA
print(df)
print('\n')
print(df.dropna())
print('\n')
print(df.dropna(thresh=2))
          0         1         2
0  0.875651       NaN       NaN
1  0.261047       NaN       NaN
2 -0.677917       NaN  0.360858
3 -1.035602       NaN  0.097730
4  0.127693 -1.006822  0.404568
5 -0.124359  0.444798 -0.028498
6 -0.779734 -0.012785  1.745280


          0         1         2
4  0.127693 -1.006822  0.404568
5 -0.124359  0.444798 -0.028498
6 -0.779734 -0.012785  1.745280


          0         1         2
2 -0.677917       NaN  0.360858
3 -1.035602       NaN  0.097730
4  0.127693 -1.006822  0.404568
5 -0.124359  0.444798 -0.028498
6 -0.779734 -0.012785  1.745280

Filling In Missing Data

Calling fillna with a constant replaces missing values with that value.

print(df.fillna(0))
          0         1         2
0  0.875651  0.000000  0.000000
1  0.261047  0.000000  0.000000
2 -0.677917  0.000000  0.360858
3 -1.035602  0.000000  0.097730
4  0.127693 -1.006822  0.404568
5 -0.124359  0.444798 -0.028498
6 -0.779734 -0.012785  1.745280
# Calling fillna with a dict fills a different value for each column:
print(df.fillna({1: 0.5, 2: 0}))
# fillna returns a new object by default, but the existing object can also be modified in place (see the sketch after this output):
          0         1         2
0  0.875651  0.500000  0.000000
1  0.261047  0.500000  0.000000
2 -0.677917  0.500000  0.360858
3 -1.035602  0.500000  0.097730
4  0.127693 -1.006822  0.404568
5 -0.124359  0.444798 -0.028498
6 -0.779734 -0.012785  1.745280
# The interpolation methods available for reindexing can also be used with fillna:
df = pd.DataFrame(np.random.randn(7, 3))
df.iloc[2:, 1] = NA
df.iloc[4:, 2] = NA
print(df)
print('\n')
print(df.fillna(method='ffill'))
print('\n')
print(df.fillna(method='ffill', limit=2))
          0         1         2
0  0.210105  0.442817 -0.468990
1 -1.027097  0.892226 -0.452749
2  0.662849       NaN -0.476516
3  0.556252       NaN  0.208664
4 -0.021977       NaN       NaN
5 -1.325938       NaN       NaN
6 -0.520716       NaN       NaN


          0         1         2
0  0.210105  0.442817 -0.468990
1 -1.027097  0.892226 -0.452749
2  0.662849  0.892226 -0.476516
3  0.556252  0.892226  0.208664
4 -0.021977  0.892226  0.208664
5 -1.325938  0.892226  0.208664
6 -0.520716  0.892226  0.208664


          0         1         2
0  0.210105  0.442817 -0.468990
1 -1.027097  0.892226 -0.452749
2  0.662849  0.892226 -0.476516
3  0.556252  0.892226  0.208664
4 -0.021977       NaN  0.208664
5 -1.325938       NaN  0.208664
6 -0.520716       NaN       NaN
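fillna is not limited to constants and fill methods; with a little creativity you can pass any computed value. A minimal sketch (standard pandas behavior, supplementing the examples above) that imputes the mean of a Series:
# Fill NA positions with the mean of the observed values:
s = pd.Series([1., NA, 3.5, NA, 7])
s.fillna(s.mean())   # the NA slots become 3.8333...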

7.2 Data Transformation

Removing Duplicates
data = pd.DataFrame({'k1': ['one', 'two'] * 3 + ['two'],
                     'k2': [1, 1, 2, 3, 3, 4, 4]})
print(data)
    k1  k2
0  one   1
1  two   1
2  one   2
3  two   3
4  one   3
5  two   4
6  two   4
# The DataFrame method duplicated returns a boolean Series indicating whether each row is a duplicate (i.e., observed in a previous row):
print(data.duplicated())
# Relatedly, drop_duplicates returns a DataFrame with the rows where duplicated is True dropped:
print('\n')
print(data.drop_duplicates())
0    False
1    False
2    False
3    False
4    False
5    False
6     True
dtype: bool


    k1  k2
0  one   1
1  two   1
2  one   2
3  two   3
4  one   3
5  two   4
# Suppose we add a column of values and want to filter duplicates based only on the 'k1' column:
data['v1'] = range(7)
print(data.drop_duplicates(['k1']))
# duplicated and drop_duplicates by default keep the first observed value combination. Passing keep='last' keeps the last one:
print('\n')
print(data.drop_duplicates(['k1', 'k2'], keep='last'))
    k1  k2  v1
0  one   1   0
1  two   1   1


    k1  k2  v1
0  one   1   0
1  two   1   1
2  one   2   2
3  two   3   3
4  one   3   4
6  two   4   6
Transforming Data Using a Function or Mapping
data = pd.DataFrame({'food': ['bacon', 'pulled pork', 'bacon',
                              'Pastrami', 'corned beef', 'Bacon',
                              'pastrami', 'honey ham', 'nova lox'],
                     'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
print(data)
          food  ounces
0        bacon     4.0
1  pulled pork     3.0
2        bacon    12.0
3     Pastrami     6.0
4  corned beef     7.5
5        Bacon     8.0
6     pastrami     3.0
7    honey ham     5.0
8     nova lox     6.0
# Suppose you want to add a column indicating the type of animal that each food came from. First, write a mapping of each distinct meat type to the kind of animal:
meat_to_animal = {
    'bacon': 'pig',
    'pulled pork': 'pig',
    'pastrami': 'cow',
    'corned beef': 'cow',
    'honey ham': 'pig',
    'nova lox': 'salmon'
}
# Use the Series str.lower method to convert each value to lowercase:
lowercased = data['food'].str.lower()
print(lowercased)
data['animal'] = lowercased.map(meat_to_animal)
print('\n')
print(data)
0          bacon
1    pulled pork
2          bacon
3       pastrami
4    corned beef
5          bacon
6       pastrami
7      honey ham
8       nova lox
Name: food, dtype: object


          food  ounces  animal
0        bacon     4.0     pig
1  pulled pork     3.0     pig
2        bacon    12.0     pig
3     Pastrami     6.0     cow
4  corned beef     7.5     cow
5        Bacon     8.0     pig
6     pastrami     3.0     cow
7    honey ham     5.0     pig
8     nova lox     6.0  salmon
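We could also have passed a function to map that does all the work in one step, instead of lowercasing first:
# Equivalent one-step version using a function:
data['food'].map(lambda x: meat_to_animal[x.lower()])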

Replacing Values

# The -999 values might be sentinel values for missing data; replace
# produces a new Series unless inplace=True is passed:
data = pd.Series([1., -999., 2., -999., -1000., 3.])
print(data)
data.replace(-999, np.nan)
0       1.0
1    -999.0
2       2.0
3    -999.0
4   -1000.0
5       3.0
dtype: float64





0       1.0
1       NaN
2       2.0
3       NaN
4   -1000.0
5       3.0
dtype: float64
# Replace multiple values at once:
print(data.replace([-999, -1000], np.nan))       # several values -> one replacement
print('\n')
print(data.replace([-999, -1000], [np.nan, 0]))  # a different value per target, as a list
print('\n')
print(data.replace({-999: np.nan, -1000: 0}))    # or as a dict
0    1.0
1    NaN
2    2.0
3    NaN
4    NaN
5    3.0
dtype: float64


0    1.0
1    NaN
2    2.0
3    NaN
4    0.0
5    3.0
dtype: float64


0    1.0
1    NaN
2    2.0
3    NaN
4    0.0
5    3.0
dtype: float64
Renaming Axis Indexes
data = pd.DataFrame(np.arange(12).reshape((3, 4)),
                    index=['Ohio', 'Colorado', 'New York'],
                    columns=['one', 'two', 'three', 'four'])


# Like a Series, an axis index has a map method:
transform = lambda x: x[:4].upper()
data.index.map(transform)
Index(['OHIO', 'COLO', 'NEW '], dtype='object')
# Assigning to index modifies the DataFrame in place:
data.index = data.index.map(transform)
print(data)
      one  two  three  four
OHIO    0    1      2     3
COLO    4    5      6     7
NEW     8    9     10    11
# To create a transformed version of the dataset without modifying the original, use rename:
print(data.rename(index=str.title, columns=str.upper))
      ONE  TWO  THREE  FOUR
Ohio    0    1      2     3
Colo    4    5      6     7
New     8    9     10    11
# rename can be used with a dict-like object to update a subset of the axis labels:
data.rename(index={'OHIO': 'INDIANA'},
            columns={'three': 'peekaboo'})
         one  two  peekaboo  four
INDIANA    0    1         2     3
COLO       4    5         6     7
NEW        8    9        10    11
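Like fillna, rename returns a new object by default; to modify the dataset in place, pass inplace=True:
data.rename(index={'OHIO': 'INDIANA'}, inplace=True)
print(data)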

Discretization and Binning

ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
# Next, divide these ages into bins of 18 to 25, 26 to 35, 36 to 60, and
# 61 and older. To do this, use pandas's cut function:
bins = [18, 25, 35, 60, 100]
cats = pd.cut(ages, bins)
print(cats)
[(18, 25], (18, 25], (18, 25], (25, 35], (18, 25], ..., (25, 35], (60, 100], (35, 60], (35, 60], (25, 35]]
Length: 12
Categories (4, interval[int64]): [(18, 25] < (25, 35] < (35, 60] < (60, 100]]
cats.codes
array([0, 0, 0, 1, 0, 0, 2, 1, 3, 2, 2, 1], dtype=int8)
 cats.categories
IntervalIndex([(18, 25], (25, 35], (35, 60], (60, 100]],
              closed='right',
              dtype='interval[int64]')
# pd.value_counts(cats) gives the bin counts for the result of pandas.cut:
print(pd.value_counts(cats))
(18, 25]     5
(35, 60]     3
(25, 35]     3
(60, 100]    1
dtype: int64
# Consistent with mathematical interval notation, a parenthesis means that
# side is open, while a square bracket means it is closed (inclusive).
# Which side is closed can be changed by passing right=False:
pd.cut(ages, [18, 26, 36, 61, 100], right=False)
[[18, 26), [18, 26), [18, 26), [26, 36), [18, 26), ..., [26, 36), [61, 100), [36, 61), [36, 61), [26, 36)]
Length: 12
Categories (4, interval[int64]): [[18, 26) < [26, 36) < [36, 61) < [61, 100)]
# You can pass your own bin names with the labels option:
group_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior']
pd.cut(ages, bins, labels=group_names)

[Youth, Youth, Youth, YoungAdult, Youth, ..., YoungAdult, Senior, MiddleAged, MiddleAged, YoungAdult]
Length: 12
Categories (4, object): [Youth < YoungAdult < MiddleAged < Senior]
# Passing an integer number of bins instead of explicit edges computes
# equal-length bins based on the data's minimum and maximum:
data = np.random.rand(20)
pd.cut(data, 4, precision=2)   # precision=2 limits the decimals to two digits
[(0.5, 0.74], (0.0012, 0.25], (0.25, 0.5], (0.5, 0.74], (0.25, 0.5], ..., (0.74, 0.99], (0.74, 0.99], (0.25, 0.5], (0.74, 0.99], (0.25, 0.5]]
Length: 20
Categories (4, interval[float64]): [(0.0012, 0.25] < (0.25, 0.5] < (0.5, 0.74] < (0.74, 0.99]]
# qcut bins the data based on sample quantiles, so each bin holds roughly
# the same number of points:
data = np.random.randn(1000)   # normally distributed
cats = pd.qcut(data, 4)        # cut into quartiles
print(cats)
print('\n')
print(pd.value_counts(cats))
[(-0.737, -0.0221], (0.744, 2.901], (0.744, 2.901], (0.744, 2.901], (0.744, 2.901], ..., (-0.737, -0.0221], (0.744, 2.901], (0.744, 2.901], (0.744, 2.901], (0.744, 2.901]]
Length: 1000
Categories (4, interval[float64]): [(-3.961, -0.737] < (-0.737, -0.0221] < (-0.0221, 0.744] < (0.744, 2.901]]


(0.744, 2.901]       250
(-0.0221, 0.744]     250
(-0.737, -0.0221]    250
(-3.961, -0.737]     250
dtype: int64
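Similar to cut, you can pass qcut your own quantiles (numbers between 0 and 1, inclusive) instead of a bin count:
# Custom quantiles: 10% / 40% / 40% / 10% bins on the same data:
pd.qcut(data, [0, 0.1, 0.5, 0.9, 1.])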

Detecting and Filtering Outliers

# Consider a DataFrame with normally distributed data:
data = pd.DataFrame(np.random.randn(1000, 4))
print(data.describe())
                 0            1            2            3
count  1000.000000  1000.000000  1000.000000  1000.000000
mean      0.000800    -0.032331     0.023345     0.048245
std       1.003987     0.996681     1.007219     1.010186
min      -3.091473    -3.506493    -3.055961    -3.951916
25%      -0.666621    -0.701949    -0.652654    -0.610015
50%      -0.022072    -0.045974     0.010023     0.061506
75%       0.660415     0.638646     0.696614     0.728876
max       2.867995     2.941726     3.496384     2.806226
# Find values in one of the columns exceeding 3 in absolute value:
col = data[2]
col[np.abs(col) > 3]
231    3.058974
265    3.058203
453    3.496384
723    3.127166
734    3.364669
826   -3.055961
Name: 2, dtype: float64
# To select all rows having a value exceeding 3 or -3, use the any method
# on a boolean DataFrame:
print(data[(np.abs(data) > 3).any(axis=1)])
            0         1         2         3
160 -0.009122  1.304577 -0.562316 -3.211438
231  0.326291 -0.965339  3.058974 -0.249949
265  0.293463  0.026279  3.058203 -0.690907
298  0.172609 -3.506493  1.028811 -1.075643
423 -0.316089 -1.273325 -1.538425 -3.951916
453 -0.654466 -0.017330  3.496384 -0.315267
616 -3.091473 -1.450217  1.190238 -1.281410
723 -0.836537 -0.184380  3.127166  0.515251
734  1.096555  0.346592  3.364669  1.115433
826  0.278241  2.690008 -3.055961 -0.471325
901 -0.399469 -3.126419 -1.882590 -1.159301
# Cap values outside the interval -3 to 3:
data[np.abs(data) > 3] = np.sign(data) * 3
print(data.describe())
                 0            1            2            3
count  1000.000000  1000.000000  1000.000000  1000.000000
mean      0.000891    -0.031698     0.022295     0.049408
std       1.003709     0.994655     1.003572     1.006192
min      -3.000000    -3.000000    -3.000000    -3.000000
25%      -0.666621    -0.701949    -0.652654    -0.610015
50%      -0.022072    -0.045974     0.010023     0.061506
75%       0.660415     0.638646     0.696614     0.728876
max       2.867995     2.941726     3.000000     2.806226
# np.sign(data) produces 1 and -1 values based on whether the values in data are positive or negative:
print(np.sign(data).head())
     0    1    2    3
0  1.0  1.0 -1.0 -1.0
1 -1.0  1.0  1.0 -1.0
2  1.0 -1.0  1.0  1.0
3 -1.0  1.0  1.0  1.0
4 -1.0  1.0 -1.0  1.0

Permutation and Random Sampling

df = pd.DataFrame(np.arange(5 * 4).reshape((5, 4)))
# np.random.permutation(5) gives a random ordering of the row positions 0-4:
sampler = np.random.permutation(5)
print(sampler)
print('\n')
print(df)
print('\n')
# take reorders the rows according to sampler:
print(df.take(sampler))
print('\n')
# sample selects a random subset without replacement:
print(df.sample(n=3))
[1 4 3 2 0]


    0   1   2   3
0   0   1   2   3
1   4   5   6   7
2   8   9  10  11
3  12  13  14  15
4  16  17  18  19


    0   1   2   3
1   4   5   6   7
4  16  17  18  19
3  12  13  14  15
2   8   9  10  11
0   0   1   2   3


    0   1   2   3
3  12  13  14  15
0   0   1   2   3
1   4   5   6   7
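To produce a sample with replacement (allowing repeat choices), pass replace=True to sample:
choices = pd.Series([5, 7, -1, 6, 4])
draws = choices.sample(n=10, replace=True)   # rows may repeat
print(draws)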
Computing Indicator/Dummy Variables
df = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
                   'data1': range(6)})
# get_dummies converts a categorical column into a dummy/indicator matrix:
print(pd.get_dummies(df['key']))
   a  b  c
0  0  1  0
1  0  1  0
2  1  0  0
3  0  0  1
4  1  0  0
5  0  1  0
# Add a prefix to the indicator columns, then join with the other data:
dummies = pd.get_dummies(df['key'], prefix='key')
df_with_dummy = df[['data1']].join(dummies)
print(df_with_dummy)
   data1  key_a  key_b  key_c
0      0      0      1      0
1      1      0      1      0
2      2      1      0      0
3      3      0      0      1
4      4      1      0      0
5      5      0      1      0
# The MovieLens 1M dataset has a genres column with multiple categories
# per movie, separated by '|':
mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('datasets/movielens/movies.dat', sep='::',
                       header=None, names=mnames)
print(movies[:10])
   movie_id                               title                        genres
0         1                    Toy Story (1995)   Animation|Children's|Comedy
1         2                      Jumanji (1995)  Adventure|Children's|Fantasy
2         3             Grumpier Old Men (1995)                Comedy|Romance
3         4            Waiting to Exhale (1995)                  Comedy|Drama
4         5  Father of the Bride Part II (1995)                        Comedy
5         6                         Heat (1995)         Action|Crime|Thriller
6         7                      Sabrina (1995)                Comedy|Romance
7         8                 Tom and Huck (1995)          Adventure|Children's
8         9                 Sudden Death (1995)                        Action
9        10                    GoldenEye (1995)     Action|Adventure|Thriller


E:\anaconda\lib\site-packages\ipykernel_launcher.py:3: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.
# Extract the distinct genre values from the dataset:
all_genres = []
for x in movies.genres:
    all_genres.extend(x.split('|'))
genres = pd.unique(all_genres)
print(genres)
['Animation' "Children's" 'Comedy' 'Adventure' 'Fantasy' 'Romance' 'Drama'
 'Action' 'Crime' 'Thriller' 'Horror' 'Sci-Fi' 'Documentary' 'War'
 'Musical' 'Mystery' 'Film-Noir' 'Western']
# Start with a DataFrame of all zeros, one column per genre:
zero_matrix = np.zeros((len(movies), len(genres)))
dummies = pd.DataFrame(zero_matrix, columns=genres)
# Use dummies.columns to compute the column indices for each genre:
gen = movies.genres[0]
print(gen.split('|'))
dummies.columns.get_indexer(gen.split('|'))
['Animation', "Children's", 'Comedy']





array([0, 1, 2], dtype=int32)
# Iterate through each movie, setting the entries in each row of dummies to 1:
for i, gen in enumerate(movies.genres):
    indices = dummies.columns.get_indexer(gen.split('|'))
    dummies.iloc[i, indices] = 1
# Then combine with movies, adding a prefix to the indicator column names:
movies_windic = movies.join(dummies.add_prefix('Genre_'))
movies_windic.iloc[0]
movie_id                                       1
title                           Toy Story (1995)
genres               Animation|Children's|Comedy
Genre_Animation                                1
Genre_Children's                               1
Genre_Comedy                                   1
Genre_Adventure                                0
Genre_Fantasy                                  0
Genre_Romance                                  0
Genre_Drama                                    0
Genre_Action                                   0
Genre_Crime                                    0
Genre_Thriller                                 0
Genre_Horror                                   0
Genre_Sci-Fi                                   0
Genre_Documentary                              0
Genre_War                                      0
Genre_Musical                                  0
Genre_Mystery                                  0
Genre_Film-Noir                                0
Genre_Western                                  0
Name: 0, dtype: object
# A useful recipe for statistical applications is to combine get_dummies
# with a discretization function like cut:
np.random.seed(12345)
values = np.random.rand(10)
bins = [0, 0.2, 0.4, 0.6, 0.8, 1]
print(pd.get_dummies(pd.cut(values, bins)))
   (0.0, 0.2]  (0.2, 0.4]  (0.4, 0.6]  (0.6, 0.8]  (0.8, 1.0]
0           0           0           0           0           1
1           0           1           0           0           0
2           1           0           0           0           0
3           0           1           0           0           0
4           0           0           1           0           0
5           0           0           1           0           0
6           0           0           0           0           1
7           0           0           0           1           0
8           0           0           0           1           0
9           0           0           0           1           0

7.3 String Manipulation

String Object Methods
# split breaks a comma-separated string into pieces:
val = 'a,b, guido'
val.split(',')
['a', 'b', ' guido']
# split is often combined with strip to trim whitespace (including line breaks):
pieces = [x.strip() for x in val.split(',')]
print(pieces)
['a', 'b', 'guido']
first, second, third = pieces
first + '::' + second + '::' + third
'a::b::guido'
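The reverse of splitting is joining: rather than concatenating with +, it is faster and more Pythonic to pass a sequence to the join method of the delimiter string. Substrings can be located with in, index, and find:
'::'.join(pieces)    # 'a::b::guido'
'guido' in val       # True
val.index(',')       # 1
val.find(':')        # -1; find returns -1 when not found, while index raises ValueError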

[Table: Python built-in string methods]

Regular Expressions

The regex describing one or more whitespace characters is \s+.

import re
text = "foo bar\t baz \tqux"
re.split(r'\s+', text)
['foo', 'bar', 'baz', 'qux']
# findall returns all matches in a string, while search returns only the first match.
text = """Dave dave@google.com
Steve steve@gmail.com
Rob rob@gmail.com
Ryan ryan@yahoo.com
"""
pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}'
# re.IGNORECASE makes the regex case-insensitive
regex = re.compile(pattern, flags=re.IGNORECASE)
regex.findall(text)

['dave@google.com', 'steve@gmail.com', 'rob@gmail.com', 'ryan@yahoo.com']
m = regex.search(text)
print(m)
text[m.start():m.end()]
<re.Match object; span=(5, 20), match='dave@google.com'>





'dave@google.com'
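regex.match, by contrast, returns None here because it only matches at the beginning of the string; sub returns a new string with all matches replaced:
print(regex.match(text))            # None: the text starts with 'Dave', not an email address
print(regex.sub('REDACTED', text))  # each address replaced with 'REDACTED'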

[Table: regular expression methods]

Vectorized String Functions in pandas
data = {'Dave': 'dave@google.com', 'Steve': 'steve@gmail.com','Rob': 'rob@gmail.com', 'Wes': np.nan}
data = pd.Series(data)
print(data)
print('\n')
print(data.isnull())
Dave     dave@google.com
Steve    steve@gmail.com
Rob        rob@gmail.com
Wes                  NaN
dtype: object


Dave     False
Steve    False
Rob      False
Wes       True
dtype: bool
pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}'
print(data.str.contains('gmail'))
print('\n')
print(pattern)
print('\n')
print(data.str.findall(pattern, flags=re.IGNORECASE))
Dave     False
Steve     True
Rob       True
Wes        NaN
dtype: object


[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}


Dave     [dave@google.com]
Steve    [steve@gmail.com]
Rob        [rob@gmail.com]
Wes                    NaN
dtype: object
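Elements can be retrieved in a vectorized way with str.get or by indexing into the str attribute, and strings can be sliced the same way; NA values propagate through:
matches = data.str.findall(pattern, flags=re.IGNORECASE).str[0]
print(matches)        # first match per entry, NaN where missing
print(data.str[:5])   # vectorized string slicing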

[Table: vectorized string methods]
