Learning the Python Data Science Libraries (3): pandas

import pandas as pd

# these two are used below
import numpy as np
import string

Why use pandas

  1. numpy can only handle numeric data, but in practice we also need to process many other kinds of information;
  2. many numpy methods exist in pandas as well, but their usage and results may differ slightly depending on the situation.

Reading external data with pandas

pandas can read many kinds of data; see the pandas I/O documentation for the full list. The most commonly read file types are CSV and Excel files; for the rest, look up the corresponding reader function as needed.

# CSV file
csv_file = pd.read_csv("./filename.csv")

# Excel file
excel_file = pd.read_excel('./tmp.xlsx')
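A quick way to try these readers without an external data file is to write a small frame out and read it back; a minimal sketch (the file name tmp.csv is just a placeholder):

```python
import pandas as pd

# Build a tiny DataFrame, write it to CSV, then read it back.
# index=False keeps the row labels out of the file.
df = pd.DataFrame({"name": ["A", "B", "C"], "age": [15, 16, 17]})
df.to_csv("tmp.csv", index=False)

loaded = pd.read_csv("tmp.csv")
print(loaded)
```

pd.read_excel behaves analogously, though it relies on an extra engine (e.g. openpyxl) being installed.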

pandas data types

There are two main data structures:

  1. Series
  2. DataFrame

Series

Creating a Series and its type

From a list
>>> a = pd.Series([1, 2, 3, 4, 5])
>>> a
0    1
1    2
2    3
3    4
4    5
dtype: int64
>>> type(a)
<class 'pandas.core.series.Series'>

# specify the index
>>> b = pd.Series([1, 2, 3, 4, 5], index = list("abcde"))
>>> b
a    1
b    2
c    3
d    4
e    5
dtype: int64
From a dict
>>> c = pd.Series({"a": 1, "b": 2, "c": 3})
>>> c
a    1
b    2
c    3
dtype: int64		# all values are int, so the dtype is int64

>>> c = pd.Series({"a": "A", "b": 2, "c": 3})
>>> c
a    A
b    2
c    3
dtype: object		# mixed value types, so the dtype is object

>>> c = pd.Series({"a": "A", "b": "B", "c": "C"})
>>> c
a    A
b    B
c    C
dtype: object		# strings also give the object dtype
A note on Series dtypes and how to change them

From the dtype fields above you can already see how a Series maps its data to a type; note in particular that pandas adjusts the dtype automatically based on the values.

>>> import string
>>> d = {string.ascii_uppercase[i]:i for i in range(10)}
>>> d
{'A': 0, 'B': 1, 'C': 2, 'D': 3, 'E': 4, 'F': 5, 'G': 6, 'H': 7, 'I': 8, 'J': 9}

>>> pd.Series(d)
A    0
B    1
C    2
D    3
E    4
F    5
G    6
H    7
I    8
J    9
dtype: int64

'''
After specifying a different index, labels that match keep their values and
labels that do not become NaN.
pandas adjusts the dtype automatically based on the values.
'''
>>> pd.Series(d, index = list(string.ascii_uppercase[5:15]))
F    5.0
G    6.0
H    7.0
I    8.0
J    9.0
K    NaN
L    NaN
M    NaN
N    NaN
O    NaN
dtype: float64

# e and f below are Series built earlier (their output here is abbreviated)
>>> c
a    A
b    B
c    C
dtype: object
>>> e
A    0
C    2
D    3
dtype: int64
>>> f
F    5.0
H    7.0
O    NaN
dtype: float64
>>> c.dtype
dtype('O')
>>> e.dtype
dtype('int64')
>>> f.dtype
dtype('float64')

'''
Changing the data type
'''
>>> e.astype(float)
A    0.0
C    2.0
D    3.0
dtype: float64

Series slicing and indexing

A Series is essentially built from two arrays: the index (labels, keys) and the values.

Basic indexing
>>> g = pd.Series({"one": 1, "two": "hello", "three": [1, 2, 3]})
>>> g
one              1
two          hello
three    [1, 2, 3]
dtype: object

'''
Index by position
'''
>>> g[0]
1

'''
Index by label
'''
>>> g["one"]
1

'''
Select multiple rows by position
'''
>>> g[[0, 1]]
one        1
two    hello
dtype: object

'''
Select multiple rows by label
'''
>>> g[["one", "two"]]
one        1
two    hello
dtype: object
Mixing positions and labels

There are roughly these two cases (additions welcome); descriptions and code examples follow:

  1. By default pandas looks values up by label, so the number is also treated as a label; since the Series has no such label, NaN is returned, as with "1" for g in the example below;
  2. when the user-defined index contains int values that collide with the default positions (0, 1, 2, ...), only label-based lookup is possible.
>>> g
one              1
two          hello
three    [1, 2, 3]
dtype: object

'''
1.
By default lookup is by label, so the other number is treated as a label.
As in example 1: g has no label "1", so NaN is returned.
'''
# there is no label "1"
>>> g[["one", 1]]
Passing list-likes to .loc or [] with any missing label will raise
KeyError in the future, you can use .reindex() as an alternative.

https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike
  return self.loc[key]
one      1
1      NaN
dtype: object

# there is no label "0"
>>> g[[0, "two"]]
Passing list-likes to .loc or [] with any missing label will raise
KeyError in the future, you can use .reindex() as an alternative.

See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike
  return self.loc[key]
0        NaN
two    hello
dtype: object


'''
2.
When the user-defined index contains int values that collide with the default
positions (0, 1, 2, ...), only label-based lookup works.
'''
>>> g = pd.Series({"one": 1, "two": "hello", 0: [1, 2, 3]})
>>> g
one            1
two        hello
0      [1, 2, 3]
dtype: object

# the result here treats 0 as the user-defined label and returns [1, 2, 3],
# not as position 0 of the default numbering, which would have given 1
>>> g[0]
[1, 2, 3]

# g has no label 1, so a KeyError is raised
>>> g[1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    result = self.index.get_value(self, key)
  File "E:\Anaconda_3\lib\site-packages\pandas\core\indexes\base.py", line 4730, in get_value
    return self._engine.get_value(s, k, tz=getattr(series.dtype, "tz", None))
  File "pandas\_libs\index.pyx", line 80, in pandas._libs.index.IndexEngine.get_value
  File "pandas\_libs\index.pyx", line 88, in pandas._libs.index.IndexEngine.get_value
  File "pandas\_libs\index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas\_libs\hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 1
Boolean indexing
>>> h = pd.Series(range(1, 6))
>>> h
0    1
1    2
2    3
3    4
4    5
dtype: int64
>>> h[h < 3]
0    1
1    2
dtype: int64

Series index and values

# get h's index
>>> h.index
RangeIndex(start=0, stop=5, step=1)
# get h's values
>>> h.values
array([1, 2, 3, 4, 5], dtype=int64)
# type of h's index
>>> type(h.index)
<class 'pandas.core.indexes.range.RangeIndex'>
# type of h's values
>>> type(h.values)
<class 'numpy.ndarray'>

DataFrame

Creating a DataFrame and its type

>>> pd.DataFrame(np.arange(12).reshape((3, 4)))
   0  1   2   3
0  0  1   2   3
1  4  5   6   7
2  8  9  10  11
>>> pd.DataFrame(np.arange(12).reshape((3, 4)), index = ["A", "B", "C"], columns = ["D", "E", "F", "G"])
   D  E   F   G
A  0  1   2   3
B  4  5   6   7
C  8  9  10  11

'''
Several ways to build one from a dict
'''
>>> k = {"name": ["A", "B", "C"], "age": [15, 16, 17]}
>>> l = pd.DataFrame(k)
>>> l
  name  age
0    A   15
1    B   16
2    C   17
>>> type(l)
<class 'pandas.core.frame.DataFrame'>

>>> m = [{"name": "A", "age": 15}, {"name": "B", "age": 16}, {"name": "C", "age": 17}]
>>> n = pd.DataFrame(m)
>>> n
  name  age
0    A   15
1    B   16
2    C   17
>>> type(n)
<class 'pandas.core.frame.DataFrame'>

# unspecified values default to missing, NaN
>>> o = [{"name": "A", "age": 15}, {"name": "B", "age": 16}, {"age": 17}]
>>> p = pd.DataFrame(o)
>>> p
  name  age
0    A   15
1    B   16
2  NaN   17
A DataFrame is a container of Series

A DataFrame is made up of Series; every column is one Series.

>>> p
  name  age
0    A   15
1    B   16
2  NaN   17
>>> p["name"]	# this is DataFrame indexing, covered later
0      A
1      B
2    NaN
Name: name, dtype: object
>>> type(p['name'])
<class 'pandas.core.series.Series'>

Basic DataFrame attributes

df = pd.DataFrame(...)

Attribute	Description
df.shape	the number of rows and columns, as a tuple
df.dtypes	the dtype of each column
df.ndim	the number of dimensions: 2 for a DataFrame, 1 for a Series
df.index	the row index
df.columns	the column index
df.values	the values without the header, as a numpy.ndarray; df.to_numpy() is recommended instead
>>> df = pd.DataFrame(
... {"name": ["A", "B", "C", "D"],
...  "age": [5, 20, 18, 25],
...  "sex": ["girl", "boy", "gril", "boy"],
...  "height":[130.1, 180.5, 175.3, 188.9],
...  "haveNums": [[1, 2, 3], [15, 2, 5], [5, 9, 14], [3, 6, 8]]})
>>> df
  name  age   sex  height    haveNums
0    A    5  girl   130.1   [1, 2, 3]
1    B   20   boy   180.5  [15, 2, 5]
2    C   18  gril   175.3  [5, 9, 14]
3    D   25   boy   188.9   [3, 6, 8]

>>> df.shape
(4, 5)

>>> df.dtypes
name         object
age           int64
sex          object
height      float64
haveNums     object
dtype: object

# a DataFrame returns 2, a Series returns 1
>>> df.ndim
2

# default DataFrame.index: pandas.core.indexes.base.Index
>>> df.index
RangeIndex(start=0, stop=4, step=1)

# default DataFrame.columns: pandas.core.indexes.base.Index
>>> df.columns
Index(['name', 'age', 'sex', 'height', 'haveNums'], dtype='object')

'''
DataFrame.to_numpy(dtype=None, copy=False, na_value=<object object>)
'''
>>> df.values
array([['A', 5, 'girl', 130.1, list([1, 2, 3])],
       ['B', 20, 'boy', 180.5, list([15, 2, 5])],
       ['C', 18, 'gril', 175.3, list([5, 9, 14])],
       ['D', 25, 'boy', 188.9, list([3, 6, 8])]], dtype=object)
>>> df.to_numpy()
array([['A', 5, 'girl', 130.1, list([1, 2, 3])],
       ['B', 20, 'boy', 180.5, list([15, 2, 5])],
       ['C', 18, 'gril', 175.3, list([5, 9, 14])],
       ['D', 25, 'boy', 188.9, list([3, 6, 8])]], dtype=object)

Inspecting a DataFrame

Method	Description
df.head(n)	returns the first n rows; default n = 5
df.tail(n)	returns the last n rows; default n = 5
df.info()	prints summary information about the DataFrame; it takes a number of parameters, see the documentation
df.describe()	returns descriptive statistics, skipping non-numeric columns; see the documentation for its parameters
df.sort_values()	sorts by values along either axis; the sort key should have a consistent type; see the documentation
df.sort_index()	sorts by labels; the labels should have a consistent type; see the documentation
>>> df
  name  age   sex  height    haveNums
0    A    5  girl   130.1   [1, 2, 3]
1    B   20   boy   180.5  [15, 2, 5]
2    C   18  gril   175.3  [5, 9, 14]
3    D   25   boy   188.9   [3, 6, 8]

# return the first two rows; the default is the first 5
>>> df.head(2)
  name  age   sex  height    haveNums
0    A    5  girl   130.1   [1, 2, 3]
1    B   20   boy   180.5  [15, 2, 5]


# return the last two rows; the default is the last 5
>>> df.tail(2)
  name  age   sex  height    haveNums
2    C   18  gril   175.3  [5, 9, 14]
3    D   25   boy   188.9   [3, 6, 8]

'''
DataFrame.info(verbose=None, buf=None, max_cols=None, memory_usage=None, null_counts=None)
'''
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 5 columns):
name        4 non-null object
age         4 non-null int64
sex         4 non-null object
height      4 non-null float64
haveNums    4 non-null object
dtypes: float64(1), int64(1), object(3)
memory usage: 288.0+ bytes

'''
DataFrame.describe(percentiles=None, include=None, exclude=None, datetime_is_numeric=False)
'''
# non-numeric columns are skipped
>>> df.describe()
             age      height
count   4.000000    4.000000
mean   17.000000  168.700000
std     8.524475   26.336287
min     5.000000  130.100000
25%    14.750000  164.000000
50%    19.000000  177.900000
75%    21.250000  182.600000
max    25.000000  188.900000

'''
DataFrame.sort_values(by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last', ignore_index=False, key=None)
'''
>>> df.sort_values("age")
  name  age   sex  height    haveNums
0    A    5  girl   130.1   [1, 2, 3]
2    C   18  gril   175.3  [5, 9, 14]
1    B   20   boy   180.5  [15, 2, 5]
3    D   25   boy   188.9   [3, 6, 8]
>>> df.sort_values("sex")
  name  age   sex  height    haveNums
1    B   20   boy   180.5  [15, 2, 5]
3    D   25   boy   188.9   [3, 6, 8]
0    A    5  girl   130.1   [1, 2, 3]
2    C   18  gril   175.3  [5, 9, 14]
>>> df.sort_values("haveNums")
  name  age   sex  height    haveNums
0    A    5  girl   130.1   [1, 2, 3]
3    D   25   boy   188.9   [3, 6, 8]
2    C   18  gril   175.3  [5, 9, 14]
1    B   20   boy   180.5  [15, 2, 5]

'''
DataFrame.sort_index(axis=0, level=None, ascending=True, inplace=False, kind='quicksort', na_position='last', sort_remaining=True, ignore_index=False, key=None)
'''
# sort by labels in descending order
>>> df.sort_index(ascending=False)
  name  age   sex  height    haveNums
3    D   25   boy   188.9   [3, 6, 8]
2    C   18  gril   175.3  [5, 9, 14]
1    B   20   boy   180.5  [15, 2, 5]
0    A    5  girl   130.1   [1, 2, 3]

# strings can be sorted too
>>> df1 = pd.DataFrame(np.arange(12).reshape((3, 4)), index = ["A", "C", "B"], columns = ["D", "E", "F", "G"])
>>> df1
   D  E   F   G
A  0  1   2   3
C  4  5   6   7
B  8  9  10  11
>>> df1.sort_index()
   D  E   F   G
A  0  1   2   3
B  8  9  10  11
C  4  5   6   7

# keep the label types consistent
>>> df2 = pd.DataFrame(np.arange(12).reshape((4, 3)), index = ["A", "C", "123", "Z15"], columns = ["D", "E", "F"])
>>> df2.sort_index()
     D   E   F
123  6   7   8
A    0   1   2
C    3   4   5
Z15  9  10  11

DataFrame slicing and indexing

Direct slicing
'''
A numeric slice selects rows.
Even a single row selected this way is a DataFrame.
'''
>>> df[:2]
  name  age   sex  height    haveNums
0    A    5  girl   130.1   [1, 2, 3]
1    B   20   boy   180.5  [15, 2, 5]
>>> type(df[:0])
<class 'pandas.core.frame.DataFrame'>
>>> df[0, 2]	# error: you cannot select non-adjacent rows this way!!!
KeyError: (0, 2)


'''
Strings select columns.
A single column is a Series.
'''
>>> df["age"]
0     5
1    20
2    18
3    25
Name: age, dtype: int64
>>> type(df["age"])
<class 'pandas.core.series.Series'>
# multiple columns can be selected this way, but non-adjacent rows cannot
>>> df[["age", "height"]]
   age  height
0    5   130.1
1   20   180.5
2   18   175.3
3   25   188.9
>>> type(df[["age", "height"]])
<class 'pandas.core.frame.DataFrame'>
Slicing with the loc and iloc indexers
>>> df3 = pd.DataFrame(
...{"age": [5, 20, 18, 25], 
...  "sex": ["girl", "boy", "gril", "boy"],
...  "height":[130.1, 180.5, 175.3, 188.9],
...  "haveNums": [[1, 2, 3], [15, 2, 5], [5, 9, 14], [3, 6, 8]]},
...  index = ["A", "B", "C", "D"])
>>> df3
   age   sex  height    haveNums
A    5  girl   130.1   [1, 2, 3]
B   20   boy   180.5  [15, 2, 5]
C   18  gril   175.3  [5, 9, 14]
D   25   boy   188.9   [3, 6, 8]


'''
Select by label with the loc indexer
'''
>>> df3.loc["A", "age"]
5
>>> df3.loc["A", :]
age                 5
sex              girl
height          130.1
haveNums    [1, 2, 3]
Name: A, dtype: object
>>> df3.loc[:, "age"]
A     5
B    20
C    18
D    25
Name: age, dtype: int64
>>> df3.loc[["A", "C"], ["age", "haveNums"]]
   age    haveNums
A    5   [1, 2, 3]
C   18  [5, 9, 14]

>>> df3.loc["A":"C", ["age", "haveNums"]]	# the colon slice is inclusive in loc
   age    haveNums
A    5   [1, 2, 3]
B   20  [15, 2, 5]
C   18  [5, 9, 14]


'''
Select by position with the iloc indexer
'''
>>> df3.iloc[1]
age                 20
sex                boy
height           180.5
haveNums    [15, 2, 5]
Name: B, dtype: object

>>> df3.iloc[:, 1]
A    girl
B     boy
C    gril
D     boy
Name: sex, dtype: object

>>> df3.iloc[[0, 2], [1, 3]]
    sex    haveNums
A  girl   [1, 2, 3]
C  gril  [5, 9, 14]

>>> df3.iloc[1:, : 1]
   age
B   20
C   18
D   25
Boolean indexing
  1. Combine multiple conditions with & and |
  2. To select rows whose value is in a given list, use df.loc[df['height'].isin([180.5, 175.3])]; the complement is df.loc[~df['height'].isin([180.5, 175.3])]
>>> df3.iloc[1:, : 2] = np.nan		# no error; the dtype is converted automatically
>>> df3
   age   sex  height    haveNums
A  5.0  girl   130.1   [1, 2, 3]
B  NaN   NaN   180.5  [15, 2, 5]
C  NaN   NaN   175.3  [5, 9, 14]
D  NaN   NaN   188.9   [3, 6, 8]

>>> df3[(df3["height"] > 170) & (df3["height"] < 181)]
   age  sex  height    haveNums
B  NaN  NaN   180.5  [15, 2, 5]
C  NaN  NaN   175.3  [5, 9, 14]


In [21]: df2
Out[21]: 
   D  E   F   G   H   I  J         K
A  0  1   2   3   3   0  1  3.000000
B  4  5   6   7  15  24  1  1.400000
C  8  9  10  11  27  80  1  1.222222

myValues = [1, 2, 3, 4, 9]

In [23]: df2[df2["E"].isin(myValues)]
Out[23]: 
   D  E   F   G   H   I  J         K
A  0  1   2   3   3   0  1  3.000000
C  8  9  10  11  27  80  1  1.222222

In [24]: df2[~df2["E"].isin(myValues)]
Out[24]: 
   D  E  F  G   H   I  J    K
B  4  5  6  7  15  24  1  1.4

Handling missing data

Methods for missing data

pandas DataFrames provide a set of methods for handling missing data, listed below.

Method	Description
df.isna()	returns True wherever a value is NaN, False elsewhere
df.isnull()	alias of isna(); note that inf is not treated as NaN by default, though this can be changed through the pandas mode options
df.notna()	returns True wherever a value is not NaN, False elsewhere
df.notnull()	alias of notna()
df.fillna()	fills NaN using a value or a method from {'backfill', 'bfill', 'pad', 'ffill', None}; the default is None; these correspond to the df.backfill, df.bfill, df.pad and df.ffill methods, detailed below
df.dropna()	removes rows or columns that contain NaN, according to the given options
df.interpolate()	fills NaN by interpolation, linear by default; methods include {'linear', 'time', 'index', 'values', 'pad', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'spline', 'barycentric', 'polynomial', 'krogh', 'piecewise_polynomial', 'pchip', 'akima', 'cubicspline', 'from_derivatives'}; see the documentation
df.replace()	replaces given values, not necessarily NaN; regular expressions are supported; see the documentation

The fillna methods in detail:

Method	Description
'backfill' ( df.backfill() )	fills a row with the next valid (non-NaN) row
'bfill' ( df.bfill() )	same as 'backfill'
'pad' ( df.pad() )	fills a row with the previous valid (non-NaN) row
'ffill' ( df.ffill() )	same as 'pad'
>>> df4 = pd.DataFrame(
...{"age": [5, 20, 18, 25], 
...  "sex": ["girl", "boy", "gril", "boy"],
...  "height":[130.1, 180.5, 175.3, 188.9],
...  "haveNums": [[1, 2, 3], [15, 2, 5], [5, 9, 14], [3, 6, 8]]},
...  index = ["A", "B", "C", "D"])
# the same definition entered on one line:
# df4 = pd.DataFrame({"age": [5, 20, 18, 25], "sex": ["girl", "boy", "gril", "boy"], "height":[130.1, 180.5, 175.3, 188.9], "haveNums": [[1, 2, 3], [15, 2, 5], [5, 9, 14], [3, 6, 8]]}, index = ["A", "B", "C", "D"])
>>> df4
   age   sex  height    haveNums
A    5  girl   130.1   [1, 2, 3]
B   20   boy   180.5  [15, 2, 5]
C   18  gril   175.3  [5, 9, 14]
D   25   boy   188.9   [3, 6, 8]
>>> df4.iloc[[0, 2], [0, 3]] = np.nan
>>> df4.iloc[1, 1] = np.nan
>>> df4.iloc[2, 1] = np.nan
>>> df4.iloc[1, 2] = np.nan
>>> df4
    age   sex  height    haveNums
A   NaN  girl   130.1         NaN
B  20.0   NaN     NaN  [15, 2, 5]
C   NaN   NaN   175.3         NaN
D  25.0   boy   188.9   [3, 6, 8]

# using None as the missing value gives the same result:
>>> df5 = pd.DataFrame({"age": [None, 20, None, 25], "sex": ["girl", None, None, "boy"], "height":[130.1, None, 175.3, 188.9], "haveNums": [None, [15, 2, 5], None, [3, 6, 8]]}, index = ["A", "B", "C", "D"])
>>> df5
    age   sex  height    haveNums
A   NaN  girl   130.1        None
B  20.0  None     NaN  [15, 2, 5]
C   NaN  None   175.3        None
D  25.0   boy   188.9   [3, 6, 8]
Detecting missing values

There are four functions: isna(), isnull(), notna(), notnull(); the first two are equivalent, as are the last two.

>>> df5.isnull()		# df5.isna() gives the same result
     age    sex  height  haveNums
A   True  False   False      True
B  False   True    True     False
C   True   True   False      True
D  False  False   False     False
>>> df5.notnull()		# df5.notna() gives the same result
     age    sex  height  haveNums
A  False   True    True     False
B   True  False   False      True
C  False  False    True     False
D   True   True    True      True
Removing missing values
DataFrame.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
# extraction with a boolean mask instead of dropna
>>> df5[pd.notnull(df5["height"])]
    age   sex  height   haveNums
A   NaN  girl   130.1       None
C   NaN  None   175.3       None
D  25.0   boy   188.9  [3, 6, 8]

# drop any row containing NaN; axis defaults to 0, i.e. rows
>>> df5.dropna(axis = 0, how = "any")
    age  sex  height   haveNums
D  25.0  boy   188.9  [3, 6, 8]

# drop rows that are entirely NaN; there are none here, so nothing changes
>>> df5.dropna(axis = 0, how = "all")
    age   sex  height    haveNums
A   NaN  girl   130.1        None
B  20.0  None     NaN  [15, 2, 5]
C   NaN  None   175.3        None
D  25.0   boy   188.9   [3, 6, 8]

# to drop those rows in place, use inplace
>>> df5.dropna(how = "any", inplace = True)
>>> df5
    age  sex  height   haveNums
D  25.0  boy   188.9  [3, 6, 8]
Filling missing values
'''
1.
DataFrame.fillna(value=None, method=None, axis=None, inplace=False, limit=None, downcast=None)
DataFrame.pad(axis=None, inplace=False, limit=None, downcast=None)
When filling with a statistic such as the mean:
	non-numeric columns are ignored by pandas
	NaN values in a column are excluded when its mean is computed
'''

# replace every NaN with the given value, here 0
>>> df5.fillna(0)
    age   sex  height    haveNums
A   0.0  girl   130.1           0
B  20.0     0     0.0  [15, 2, 5]
C   0.0     0   175.3           0
D  25.0   boy   188.9   [3, 6, 8]

# replace every NaN with the given values, here the column means
>>> df5.fillna(df5.mean())
    age   sex      height    haveNums
A  22.5  girl  130.100000        None
B  20.0  None  164.766667  [15, 2, 5]
C  22.5  None  175.300000        None
D  25.0   boy  188.900000   [3, 6, 8]

# fill a single column with its own mean
>>> df5["age"].fillna(df5["age"].mean())
A    22.5
B    20.0
C    22.5
D    25.0
Name: age, dtype: float64

# method backfill / bfill: fill a row with the next non-missing row
>>> df5.fillna(method = "backfill")
    age   sex  height    haveNums
A  20.0  girl   130.1  [15, 2, 5]
B  20.0   boy   175.3  [15, 2, 5]
C  25.0   boy   175.3   [3, 6, 8]
D  25.0   boy   188.9   [3, 6, 8]

# df5 before filling, for comparison:
# >>> df5
#    age   sex  height    haveNums
# A   NaN  girl   130.1        None
# B  20.0  None     NaN  [15, 2, 5]
# C   NaN  None   175.3        None
# D  25.0   boy   188.9   [3, 6, 8]

# method ffill / pad: fill a row with the previous non-missing row
>>> df5.fillna(method = "ffill")
    age   sex  height    haveNums
A   NaN  girl   130.1        None
B  20.0  girl   130.1  [15, 2, 5]
C  20.0  girl   175.3  [15, 2, 5]
D  25.0   boy   188.9   [3, 6, 8]
# limit caps how many rows are filled
>>> df5.fillna(method = "ffill", limit = 1)
    age   sex  height    haveNums
A   NaN  girl   130.1        None
B  20.0  girl   130.1  [15, 2, 5]
C  20.0  None   175.3  [15, 2, 5]
D  25.0   boy   188.9   [3, 6, 8]

'''
2.
DataFrame.interpolate(method='linear', axis=0, limit=None, inplace=False, limit_direction=None, limit_area=None, downcast=None, **kwargs)
'''
# interpolate() defaults to linear interpolation, top to bottom
>>> df5.interpolate()
    age   sex  height    haveNums
A   NaN  girl   130.1        None
B  20.0  None   152.7  [15, 2, 5]
C  22.5  None   175.3        None
D  25.0   boy   188.9   [3, 6, 8]

# df5 before filling, for comparison:
# >>> df5
#    age   sex  height    haveNums
# A   NaN  girl   130.1        None
# B  20.0  None     NaN  [15, 2, 5]
# C   NaN  None   175.3        None
# D  25.0   boy   188.9   [3, 6, 8]

'''
3.
Replacement can use regular expressions
DataFrame.replace(to_replace=None, value=None, inplace=False, limit=None, regex=False, method='pad')
'''
>>> df5.replace(to_replace = r'^b..', value = 'man', regex=True)
    age   sex  height    haveNums
A   NaN  girl   130.1        None
B  20.0  None     NaN  [15, 2, 5]
C   NaN  None   175.3        None
D  25.0   man   188.9   [3, 6, 8]

# sometimes a 0 really marks a missing value; it can be replaced like this
df5[df5 == 0] = np.nan
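The boolean assignment above has an equivalent spelling with replace(), which maps given values to replacements and returns a new frame; a small sketch with made-up data:

```python
import numpy as np
import pandas as pd

frame = pd.DataFrame({"x": [0, 1, 2], "y": [3, 0, 5]})

# replace() returns a new DataFrame with every 0 turned into NaN
cleaned = frame.replace(0, np.nan)
```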
Resetting the index
df.reset_index(drop=True)	# discard the old index; pass drop=False to keep it, as a new column of data
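A short sketch of the two drop settings (the frame here is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"age": [5, 20]}, index=["A", "B"])

# drop=True throws the old labels away and renumbers from 0
renumbered = df.reset_index(drop=True)

# drop=False keeps the old labels as a new "index" column
kept = df.reset_index(drop=False)
```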

Iterating over rows in pandas

  1. the iterrows() method
  2. the itertuples() method
  3. the iteritems() method
for index, row in df.iterrows():
    print(row["A"], row["B"])

for row in df.itertuples(index=True, name='Pandas'):
    print(getattr(row, "A"), getattr(row, "B"))

# note: iteritems() actually iterates over columns as (label, Series), not rows
for label, column in df.iteritems():
    print(column)
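The three loops can be tried on a tiny made-up frame; note that in recent pandas versions iteritems() has been renamed items(), which this sketch uses:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [10, 20]})

# iterrows(): yields (index label, row as a Series); convenient but slow
rows = [(idx, row["A"], row["B"]) for idx, row in df.iterrows()]

# itertuples(): yields namedtuples with attribute access; faster than iterrows()
tuples = [(t.Index, t.A, t.B) for t in df.itertuples(index=True, name="Pandas")]

# items(): iterates over columns as (column label, Series), not over rows
col_labels = [label for label, column in df.items()]
```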

Common pandas string methods

!!! String methods can only be used on Series data (and on the Index). They mirror Python's built-in string methods.

Method	Description
str.capitalize()	uppercases the first letter of each value / index entry
str.title()	uppercases the first letter of every word; note the difference from capitalize()
str.lower() / str.upper()	lowercases / uppercases every letter
str.swapcase()	turns uppercase into lowercase and vice versa
str.cat()	concatenates strings in the given way
str.contains()	tests whether each string contains the given element, which may be a regular expression; returns a boolean Series
str.len()	returns the length of strings/tuples/lists/dicts etc.; values without a len(), such as int, yield NaN
str.replace()	replaces with the given element; regular expressions can also be used; see the documentation
str.split()	splits strings in the given way
>>> df6 = pd.DataFrame({"age": [5, 20, 18, 25], "sex": ["girl", "boy", "gril", "boy"], "height":[130.1, 180.5, 175.3, 188.9], "haveNums": [[1, 2, 3], [15, 2, 5], [5, 9, 14], [3, 6, 8]]}, index = ["a", "B", "C", "d"])
>>> df6
   age   sex  height    haveNums
a    5  girl   130.1   [1, 2, 3]
B   20   boy   180.5  [15, 2, 5]
C   18  gril   175.3  [5, 9, 14]
d   25   boy   188.9   [3, 6, 8]

# uppercase the first letter
>>> df6["sex"].str.capitalize()
a    Girl
B     Boy
C    Gril
d     Boy
Name: sex, dtype: object
# uppercase the first letter of each index label
>>> df6.index.str.capitalize()
Index(['A', 'B', 'C', 'D'], dtype='object')

# swap case
>>> df6.index.str.swapcase()
Index(['A', 'b', 'c', 'D'], dtype='object')

# after uppercasing everything, casefold() lowercases it all again
>>> df6["sex"].str.upper().str.casefold()
a    girl
B     boy
C    gril
d     boy
Name: sex, dtype: object

# concatenate strings
# Series.str.cat(others=None, sep=None, na_rep=None, join='left')
>>> df6["sex"].str.cat(sep = ", ")
'girl, boy, gril, boy'

# does each string contain a given element
# Series.str.contains(pat, case=True, flags=0, na=nan, regex=True)
>>> df6["sex"].str.contains("r")
a     True
B    False
C     True
d    False
Name: sex, dtype: bool

# string length
>>> df6["sex"].str.len()
a    4
B    3
C    4
d    3
Name: sex, dtype: int64

# len() is not just for strings: anything with a len() method works
>>> s1 = pd.Series(
... ["age name", [1, 2, 3], (1, 2, 3), {"n": 8, "h": 188.6, "na": "Jack", 1: [1, 2]}, 189.5, 10])
>>> s1
0                                         age name
1                                        [1, 2, 3]
2                                        (1, 2, 3)
3    {'n': 8, 'h': 188.6, 'na': 'Jack', 1: [1, 2]}
4                                            189.5
5                                               10
dtype: object
>>> s1.str.len()
0    8.0
1    3.0
2    3.0
3    4.0
4    NaN
5    NaN
dtype: float64		# float64 because the NaNs forced a type conversion

# string replacement
# Series.str.replace(pat, repl, n=- 1, case=None, flags=0, regex=True)
>>> df6["sex"].str.replace("^g", "abc")
a    abcirl
B       boy
C    abcril
d       boy
Name: sex, dtype: object

# string splitting
# Series.str.split(pat=None, n=- 1, expand=False)
>>> df6["sex"].str.split("o")
a    [girl]
B    [b, y]
C    [gril]
d    [b, y]
Name: sex, dtype: object

Notes:

  1. Regular expressions cannot be used to match NaN or None; if the existing functions such as isnull or isna are inconvenient, fall back to an explicit loop;
  2. str methods only work on Series; to apply them to a DataFrame, loop over each row or column, or change the DataFrame's labels through its columns and index attributes.
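Following note 2, one column-by-column way to use a str method on a whole DataFrame is apply(), since each column handed to the function is itself a Series; a sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({"sex": ["girl", "boy"], "name": ["ann", "bob"]})

# apply() passes each column to the lambda as a Series,
# so the .str accessor is available inside it
upper = df.apply(lambda col: col.str.upper())
```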

Common pandas statistical methods

>>> df6["age"]
a     5
B    20
C    18
d    25
Name: age, dtype: int64

>>> df6["age"].mean()		# mean
17.0
>>> df6["age"].max()		# max
25
>>> df6["age"].argmax()		# the docs recommend idxmax() instead
'd'
>>> df6["age"].idxmax()		# label of the max
'd'
>>> df6["age"].min()		# min
5
>>> df6["age"].argmin()		# the docs recommend idxmin() instead
'a'
>>> df6["age"].idxmin()		# label of the min
'a'
>>> df6["age"].median()		# median
19.0

# method 1 for getting the unique values in a column
>>> set(df6["sex"].tolist())
{'gril', 'boy', 'girl'}

# method 2 for getting the unique values in a column
>>> df6["sex"].unique()
array(['girl', 'boy', 'gril'], dtype=object)

# count them
>>> len(set(df6["sex"].tolist()))
3	# why does the sex column give 3? because I misspelled a word earlier...

Merging data in pandas

pandas has two main ways to merge data, shown below.

Method	Description
join()	a DataFrame method; joins two datasets on their row indexes
merge()	a DataFrame method; merges database-style with many variations; see the documentation and the discussion below

The join() method

DataFrame.join(other, on=None, how='left', lsuffix='', rsuffix='', sort=False)

Parameter	Description
other	the data to join with the DataFrame: a DataFrame, a Series, or a list of DataFrames
on	the column or index to join on; default None; note that the key must exist in both datasets as a column or an index
how	the join type, one of {'left', 'right', 'outer', 'inner'}; the default is a left join
lsuffix	suffix appended to overlapping column names on the left side; "" (none) by default
rsuffix	suffix appended to overlapping column names on the right side; "" (none) by default
sort	sort the result lexicographically by the join key; default False; otherwise the order depends on the join type given by how

Basic join() usage

Basics
>>> df7 = pd.DataFrame({"age": [5, 20, 18, 25], "sex": ["girl", "boy", "gril", "boy"], "height":[130.1, 180.5, 175.3, 188.9]}, index = list("ABCD"))
>>> df7 
   age   sex  height
A    5  girl   130.1
B   20   boy   180.5
C   18  gril   175.3
D   25   boy   188.9

>>> df8 = pd.DataFrame({"weight": [80.5, 140.4, 105.3], "address":["上海", "西安", "武汉"]}, index = list("AB1"))

>>> df8
   weight address
A    80.5      上海
B   140.4      西安
1   105.3      武汉

'''
By default join is a left join on the index; rows whose index finds no match
get NaN, whether the column holds strings or numbers
'''
>>> df7.join(df8)
   age   sex  height  weight address
A    5  girl   130.1    80.5      上海
B   20   boy   180.5   140.4      西安
C   18  gril   175.3     NaN     NaN
D   25   boy   188.9     NaN     NaN

'''
Note that swapping the order changes the result!!! The left-hand df8's index is
the baseline; everything else is dropped
'''
>>> df8.join(df7)
   weight address   age   sex  height
A    80.5      上海   5.0  girl   130.1
B   140.4      西安  20.0   boy   180.5
1   105.3      武汉   NaN   NaN     NaN
When indexes contain duplicates
>>> df7 
   age   sex  height
A    5  girl   130.1
B   20   boy   180.5
C   18  gril   175.3
D   25   boy   188.9

>>> df7_1 = pd.DataFrame({"age": [5, 20, 18, 25], "sex": ["girl", "boy", "gril", "boy"], "height":[130.1, 180.5, 175.3, 188.9]}, index = list("ABAD"))
>>> df7_1
   age   sex  height
A    5  girl   130.1
B   20   boy   180.5
A   18  gril   175.3
D   25   boy   188.9

>>> df8
   weight address
A    80.5      上海
B   140.4      西安
1   105.3      武汉

>>> df8_1 = pd.DataFrame({"weight": [80.5, 140.4, 105.3], "address":["上海", "西安", "武汉"]}, index = list("ABA"))
>>> df8_1
   weight address
A    80.5      上海
B   140.4      西安
A   105.3      武汉

'''
Make absolutely sure you understand the following cases! Index A is the example.
'''
# Example 1: A unique vs A duplicated
>>> df7.join(df8_1)
   age   sex  height  weight address
A    5  girl   130.1    80.5      上海
A    5  girl   130.1   105.3      武汉
B   20   boy   180.5   140.4      西安
C   18  gril   175.3     NaN     NaN
D   25   boy   188.9     NaN     NaN
# each A row of df8_1 is combined with df7's single A row; the merged 1*2 block is equivalent to
# df7 + df8_1[first A row]
# df7 + df8_1[second A row]

# Example 2: A duplicated vs A unique
>>> df7_1.join(df8)
   age   sex  height  weight address
A    5  girl   130.1    80.5      上海
A   18  gril   175.3    80.5      上海
B   20   boy   180.5   140.4      西安
D   25   boy   188.9     NaN     NaN
# df8's single A row is combined with each of df7_1's two A rows; the merged 2*1 block is equivalent to
# df7_1[first A row] + df8
# df7_1[second A row] + df8

# Example 3: A duplicated vs A duplicated
>>> df7_1.join(df8_1)
   age   sex  height  weight address
A    5  girl   130.1    80.5      上海
A    5  girl   130.1   105.3      武汉
A   18  gril   175.3    80.5      上海
A   18  gril   175.3   105.3      武汉
B   20   boy   180.5   140.4      西安
D   25   boy   188.9     NaN     NaN
# every A row of df7_1 is combined with every A row of df8_1; the merged 2*2 block is equivalent to
# df7_1[first A row] + df8_1[first A row]
# df7_1[first A row] + df8_1[second A row]
# df7_1[second A row] + df8_1[first A row]
# df7_1[second A row] + df8_1[second A row]

Summary: duplicated indexes produce every combination. With the examples above this is easy to see: if df7 has m rows with index A and df8 has n rows with index A, the join produces m*n rows with index A, one per combination!

Using the on parameter

Joins on the column or index named by on; default None. Note that the key must exist in both datasets as a column or an index.

>>> df7 
   age   sex  height
A    5  girl   130.1
B   20   boy   180.5
C   18  gril   175.3
D   25   boy   188.9

>>> df9 = pd.DataFrame({"age":[5, 18, 10, 16], "weight": [80.5, 130.4, 105.3, 125.1], "address":["上海", "西安", "武汉", "北京"]})
>>> df9
   age  weight address
0    5    80.5      上海
1   18   130.4      西安
2   10   105.3      武汉
3   16   125.1      北京

'''
Be sure to understand the differences among the following cases!!!
set_index() turns a column into the index; a standalone example of it is at the end of this code block
'''
# Example 1: the key column must first be made the index with set_index(), otherwise an error is raised!!!
>>> df7.join(df9.set_index("age"), on = "age")
   age   sex  height  weight address
A    5  girl   130.1    80.5      上海
B   20   boy   180.5     NaN     NaN
C   18  gril   175.3   130.4      西安
D   25   boy   188.9     NaN     NaN
# set_index() first turns the age column into the index
# then on = "age" merges on the age column
# rows with equal age values are merged together; the result keeps df7's index

# Example 2
>>> df7.set_index("age").join(df9.set_index("age"), on = "age")
      sex  height  weight address
age                              
5    girl   130.1    80.5      上海
20    boy   180.5     NaN     NaN
18   gril   175.3   130.4      西安
25    boy   188.9     NaN     NaN
# set_index() changed df7's index,
# so the result resembles Example 1, but the index is now age instead of the original ABCD

# Example 3
>>> df7.set_index("age").join(df9, on = "age")
      sex  height  age  weight address
age                                   
5    girl   130.1  NaN     NaN     NaN
20    boy   180.5  NaN     NaN     NaN
18   gril   175.3  NaN     NaN     NaN
25    boy   188.9  NaN     NaN     NaN
# here df7's index was first changed to age,
# but df9 has no age index, so the df9 side of the merge is all NaN

# set_index() example
>>> df9
   age  weight address
0    5    80.5      上海
1   18   130.4      西安
2   10   105.3      武汉
3   16   125.1      北京
>>> df9.set_index("age")
     weight address
age                
5      80.5      上海
18    130.4      西安
10    105.3      武汉
16    125.1      北京

Summary: join() always merges on the index; keep that in mind and everything else falls into place!

Using the how parameter

The join type, one of {'left', 'right', 'outer', 'inner'}; the default is a left join.

>>> df7 
   age   sex  height
A    5  girl   130.1
B   20   boy   180.5
C   18  gril   175.3
D   25   boy   188.9

>>> df9
   age  weight address
0    5    80.5      上海
1   18   130.4      西安
2   10   105.3      武汉
3   16   125.1      北京

>>> df9.set_index("age")
     weight address
age                
5      80.5      上海
18    130.4      西安
10    105.3      武汉
16    125.1      北京
'''
Note that the only ages the two datasets share are 5 and 18
'''

# left join
>>> df7.join(df9.set_index("age"), on = "age")
   age   sex  height  weight address
A    5  girl   130.1    80.5      上海
B   20   boy   180.5     NaN     NaN
C   18  gril   175.3   130.4      西安
D   25   boy   188.9     NaN     NaN
# the index comes from df7 and df7 is the baseline; the rows of df9 with age 5 and 18 are merged in, the rest become NaN
# joining df7 and df9 directly (on their indexes) as a left join:
>>> df7.join(df9, how = "left", rsuffix = "_what") 
   age   sex  height  age_what  weight address
A    5  girl   130.1       NaN     NaN     NaN
B   20   boy   180.5       NaN     NaN     NaN
C   18  gril   175.3       NaN     NaN     NaN
D   25   boy   188.9       NaN     NaN     NaN
# rsuffix is explained further below; ignore it for now

# right join
>>> df7.join(df9.set_index("age"), on = "age", how = "right")
     age   sex  height  weight address
A      5  girl   130.1    80.5      上海
C     18  gril   175.3   130.4      西安
NaN   10   NaN     NaN   105.3      武汉
NaN   16   NaN     NaN   125.1      北京
# the index still comes from df7 (an effect of on = "age"; normally it would come from df9, as below), but df9 is the baseline: ages 5 and 18 exist in both, while the other ages of df9 have no match in df7 and become NaN
# joining df7 and df9 directly (on their indexes) as a right join:
>>> df7.join(df9, how = "right", rsuffix = "_what") 
   age  sex  height  age_what  weight address
0  NaN  NaN     NaN         5    80.5      上海
1  NaN  NaN     NaN        18   130.4      西安
2  NaN  NaN     NaN        10   105.3      武汉
3  NaN  NaN     NaN        16   125.1      北京

# inner join
>>> df7.join(df9.set_index("age"), on = "age", how = "inner")
   age   sex  height  weight address
A    5  girl   130.1    80.5      上海
C   18  gril   175.3   130.4      西安
# only the rows present in both df7 and df9 are kept, effectively an intersection
# joining df7 and df9 directly (on their indexes) as an inner join:
>>> df7.join(df9, how = "inner", rsuffix = "_what")
Empty DataFrame
Columns: [age, sex, height, age_what, weight, address]
Index: []
# df7's index is A, B, C, D while df9's is 0, 1, 2, 3; as noted, join() merges on the index, and the two share nothing, so the intersection is empty

# outer join
>>> df7.join(df9.set_index("age"), on = "age", how = "outer")
     age   sex  height  weight address
A      5  girl   130.1    80.5      上海
B     20   boy   180.5     NaN     NaN
C     18  gril   175.3   130.4      西安
D     25   boy   188.9     NaN     NaN
NaN   10   NaN     NaN   105.3      武汉
NaN   16   NaN     NaN   125.1      北京
# every row of df7 and df9 appears, effectively a union, with NaN padding the rest
# because the age index of df9.set_index("age") became a column here, the rows whose age had no match end up with a NaN index
# joining df7 and df9 directly (on their indexes) as an outer join:
>>> df7.join(df9, how = "outer", rsuffix = "_what") 
    age   sex  height  age_what  weight address
A   5.0  girl   130.1       NaN     NaN     NaN
B  20.0   boy   180.5       NaN     NaN     NaN
C  18.0  gril   175.3       NaN     NaN     NaN
D  25.0   boy   188.9       NaN     NaN     NaN
0   NaN   NaN     NaN       5.0    80.5      上海
1   NaN   NaN     NaN      18.0   130.4      西安
2   NaN   NaN     NaN      10.0   105.3      武汉
3   NaN   NaN     NaN      16.0   125.1      北京

Summary:

  1. how = "left": left join, using the left frame's index;
  2. how = "right": right join, using the right frame's index;
  3. how = "inner": inner join, equivalent to taking the intersection;
  4. how = "outer": outer join, equivalent to taking the union, with missing data filled with NaN;
  5. The two variants compared above (df7 with df9.set_index("age") versus df7 with df9 directly) show that after merging, the age index of df9.set_index("age") becomes a column, so the rows that do not line up with A, B, C, D end up with a NaN index, which is easiest to see in the "outer" example.
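The four how values in this summary can be checked with a minimal, self-contained sketch (the frames left and right are made up here, not the df7/df9 data above):

```python
import pandas as pd

# Two tiny frames whose indexes overlap only at "a"
left = pd.DataFrame({"x": [1, 2]}, index=["a", "b"])
right = pd.DataFrame({"y": [10, 30]}, index=["a", "c"])

print(left.join(right, how="left").index.tolist())   # ['a', 'b']
print(left.join(right, how="right").index.tolist())  # ['a', 'c']
print(left.join(right, how="inner").index.tolist())  # ['a']
print(left.join(right, how="outer").index.tolist())  # ['a', 'b', 'c']
```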

Using lsuffix / rsuffix

These define the suffix appended to overlapping column names of the left / right frame when joining; by default none is added.

>>> df10 = pd.DataFrame({"age":[5, 18, 10, 16], "address":["上海", "西安", "武汉", "北京"]}, index = list("1BC2"))
>>> df10
   age address
1    5      上海
B   18      西安
C   10      武汉
2   16      北京
>>> df7.join(df10)
ValueError: columns overlap but no suffix specified: Index(['age'], dtype='object')
'''
df7 and df10 cannot be joined directly because they share a column name, "age";
lsuffix or rsuffix must be used to rename the clash before joining
'''

'''
Using lsuffix or rsuffix renames the duplicated names automatically,
appending the suffix to the duplicate on the left (lsuffix) or on the right (rsuffix)
'''
>>> df7.join(df10, rsuffix = "_real")
   age   sex  height  age_real address
A    5  girl   130.1       NaN     NaN
B   20   boy   180.5      18.0      西安
C   18  gril   175.3      10.0      武汉
D   25   boy   188.9       NaN     NaN

>>> df7.join(df10, lsuffix = "_real") 
   age_real   sex  height   age address
A         5  girl   130.1   NaN     NaN
B        20   boy   180.5  18.0      西安
C        18  gril   175.3  10.0      武汉
D        25   boy   188.9   NaN     NaN

The merge() method

Merges DataFrame or Series objects with a database-style join.

DataFrame.merge(right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)
Parameter descriptions:
right: the object to merge with; a DataFrame or Series
how: the join type, one of {'left', 'right', 'outer', 'inner'}; default "inner"
on: column or index level name(s) to join on; must be present in both frames
left_on: column or index level name(s) in the left frame to join on; may also be an array or list; default None
right_on: column or index level name(s) in the right frame to join on; may also be an array or list; default None
left_index: use the left frame's index as the join key; if the index is multi-level, the other frame's key must have the same number of levels; default False
right_index: same as left_index, for the right frame; default False
sort: whether to sort the join keys lexicographically in the result
suffixes: default ("_x", "_y"); the suffixes appended to overlapping names on the left and right; at least one of them must not be None (i.e. name clashes must be resolvable)
copy: default True
indicator: bool or str, default False; see the documentation for details
validate: check whether the merge is of the specified type; see the documentation for details
'''
The method works much like join(), but with some differences; overlapping material is not repeated here
'''
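The key difference can be seen in a small sketch (the frames a and b are made up): merge defaults to an inner join on the shared column, while join aligns on the index and requires suffixes for overlapping column names.

```python
import pandas as pd

a = pd.DataFrame({"k": [1, 2], "u": ["p", "q"]})
b = pd.DataFrame({"k": [2, 3], "v": ["r", "s"]})

# merge: inner join on the shared column "k" by default -> one row, k = 2
m = a.merge(b)
print(m.shape)           # (1, 3)

# join: aligns on the index (0, 1 on both sides); the shared column
# name "k" must be disambiguated with a suffix
j = a.join(b, rsuffix="_b")
print(list(j.columns))   # ['k', 'u', 'k_b', 'v']
```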

>>> df7 
   age   sex  height
A    5  girl   130.1
B   20   boy   180.5
C   18  gril   175.3
D   25   boy   188.9

>>> df11 = pd.DataFrame({"age":[5, 18, 10, 16], "weight": [80.5, 130.4, 105.3, 125.1], "address":["上海", "西安", "武汉", "北京"]}, index = list("0AC2"))
>>> df11 
   age  weight address
0    5    80.5      上海
A   18   130.4      西安
C   10   105.3      武汉
2   16   125.1      北京

>>> df12 = pd.DataFrame({"age": [5, 18, 16, 21], "sex": ["girl", "male", "gril", "boy"], "weight": [80.5, 130.4, 105.3, 125.1]}, index = list("ABCD"))
>>> df12 
   age   sex  weight
A    5  girl    80.5
B   18  male   130.4
C   16  gril   105.3
D   21   boy   125.1

'''
Note:
1. df7 and df11 both contain 5 and 18 in the age column;
2. df7 and df12 both contain 5 and 18 in age, and "girl", "gril", "boy" in sex; pay particular attention to where the matching elements sit.
'''

# Inner join by default, taking the intersection and merging on the shared column(s)
>>> df7.merge(df11) 	# df7 with df11; the shared column here is age
   age   sex  height  weight address
0    5  girl   130.1    80.5      上海
1   18  gril   175.3   130.4      西安		
>>> df7.merge(df12) 	# df7 with df12; the shared columns here are age and sex
   age   sex  height  weight
0    5  girl   130.1    80.5

# Both sides merge on the age column; the other overlapping column, sex, is renamed automatically using the default suffixes = ("_x", "_y")
>>> df7.merge(df12, left_on = "age", right_on = "age")
   age sex_x  height sex_y  weight
0    5  girl   130.1  girl    80.5
1   18  gril   175.3  male   130.4

# left_index and right_index match on the left and right index for the merge
>>> df7.merge(df11, left_index = True, right_index = True)
   age_x   sex  height  age_y  weight address
A      5  girl   130.1     18   130.4      西安
C     18  gril   175.3     10   105.3      武汉	

Grouping and aggregation in pandas

The groupby method

The groupby function groups DataFrame / Series data; its main parameter is by. Its use is demonstrated below, and a fuller description can be found in the documentation.

DataFrame.groupby(by=None, axis=0, level=None, as_index=True, sort=True, group_keys=True, squeeze=<object object>, observed=False, dropna=True)
'''
Iterating over groups
'''
>>> df14 = pd.DataFrame(
...     {"name": ["Sai", "Shindou", "Toya", "Violet", "LinS", "NiH", "MeiCS", "Edward", "Alphonse", "Winry"],
...      "age": [1000, 13, 12, 15, 19, 25, 27, 9, 9, 9],
...      "height": [np.NaN, 152.3, 161.2, 165.3, 185.3, 176.8, 186.2, 130.2, 136.7, 135.2],
...      "sex": ["men", None, "men", "women", "men", "women", "men", "men", "men", "women"],
...      "good at": ["go", "go", "go", "message", "warcraft", "warcraft", "strategy", "alchemy", "alchemy", "machinery"],
...      "nationality": ["JPN", "JPN", "JPN", "UnKnown", "CN", "CN", "CN", "Amestris", "Amestris", "Amestris"]})
>>> df14
       name   age  height    sex    good at nationality
0       Sai  1000     NaN    men         go         JPN
1   Shindou    13   152.3   None         go         JPN
2      Toya    12   161.2    men         go         JPN
3    Violet    15   165.3  women    message     UnKnown
4      LinS    19   185.3    men   warcraft          CN
5       NiH    25   176.8  women   warcraft          CN
6     MeiCS    27   186.2    men   strategy          CN
7    Edward     9   130.2    men    alchemy    Amestris
8  Alphonse     9   136.7    men    alchemy    Amestris
9     Winry     9   135.2  women  machinery    Amestris

>>> dfGroup = df14.groupby(by = "nationality")
>>> dfGroup
<pandas.core.groupby.generic.DataFrameGroupBy object at 0x000002C5F1F802C8>

As the output above shows, calling groupby does not directly return a grouped DataFrame or Series but a DataFrameGroupBy object; looping over it reveals what it stores, as demonstrated below:

# Group by nationality
>>> dfGroup = df14.groupby(by = "nationality")
>>> for i in dfGroup:
...     print(i)
...     print("---------------------------------------")
...
('Amestris',        name  age  height    sex    good at nationality
7    Edward    9   130.2    men    alchemy    Amestris
8  Alphonse    9   136.7    men    alchemy    Amestris
9     Winry    9   135.2  women  machinery    Amestris)
---------------------------------------
('CN',     name  age  height    sex   good at nationality
4   LinS   19   185.3    men  warcraft          CN
5    NiH   25   176.8  women  warcraft          CN
6  MeiCS   27   186.2    men  strategy          CN)
---------------------------------------
('JPN',       name   age  height   sex good at nationality
0      Sai  1000     NaN   men      go         JPN
1  Shindou    13   152.3  None      go         JPN
2     Toya    12   161.2   men      go         JPN)
---------------------------------------
('UnKnown',      name  age  height    sex  good at nationality
3  Violet   15   165.3  women  message     UnKnown)
---------------------------------------

The output shows that iteration yields tuples: the first element of each tuple is the group label, followed by the group's remaining data. A two-variable loop separates and prints them, as below.

>>> for i, j in dfGroup:
...     print(i)
...     print(j, "\n", type(j))
...     print("---------------------------------------")
...
Amestris
       name  age  height    sex    good at nationality
7    Edward    9   130.2    men    alchemy    Amestris
8  Alphonse    9   136.7    men    alchemy    Amestris
9     Winry    9   135.2  women  machinery    Amestris
 <class 'pandas.core.frame.DataFrame'>
---------------------------------------
CN
    name  age  height    sex   good at nationality
4   LinS   19   185.3    men  warcraft          CN
5    NiH   25   176.8  women  warcraft          CN
6  MeiCS   27   186.2    men  strategy          CN
 <class 'pandas.core.frame.DataFrame'>
---------------------------------------
JPN
      name   age  height   sex good at nationality
0      Sai  1000     NaN   men      go         JPN
1  Shindou    13   152.3  None      go         JPN
2     Toya    12   161.2   men      go         JPN
 <class 'pandas.core.frame.DataFrame'>
---------------------------------------
UnKnown
     name  age  height    sex  good at nationality
3  Violet   15   165.3  women  message     UnKnown
 <class 'pandas.core.frame.DataFrame'>
---------------------------------------
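Instead of looping, a single group can also be pulled out directly with get_group (a sketch on a made-up miniature of df14):

```python
import pandas as pd

df = pd.DataFrame({"name": ["LinS", "NiH", "Edward"],
                   "nationality": ["CN", "CN", "Amestris"]})
grouped = df.groupby("nationality")

# the group labels, and one group extracted as a DataFrame
print(sorted(grouped.groups))     # ['Amestris', 'CN']
print(grouped.get_group("CN"))    # the two CN rows
```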

Methods of the DataFrameGroupBy object

Some of the statistical methods are listed below; see the official documentation for more functions and their usage.

Function descriptions:
count: number of non-NA values in the group
sum: sum of the non-NA values in the group
mean: mean of the non-NA values in the group
median: arithmetic median of the non-NA values in the group
std: standard deviation of the non-NA values in the group
var: variance of the non-NA values in the group
min: minimum of the non-NA values in the group
max: maximum of the non-NA values in the group
>>> df14
       name   age  height    sex    good at nationality
0       Sai  1000     NaN    men         go         JPN
1   Shindou    13   152.3   None         go         JPN
2      Toya    12   161.2    men         go         JPN
3    Violet    15   165.3  women    message     UnKnown
4      LinS    19   185.3    men   warcraft          CN
5       NiH    25   176.8  women   warcraft          CN
6     MeiCS    27   186.2    men   strategy          CN
7    Edward     9   130.2    men    alchemy    Amestris
8  Alphonse     9   136.7    men    alchemy    Amestris
9     Winry     9   135.2  women  machinery    Amestris
>>> dfGroup = df14.groupby(by = "nationality")

'''
Some DataFrameGroupBy methods; note how NaN and None are handled
'''
# Count the entries in each group; neither NaN nor None is counted
>>> dfGroup.count()
             name  age  height  sex  good at
nationality
Amestris        3    3       3    3        3
CN              3    3       3    3        3
JPN             3    3       2    2        3
UnKnown         1    1       1    1        1

# Variance of each group, computed on numeric columns only
>>> dfGroup.var()
                       age     height
nationality
Amestris          0.000000  11.583333
CN               17.333333  26.903333
JPN          325052.333333  39.605000
UnKnown                NaN        NaN

# Minimum of each group; NaN is ignored, and the whole column containing None is dropped
>>> dfGroup.min()
                 name  age  height   good at
nationality
Amestris     Alphonse    9   130.2   alchemy
CN               LinS   19   176.8  strategy
JPN               Sai   12   152.3        go
UnKnown        Violet   15   165.3   message

# Maximum of each group; NaN is ignored, and the whole column containing None is dropped
>>> dfGroup.max()
               name   age  height    good at
nationality
Amestris      Winry     9   136.7  machinery
CN              NiH    27   186.2   warcraft
JPN            Toya  1000   161.2         go
UnKnown      Violet    15   165.3    message
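Several of the statistics above can also be computed in a single pass with agg (a sketch on made-up data):

```python
import pandas as pd

df = pd.DataFrame({"nationality": ["CN", "CN", "JPN"],
                   "age": [19, 25, 12]})

# one pass over the groups, three statistics at once
stats = df.groupby("nationality")["age"].agg(["count", "mean", "max"])
print(stats)
#              count  mean  max
# nationality
# CN               2  22.0   25
# JPN              1  12.0   12
```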

DataFrameGroupBy methods and indexing

'''
Grouping data by multiple conditions
'''

# First filter out the rows whose nationality is CN, then group by skill and count
>>> dfCN = df14[df14["nationality"] == "CN"]
>>> dfGroupCN = dfCN.groupby(by = "good at").count()
>>> dfGroupCN
          name  age  height  sex  nationality
good at
strategy     1    1       1    1            1
warcraft     2    2       2    2            2

# Group by skill and nationality, style 1
>>> dfGroup1 = df14.groupby(by = [df14["good at"], df14["nationality"]]).count()
>>> dfGroup1
                       name  age  height  sex
good at   nationality
alchemy   Amestris        2    2       2    2
go        JPN             3    3       2    2
machinery Amestris        1    1       1    1
message   UnKnown         1    1       1    1
strategy  CN              1    1       1    1
warcraft  CN              2    2       2    2

# Group by skill and nationality, style 2
>>> dfGroup2 = df14.groupby(by = ["good at", "nationality"]).count()
>>> dfGroup2
                       name  age  height  sex
good at   nationality
alchemy   Amestris        2    2       2    2
go        JPN             3    3       2    2
machinery Amestris        1    1       1    1
message   UnKnown         1    1       1    1
strategy  CN              1    1       1    1
warcraft  CN              2    2       2    2

'''
Note:
when a single column is selected first and then grouped, the multi-condition
style 2 above cannot be used, because the data is then a Series;
only style 1 works, as shown below
'''
# Style 2 cannot be used
>>> dfGroup3 = df14["nationality"].groupby(by = ["good at", "sex"]).count()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "E:\Anaconda_3\lib\site-packages\pandas\core\generic.py", line 7894, in groupby
    **kwargs
  File "E:\Anaconda_3\lib\site-packages\pandas\core\groupby\groupby.py", line 2522, in groupby
    return klass(obj, by, **kwds)
  File "E:\Anaconda_3\lib\site-packages\pandas\core\groupby\groupby.py", line 391, in __init__
    mutated=self.mutated,
  File "E:\Anaconda_3\lib\site-packages\pandas\core\groupby\grouper.py", line 621, in _get_grouper
    raise KeyError(gpr)
KeyError: 'good at'

# Only style 1 works
>>> dfGroup4 = df14["nationality"].groupby(by = [df14["good at"], df14["sex"]]).count()
>>> dfGroup4
good at    sex
alchemy    men      2
go         men      2
machinery  women    1
message    women    1
strategy   men      1
warcraft   men      1
           women    1
Name: nationality, dtype: int64

>>> type(dfGroup4)		# a Series is returned
<class 'pandas.core.series.Series'>

As the type shows, the result is a Series: a single column of values preceded by a composite index. To obtain a DataFrame instead, index with double brackets, as shown below.

>>> type(df14["name"])
<class 'pandas.core.series.Series'>
>>> type(df14[["name"]])
<class 'pandas.core.frame.DataFrame'>

# Double brackets return a DataFrame
>>> dfGroup5 = df14[["nationality"]].groupby(by = [df14["good at"], df14["sex"]]).count()
>>> dfGroup5
                 nationality
good at   sex
alchemy   men              2
go        men              2
machinery women            1
message   women            1
strategy  men              1
warcraft  men              1
          women            1
>>> type(dfGroup5)		# a DataFrame is returned
<class 'pandas.core.frame.DataFrame'>


'''
Other indexing variants
'''
# Group first, then index the desired column; returns a Series
>>> dfGroup6 = df14.groupby(by = ["good at", "sex"])["nationality"].count()
>>> dfGroup6
good at    sex
alchemy    men      2
go         men      2
machinery  women    1
message    women    1
strategy   men      1
warcraft   men      1
           women    1
Name: nationality, dtype: int64
>>> type(dfGroup6)
<class 'pandas.core.series.Series'>

# Group first, then index the desired column with double brackets; returns a DataFrame
>>> dfGroup7 = df14.groupby(by = ["good at", "sex"])[["nationality"]].count()
>>> dfGroup7
                 nationality
good at   sex
alchemy   men              2
go        men              2
machinery women            1
message   women            1
strategy  men              1
warcraft  men              1
          women            1
>>> type(dfGroup7)
<class 'pandas.core.frame.DataFrame'>

# The column can also be indexed at the very end, after aggregation
>>> dfGroup8 = df14.groupby(by = ["good at", "sex"]).count()[["nationality"]]
>>> dfGroup8
                 nationality
good at   sex
alchemy   men              2
go        men              2
machinery women            1
message   women            1
strategy  men              1
warcraft  men              1
          women            1
>>> type(dfGroup8)
<class 'pandas.core.frame.DataFrame'>

pandas indexes and composite indexes

Reassigning the index, de-duplicating, and related operations

>>> df14
       name   age  height    sex    good at nationality
0       Sai  1000     NaN    men         go         JPN
1   Shindou    13   152.3   None         go         JPN
2      Toya    12   161.2    men         go         JPN
3    Violet    15   165.3  women    message     UnKnown
4      LinS    19   185.3    men   warcraft          CN
5       NiH    25   176.8  women   warcraft          CN
6     MeiCS    27   186.2    men   strategy          CN
7    Edward     9   130.2    men    alchemy    Amestris
8  Alphonse     9   136.7    men    alchemy    Amestris
9     Winry     9   135.2  women  machinery    Amestris
'''
Reassigning the index
DataFrame.index = [ ]
Series.index = [ ]
'''
>>> df14.index = [string.ascii_uppercase[i] for i in range(10)]
>>> df14
       name   age  height    sex    good at nationality
A       Sai  1000     NaN    men         go         JPN
B   Shindou    13   152.3   None         go         JPN
C      Toya    12   161.2    men         go         JPN
D    Violet    15   165.3  women    message     UnKnown
E      LinS    19   185.3    men   warcraft          CN
F       NiH    25   176.8  women   warcraft          CN
G     MeiCS    27   186.2    men   strategy          CN
H    Edward     9   130.2    men    alchemy    Amestris
I  Alphonse     9   136.7    men    alchemy    Amestris
J     Winry     9   135.2  women  machinery    Amestris

'''
reindex can be understood as selecting by index from the frame
DataFrame.reindex(labels=None, index=None, columns=None, axis=None, method=None, copy=True, level=None, fill_value=nan, limit=None, tolerance=None)
Series.reindex(index=None, **kwargs)
Index.reindex(target, method=None, level=None, limit=None, tolerance=None)
'''
>>> df14.reindex(["A", "b"])
  name     age  height  sex good at nationality
A  Sai  1000.0     NaN  men      go         JPN
b  NaN     NaN     NaN  NaN     NaN         NaN


'''
Set a column of the current DataFrame as the index
DataFrame.set_index(keys, drop=True, append=False, inplace=False, verify_integrity=False)
'''
>>> df14.set_index("name")
           age  height    sex    good at nationality
name
Sai       1000     NaN    men         go         JPN
Shindou     13   152.3   None         go         JPN
Toya        12   161.2    men         go         JPN
Violet      15   165.3  women    message     UnKnown
LinS        19   185.3    men   warcraft          CN
NiH         25   176.8  women   warcraft          CN
MeiCS       27   186.2    men   strategy          CN
Edward       9   130.2    men    alchemy    Amestris
Alphonse     9   136.7    men    alchemy    Amestris
Winry        9   135.2  women  machinery    Amestris

# The drop parameter: keep the name column even after it becomes the index
>>> df14.set_index("name", drop = False)
              name   age  height    sex    good at nationality
name
Sai            Sai  1000     NaN    men         go         JPN
Shindou    Shindou    13   152.3   None         go         JPN
Toya          Toya    12   161.2    men         go         JPN
Violet      Violet    15   165.3  women    message     UnKnown
LinS          LinS    19   185.3    men   warcraft          CN
NiH            NiH    25   176.8  women   warcraft          CN
MeiCS        MeiCS    27   186.2    men   strategy          CN
Edward      Edward     9   130.2    men    alchemy    Amestris
Alphonse  Alphonse     9   136.7    men    alchemy    Amestris
Winry        Winry     9   135.2  women  machinery    Amestris

# Index values may repeat; the unique() method de-duplicates the data selected from the DataFrame
>>> df14["age"].unique()
array([1000,   13,   12,   15,   19,   25,   27,    9], dtype=int64)
>>> df14["name"].unique()
array(['Sai', 'Shindou', 'Toya', 'Violet', 'LinS', 'NiH', 'MeiCS',
       'Edward', 'Alphonse', 'Winry'], dtype=object)
       
# Set a composite (multi-level) index
>>> df15 = df14.set_index(["nationality", "name"])
>>> df15
                       age  height    sex    good at
nationality name
JPN         Sai       1000     NaN    men         go
            Shindou     13   152.3   None         go
            Toya        12   161.2    men         go
UnKnown     Violet      15   165.3  women    message
CN          LinS        19   185.3    men   warcraft
            NiH         25   176.8  women   warcraft
            MeiCS       27   186.2    men   strategy
Amestris    Edward       9   130.2    men    alchemy
            Alphonse     9   136.7    men    alchemy
            Winry        9   135.2  women  machinery

>>> df15.index
MultiIndex([(     'JPN',      'Sai'),
            (     'JPN',  'Shindou'),
            (     'JPN',     'Toya'),
            ( 'UnKnown',   'Violet'),
            (      'CN',     'LinS'),
            (      'CN',      'NiH'),
            (      'CN',    'MeiCS'),
            ('Amestris',   'Edward'),
            ('Amestris', 'Alphonse'),
            ('Amestris',    'Winry')],
           names=['nationality', 'name'])

Composite indexing on a Series

The level order of a composite index can be swapped with the swaplevel() method to suit different lookup needs, as the code below shows. A Series can be indexed directly with brackets, e.g. dfAge15["CN"]["LinS"] or dfAge15["CN"].

>>> dfAge15 = df15["good at"]
>>> dfAge15
nationality  name
JPN          Sai                go
             Shindou            go
             Toya               go
UnKnown      Violet        message
CN           LinS         warcraft
             NiH          warcraft
             MeiCS        strategy
Amestris     Edward        alchemy
             Alphonse      alchemy
             Winry       machinery
Name: good at, dtype: object

>>> dfAge15["CN"]["LinS"]
'warcraft'

>>> dfAge15["CN"]
name
LinS     warcraft
NiH      warcraft
MeiCS    strategy
Name: good at, dtype: object
>>> type(dfAge15["CN"])
<class 'pandas.core.series.Series'>

>>> dfAge16 = dfAge15.swaplevel()
>>> dfAge16
name      nationality
Sai       JPN                   go
Shindou   JPN                   go
Toya      JPN                   go
Violet    UnKnown          message
LinS      CN              warcraft
NiH       CN              warcraft
MeiCS     CN              strategy
Edward    Amestris         alchemy
Alphonse  Amestris         alchemy
Winry     Amestris       machinery
Name: good at, dtype: object

Composite indexing on a DataFrame

For a DataFrame, index with loc first, e.g. df15.loc["CN"].loc["LinS"]; plain brackets select columns first by default, e.g. df15["age"]["JPN"]. Both styles appear in the code below.

>>> df15
                       age  height    sex    good at
nationality name
JPN         Sai       1000     NaN    men         go
            Shindou     13   152.3   None         go
            Toya        12   161.2    men         go
UnKnown     Violet      15   165.3  women    message
CN          LinS        19   185.3    men   warcraft
            NiH         25   176.8  women   warcraft
            MeiCS       27   186.2    men   strategy
Amestris    Edward       9   130.2    men    alchemy
            Alphonse     9   136.7    men    alchemy
            Winry        9   135.2  women  machinery
            
# Select the rows first, then index
>>> df15.loc["CN"].loc["LinS"]
age              19
height        185.3
sex             men
good at    warcraft
Name: LinS, dtype: object

# Select the column first
>>> df15["age"]["JPN"]
name
Sai        1000
Shindou      13
Toya         12
Name: age, dtype: int64

# Swap the index levels to select starting from the inner level
>>> df15.swaplevel().loc["LinS"]
             age  height  sex   good at
nationality
CN            19   185.3  men  warcraft

'''
Series.swaplevel(i=- 2, j=- 1, copy=True)
MultiIndex.swaplevel(i=- 2, j=- 1)
DataFrame.swaplevel(i=- 2, j=- 1, axis=0)
'''
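Besides chained loc calls and swaplevel(), a tuple addresses both levels at once, and xs() reaches an inner level directly (a sketch on made-up data):

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("CN", "LinS"), ("CN", "NiH"), ("JPN", "Sai")],
    names=["nationality", "name"])
s = pd.Series([19, 25, 1000], index=idx)

# a tuple selects through both levels at once
print(s.loc[("CN", "NiH")])        # 25

# xs selects on the inner level without swapping levels first
print(s.xs("Sai", level="name"))   # the JPN row, value 1000
```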

Batch-processing columns/rows in pandas

  1. The apply function can batch-process a DataFrame;
  2. The map function only operates on a Series.

Operating on the whole table

def function(x):
	return x**2

df.apply(function)	# squares every element (function receives each column as a Series)
df.applymap(lambda x: x*x if x > 5 else x**3)	# for a per-element condition use applymap, which works element-wise; a scalar comparison inside DataFrame.apply would fail, since apply passes whole columns

Operating on a single column/row

def function(x):
	return x**2

# square the column named age in the table
df["age"].apply(function)
df["age"].apply(lambda x: x*x)

df["age"].map(function)
df["age"].map(lambda x: x*x)
df.loc["A", :].map(lambda x: x**2)
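Since the df above is only schematic, here is a self-contained version of the single-column case (the column and index names are made up):

```python
import pandas as pd

df = pd.DataFrame({"age": [5, 20, 18]}, index=["A", "B", "C"])

# apply and map behave the same for a simple element-wise function on a Series
squared = df["age"].apply(lambda x: x * x)
print(squared.tolist())                                # [25, 400, 324]
print(df["age"].map(lambda x: x * x).equals(squared))  # True
```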

Operating on multiple columns/rows

df['score'] = df.apply(lambda x: 0.5 * x['Math'] + 0.5 * x['English'], axis=1)
In [14]: df2 = pd.DataFrame(np.arange(12).reshape((3, 4)), index = ["A", "B", "C"], columns = ["D", "E", "F", "G"])

In [19]: df2
Out[19]: 
   D  E   F   G
A  0  1   2   3
B  4  5   6   7
C  8  9  10  11

In [20]: df2.eval("""
H = D + E + F
I = D * F
J = G - F
K = G / E""", inplace=True)

In [21]: df2
Out[21]: 
   D  E   F   G   H   I  J         K
A  0  1   2   3   3   0  1  3.000000
B  4  5   6   7  15  24  1  1.400000
C  8  9  10  11  27  80  1  1.222222

Time series in pandas

Generating time series with date_range()

This section covers the most commonly used method, date_range(); other methods can be looked up as needed.

pandas.date_range(start=None, end=None, periods=None, freq=None, tz=None, normalize=False, name=None, closed=None, **kwargs)

The parameters are briefly described below.

Parameter descriptions:
start: str, start of the generated dates
end: str, end of the generated dates
periods: int, number of dates to generate
freq: str / DateOffset, default 'D'; the frequency at which dates are generated within the range
tz: str / tzinfo, return times localized to the given time zone
normalize: bool, default False; normalize start and end to midnight before generating the range
name: str, default None; the name of the resulting DatetimeIndex
closed: one of {None, 'left', 'right'}; controls which ends of the interval are inclusive; default None, both ends inclusive
The freq parameter can take the aliases in the table below (see the documentation for more). A number can be prefixed to an alias to set the interval in days/months/..., e.g. freq = "10D" generates a date every 10 days.

Alias: offset type
D: Day
B: BusinessDay
H: Hour
T / min: Minute
S: Second
L / ms: Milli
U: Micro
M: MonthEnd
BM: BusinessMonthEnd
MS: MonthBegin
BMS: BusinessMonthBegin
'''
Parameter usage examples
'''
# freq = "D": one date per day
>>> pd.date_range(start = "20201226", end = "20210210", freq = "D")
DatetimeIndex(['2020-12-26', '2020-12-27', '2020-12-28', '2020-12-29',
               '2020-12-30', '2020-12-31', '2021-01-01', '2021-01-02',
               '2021-01-03', '2021-01-04', '2021-01-05', '2021-01-06',
               '2021-01-07', '2021-01-08', '2021-01-09', '2021-01-10',
               '2021-01-11', '2021-01-12', '2021-01-13', '2021-01-14',
               '2021-01-15', '2021-01-16', '2021-01-17', '2021-01-18',
               '2021-01-19', '2021-01-20', '2021-01-21', '2021-01-22',
               '2021-01-23', '2021-01-24', '2021-01-25', '2021-01-26',
               '2021-01-27', '2021-01-28', '2021-01-29', '2021-01-30',
               '2021-01-31', '2021-02-01', '2021-02-02', '2021-02-03',
               '2021-02-04', '2021-02-05', '2021-02-06', '2021-02-07',
               '2021-02-08', '2021-02-09', '2021-02-10'],
              dtype='datetime64[ns]', freq='D')
              
# A date every 10 days
>>> pd.date_range(start = "20201226", end = "20210210", freq = "10D")
DatetimeIndex(['2020-12-26', '2021-01-05', '2021-01-15', '2021-01-25',
               '2021-02-04'],
              dtype='datetime64[ns]', freq='10D')
              
# The last calendar day of each month that falls within the range
>>> pd.date_range(start = "20201226", end = "20210210", freq = "M")
DatetimeIndex(['2020-12-31', '2021-01-31'], dtype='datetime64[ns]', freq='M')

# Five dates at month-end frequency, starting from the start date (no end needed when periods is given)
>>> pd.date_range(start = "20201226", periods = 5, freq = "M")
DatetimeIndex(['2020-12-31', '2021-01-31', '2021-02-28', '2021-03-31',
               '2021-04-30'],
              dtype='datetime64[ns]', freq='M')

# Five evenly spaced timestamps within the range
>>> pd.date_range(start = "20201226", end = "20210210", periods = 5)
DatetimeIndex(['2020-12-26 00:00:00', '2021-01-06 12:00:00',
               '2021-01-18 00:00:00', '2021-01-29 12:00:00',
               '2021-02-10 00:00:00'],
              dtype='datetime64[ns]', freq=None)


# Timestamps localized to the Asia/Hong_Kong time zone
>>> pd.date_range(start = "20201226", end = "20210210", periods = 5, tz = "Asia/Hong_Kong")
DatetimeIndex(['2020-12-26 00:00:00+08:00', '2021-01-06 12:00:00+08:00',
               '2021-01-18 00:00:00+08:00', '2021-01-29 12:00:00+08:00',
               '2021-02-10 00:00:00+08:00'],
              dtype='datetime64[ns, Asia/Hong_Kong]', freq=None)


# Without normalizing to midnight
>>> pd.date_range(start = "20201226 09:10:15", end = "20210210 19:12:30", periods = 5, normalize = False)
DatetimeIndex([       '2020-12-26 09:10:15', '2021-01-06 23:40:48.750000',
               '2021-01-18 14:11:22.500000', '2021-01-30 04:41:56.250000',
                      '2021-02-10 19:12:30'],
              dtype='datetime64[ns]', freq=None)

# Normalize the start/end to midnight before generating, normalize = True
>>> pd.date_range(start = "20201226 09:10:15", end = "20210210 19:12:30", periods = 5, normalize = True)
DatetimeIndex(['2020-12-26 00:00:00', '2021-01-06 12:00:00',
               '2021-01-18 00:00:00', '2021-01-29 12:00:00',
               '2021-02-10 00:00:00'],
              dtype='datetime64[ns]', freq=None)


# closed defaults to None: both start and end are included
>>> pd.date_range(start = "20201226", end = "20201229", freq = "D")
DatetimeIndex(['2020-12-26', '2020-12-27', '2020-12-28', '2020-12-29'], dtype='datetime64[ns]', freq='D')

# closed = "left": start included, end excluded
>>> pd.date_range(start = "20201226", end = "20201229", freq = "D", closed = "left")
DatetimeIndex(['2020-12-26', '2020-12-27', '2020-12-28'], dtype='datetime64[ns]', freq='D')

# closed = "right": end included, start excluded
>>> pd.date_range(start = "20201226", end = "20201229", freq = "D", closed = "right")
DatetimeIndex(['2020-12-27', '2020-12-28', '2020-12-29'], dtype='datetime64[ns]', freq='D')

'''
PeriodIndex
class pandas.PeriodIndex(data=None, ordinal=None, freq=None, tz=None, dtype=None, copy=False, name=None, **fields)
'''
>>> dfTime4 = pd.DataFrame({"year": [2020, 2020, 2021, 2021], "month": [12, 12, 1, 1], "day": [28, 30, 1, 5], "hour": [18, 5, 22, 6]})
>>> dfTime4
   year  month  day  hour
0  2020     12   28    18
1  2020     12   30     5
2  2021      1    1    22
3  2021      1    5     6

# Frequency Hour: assemble the separate components into timestamps
>>> pd.PeriodIndex(year = dfTime4["year"], month = dfTime4["month"], day = dfTime4["day"], hour = dfTime4["hour"], freq = "H")
PeriodIndex(['2020-12-28 18:00', '2020-12-30 05:00', '2021-01-01 22:00',
             '2021-01-05 06:00'],
            dtype='period[H]', freq='H')
# Frequency Day
>>> pd.PeriodIndex(year = dfTime4["year"], month = dfTime4["month"], day = dfTime4["day"], hour = dfTime4["hour"], freq = "D")
PeriodIndex(['2020-12-28', '2020-12-30', '2021-01-01', '2021-01-05'], dtype='period[D]', freq='D')

Converting to time series

The main function is to_datetime(), which can convert int / float / str / datetime / list / tuple / 1-d array / Series / DataFrame / dict-like objects to datetimes; see the documentation for details. The format argument handles non-standard formats: for example, format = "%Y年%m月%d日" converts "2021年1月15日" to "2021-01-15". The format codes are described in the strftime/strptime reference; the code below gives a brief demonstration.

pandas.to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, utc=None, format=None, exact=True, unit=None, infer_datetime_format=False, origin='unix', cache=True)
# define a DataFrame of times to convert
>>> dfTime = pd.DataFrame({"time": ["20201001 00:00:00", "20201105 07:30:51", "20201209 13:52:16", "20210115 23:15:20"]})
>>> dfTime
                time
0  20201001 00:00:00
1  20201105 07:30:51
2  20201209 13:52:16
3  20210115 23:15:20

# Convert with to_datetime()
>>> pd.to_datetime(dfTime["time"])
0   2020-10-01 00:00:00
1   2020-11-05 07:30:51
2   2020-12-09 13:52:16
3   2021-01-15 23:15:20
Name: time, dtype: datetime64[ns]

# Define times in a format that to_datetime() cannot parse directly
>>> dfTime1 = pd.DataFrame({"time": ["2020年10月1日 0点0分0秒", "2020年11月5日 7点30分51秒", "2020年12月9日 13点52分16秒", "2021年1月15日 23点15分20秒"]})
>>> dfTime1
                    time
0     2020年10月1日 0点0分0秒
1   2020年11月5日 7点30分51秒
2  2020年12月9日 13点52分16秒
3  2021年1月15日 23点15分20秒

# With format specified, to_datetime() can parse these strings
>>> pd.to_datetime(dfTime1["time"], format = "%Y年%m月%d日 %H点%M分%S秒")
0   2020-10-01 00:00:00
1   2020-11-05 07:30:51
2   2020-12-09 13:52:16
3   2021-01-15 23:15:20
Name: time, dtype: datetime64[ns]
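The errors parameter is also worth knowing: errors="coerce" turns unparseable entries into NaT instead of raising (a small sketch with made-up strings):

```python
import pandas as pd

s = pd.Series(["20201001", "not a date"])

# the bad entry becomes NaT rather than raising a ValueError
parsed = pd.to_datetime(s, errors="coerce")
print(parsed.isna().tolist())   # [False, True]
```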

Resampling time series

Resampling: converting a time series from one frequency to another.
Downsampling: converting high-frequency data to a lower frequency.
Upsampling: converting low-frequency data to a higher frequency.

The resample method performs resampling; a prerequisite is that the time column be set as the index. It converts the series to a new frequency and lets the result be aggregated. See the documentation for the details of resample.

DataFrame.resample(rule, axis=0, closed=None, label=None, convention='start', kind=None, loffset=None, base=None, on=None, level=None, origin='start_day', offset=None)
# define the data
>>> dfTime2 = pd.DataFrame({"time": ["2020年10月1日", "2020年10月12日", "2020年11月3日", "2020年12月15日", "2020年12月10日", "2020年12月30日"], "num": [15.2, 51.3, 39.5, 98.6, 43.7, 36.8]})
>>> dfTime2["time"] = pd.to_datetime(dfTime2["time"], format = "%Y年%m月%d日")
>>> dfTime2
        time   num
0 2020-10-01  15.2
1 2020-10-12  51.3
2 2020-11-03  39.5
3 2020-12-15  98.6
4 2020-12-10  43.7
5 2020-12-30  36.8
>>> dfTime3 = dfTime2.set_index("time")
>>> dfTime3
             num
time
2020-10-01  15.2
2020-10-12  51.3
2020-11-03  39.5
2020-12-15  98.6
2020-12-10  43.7
2020-12-30  36.8

# Count the entries falling within each month (resampled to month-end)
# e.g. in dfTime3 above, October has two entries
>>> dfTime3.resample("M").count()
            num
time
2020-10-31    2
2020-11-30    1
2020-12-31    3

# Mean: (15.2 + 51.3) / 2 = 33.25
>>> dfTime3.resample("M").mean()
              num
time
2020-10-31  33.25
2020-11-30  39.50
2020-12-31  59.70

# Sum: 15.2 + 51.3 = 66.5
>>> dfTime3.resample("M").sum()
              num
time
2020-10-31   66.5
2020-11-30   39.5
2020-12-31  179.1
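The examples above are all downsampling; upsampling works the same way, except the new, finer grid contains gaps that must be filled, e.g. with ffill() (a sketch on made-up data):

```python
import pandas as pd

s = pd.Series([1.0, 4.0],
              index=pd.to_datetime(["2020-10-01", "2020-10-03"]))

# upsample to a daily grid; the new 2020-10-02 slot is forward-filled
up = s.resample("D").ffill()
print(up.tolist())   # [1.0, 1.0, 4.0]
```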

An interesting merge puzzle

What follows records the behaviour when left_on, right_on, left_index, and right_index are mixed; a fully satisfying explanation is still lacking.
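When debugging merges like these, the indicator parameter is helpful: it adds a _merge column recording which side each row came from (a sketch on made-up frames, not the df7/df12 case itself):

```python
import pandas as pd

a = pd.DataFrame({"k": [1, 2]})
b = pd.DataFrame({"k": [2, 3]})

# _merge records each row's origin: left_only / both / right_only
out = a.merge(b, on="k", how="outer", indicator=True)
print(out["_merge"].astype(str).tolist())   # ['left_only', 'both', 'right_only']
```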

>>> df7
   age   sex  height
A    5  girl   130.1
B   20   boy   180.5
C   18  gril   175.3
D   25   boy   188.9
>>> df12
   age   sex  weight
A    5  girl    80.5
B   18  male   130.4
C   16  gril   105.3
D   21   boy   125.1

>>> df7.merge(df12, left_on = "sex", right_on = "sex", left_index = True)
   age_x   sex  height  age_y  weight
A      5  girl   130.1      5    80.5
D     20   boy   180.5     21   125.1
D     25   boy   188.9     21   125.1
C     18  gril   175.3     16   105.3

>>> df7.merge(df12, left_on = "sex", right_on = "sex", right_index = True)
Empty DataFrame
Columns: [sex, age_x, sex_x, height, age_y, sex_y, weight]
Index: []

>>> df7.merge(df12, left_on = "sex", right_on = "sex", left_index = True, right_index = True)
   age_x sex_x  height  age_y sex_y  weight
A      5  girl   130.1      5  girl    80.5
B     20   boy   180.5     18  male   130.4
C     18  gril   175.3     16  gril   105.3
D     25   boy   188.9     21   boy   125.1

>>> df7.merge(df12, right_on = "sex", left_index = True)
Empty DataFrame
Columns: [sex, age_x, sex_x, height, age_y, sex_y, weight]
Index: []

>>> df7.merge(df12, left_on = "sex", right_index = True)
Empty DataFrame
Columns: [sex, age_x, sex_x, height, age_y, sex_y, weight]
Index: []


>>> df7
   age   sex  height
A    5  girl   130.1
B   20   boy   180.5
C   18  gril   175.3
D   25   boy   188.9

>>> df11
   age  weight address
0    5    80.5      上海
A   18   130.4      西安
C   10   105.3      武汉
2   16   125.1      北京

>>> df7.merge(df11, left_on = "age", right_on = "age", left_index = True)
   age   sex  height  weight address
0    5  girl   130.1    80.5      上海
A   18  gril   175.3   130.4      西安

>>> df7.merge(df11, left_on = "age", right_on = "age", right_index = True)
ValueError: You are trying to merge on int64 and object columns. If you wish to proceed you should use pd.concat

>>> df7.merge(df11, left_on = "age", right_on = "age", left_index = True, right_index = True)
ValueError: You are trying to merge on int64 and object columns. If you wish to proceed you should use pd.concat
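The ValueError above comes from comparing an int64 key column on the left with an object (string) index on the right. A minimal sketch, with toy frames assumed for illustration rather than the df7/df11 above, of aligning the dtypes before the key-vs-index merge:

```python
import pandas as pd

# Hypothetical frames: the left key column is int64,
# while the right index holds the same numbers as strings
left = pd.DataFrame({"age": [5, 18, 25], "sex": ["girl", "gril", "boy"]})
right = pd.DataFrame({"weight": [80.5, 130.4]}, index=["5", "18"])

# Merging now would raise the "merge on int64 and object columns" error.
# Cast the index to int first, then the merge is well defined:
right.index = right.index.astype(int)
merged = left.merge(right, left_on="age", right_index=True)
print(merged)
```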


>>> df13 = pd.DataFrame({"age": [5, 18, 16, 21], "sex": ["girl", "male", "gril", "boy"], "weight": [80.5, 130.4, 105.3, 125.1]}, index = ["AA", "BB", "CC", "DD"])
>>> df13
    age   sex  weight
AA    5  girl    80.5
BB   18  male   130.4
CC   16  gril   105.3
DD   21   boy   125.1

>>> df7.merge(df13, left_on = "sex", right_on = "sex", left_index = True)
    age_x   sex  height  age_y  weight
AA      5  girl   130.1      5    80.5
DD     20   boy   180.5     21   125.1
DD     25   boy   188.9     21   125.1
CC     18  gril   175.3     16   105.3

>>> df7.merge(df13, left_on = "sex", right_on = "sex", right_index = True)
Empty DataFrame
Columns: [sex, age_x, sex_x, height, age_y, sex_y, weight]
Index: []

>>> df7.merge(df13, left_on = "sex", right_on = "sex", left_index = True, right_index = True)
Empty DataFrame
Columns: [sex, age_x, sex_x, height, age_y, sex_y, weight]
Index: []
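For reference, the combination pandas documents is one key specification per side: either a column (left_on/right_on) or the index (left_index/right_index), never both for the same side. Recent versions raise MergeError when they are mixed, which is why the puzzling outputs above can no longer be reproduced. A minimal sketch with toy frames (names assumed):

```python
import pandas as pd

# Toy frames: the right frame is keyed by sex values in its index
left = pd.DataFrame({"sex": ["girl", "boy"], "height": [130.1, 180.5]},
                    index=["A", "B"])
right = pd.DataFrame({"weight": [80.5, 125.1]}, index=["girl", "boy"])

# One key per side: a column on the left, the index on the right
ok = left.merge(right, left_on="sex", right_index=True)
print(ok)

# Mixing left_on with left_index on the same side is rejected by recent pandas:
try:
    left.merge(right, left_on="sex", left_index=True, right_index=True)
except ValueError as e:  # pandas.errors.MergeError subclasses ValueError
    print(type(e).__name__, e)
```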

I may flesh this section out further when time allows.

References

  1. pandas official documentation
  2. a video tutorial