Basic pandas operations

A summary of pandas knowledge

Fundamentals

Overview of pandas

pandas is well suited to handling the following kinds of data:

Tabular data with heterogeneously typed columns, as in an SQL table or an Excel spreadsheet;
Ordered and unordered (not necessarily fixed-frequency) time-series data;
Matrix data with row and column labels, homogeneously or heterogeneously typed;
Any other form of observational or statistical data set; the data does not need to be labeled at all before being placed into a pandas data structure.

The primary data structures in pandas are Series (one-dimensional) and DataFrame (two-dimensional), which handle the vast majority of typical use cases in finance, statistics, social science, and many areas of engineering. For R users, DataFrame provides everything that R's data.frame provides and more. pandas is built on top of NumPy and integrates well with other third-party scientific-computing libraries.

Data structures

Dimensions  Name       Description
1           Series     Labeled one-dimensional homogeneous array
2           DataFrame  Labeled, size-mutable, two-dimensional heterogeneous table

Selected strengths

The list below covers only some of pandas' strengths:

Easy handling of missing data (represented as NaN) in floating-point as well as non-floating-point data;
Size mutability: columns can be inserted into and deleted from DataFrame and higher-dimensional objects;
Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the labels can be ignored, letting Series and DataFrame computations align to the data automatically;
Powerful, flexible group-by functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data (see the sketch after this list);
Easy conversion of ragged, differently indexed data in other Python and NumPy data structures into DataFrame objects;
Intelligent label-based slicing, fancy indexing, and subsetting of large data sets;
Intuitive merging (merge) and joining (join) of data sets;
Flexible reshaping (reshape) and pivoting (pivot) of data sets;
Hierarchical labeling of axes (multiple labels per tick are possible);
Robust IO tools for loading data from text files (CSV and other delimited formats), Excel files, and databases, and for saving/loading data in the ultrafast HDF5 format;
Time-series functionality: date-range generation, frequency conversion, moving-window statistics, moving-window linear regression, date shifting, and more.
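
The group-by item deserves a tiny illustration. Below is a minimal sketch of split-apply-combine; the sales DataFrame here is made up for illustration and is not part of the original notes.

import pandas as pd

# a small made-up table of sales records
sales = pd.DataFrame({'city': ['BJ', 'BJ', 'SH', 'SH'],
                      'amount': [10, 20, 30, 40]})

# split by 'city', apply an aggregation, combine the results
print(sales.groupby('city')['amount'].sum())               # BJ -> 30, SH -> 70
print(sales.groupby('city')['amount'].agg(['sum', 'mean']))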

Creating objects

Objects can be created directly from a sequence of values, or built up from a dict.

>>> import pandas as pd
>>> import numpy as np
>>> s=pd.Series([1,2,6,np.nan,44,1])   #create a Series directly from a list of values
>>> s
0     1.0
1     2.0
2     6.0
3     NaN
4    44.0
5     1.0
dtype: float64
>>> dates=pd.date_range('20200101',periods=6)   #build a datetime index, then use it to create a DataFrame from a NumPy array with labeled columns
>>> 
>>> dates
DatetimeIndex(['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04',
               '2020-01-05', '2020-01-06'],
              dtype='datetime64[ns]', freq='D')
>>> df=pd.DataFrame(np.random.randn(6,4),index=dates,columns=['a','b','c','d'])
>>> df
                   a         b         c         d
2020-01-01 -1.112168  0.778275 -0.193154 -2.434284
2020-01-02 -0.802297  0.545922 -2.122220  0.324204
2020-01-03  0.116364 -0.814124  0.758326 -0.134940
2020-01-04  0.512205 -0.582522 -0.750738 -1.586458
2020-01-05 -0.091266  0.247491 -0.959177 -0.851816
2020-01-06  0.676351  1.216307 -1.722036 -0.295851

>>> df1=pd.DataFrame(np.arange(12).reshape((3,4)))
>>> df1
   0  1   2   3
0  0  1   2   3
1  4  5   6   7
2  8  9  10  11
#define a DataFrame from a dict of objects
>>> df2=pd.DataFrame({'A':1.,
		  'B':pd.Timestamp('20200102'),
		  'C':pd.Series(1,index=list(range(4)),dtype='float32'),
		  'D':np.array([3]*4,dtype='int32'),
		  'E':pd.Categorical(["test","train","test","train"]),
		  'F':'foo'})
>>> df2
     A          B    C  D      E    F
0  1.0 2020-01-02  1.0  3   test  foo
1  1.0 2020-01-02  1.0  3  train  foo
2  1.0 2020-01-02  1.0  3   test  foo
3  1.0 2020-01-02  1.0  3  train  foo

Basic attributes

>>> df2
     A          B    C  D      E    F
0  1.0 2020-01-02  1.0  3   test  foo
1  1.0 2020-01-02  1.0  3  train  foo
2  1.0 2020-01-02  1.0  3   test  foo
3  1.0 2020-01-02  1.0  3  train  foo
>>> df2.dtypes
A           float64
B    datetime64[ns]
C           float32
D             int32
E          category
F            object
dtype: object

>>> df2.index    #the row index
Int64Index([0, 1, 2, 3], dtype='int64')
>>> df2.columns   #the column labels
Index(['A', 'B', 'C', 'D', 'E', 'F'], dtype='object')
>>> df2.values   #the underlying data as a NumPy array
array([[1.0, Timestamp('2020-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
       [1.0, Timestamp('2020-01-02 00:00:00'), 1.0, 3, 'train', 'foo'],
       [1.0, Timestamp('2020-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
       [1.0, Timestamp('2020-01-02 00:00:00'), 1.0, 3, 'train', 'foo']],
      dtype=object)

>>> df2.describe()     #summary statistics (count, mean, std, quartiles) for the numeric columns
         A    C    D
count  4.0  4.0  4.0
mean   1.0  1.0  3.0
std    0.0  0.0  0.0
min    1.0  1.0  3.0
25%    1.0  1.0  3.0
50%    1.0  1.0  3.0
75%    1.0  1.0  3.0
max    1.0  1.0  3.0
>>> df2.T  #transpose
                     0  ...                    3
A                    1  ...                    1
B  2020-01-02 00:00:00  ...  2020-01-02 00:00:00
C                    1  ...                    1
D                    3  ...                    3
E                 test  ...                train
F                  foo  ...                  foo

[6 rows x 4 columns]

>>> df2.sort_index(axis=1,ascending=False)       #sort by the column labels, in descending order
     F      E  D    C          B    A
0  foo   test  3  1.0 2020-01-02  1.0
1  foo  train  3  1.0 2020-01-02  1.0
2  foo   test  3  1.0 2020-01-02  1.0
3  foo  train  3  1.0 2020-01-02  1.0
>>> df2.sort_index(axis=0,ascending=False)       #sort by the row index, in descending order
     A          B    C  D      E    F
3  1.0 2020-01-02  1.0  3  train  foo
2  1.0 2020-01-02  1.0  3   test  foo
1  1.0 2020-01-02  1.0  3  train  foo
0  1.0 2020-01-02  1.0  3   test  foo
>>> df2.sort_values(by='E')               #sort by the values in a column
     A          B    C  D      E    F
0  1.0 2020-01-02  1.0  3   test  foo
2  1.0 2020-01-02  1.0  3   test  foo
1  1.0 2020-01-02  1.0  3  train  foo
3  1.0 2020-01-02  1.0  3  train  foo

Selecting data

Selection by label
Selection by position
Mixed selection
Boolean indexing

>>> import pandas as pd
>>> import numpy as np
>>> dates=pd.date_range('20200101',periods=6)
>>> df=pd.DataFrame(np.arange(24).reshape((6,4)),index=dates,columns=['A','B','C','D'])
>>> df
             A   B   C   D
2020-01-01   0   1   2   3
2020-01-02   4   5   6   7
2020-01-03   8   9  10  11
2020-01-04  12  13  14  15
2020-01-05  16  17  18  19
2020-01-06  20  21  22  23

>>> print(df['A'])
2020-01-01     0
2020-01-02     4
2020-01-03     8
2020-01-04    12
2020-01-05    16
2020-01-06    20
Freq: D, Name: A, dtype: int32
>>> print(df.A)
2020-01-01     0
2020-01-02     4
2020-01-03     8
2020-01-04    12
2020-01-05    16
2020-01-06    20
Freq: D, Name: A, dtype: int32

>>> print(df['20200101':'20200104'])
             A   B   C   D
2020-01-01   0   1   2   3
2020-01-02   4   5   6   7
2020-01-03   8   9  10  11
2020-01-04  12  13  14  15
>>> print(df[0:3])
            A  B   C   D
2020-01-01  0  1   2   3
2020-01-02  4  5   6   7
2020-01-03  8  9  10  11

#select by label: .loc
>>> print(df.loc['20200101'])
A    0
B    1
C    2
D    3
Name: 2020-01-01 00:00:00, dtype: int32
>>> print(df.loc[:,['A','B']])
             A   B
2020-01-01   0   1
2020-01-02   4   5
2020-01-03   8   9
2020-01-04  12  13
2020-01-05  16  17
2020-01-06  20  21
>>> print(df.loc['20200101',['A','B']])
A    0
B    1
Name: 2020-01-01 00:00:00, dtype: int32

#select by position: .iloc
>>> print(df.iloc[3])    #the fourth row
A    12
B    13
C    14
D    15
Name: 2020-01-04 00:00:00, dtype: int32
>>> print(df.iloc[3,1])
13
>>> print(df.iloc[3:5,1:3])   #a contiguous slice
             B   C
2020-01-04  13  14
2020-01-05  17  18
>>> print(df.iloc[[1,3,5],1:3])    #a non-contiguous selection
             B   C
2020-01-02   5   6
2020-01-04  13  14
2020-01-06  21  22

#mixed selection (positions for rows, labels for columns)
#df.ix has been removed from pandas; use .loc with the index (or .iloc) instead
>>> print(df.loc[dates[[1,3,5]],['A','C']])
             A   C
2020-01-02   4   6
2020-01-04  12  14
2020-01-06  20  22

#Boolean indexing
>>> print(df)
             A   B   C   D
2020-01-01   0   1   2   3
2020-01-02   4   5   6   7
2020-01-03   8   9  10  11
2020-01-04  12  13  14  15
2020-01-05  16  17  18  19
2020-01-06  20  21  22  23
>>> print(df[df.A>8])
             A   B   C   D
2020-01-04  12  13  14  15
2020-01-05  16  17  18  19
2020-01-06  20  21  22  23
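
Boolean conditions can also be combined with & (and) and | (or); each condition needs its own parentheses because of operator precedence. A short sketch, not in the original notes, continuing with the df defined above:

# rows where A is greater than 4 AND D is less than 20
print(df[(df.A > 4) & (df.D < 20)])
# rows where A equals 0 OR A is greater than 16
print(df[(df.A == 0) | (df.A > 16)])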

Setting values

>>> import pandas as pd
>>> import numpy as np
>>> dates=pd.date_range('20200101',periods=6)
>>> df=pd.DataFrame(np.arange(24).reshape((6,4)),index=dates,columns=['A','B','C','D'])
>>> df
             A   B   C   D
2020-01-01   0   1   2   3
2020-01-02   4   5   6   7
2020-01-03   8   9  10  11
2020-01-04  12  13  14  15
2020-01-05  16  17  18  19
2020-01-06  20  21  22  23

>>> df.iloc[2,2]=1111
>>> print(df)
             A   B     C   D
2020-01-01   0   1     2   3
2020-01-02   4   5     6   7
2020-01-03   8   9  1111  11
2020-01-04  12  13    14  15
2020-01-05  16  17    18  19
2020-01-06  20  21    22  23
>>> df[df.A>4]=0
>>> print(df)
            A  B  C  D
2020-01-01  0  1  2  3
2020-01-02  4  5  6  7
2020-01-03  0  0  0  0
2020-01-04  0  0  0  0
2020-01-05  0  0  0  0
2020-01-06  0  0  0  0

>>> df=pd.DataFrame(np.arange(24).reshape((6,4)),index=dates,columns=['A','B','C','D'])

>>> df.A[df.A>4]=0   #only modify the values in column A (chained assignment; df.loc[df.A>4,'A']=0 is the safer idiom)
>>> print(df)
            A   B   C   D
2020-01-01  0   1   2   3
2020-01-02  4   5   6   7
2020-01-03  0   9  10  11
2020-01-04  0  13  14  15
2020-01-05  0  17  18  19
2020-01-06  0  21  22  23
>>> df['F']=np.nan
>>> print(df)
            A   B   C   D   F
2020-01-01  0   1   2   3 NaN
2020-01-02  4   5   6   7 NaN
2020-01-03  0   9  10  11 NaN
2020-01-04  0  13  14  15 NaN
2020-01-05  0  17  18  19 NaN
2020-01-06  0  21  22  23 NaN
>>> df['E']=pd.Series([1,2,3,4,5,6],index=pd.date_range('20200101',periods=6))
>>> print(df)
            A   B   C   D   F  E
2020-01-01  0   1   2   3 NaN  1
2020-01-02  4   5   6   7 NaN  2
2020-01-03  0   9  10  11 NaN  3
2020-01-04  0  13  14  15 NaN  4
2020-01-05  0  17  18  19 NaN  5
2020-01-06  0  21  22  23 NaN  6

Handling missing data

>>> import pandas as pd
>>> import numpy as np
>>> dates=pd.date_range('20200101',periods=6)
>>> df=pd.DataFrame(np.arange(24).reshape((6,4)),index=dates,columns=['A','B','C','D'])
>>> df.iloc[0,1]=np.nan
>>> df.iloc[1,2]=np.nan
>>> df
             A     B     C   D
2020-01-01   0   NaN   2.0   3
2020-01-02   4   5.0   NaN   7
2020-01-03   8   9.0  10.0  11
2020-01-04  12  13.0  14.0  15
2020-01-05  16  17.0  18.0  19
2020-01-06  20  21.0  22.0  23
#axis=0 drops rows, axis=1 drops columns
#how='any': drop if any value in the row/column is NaN; how='all': drop only if all values are NaN
>>> print(df.dropna(axis=0,how='any'))  #how in {'any','all'}
             A     B     C   D
2020-01-03   8   9.0  10.0  11
2020-01-04  12  13.0  14.0  15
2020-01-05  16  17.0  18.0  19
2020-01-06  20  21.0  22.0  23

>>> df=pd.DataFrame(np.arange(24).reshape((6,4)),index=dates,columns=['A','B','C','D'])
>>> df.iloc[0,1]=np.nan
>>> df.iloc[1,2]=np.nan
>>> print(df.fillna(value=0))
             A     B     C   D
2020-01-01   0   0.0   2.0   3
2020-01-02   4   5.0   0.0   7
2020-01-03   8   9.0  10.0  11
2020-01-04  12  13.0  14.0  15
2020-01-05  16  17.0  18.0  19
2020-01-06  20  21.0  22.0  23
>>> print(df.isnull())   #True wherever a value is missing
                A      B      C      D
2020-01-01  False   True  False  False
2020-01-02  False  False   True  False
2020-01-03  False  False  False  False
2020-01-04  False  False  False  False
2020-01-05  False  False  False  False
2020-01-06  False  False  False  False

>>> df.fillna(value=0)   #returns a new DataFrame; df itself is not modified
             A     B     C   D
2020-01-01   0   0.0   2.0   3
2020-01-02   4   5.0   0.0   7
2020-01-03   8   9.0  10.0  11
2020-01-04  12  13.0  14.0  15
2020-01-05  16  17.0  18.0  19
2020-01-06  20  21.0  22.0  23
>>> df
             A     B     C   D
2020-01-01   0   NaN   2.0   3
2020-01-02   4   5.0   NaN   7
2020-01-03   8   9.0  10.0  11
2020-01-04  12  13.0  14.0  15
2020-01-05  16  17.0  18.0  19
2020-01-06  20  21.0  22.0  23
>>> print(np.any(df.isnull())==True)   #check whether any value at all is missing
True
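
Besides a constant, fillna() also accepts per-column values, and dropna(axis=1) drops columns instead of rows. A brief sketch (my addition, not in the original notes) continuing with the same df:

# fill each NaN with the mean of its own column
# (the NaN in B becomes 13.0, the mean of column B)
print(df.fillna(value=df.mean()))

# drop every column that contains at least one NaN (here: B and C)
print(df.dropna(axis=1, how='any'))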

Importing and exporting data with pandas

import pandas as pd

# read data from a CSV file
data = pd.read_csv('student.csv')
print(data)

# save the DataFrame to a pickle file
data.to_pickle('student.pickle')
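
For the reverse direction, a minimal sketch assuming the same student.csv / student.pickle files as above:

import pandas as pd

# load the pickle back into a DataFrame
data = pd.read_pickle('student.pickle')

# save to CSV / Excel (to_excel requires the openpyxl package)
data.to_csv('student_copy.csv', index=False)
data.to_excel('student_copy.xlsx', index=False)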

Concatenation with concat

pandas provides a variety of facilities for combining Series and DataFrame objects, with various kinds of set logic for the indexes and relational-algebra functionality for join/merge-type operations.

>>> import pandas as pd
>>> import numpy as np
>>> df1=pd.DataFrame(np.ones((3,4))*0,columns=['a','b','c','d'])
>>> df2=pd.DataFrame(np.ones((3,4))*1,columns=['a','b','c','d'])
>>> df3=pd.DataFrame(np.ones((3,4))*2,columns=['a','b','c','d'])
>>> print(df1)
     a    b    c    d
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0
>>> print(df2)
     a    b    c    d
0  1.0  1.0  1.0  1.0
1  1.0  1.0  1.0  1.0
2  1.0  1.0  1.0  1.0
>>> print(df3)
     a    b    c    d
0  2.0  2.0  2.0  2.0
1  2.0  2.0  2.0  2.0
2  2.0  2.0  2.0  2.0
>>> res=pd.concat([df1,df2,df3],axis=0)
>>> print(res)
     a    b    c    d
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0
0  1.0  1.0  1.0  1.0
1  1.0  1.0  1.0  1.0
2  1.0  1.0  1.0  1.0
0  2.0  2.0  2.0  2.0
1  2.0  2.0  2.0  2.0
2  2.0  2.0  2.0  2.0
>>> res=pd.concat([df1,df2,df3],axis=0,ignore_index=True)
>>> print(res)
     a    b    c    d
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0
3  1.0  1.0  1.0  1.0
4  1.0  1.0  1.0  1.0
5  1.0  1.0  1.0  1.0
6  2.0  2.0  2.0  2.0
7  2.0  2.0  2.0  2.0
8  2.0  2.0  2.0  2.0

#re-create df1 and df2 with the overlapping indexes [1,2,3] and [2,3,4], then introduce some NaN values
>>> df1=pd.DataFrame(np.ones((3,4))*0,columns=['a','b','c','d'],index=[1,2,3])
>>> df2=pd.DataFrame(np.ones((3,4))*1,columns=['a','b','c','d'],index=[2,3,4])
>>> df1.iloc[0,1]=np.nan
>>> df2.iloc[1,2]=np.nan
>>> df1
     a    b    c    d
1  0.0  NaN  0.0  0.0
2  0.0  0.0  0.0  0.0
3  0.0  0.0  0.0  0.0
>>> df2
     a    b    c    d
2  1.0  1.0  1.0  1.0
3  1.0  1.0  NaN  1.0
4  1.0  1.0  1.0  1.0
>>> res=pd.concat([df1,df2],join='outer')  #join='outer' keeps the union of columns (missing entries become NaN); join='inner' keeps only the shared columns
>>> print(res)
     a    b    c    d
1  0.0  NaN  0.0  0.0
2  0.0  0.0  0.0  0.0
3  0.0  0.0  0.0  0.0
2  1.0  1.0  1.0  1.0
3  1.0  1.0  NaN  1.0
4  1.0  1.0  1.0  1.0
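
concat can also glue frames together side by side along the column axis; rows are then aligned on the index. A short sketch (my addition) using the df1 and df2 above:

# axis=1 concatenates along columns; only index labels 2 and 3
# appear in both frames here, the rest is filled with NaN
res = pd.concat([df1, df2], axis=1)
print(res)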

Appending with append

>>> import pandas as pd
>>> import numpy as np
>>> df1=pd.DataFrame(np.ones((3,4))*0,columns=['a','b','c','d'])
>>> df2=pd.DataFrame(np.ones((3,4))*1,columns=['a','b','c','d'])
>>> df1
     a    b    c    d
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0
>>> df2
     a    b    c    d
0  1.0  1.0  1.0  1.0
1  1.0  1.0  1.0  1.0
2  1.0  1.0  1.0  1.0
>>> res=df1.append(df2,ignore_index=True)
>>> res
     a    b    c    d
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0
3  1.0  1.0  1.0  1.0
4  1.0  1.0  1.0  1.0
5  1.0  1.0  1.0  1.0
>>> s1=pd.Series([1,2,3,4],index=['a','b','c','d'])
>>> s1
a    1
b    2
c    3
d    4
dtype: int64
>>> res=df1.append(s1,ignore_index=True)
>>> print(res)
     a    b    c    d
0  0.0  0.0  0.0  0.0
1  0.0  0.0  0.0  0.0
2  0.0  0.0  0.0  0.0
3  1.0  2.0  3.0  4.0
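
Note that DataFrame.append() was deprecated in pandas 1.4 and removed in pandas 2.0. The same results can be obtained with pd.concat, roughly as sketched below:

# equivalent of df1.append(df2, ignore_index=True)
res = pd.concat([df1, df2], ignore_index=True)

# equivalent of df1.append(s1, ignore_index=True):
# turn the Series into a one-row DataFrame first
res = pd.concat([df1, s1.to_frame().T], ignore_index=True)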

Merging with merge()

>>> import pandas as pd
>>> import numpy as np
# merging two df by key/keys. (may be used in database)
>>> left=pd.DataFrame({'key':['K0','K1','K2','K3'],
		       'A':['A0','A1','A2','A3'],
		       'B':['B0','B1','B2','B3']})
>>> right=pd.DataFrame({'key':['K0','K1','K2','K3'],
		       'C':['C0','C1','C2','C3'],
		       'D':['D0','D1','D2','D3']})
>>> left
  key   A   B
0  K0  A0  B0
1  K1  A1  B1
2  K2  A2  B2
3  K3  A3  B3
>>> right
  key   C   D
0  K0  C0  D0
1  K1  C1  D1
2  K2  C2  D2
3  K3  C3  D3
>>> res=pd.merge(left,right,on='key')
>>> print(res)
  key   A   B   C   D
0  K0  A0  B0  C0  D0
1  K1  A1  B1  C1  D1
2  K2  A2  B2  C2  D2
3  K3  A3  B3  C3  D3

# consider two keys
>>> left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
                             'key2': ['K0', 'K1', 'K0', 'K1'],
                             'A': ['A0', 'A1', 'A2', 'A3'],
                             'B': ['B0', 'B1', 'B2', 'B3']})
>>> right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
                              'key2': ['K0', 'K0', 'K0', 'K0'],
                              'C': ['C0', 'C1', 'C2', 'C3'],
                              'D': ['D0', 'D1', 'D2', 'D3']})
>>> left
  key1 key2   A   B
0   K0   K0  A0  B0
1   K0   K1  A1  B1
2   K1   K0  A2  B2
3   K2   K1  A3  B3
>>> right
  key1 key2   C   D
0   K0   K0  C0  D0
1   K1   K0  C1  D1
2   K1   K0  C2  D2
3   K2   K0  C3  D3
>>> res=pd.merge(left,right,on=['key1','key2'])   #the default is how='inner'
>>> res
  key1 key2   A   B   C   D
0   K0   K0  A0  B0  C0  D0
1   K1   K0  A2  B2  C1  D1
2   K1   K0  A2  B2  C2  D2
# how = ['left', 'right', 'outer', 'inner']
>>> res = pd.merge(left, right, on=['key1', 'key2'], how='left')
>>> res
  key1 key2   A   B    C    D
0   K0   K0  A0  B0   C0   D0
1   K0   K1  A1  B1  NaN  NaN
2   K1   K0  A2  B2   C1   D1
3   K1   K0  A2  B2   C2   D2
4   K2   K1  A3  B3  NaN  NaN

# indicator
>>> df1 = pd.DataFrame({'col1':[0,1], 'col_left':['a','b']})
>>> df2 = pd.DataFrame({'col1':[1,2,2],'col_right':[2,2,2]})
>>> df1
   col1 col_left
0     0        a
1     1        b
>>> df2
   col1  col_right
0     1          2
1     2          2
2     2          2
>>> res = pd.merge(df1, df2, on='col1', how='outer', indicator=True)
>>> res
   col1 col_left  col_right      _merge
0     0        a        NaN   left_only
1     1        b        2.0        both
2     2      NaN        2.0  right_only
3     2      NaN        2.0  right_only
>>> res = pd.merge(df1, df2, on='col1', how='outer', indicator='indicator_column')   #give the indicator a custom name
>>> res
   col1 col_left  col_right indicator_column
0     0        a        NaN        left_only
1     1        b        2.0             both
2     2      NaN        2.0       right_only
3     2      NaN        2.0       right_only
# merged by index
>>> left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
                                  'B': ['B0', 'B1', 'B2']},
                                  index=['K0', 'K1', 'K2'])
>>> right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
                                     'D': ['D0', 'D2', 'D3']},
                                      index=['K0', 'K2', 'K3'])
>>> left
     A   B
K0  A0  B0
K1  A1  B1
K2  A2  B2
>>> right
     C   D
K0  C0  D0
K2  C2  D2
K3  C3  D3
>>> res = pd.merge(left, right, left_index=True, right_index=True, how='outer')
>>> res
      A    B    C    D
K0   A0   B0   C0   D0
K1   A1   B1  NaN  NaN
K2   A2   B2   C2   D2
K3  NaN  NaN   C3   D3
>>> res = pd.merge(left, right, left_index=True, right_index=True, how='inner')
>>> res
     A   B   C   D
K0  A0  B0  C0  D0
K2  A2  B2  C2  D2
# handle overlapping
>>> boys = pd.DataFrame({'k': ['K0', 'K1', 'K2'], 'age': [1, 2, 3]})
>>> girls = pd.DataFrame({'k': ['K0', 'K0', 'K3'], 'age': [4, 5, 6]})
>>> boys
    k  age
0  K0    1
1  K1    2
2  K2    3
>>> girls
    k  age
0  K0    4
1  K0    5
2  K3    6
>>> res = pd.merge(boys, girls, on='k', suffixes=['_boy', '_girl'], how='inner')
>>> res
    k  age_boy  age_girl
0  K0        1         4
1  K0        1         5
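
For index-on-index merges like the "merged by index" example above, DataFrame.join is a convenient shorthand. A small sketch (my addition) reusing the left/right frames from that example:

# join merges on the index by default; with how='outer' this matches
# pd.merge(left, right, left_index=True, right_index=True, how='outer')
res = left.join(right, how='outer')
print(res)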

Plotting

>>> import pandas as pd
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> data=pd.Series(np.random.randn(1000),index=np.arange(1000))
>>> data=data.cumsum()
>>> data.plot()
<matplotlib.axes._subplots.AxesSubplot object at 0x0123F290>
>>> plt.show()
>>> data=pd.DataFrame(np.random.randn(1000,4),index=np.arange(1000),columns=list("ABCD"))
>>> data=data.cumsum()
>>> print(data.head(3))
          A         B         C         D
0 -0.893047 -0.423578 -1.246789  2.158506
1 -2.023930 -0.458917 -4.894671  2.304530
2 -1.693835 -1.826239 -3.805351  4.806164
>>> data.plot()
<matplotlib.axes._subplots.AxesSubplot object at 0x016E1030>
>>> plt.show()
>>> #other plot methods:
>>> #'bar','hist','box','kde','area','scatter','hexbin','pie'
>>> data.plot.scatter(x='A',y='B')
<matplotlib.axes._subplots.AxesSubplot object at 0x01AC9070>
>>> plt.show()
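
The other plot kinds listed in the comment above are called the same way. A quick sketch (my addition) continuing with the same data:

# histogram of column A
data['A'].plot.hist(bins=30)
plt.show()

# box plot of all four columns
data.plot.box()
plt.show()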