Python--pandas

pandas.read_excel

pandas.read_excel(io, sheet_name=0, header=0, names=None, index_col=None, usecols=None, squeeze=False, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skiprows=None, nrows=None, na_values=None, parse_dates=False, date_parser=None, thousands=None, comment=None, skipfooter=0, convert_float=True, **kwds)[source]

Read an Excel table into a pandas DataFrame

Parameters:

io : string, path object (pathlib.Path or py._path.local.LocalPath), file-like object, pandas ExcelFile, or xlrd workbook

The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/workbook.xlsx

sheet_name : string, int, mixed list of strings/ints, or None, default 0

Strings are used for sheet names; integers are used for zero-indexed sheet positions.

Lists of strings/integers are used to request multiple sheets.

Specify None to get all sheets.

str|int -> DataFrame is returned. list|None -> Dict of DataFrames is returned, with keys representing sheets.

Available Cases

  • Defaults to 0 -> 1st sheet as a DataFrame
  • 1 -> 2nd sheet as a DataFrame
  • “Sheet1” -> 1st sheet as a DataFrame
  • [0, 1, "Sheet5"] -> 1st, 2nd & 5th sheet as a dictionary of DataFrames
  • None -> All sheets as a dictionary of DataFrames
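
For example, a minimal sketch (assuming the tmp.xlsx workbook created in the Examples section below, which has a single default sheet named 'Sheet1'):

>>> sheets = pd.read_excel('tmp.xlsx', sheet_name=None)
>>> list(sheets)
['Sheet1']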

sheetname : string, int, mixed list of strings/ints, or None, default 0

Deprecated since version 0.21.0: Use sheet_name instead

header : int, list of ints, default 0

Row (0-indexed) to use for the column labels of the parsed DataFrame. If a list of integers is passed those row positions will be combined into a MultiIndex. Use None if there is no header.

names : array-like, default None

List of column names to use. If file contains no header row, then you should explicitly pass header=None
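
A minimal sketch (assuming the tmp.xlsx workbook from the Examples section below; the replacement names are arbitrary):

>>> df = pd.read_excel('tmp.xlsx', header=None, skiprows=[0],
...                    names=['idx', 'name', 'value'])
>>> list(df.columns)
['idx', 'name', 'value']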

index_col : int, list of ints, default None

Column (0-indexed) to use as the row labels of the DataFrame. Pass None if there is no such column. If a list is passed, those columns will be combined into a MultiIndex. If a subset of data is selected with usecols, index_col is based on the subset.
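
For example (again assuming the tmp.xlsx workbook from the Examples section below, whose first column holds the written index):

>>> df = pd.read_excel('tmp.xlsx', index_col=0)
>>> df.index.tolist()
[0, 1, 2]
>>> list(df.columns)
['Name', 'Value']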

parse_cols : int or list, default None

Deprecated since version 0.21.0: Pass in usecols instead.

usecols : int or list, default None

  • If None then parse all columns,
  • If int then indicates last column to be parsed
  • If list of ints then indicates list of column numbers to be parsed
  • If string then indicates comma separated list of Excel column letters and column ranges (e.g. “A:E” or “A,C,E:F”). Ranges are inclusive of both sides.

squeeze : boolean, default False

If the parsed data only contains one column then return a Series

dtype : Type name or dict of column -> type, default None

Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32} Use object to preserve data as stored in Excel and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.

New in version 0.20.0.

engine : string, default None

If io is not a buffer or path, this must be set to identify io. Acceptable values are None or xlrd

converters : dict, default None

Dict of functions for converting values in certain columns. Keys can either be integers or column labels, values are functions that take one input argument, the Excel cell content, and return the transformed content.
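
For example, a minimal sketch (assuming the tmp.xlsx workbook from the Examples section below):

>>> df = pd.read_excel('tmp.xlsx', converters={'Name': str.upper})
>>> df['Name'].tolist()
['STRING1', 'STRING2', 'STRING3']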

true_values : list, default None

Values to consider as True

New in version 0.19.0.

false_values : list, default None

Values to consider as False

New in version 0.19.0.

skiprows : list-like

Rows to skip at the beginning (0-indexed)

nrows : int, default None

Number of rows to parse

New in version 0.23.0.
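
For example (assuming the tmp.xlsx workbook from the Examples section below):

>>> pd.read_excel('tmp.xlsx', nrows=2)
      Name  Value
0  string1      1
1  string2      2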

na_values : scalar, str, list-like, or dict, default None

Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. By default the following values are interpreted as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.

keep_default_na : bool, default True

If na_values are specified and keep_default_na is False the default NaN values are overridden, otherwise they’re appended to.

verbose : boolean, default False

Indicate number of NA values placed in non-numeric columns

thousands : str, default None

Thousands separator for parsing string columns to numeric. Note that this parameter is only necessary for columns stored as TEXT in Excel, any numeric columns will automatically be parsed, regardless of display format.
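
A hypothetical sketch (report.xlsx is a made-up file assumed to contain a column stored as TEXT, e.g. '1,234'):

>>> df = pd.read_excel('report.xlsx', thousands=',')
>>> # a TEXT cell such as '1,234' is then parsed as the number 1234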

comment : str, default None

Comments out remainder of line. Pass a character or characters to this argument to indicate comments in the input file. Any data between the comment string and the end of the current line is ignored.

skip_footer : int, default 0

Deprecated since version 0.23.0: Pass in skipfooter instead.

skipfooter : int, default 0

Rows at the end to skip (0-indexed)

convert_float : boolean, default True

Convert integral floats to int (i.e., 1.0 -> 1). If False, all numeric data will be read in as floats: Excel stores all numbers as floats internally
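
For example (assuming the tmp.xlsx workbook from the Examples section below):

>>> pd.read_excel('tmp.xlsx')['Value'].tolist()
[1, 2, 3]
>>> pd.read_excel('tmp.xlsx', convert_float=False)['Value'].tolist()
[1.0, 2.0, 3.0]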

Returns:

parsed : DataFrame or Dict of DataFrames

DataFrame from the passed in Excel file. See notes in the sheet_name argument for more information on when a Dict of DataFrames is returned.

Examples

An example DataFrame written to a local file

>>> df_out = pd.DataFrame([('string1', 1),
...                        ('string2', 2),
...                        ('string3', 3)],
...                       columns=['Name', 'Value'])
>>> df_out
      Name  Value
0  string1      1
1  string2      2
2  string3      3
>>> df_out.to_excel('tmp.xlsx')

The file can be read using the file name as string or an open file object:

>>> pd.read_excel('tmp.xlsx')
      Name  Value
0  string1      1
1  string2      2
2  string3      3
>>> pd.read_excel(open('tmp.xlsx','rb'))
      Name  Value
0  string1      1
1  string2      2
2  string3      3

Index and header can be specified via the index_col and header arguments

>>> pd.read_excel('tmp.xlsx', index_col=None, header=None)
     0        1      2
0  NaN     Name  Value
1  0.0  string1      1
2  1.0  string2      2
3  2.0  string3      3

Column types are inferred but can be explicitly specified

>>> pd.read_excel('tmp.xlsx', dtype={'Name':str, 'Value':float})
      Name  Value
0  string1    1.0
1  string2    2.0
2  string3    3.0

True, False, and NA values, and thousands separators have defaults, but can be explicitly specified, too. Supply the values you would like as strings or lists of strings!

>>> pd.read_excel('tmp.xlsx',
...               na_values=['string1', 'string2'])
      Name  Value
0      NaN      1
1      NaN      2
2  string3      3

Comment lines in the Excel input file can be skipped using the comment kwarg

>>> df = pd.DataFrame({'a': ['1', '#2'], 'b': ['2', '3']})
>>> df.to_excel('tmp.xlsx', index=False)
>>> pd.read_excel('tmp.xlsx')
    a  b
0   1  2
1  #2  3
>>> pd.read_excel('tmp.xlsx', comment='#')
   a  b
0  1  2

 

pandas.DataFrame.sort_values

DataFrame.sort_values(by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last')[source]

Sort by the values along either axis

Parameters:

by : str or list of str

Name or list of names to sort by.

  • if axis is 0 or ‘index’ then by may contain index levels and/or column labels
  • if axis is 1 or ‘columns’ then by may contain column levels and/or index labels

Changed in version 0.23.0: Allow specifying index or column level names.
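
For example, a minimal sketch of sorting by a named index level (the frame here is made up):

>>> df_idx = pd.DataFrame({'col2': [2, 1, 9]},
...                       index=pd.Index(['b', 'c', 'a'], name='key'))
>>> df_idx.sort_values(by='key').index.tolist()
['a', 'b', 'c']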

axis : {0 or ‘index’, 1 or ‘columns’}, default 0

Axis to be sorted

ascending : bool or list of bool, default True

Sort ascending vs. descending. Specify a list for multiple sort orders. If this is a list of bools, it must match the length of by.

inplace : bool, default False

If True, perform operation in-place

kind : {‘quicksort’, ‘mergesort’, ‘heapsort’}, default ‘quicksort’

Choice of sorting algorithm. See also ndarray.np.sort for more information. mergesort is the only stable algorithm. For DataFrames, this option is only applied when sorting on a single column or label.
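
For example, a minimal sketch of a stable sort on tied keys (the frame here is made up):

>>> d = pd.DataFrame({'k': [1, 1, 0], 'v': ['a', 'b', 'c']})
>>> d.sort_values(by='k', kind='mergesort')['v'].tolist()
['c', 'a', 'b']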

na_position : {‘first’, ‘last’}, default ‘last’

first puts NaNs at the beginning, last puts NaNs at the end

Returns:

sorted_obj : DataFrame

Examples

>>> df = pd.DataFrame({
...     'col1' : ['A', 'A', 'B', np.nan, 'D', 'C'],
...     'col2' : [2, 1, 9, 8, 7, 4],
...     'col3': [0, 1, 9, 4, 2, 3],
... })
>>> df
    col1 col2 col3
0   A    2    0
1   A    1    1
2   B    9    9
3   NaN  8    4
4   D    7    2
5   C    4    3

Sort by col1

>>> df.sort_values(by=['col1'])
    col1 col2 col3
0   A    2    0
1   A    1    1
2   B    9    9
5   C    4    3
4   D    7    2
3   NaN  8    4

Sort by multiple columns

>>> df.sort_values(by=['col1', 'col2'])
    col1 col2 col3
1   A    1    1
0   A    2    0
2   B    9    9
5   C    4    3
4   D    7    2
3   NaN  8    4

Sort Descending

>>> df.sort_values(by='col1', ascending=False)
    col1 col2 col3
4   D    7    2
5   C    4    3
2   B    9    9
0   A    2    0
1   A    1    1
3   NaN  8    4

Putting NAs first

>>> df.sort_values(by='col1', ascending=False, na_position='first')
    col1 col2 col3
3   NaN  8    4
4   D    7    2
5   C    4    3
2   B    9    9
0   A    2    0
1   A    1    1

Changing the data type of columns in pandas

When creating a DataFrame

If you are creating a DataFrame, you can specify the type directly via the dtype parameter:

 
df = pd.DataFrame(a, dtype='float')                            # example 1
df = pd.DataFrame(data=d, dtype=np.int8)                       # example 2
df = pd.read_csv("somefile.csv", dtype={'column_name': str})   # example 3
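
For instance, a quick check that the requested dtype is honored (a minimal sketch with made-up data):

>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame({'x': [1, 2, 3]}, dtype=np.int8)
>>> df.dtypes
x    int8
dtype: object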

For a single column or Series

Below is an example of a Series of strings whose dtype is object:

 
>>> s = pd.Series(['1', '2', '4.7', 'pandas', '10'])
>>> s
0         1
1         2
2       4.7
3    pandas
4        10
dtype: object

Use to_numeric to convert it to a numeric type. By default it cannot handle the alphabetic string 'pandas':

 
>>> pd.to_numeric(s)  # or pd.to_numeric(s, errors='raise')
ValueError: Unable to parse string

Invalid values can be coerced to NaN, as shown below:

 
>>> pd.to_numeric(s, errors='coerce')
0     1.0
1     2.0
2     4.7
3     NaN
4    10.0
dtype: float64

A third option is to ignore the operation when an invalid value is encountered:

 
>>> pd.to_numeric(s, errors='ignore')
# the original Series is returned untouched

For multiple columns or an entire DataFrame

If you want to apply this operation to multiple columns, processing each column one at a time is tedious, so you can use DataFrame.apply to handle every column.

Given a DataFrame:

 
>>> a = [['a', '1.2', '4.2'], ['b', '70', '0.03'], ['x', '5', '0']]
>>> df = pd.DataFrame(a, columns=['col1', 'col2', 'col3'])
>>> df
  col1 col2  col3
0    a  1.2   4.2
1    b   70  0.03
2    x    5     0

Then you can write:

df[['col2','col3']] = df[['col2','col3']].apply(pd.to_numeric)

Then 'col2' and 'col3' have the float64 type, as desired.

However, you may not know in advance which columns can be reliably converted to a numeric type. In that case, set the errors parameter:

df.apply(pd.to_numeric, errors='ignore')

The function will then be applied to the whole DataFrame; columns that can be converted to a numeric type will be converted, while columns that cannot (for example, because they contain non-numeric strings or dates) will be left alone.
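
For instance, checking the resulting dtypes for the DataFrame above (col1 cannot be converted and stays object):

>>> df.apply(pd.to_numeric, errors='ignore').dtypes
col1     object
col2    float64
col3    float64
dtype: object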

In addition, pd.to_datetime and pd.to_timedelta can convert data to datetimes and timedeltas.
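
For example (a minimal sketch with made-up values):

>>> pd.to_datetime(pd.Series(['2018-01-01', '2018-02-01']))
0   2018-01-01
1   2018-02-01
dtype: datetime64[ns]
>>> pd.to_timedelta(pd.Series(['1 days', '2 days']))
0   1 days
1   2 days
dtype: timedelta64[ns]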
