Common Parameters of pd.read_csv() and to_csv()

This post gives a brief introduction to the parameters of read_csv() and to_csv(), focusing on the most commonly used ones; for the less common ones, please refer to the official documentation.

I. pd.read_csv()

Purpose: read a CSV file and turn it into a DataFrame.

pd.read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=False, error_bad_lines=True, warn_bad_lines=True, skipfooter=0, skip_footer=0, doublequote=True, delim_whitespace=False, as_recarray=False, compact_ints=False, use_unsigned=False, low_memory=True, buffer_lines=None, memory_map=False, float_precision=None)

That's a lot of parameters!
Let's go through the commonly used ones:
1.filepath_or_buffer: (this is the only required parameter; all the others are optional and used as needed)
The path to the file to be read.

2.sep
Specifies the field delimiter; defaults to a comma ','.
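For example, a file whose fields happen to be separated by semicolons instead of commas could be read like this (the file name here is made up purely for illustration):

import pandas as pd
# hypothetical semicolon-separated file
df_semi = pd.read_csv("scores.csv", sep=";")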

3.delimiter : str, default None
An alternative way to specify the delimiter (if this parameter is given, sep is ignored).

4.header : int or list of ints, default 'infer'
Specifies which row to use as the column names. The default ('infer') normally takes row 0 (the first row) as the header; if the file has no header row, set header=None.

5.names
Specifies the column names as a list. This is especially handy when the file has no header (header=None) and you want to supply your own column names!
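A minimal sketch (the file name and column names below are made up for illustration):

# hypothetical header-less file with three columns
df_named = pd.read_csv("no_header.csv", header=None, names=["id", "score", "grade"])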

6.index_col:
Specifies which column(s) to use as the row index; it can be a single column or several. With multiple columns you get a hierarchical (MultiIndex) index.

7.prefix:
Adds a prefix to the auto-generated column numbers when there is no header. For example, prefix="x" produces column names like "x0", "x1", "x2".
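A small sketch, again with a hypothetical header-less file (note that prefix only exists in older pandas versions; it was removed in pandas 2.0):

# auto-generated column names become x0, x1, x2, ...
df_prefixed = pd.read_csv("no_header.csv", header=None, prefix="x")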

8.nrows : int, default None
The number of rows to read, counted from the top of the file.
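For instance, to peek at only the first 100 rows of the Titanic file used later in this post:

# read only the first 100 data rows
sample = pd.read_csv(r"G:\data\Kaggle\Titanic\train.csv", nrows=100)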

9.encoding:
Use this when the file comes out garbled; see the official docs for the list of standard encodings:
https://docs.python.org/3/library/codecs.html#standard-encodings
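A quick sketch, assuming a file saved in a Chinese encoding (the file name is made up; "gbk" is just one common choice):

# hypothetical file saved as GBK rather than UTF-8
df_cn = pd.read_csv("chinese_data.csv", encoding="gbk")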

10.skiprows : list-like or integer, default None
The number of lines to skip at the start of the file, or a list of line numbers to skip (0-based).
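The integer form is demonstrated further below; the list form looks like this:

# skip specific line numbers (0-based); line 0 here is the header, so it is kept
df_skipped = pd.read_csv(r"G:\data\Kaggle\Titanic\train.csv", skiprows=[1, 3])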

Now for some examples:

import pandas as pd
data = pd.read_csv(r"G:\data\Kaggle\Titanic\train.csv")
data.head()
(index) | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S
data1 = pd.read_csv(r"G:\data\Kaggle\Titanic\train.csv", header=None)   # notice that the original header row is now treated as ordinary data
data1.head()
(index) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
0 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
1 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22 | 1 | 0 | A/5 21171 | 7.25 | NaN | S
2 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38 | 1 | 0 | PC 17599 | 71.2833 | C85 | C
3 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26 | 0 | 0 | STON/O2. 3101282 | 7.925 | NaN | S
4 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35 | 1 | 0 | 113803 | 53.1 | C123 | S
data2 = pd.read_csv(r"G:\data\Kaggle\Titanic\train.csv", index_col=["Survived","Sex"])   # Survived and Sex become a hierarchical row index
data2.head()
Survived | Sex | PassengerId | Pclass | Name | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
0 | male | 1 | 3 | Braund, Mr. Owen Harris | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S
1 | female | 2 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C
1 | female | 3 | 3 | Heikkinen, Miss. Laina | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S
1 | female | 4 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S
0 | male | 5 | 3 | Allen, Mr. William Henry | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S
data3 = pd.read_csv(r"G:\data\Kaggle\Titanic\train.csv", skiprows=3, header=None)   # the first three lines, including the header, were skipped
data3.head()
(index) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
0 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S
1 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S
2 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S
3 | 6 | 0 | 3 | Moran, Mr. James | male | NaN | 0 | 0 | 330877 | 8.4583 | NaN | Q
4 | 7 | 0 | 1 | McCarthy, Mr. Timothy J | male | 54.0 | 0 | 0 | 17463 | 51.8625 | E46 | S

II. pd.to_csv()

Purpose: write a DataFrame out to a file on disk.

First, check the current working directory:

import os
father_path = os.getcwd()
father_path
'C:\\Users\\acerpc'

to_csv(path_or_buf, sep, na_rep, columns, header, index)
Parameter breakdown:
1.path_or_buf: string; a file name, relative path, file object, and so on.

2.sep: string; the delimiter, with the same meaning as in read_csv().
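For example, a small self-contained sketch writing a tab-separated file (the frame and the file name are made up for illustration):

import pandas as pd
# write a tiny frame out tab-separated instead of comma-separated
tmp = pd.DataFrame({"x": [1, 2], "y": [3, 4]})
tmp.to_csv("tmp_tab.tsv", sep="\t")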

3.na_rep: string; the value that NaN is written as.

4.columns: list; specifies which columns to write out.

5.header: whether to write the column names; defaults to True. If you don't want a header row in the output file, set header=False (header=None also works).

6.index: whether to write the row index; defaults to True.

Example time again:

import numpy as np
df = pd.DataFrame({"a": [1, 2, 3],
                   "b": [6, np.nan, 6],
                   "c": [3, 4, np.nan]})
df
   a    b    c
0  1  6.0  3.0
1  2  NaN  4.0
2  3  6.0  NaN
path1 = father_path + r'\df1.csv'
df.to_csv(path1)


[Screenshot: contents of df1.csv]

path2 = father_path + r'\df2.csv'
df.to_csv(path2,header=None)


[Screenshot: contents of df2.csv, written without the header row]

path3 = father_path + r'\df3.csv'
df.to_csv(path3, columns=["a","c"],index=False)


[Screenshot: contents of df3.csv, only columns a and c, no index]

path4 = father_path + r'\df4.csv'
df.to_csv(path4, na_rep=0)

[Screenshot: contents of df4.csv, with NaN written as 0]
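As a quick sanity check (just a suggestion, reusing path1 from above), you can read one of the files straight back; index_col=0 picks up the index column that to_csv wrote:

# read df1.csv back in; the first column on disk is the saved row index
df_back = pd.read_csv(path1, index_col=0)
print(df_back)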

Reposted from: https://www.cnblogs.com/wyy1480/p/10322336.html
