python - Using pandas structures with a large csv (iterate and chunksize)

I have a large csv file, about 600 MB with 11 million rows, and I want to create statistical data from it like pivots, histograms, graphs etc. Obviously just trying to read it normally:

df = pd.read_csv('Check400_900.csv', sep='\t')

doesn't work, so I found iterator and chunksize in a similar post, and used:

df = pd.read_csv('Check1_900.csv', sep='\t', iterator=True, chunksize=1000)

All good. I can, for example, print df.get_chunk(5) and search the whole file with just:

for chunk in df:
    print(chunk)

My problem is that I don't know how to use things like the following on the whole df, not just on one chunk:

plt.plot()

print(df.head())

print(df.describe())

print(df.dtypes)

customer_group3 = df.groupby('UserID')

y3 = customer_group3.size()

I hope my question is not too confusing.

Solution

I think you need to concat the chunks into a single DataFrame, because the output of:

df = pd.read_csv('Check1_900.csv', sep='\t', iterator=True, chunksize=1000)

isn't a DataFrame, but a pandas.io.parsers.TextFileReader (source).

tp = pd.read_csv('Check1_900.csv', sep='\t', iterator=True, chunksize=1000)

print(tp)

# <pandas.io.parsers.TextFileReader object at 0x...>

df = pd.concat(tp, ignore_index=True)

I think it is necessary to add the parameter ignore_index=True to concat, to avoid duplicate index values.
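Putting it together, here is a minimal sketch of the whole pattern, assuming the file name, sep='\t', and the 'UserID' column from the question:

import pandas as pd
import matplotlib.pyplot as plt

# read the file in chunks; tp is a TextFileReader, not a DataFrame
tp = pd.read_csv('Check1_900.csv', sep='\t', iterator=True, chunksize=1000)

# concatenate all chunks into one DataFrame with a fresh 0..n-1 index
df = pd.concat(tp, ignore_index=True)

# the usual whole-DataFrame operations now work
print(df.head())
print(df.describe())
print(df.dtypes)

# group by user and count rows per user
customer_group3 = df.groupby('UserID')
y3 = customer_group3.size()
print(y3)

# e.g. a histogram of rows per user
y3.hist()
plt.show()

If the concatenated frame didn't fit in memory, you could instead aggregate each chunk separately and combine the per-chunk results, but for a 600 MB file a single concat should normally be fine.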
