Working with Large CSV Files in Python

I’m currently working on a project that has multiple very large CSV files (6 gigabytes+). Normally when working with CSV data, I read the data in using pandas and then start munging and analyzing the data. With files this large, reading the data into pandas directly can be difficult (or impossible) due to memory constraints, especially if you’re working on a prosumer computer. In this post, I describe a method that will help you when working with large CSV files in Python.

While it would be pretty straightforward to load the data from these CSV files into a database, there might be times when you don’t have access to a database server and/or you don’t want to go through the hassle of setting up a server. If you are going to be working on a data set long-term, you absolutely should load that data into a database of some type (MySQL, PostgreSQL, etc.), but if you just need to do some quick checks / tests / analysis of the data, below is one way to get a look at the data in these large files with Python, pandas and SQLite.

To get started, you’ll need to import pandas and sqlalchemy. The commands below will do that.

import pandas as pd
from sqlalchemy import create_engine

Next, set up a variable that points to your csv file.  This isn’t necessary but it does help in re-usability.

file = '/path/to/csv/file'

With these three lines of code, we are ready to start analyzing our data. Let’s take a look at the ‘head’ of the csv file to see what the contents might look like.

print(pd.read_csv(file, nrows=5))

This command uses pandas’ “read_csv” command to read in only 5 rows (nrows=5) and then print those rows to the screen. This lets you understand the structure of the csv file and make sure the data is formatted in a way that makes sense for your work.
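
If you also want a quick sense of the column names and inferred data types before committing to the full load, the same nrows trick works for that too. A minimal sketch, assuming the file variable defined above (the 1,000-row sample size is arbitrary):

# read a small sample and inspect the column names and inferred types
sample = pd.read_csv(file, nrows=1000)
print(sample.columns.tolist())
print(sample.dtypes)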

Before we can actually work with the data, we need to do something with it so we can begin to filter it and work with subsets of the data. This is usually what I would use a pandas dataframe for, but with large data files we need to store the data somewhere else. In this case, we’ll set up a local SQLite database, read the csv file in chunks and then write those chunks to SQLite.

To do this, we’ll first need to create the SQLite database using the following command.

csv_database = create_engine('sqlite:///csv_database.db')

Next, we need to iterate through the CSV file in chunks and store the data into SQLite.

chunksize = 100000
i = 0
j = 1
for df in pd.read_csv(file, chunksize=chunksize, iterator=True):
    # strip spaces out of the column names so they are easier to use in SQL
    df = df.rename(columns={c: c.replace(' ', '') for c in df.columns})
    # shift the index so it keeps increasing across chunks
    df.index += j
    i += 1
    df.to_sql('csv_table', csv_database, if_exists='append')
    j = df.index[-1] + 1

With this code, we are setting the chunksize at 100,000 to keep the size of the chunks manageable, initializing a couple of counters (i=0, j=1) and then running through a for loop. The for loop reads a chunk of data from the CSV file, removes the spaces from the column names, and then stores the chunk in the SQLite database (df.to_sql(…)).
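
As a quick sanity check (not part of the original workflow), you can count the rows that made it into the database once the loop finishes; this sketch assumes the csv_table name used above:

# count the rows written to SQLite to confirm the full file was loaded
row_count = pd.read_sql_query('SELECT COUNT(*) AS row_count FROM csv_table', csv_database)
print(row_count)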

This might take a while if your CSV file is sufficiently large, but the time spent waiting is worth it because you can now use pandas ‘sql’ tools to pull data from the database without worrying about memory constraints.
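
If even a query result might be too large for memory, note that read_sql_query also accepts a chunksize argument, in which case it returns an iterator of smaller dataframes instead of one big one. A brief sketch, again assuming the csv_table name from above:

# stream the query result back in pieces instead of one large dataframe
for chunk in pd.read_sql_query('SELECT * FROM csv_table', csv_database, chunksize=50000):
    print(len(chunk))  # replace with whatever per-chunk processing you need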

To access the data now, you can run commands like the following:

df = pd.read_sql_query('SELECT * FROM csv_table', csv_database)

Of course, using ‘select *…’ will load all the data into memory, which is exactly the problem we are trying to get away from, so you should add filters to your select statements to narrow down the data. For example:

df = pd.read_sql_query('SELECT COL1, COL2 FROM csv_table WHERE COL1 = SOMEVALUE', csv_database)
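
If the value you are filtering on comes from a Python variable, it is generally safer to pass it as a bound parameter than to paste it into the SQL string. A rough sketch, assuming a text column named COL1 and a hypothetical filter value:

# bind the filter value as a parameter rather than formatting it into the SQL
some_value = 'SOMEVALUE'  # hypothetical value to filter on
df = pd.read_sql_query(
    'SELECT COL1, COL2 FROM csv_table WHERE COL1 = :val',
    csv_database,
    params={'val': some_value},
)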

Translated from: https://www.pybloggers.com/2016/11/working-with-large-csv-files-in-python/
