Comparing pandas processing speed for SQLite databases, CSV files, and Excel files
While working with large data files recently, I noticed that pandas handles different file formats at very different speeds, with gaps of tens to hundreds of times. So I wrote a simple demo to measure how long loading and querying takes for each format, using pandas to load and query the same dataset (about 30,000 rows) from SQLite, from CSV, and from Excel. Part of the code is shown below:
import datetime as dt
import logging
import sqlite3

import pandas as pd

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

# accumulators for the averages reported after the loop
db_time_count = dt.timedelta(0)
csv_time_count = dt.timedelta(0)
excel_time_count = dt.timedelta(0)

for index in range(10):
    # SQLite: load and query via SQL
    start = dt.datetime.now()
    conn = sqlite3.connect('./data/test_data')
    case_name = pd.read_sql_query("select 用例标题 from test_data where 所属冲刺='99';", conn)
    conn.close()
    end = dt.datetime.now()
    db_time = end - start
    db_time_count = db_time_count + db_time
    logger.debug('db processing time = ' + str(end - start))
    logger.debug('db_time_count = ' + str(db_time_count))

    # CSV: load the whole file, then filter in pandas
    start = dt.datetime.now()
    df_test_data_csv = pd.read_csv("./data/test_data.csv", low_memory=False)
    case_name = df_test_data_csv[df_test_data_csv['所属冲刺'] == 99]['用例标题']
    end = dt.datetime.now()
    csv_time = end - start
    csv_time_count = csv_time_count + csv_time
    logger.debug('csv processing time = ' + str(end - start))
    logger.debug('csv_time_count = ' + str(csv_time_count))

    # Excel: load the whole workbook, then filter in pandas
    start = dt.datetime.now()
    df_test_data_excel = pd.read_excel("./data/test_data.xlsx")
    case_name = df_test_data_excel[df_test_data_excel['所属冲刺'] == 99]['用例标题']
    end = dt.datetime.now()
    excel_time = end - start
    excel_time_count = excel_time_count + excel_time
    logger.debug('excel processing time = ' + str(end - start))
    logger.debug('excel_time_count = ' + str(excel_time_count))

logger.debug('10times average db processing time = ' + str(db_time_count/10))
logger.debug('10times average csv processing time = ' + str(csv_time_count/10))
logger.debug('10times average excel processing time = ' + str(excel_time_count/10))
Output:
10times average db processing time = 0:00:00.018071
10times average csv processing time = 0:00:00.285458
10times average excel processing time = 0:00:13.251632
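The same load-and-filter comparison can be reproduced with a small self-contained harness on synthetic data. This is a sketch, not the original demo: the file names, column names (`sprint`, `case_title`), and row count are stand-ins, and `time.perf_counter` is used instead of `datetime.now()` since it is better suited to interval timing. It writes `test_data.csv` and `test_data.db` into the current directory.

```python
import sqlite3
import time

import pandas as pd

# Build a synthetic dataset of 30,000 rows and persist it as both
# CSV and SQLite (written into the current working directory).
n = 30_000
df = pd.DataFrame({
    "sprint": [str(i % 200) for i in range(n)],
    "case_title": [f"case_{i}" for i in range(n)],
})
df.to_csv("test_data.csv", index=False)
conn = sqlite3.connect("test_data.db")
df.to_sql("test_data", conn, index=False, if_exists="replace")
conn.close()

def bench(fn, runs=5):
    """Average wall-clock time of fn over several runs."""
    t0 = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - t0) / runs

def query_csv():
    # Re-parse the whole CSV every time, then filter in pandas
    d = pd.read_csv("test_data.csv", dtype=str)
    return d[d["sprint"] == "99"]["case_title"]

def query_db():
    # Let SQLite do the filtering; only matching rows reach pandas
    c = sqlite3.connect("test_data.db")
    try:
        return pd.read_sql_query(
            "SELECT case_title FROM test_data WHERE sprint = '99';", c)
    finally:
        c.close()

print(f"csv: {bench(query_csv):.6f}s  db: {bench(query_db):.6f}s")
```

The absolute numbers will differ from the ones above, but the ordering (SQLite fastest, CSV slower) should hold for repeated queries, because the CSV path pays the full parse cost on every iteration.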
As the results show, for this dataset and this query, loading and querying the CSV takes roughly 15 times as long as the SQLite version, and loading and querying the Excel file takes roughly 733 times as long.
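Given that gap, a practical takeaway is to convert CSV/Excel data into SQLite once up front, so that every later lookup goes through read_sql_query instead of re-parsing the file. A minimal sketch, with hypothetical file, table, and column names standing in for the ones above:

```python
import sqlite3

import pandas as pd

# Hypothetical stand-in data; in practice this would come from
# pd.read_csv(...) or pd.read_excel(...) done once.
df = pd.DataFrame({
    "case_title": ["login ok", "login fail", "logout ok"],
    "sprint": ["99", "99", "100"],
})

conn = sqlite3.connect(":memory:")  # use a file path to persist the database
df.to_sql("test_data", conn, index=False, if_exists="replace")

# A parameterized query avoids string concatenation and keeps the
# comparison type consistent with how the column was stored.
result = pd.read_sql_query(
    "SELECT case_title FROM test_data WHERE sprint = ?;",
    conn,
    params=("99",),
)
print(result["case_title"].tolist())
conn.close()
```

One detail worth watching when mixing formats: in the demo above the SQL query compares 所属冲刺 against the string '99' while the pandas filters compare against the integer 99, so the stored column type determines which comparison actually matches.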