json and msgpack are both excellent serialization formats. Below is a simple performance comparison of the two.

- Language: Python 2.7
- Data type: a dict of roughly 30 MB
- Test procedure: fake a dict with 200,000 entries, serialize it into a text file, then read the file back and deserialize it
- msgpack
  - serialize: 0.196000099182 s
  - deserialize: 0.0929999351501 s
- json
  - serialize: 0.28200006485 s
  - deserialize: 1.21799993515 s
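The scripts below pull the fake data out of Redis, but the timing methodology itself needs nothing beyond the standard library. A minimal self-contained sketch of the json side (the `sn_.../value_...` key/value shape of the fake data is an assumption; the msgpack side is timed identically, with `msgpack.dumps`/`msgpack.loads` swapped in for the json calls):

```python
# Stdlib-only timing sketch; the fake-data shape is hypothetical.
import json
import time

# Fake a dict with 200,000 entries, mirroring the test procedure above.
fake = dict(('sn_%06d' % i, 'value_%06d' % i) for i in range(200000))

start = time.time()
blob = json.dumps(fake)          # serialize
ser_cost = time.time() - start

start = time.time()
restored = json.loads(blob)      # deserialize
de_cost = time.time() - start

print('entries: %d' % len(restored))
print('serialize: %.4f s, deserialize: %.4f s' % (ser_cost, de_cost))
```

The single-argument `print(...)` calls keep the sketch valid under both Python 2.7 (as used in the original test) and Python 3.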
# -*- coding: utf-8 -*-
# msgpack serialization: pull the fake data out of Redis,
# time msgpack.dumps(), then persist the result to disk.
import os
import time

import msgpack
import redis

pool = redis.ConnectionPool(host='localhost', port=6379, decode_responses=True)
redis_client = redis.Redis(connection_pool=pool)

dict_sn = redis_client.hgetall('fake_data_2019-07-11')

start = time.time()
str_sn = msgpack.dumps(dict_sn)
print(time.time() - start)

# Despite the .json name, this file holds msgpack bytes here; delete it
# before switching between the msgpack and json runs, or the guard below
# will skip the write.
if not os.path.exists('./file.json'):
    with open('file.json', 'wb') as f:  # msgpack output is binary
        f.write(str_sn)
else:
    print('File already exists!')
# -*- coding: utf-8 -*-
# msgpack deserialization: read the bytes back and time msgpack.loads().
import os
import time

import msgpack

start = time.time()
if os.path.exists('./file.json'):
    with open('./file.json', 'rb') as f:
        data = f.read()
    dict_sn = msgpack.loads(data)
    print(len(dict_sn))
else:
    print('file not exists')
print(time.time() - start)
# -*- coding: utf-8 -*-
# json serialization: same data and procedure, but with json.dumps().
import json
import os
import time

import redis

pool = redis.ConnectionPool(host='localhost', port=6379, decode_responses=True)
redis_client = redis.Redis(connection_pool=pool)

dict_sn = redis_client.hgetall('fake_data_2019-07-11')

start = time.time()
print(type(dict_sn))
str_sn = json.dumps(dict_sn)
print(time.time() - start)

if not os.path.exists('./file.json'):
    with open('file.json', 'wt') as f:  # json output is text
        f.write(str_sn)
else:
    print('File already exists!')
# -*- coding: utf-8 -*-
# json deserialization: read the text back and time json.loads().
import json
import os
import time

start = time.time()
if os.path.exists('./file.json'):
    with open('./file.json', 'rt') as f:
        data = f.read()
    dict_sn = json.loads(data)
    print(len(dict_sn))
else:
    print('file not exists')
print(time.time() - start)
This article compared the performance of the msgpack and json serialization formats. Under identical conditions, msgpack serialized about 1.4x faster than json and deserialized more than 10x faster (0.093 s vs. 1.218 s), showing a clear speed advantage for msgpack.