Introduction: As data volumes grow, so do the demands on Redis read/write throughput. To fetch 1000+ values within a second, the traditional approach of issuing one command per loop iteration is far too expensive, because every command pays a full network round trip. Redis's pipeline feature solves this nicely by batching many commands into a single round trip.
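A back-of-envelope calculation makes the round-trip argument concrete. The numbers below are assumptions for illustration (a 0.5 ms round trip is typical for a same-datacenter link), not measurements, and server-side execution time is ignored:

```python
# Why pipelining helps: sequential commands pay one round trip each,
# a pipeline pays (roughly) one round trip total.
rtt_ms = 0.5    # assumed network round-trip time per command
n = 1000        # number of commands to issue

sequential_ms = n * rtt_ms  # one round trip per command
pipelined_ms = rtt_ms       # all n commands share a single round trip

print(sequential_ms, pipelined_ms)  # → 500.0 0.5
```

Under these assumptions the pipelined version is n times faster on network time alone, which is why the gap widens as the batch grows.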
Batch-inserting data
import redis

pool = redis.ConnectionPool(host="20.20.100.133", port=6379, db=7)
r = redis.Redis(connection_pool=pool)
pipe = r.pipeline()
values = ['zhangfei', 'lisi', 'liuneng']
for i in values:
    pipe.hset('hash_key', i, 8)  # queued locally; nothing is sent yet
result = pipe.execute()  # all HSETs go to the server in one round trip
print("Result:", result)
Output (HSET returns 1 for each newly created field, in command order):
[1, 1, 1]
Batch-reading data
import redis

# Note: decode_responses only takes effect when set on the ConnectionPool;
# passing it to Redis() alongside connection_pool is ignored, which is why
# the replies below come back as bytes rather than str.
pool = redis.ConnectionPool(host="20.20.100.133", port=6379, db=7)
r = redis.Redis(connection_pool=pool)
values = ['zhangfei', 'lisi', 'liuneng']
with r.pipeline(transaction=False) as pipe:
    for i in values:
        pipe.hget('hash_key', i)
    result = pipe.execute()
print("Result:", result)
Output:
[b'8', b'8', b'8']
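When the field list is very large, it can be worth splitting the pipeline into bounded chunks so the client does not buffer an unbounded number of queued commands and replies at once. The sketch below assumes any client object exposing the redis-py pipeline interface; the `_Stub*` classes are a minimal in-memory stand-in so the pattern can be demonstrated without a live server, and `hget_in_chunks` is a hypothetical helper name, not a redis-py API:

```python
def hget_in_chunks(client, key, fields, chunk_size=500):
    """Pipelined HGETs issued in chunks of at most chunk_size commands."""
    results = []
    for start in range(0, len(fields), chunk_size):
        with client.pipeline(transaction=False) as pipe:
            for field in fields[start:start + chunk_size]:
                pipe.hget(key, field)
            results.extend(pipe.execute())  # one round trip per chunk
    return results

# In-memory stand-in for a redis client, for illustration only.
class _StubPipeline:
    def __init__(self, store):
        self.store, self.queued = store, []
    def hget(self, key, field):
        self.queued.append((key, field))  # queue, like a real pipeline
    def execute(self):
        return [self.store.get(k, {}).get(f) for k, f in self.queued]
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        pass

class _StubClient:
    def __init__(self, store):
        self.store = store
    def pipeline(self, transaction=True):
        return _StubPipeline(self.store)

client = _StubClient({'hash_key': {'zhangfei': 8, 'lisi': 8, 'liuneng': 8}})
print(hget_in_chunks(client, 'hash_key', values := ['zhangfei', 'lisi', 'liuneng'], chunk_size=2))
# → [8, 8, 8]
```

With a real connection, replace `client` with the `redis.Redis` instance from the example above; the chunk size trades fewer round trips against per-batch memory.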
Note: production Redis deployments usually run in cluster mode. In cluster mode, create the pipeline with transactions disabled, since MULTI/EXEC cannot span keys in different hash slots:
pipe = r.pipeline(transaction=False)
An introduction to transactions in Redis: https://www.cnblogs.com/kangoroo/p/7535405.html