Here is a complete working example using py-redis:

from redis import StrictRedis
cache = StrictRedis()

def clear_ns(ns):
    """
    Clears a namespace
    :param ns: str, namespace i.e your:prefix
    :return: int, cleared keys
    """
    count = 0
    ns_keys = ns + '*'
    for key in cache.scan_iter(ns_keys):
        cache.delete(key)
        count += 1
    return count
You could also use scan_iter to gather all the keys into memory and then pass them all to delete for a bulk removal, but for larger namespaces that may eat up a lot of memory, so it is usually better to run a delete for each key.

Cheers!
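For reference, the bulk-delete variant described above could be sketched like this. This is only a sketch: the client is passed in as a parameter (the name `clear_ns_bulk` is mine), and note that the full key list is held in memory before the single delete call:

```python
def clear_ns_bulk(cache, ns):
    """Gather every key matching the namespace into memory,
    then issue one DELETE with all of them as arguments."""
    keys = list(cache.scan_iter(ns + '*'))  # whole list lives in memory
    if keys:
        cache.delete(*keys)  # single round trip, one DELETE command
    return len(keys)
```

This trades memory for fewer round trips, which is exactly the trade-off described above.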
Update:

After writing the answer, I started using the pipeline feature of redis to send all the commands in one request and avoid network latency:

from redis import StrictRedis
cache = StrictRedis()

def clear_cache_ns(ns):
    """
    Clears a namespace in redis cache.
    This may be very time consuming.
    :param ns: str, namespace i.e your:prefix*
    :return: int, num cleared keys
    """
    count = 0
    pipe = cache.pipeline()
    for key in cache.scan_iter(ns):
        pipe.delete(key)
        count += 1
    pipe.execute()
    return count
Update 2 (best performance):

If you use scan instead of scan_iter, you can control the chunk size and iterate the cursor with your own logic. This also appears to be much faster, especially when dealing with many keys. If you add pipelining on top, you get a further performance boost of 10-25% depending on the chunk size, at the cost of memory usage, since you do not send the execute command to Redis until everything has been generated. So I stuck with scan:

from redis import StrictRedis
cache = StrictRedis()

CHUNK_SIZE = 5000

def clear_ns(ns):
    """
    Clears a namespace
    :param ns: str, namespace i.e your:prefix
    :return: bool, True when the namespace has been cleared
    """
    cursor = '0'
    ns_keys = ns + '*'
    while cursor != 0:
        cursor, keys = cache.scan(cursor=cursor, match=ns_keys, count=CHUNK_SIZE)
        if keys:
            cache.delete(*keys)
    return True
Here are some benchmarks:

5k chunks using a busy Redis cluster:

Done removing using scan in 4.49929285049
Done removing using scan_iter in 98.4856731892
Done removing using scan_iter & pipe in 66.8833789825
Done removing using scan & pipe in 3.20298910141

5k chunks and an idle dev redis (localhost):

Done removing using scan in 1.26654982567
Done removing using scan_iter in 13.5976779461
Done removing using scan_iter & pipe in 4.66061878204
Done removing using scan & pipe in 1.13942599297
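The "scan & pipe" variant timed above is not shown in code. A minimal sketch, assuming it works as described (chunked SCAN, with every DELETE queued on one pipeline and a single execute at the end — the client is passed in as a parameter and the name `clear_ns_scan_pipe` is mine):

```python
CHUNK_SIZE = 5000

def clear_ns_scan_pipe(cache, ns, chunk_size=CHUNK_SIZE):
    """Iterate the SCAN cursor in chunks, queue DELETEs on a pipeline,
    and flush everything to Redis in one execute() at the end."""
    pipe = cache.pipeline()
    cursor = 0
    ns_keys = ns + '*'
    while True:
        cursor, keys = cache.scan(cursor=cursor, match=ns_keys, count=chunk_size)
        if keys:
            pipe.delete(*keys)  # queued locally, not sent yet
        if cursor == 0:  # SCAN signals completion with cursor 0
            break
    pipe.execute()  # one round trip; buffered commands grow with key count
    return True
```

Deferring execute() until the end is what buys the extra 10-25%, and also what makes memory usage grow with the number of matched keys.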