HyperLogLogs
A HyperLogLog is a probabilistic data structure used to count unique things (technically, this is referred to as estimating the cardinality of a set). Usually counting unique items requires an amount of memory proportional to the number of items you want to count, because you need to remember the elements you have already seen in order to avoid counting them multiple times. However, there is a family of algorithms that trade memory for precision: you end up with an estimated measure that has a standard error, which in the case of the Redis implementation is less than 1%. The magic of this algorithm is that you no longer need an amount of memory proportional to the number of items counted; instead, you can use a constant amount of memory: 12 KB in the worst case, or a lot less if your HyperLogLog (we'll just call them HLLs from now on) has seen very few elements.
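To make the memory/precision trade-off concrete, here is a toy Python sketch of the core HyperLogLog idea: hash each element, use a few bits of the hash to pick a register, and keep only the maximum leading-zero run seen per register. This is illustrative only, not Redis's actual implementation (which packs 16384 six-bit registers, adds a sparse encoding, and applies further bias corrections); the class and parameter names are made up for this example.

```python
import hashlib
import math

class TinyHLL:
    """Toy HyperLogLog sketch: constant memory regardless of items seen."""

    def __init__(self, p=14):
        self.p = p                     # 2**p registers (p=14 -> 16384, like Redis)
        self.m = 1 << p
        self.registers = [0] * self.m  # fixed-size state, independent of cardinality

    def add(self, item):
        # 64-bit hash of the item
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        idx = h & (self.m - 1)         # low p bits select a register
        w = h >> self.p                # remaining 64-p bits
        # rank = position of the first 1-bit in w (leading zeros + 1)
        rank = (64 - self.p) - w.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def count(self):
        # bias-corrected harmonic mean of the registers (raw HLL estimate)
        alpha = 0.7213 / (1 + 1.079 / self.m)
        est = alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        # small-range correction: fall back to linear counting
        if est <= 2.5 * self.m:
            zeros = self.registers.count(0)
            if zeros:
                est = self.m * math.log(self.m / zeros)
        return int(est)
```

Adding the same element twice never changes the registers (the `max` is idempotent), which is why duplicates are not double-counted.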
HLLs in Redis, while technically a different data structure, are encoded as Redis strings, so you can call GET to serialize an HLL, and SET to deserialize it back to the server.
Conceptually, the HLL API is like using Sets to do the same task. You would SADD every observed element into a set, and would use SCARD to check the number of elements inside the set, which are unique since SADD will not re-add an existing element.
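With Sets, that exact-counting approach would look like this (the key name `myset` is just an example):

```
> sadd myset a b c d
(integer) 4
> sadd myset a
(integer) 0
> scard myset
(integer) 4
```

The second SADD returns 0 because the element is already a member, so the count stays exact but the set stores every element in memory.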
While you don't really add items into an HLL, because the data structure only contains a state and not the actual elements, the API is the same:
- Every time you see a new element, you add it to the count with PFADD.
- Every time you want to retrieve the current approximation of the unique elements added with PFADD so far, you use PFCOUNT:

```
> pfadd hll a b c d
(integer) 1
> pfcount hll
(integer) 4
```
An example use case for this data structure is counting the unique queries performed by users in a search form every day.
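For instance, that use case could be sketched by keeping one HLL per day (the key names and dates here are hypothetical):

```
> pfadd search:unique:2024-05-01 "redis hyperloglog" "redis sets"
(integer) 1
> pfadd search:unique:2024-05-01 "redis hyperloglog"
(integer) 0
> pfcount search:unique:2024-05-01
(integer) 2
```

PFADD returns 0 when the element did not change the internal state, so repeated queries from the same day cost nothing extra.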
Redis is also able to perform the union of HLLs; please check the full documentation for more information.
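As a sketch, the union is computed with the PFMERGE command (the destination key name `dest` is just an example):

```
> pfadd hll1 a b c
(integer) 1
> pfadd hll2 c d e
(integer) 1
> pfmerge dest hll1 hll2
OK
> pfcount dest
(integer) 5
```

The merged HLL estimates the cardinality of the union of the observed sets, so the shared element `c` is counted only once.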