Fast, easy, realtime metrics using Redis bitmaps

At Spool, we calculate our key metrics in real time. Traditionally, metrics are computed by batch jobs (running hourly, daily, etc.). Redis-backed bitmaps allow us to perform such calculations in realtime and are extremely space efficient. In a simulation of 128 million users, a typical metric such as “daily unique users” takes less than 50 ms on a MacBook Pro and only 16 MB of memory. Spool doesn’t have 128 million users yet, but it’s nice to know our approach will scale. We thought we’d share how we do it, in case other startups find our approach useful.

Crash Course on Bitmap and Redis Bitmaps

Bitmap (aka Bitset)

A bitmap (aka bitset) is an array of zeros and ones. A bit in a bitset can be set to either 0 or 1, and each position in the array is referred to as an offset. Bitwise operations such as logical AND, OR, and XOR are fair game for bitmaps.

Population Count

The population count of a bitmap is the number of bits set to 1. There are efficient algorithms for calculating population count. For instance, counting a 90% filled bitset containing 1 billion bits takes about 21.1 ms on a MacBook Pro. There is even a hardware instruction in SSE4 for the population count of an integer.
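For a feel of what this looks like in code, Java's standard library exposes population count directly; this toy sketch (an illustration, not Spool's code) counts set bits in a single word and in an arbitrarily long bitset:

```java
import java.util.BitSet;

public class PopCount {
  public static void main(String[] args) {
    // Population count of a single 64-bit word; on modern JVMs
    // Long.bitCount compiles down to the hardware POPCNT instruction
    long word = 0xFFL;                        // eight low bits set
    System.out.println(Long.bitCount(word));  // 8

    // Population count of an arbitrarily long bitset
    BitSet bits = new BitSet();
    bits.set(3);
    bits.set(1_000_000);                      // sparse bits are cheap to count
    System.out.println(bits.cardinality());   // 2
  }
}
```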

Bitmaps in Redis

Redis allows binary keys and binary values. Bitmaps are nothing but binary values. The setbit(key, offset, value) operation, which takes O(1) time, sets the value of a bit to 0 or 1 at the specified offset for a given key.

A simple example: Daily Active Users

To count unique users that logged in today, we set up a bitmap where each user is identified by an offset. When a user visits a page or performs an action that warrants counting, we set the bit at the offset representing that user id to 1. The key for the bitmap is a function of the name of the action the user performed and the timestamp.

In this simple example, every time a user logs in we perform a redis.setbit(daily_active_users, user_id, 1). This flips the appropriate offset in the daily_active_users bitmap to 1. This is an O(1) operation. Doing a population count on this results in 9 unique users that logged in today. The key is daily_active_users and the value is 1011110100100101.
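The bookkeeping above can be sketched without a Redis server by letting java.util.BitSet stand in for the Redis bitmap: setbit becomes BitSet.set and the population count is cardinality(). This toy replays the 16-bit example value:

```java
import java.util.BitSet;

public class DailyActiveUsers {
  public static void main(String[] args) {
    BitSet dailyActiveUsers = new BitSet();

    // The example value 1011110100100101, with offset 0 on the left
    String value = "1011110100100101";
    for (int userId = 0; userId < value.length(); userId++) {
      if (value.charAt(userId) == '1') {
        // Analogous to redis.setbit("daily_active_users", userId, 1)
        dailyActiveUsers.set(userId);
      }
    }

    // Population count: 9 unique users logged in today
    System.out.println(dailyActiveUsers.cardinality());
  }
}
```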

Of course, since the daily active users will change every day we need a way to create a new bitmap every day. We do this by simply appending the date to the bitmap key. For example, if we want to calculate the daily unique users who have played at least 1 song in a music app for a given day, we can set the key name to be play:yyyy-mm-dd. If we want to calculate the number of unique users playing a song each hour, we can name the key play:yyyy-mm-dd-hh. For the rest of the discussion, we will stick with daily unique users that played a song. To collect daily metrics, we will simply set the user’s bit to 1 in the play:yyyy-mm-dd key whenever a user plays a song. This is an O(1) operation.

redis.setbit(play:yyyy-mm-dd, user_id, 1)

The unique users that played a song today is the population count of the bitmap stored as the value for the play:yyyy-mm-dd key. To calculate weekly or monthly metrics, we can simply compute the union of all the daily bitmaps over the week or the month, and then calculate the population count of the resulting bitmap.
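The union-then-count step looks like this with plain BitSets standing in for the play:yyyy-mm-dd values (toy data, not real keys):

```java
import java.util.BitSet;

public class WeeklyUniques {
  // Union the daily bitmaps, then take the population count
  static int uniques(BitSet... days) {
    BitSet combined = new BitSet();
    for (BitSet day : days) {
      combined.or(day);  // bitwise OR, like unioning play:yyyy-mm-dd bitmaps
    }
    return combined.cardinality();
  }

  public static void main(String[] args) {
    BitSet mon = new BitSet();
    mon.set(1); mon.set(5);
    BitSet tue = new BitSet();
    tue.set(5); tue.set(9);
    // User 5 played on both days but is counted only once
    System.out.println(uniques(mon, tue));  // 3
  }
}
```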

You can also extract more complex metrics very easily. For example, the premium account holders who played a song in November would be:
(play:2011-11-01 ∪ play:2011-11-02 ∪...∪play:2011-11-30) ∩ premium:2011-11
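Sketched with BitSets and toy user ids (premium is the premium-holders bitmap from the formula above, here condensed to two days):

```java
import java.util.BitSet;

public class PremiumPlayers {
  public static void main(String[] args) {
    // Toy daily play bitmaps for two days of November
    BitSet day1 = new BitSet();
    day1.set(2); day1.set(7);
    BitSet day2 = new BitSet();
    day2.set(7); day2.set(11);

    // premium:2011-11 — users 7 and 42 hold premium accounts
    BitSet premium = new BitSet();
    premium.set(7); premium.set(42);

    // (day1 ∪ day2) ∩ premium
    BitSet played = new BitSet();
    played.or(day1);
    played.or(day2);
    played.and(premium);

    System.out.println(played.cardinality());  // 1: only user 7
  }
}
```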

Performance comparison using 128 million users

The table below shows a comparison of daily unique action calculations over 1 day, 7 days, and 30 days for 128 million users. The 7-day and 30-day metrics are calculated by combining daily bitmaps.

Period Time (ms)
Daily 50.2
Weekly 392.0
Monthly 1624.8


In the above example, we can optimize the weekly and monthly computations by caching the calculated daily, weekly, monthly counts in Redis.

This is a very flexible approach. An added bonus of caching is that it allows fast cohort analysis, such as weekly unique users who are also mobile users — the intersection of a mobile users bitmap with a weekly active users bitmap. Or, if we want to compute rolling unique users over the last n days, having cached daily unique counts makes this easy — simply grab the previous n-1 days from your cache and union it with the real time daily count, which only takes 50ms.

Sample Code

The Java snippet below computes the unique users for a given user action and date.

import redis.clients.jedis.Jedis;
import java.util.BitSet;

public class UniqueCounter {
  private final Jedis redis = new Jedis("localhost");

  // Population count of the bitmap stored at action:date
  public int uniqueCount(String action, String date) {
    String key = action + ":" + date;
    BitSet users = BitSet.valueOf(redis.get(key.getBytes()));
    return users.cardinality();
  }
}

The snippet below computes the unique users for a given user action over a list of dates.

import redis.clients.jedis.Jedis;
import java.util.BitSet;

public class UniqueCounter {
  private final Jedis redis = new Jedis("localhost");

  // Union the daily bitmaps, then take the population count
  public int uniqueCount(String action, String... dates) {
    BitSet all = new BitSet();
    for (String date : dates) {
      String key = action + ":" + date;
      all.or(BitSet.valueOf(redis.get(key.getBytes())));
    }
    return all.cardinality();
  }
}

What’s in a name?

Before we were Spool, we were Blueprint Labs. It’s extremely difficult to name a product and we iterated on about 200 names before we picked Spool.

The criteria we set for our product’s name were:

  • easy to spell
  • 1 or 2 syllables (most major brands are)
  • no major SEO conflicts
  • no app store conflicts
  • easy to turn into a verb (Just Spool it!)
  • an unused .com domain (even if it’s squatted, we could buy it if we’re successful)
  • something generic enough to become a brand or umbrella company

The nice to haves:

  • a pun, reference, or double meaning (who doesn’t love a little intellectual masturbation)
  • easy to say in a variety of languages (Google ran into this issue in China)
  • no unfortunate substrings
  • something we could turn into a simple logo, preferably through typography

In case you’re wondering, the double meaning/pun: A spool is the thing that video tapes are wound around (our core tech is around video and media extraction and delivery) and spooling is the act of sending data to a remote machine for processing, like when you have a print job that is spooling to your printer (which is what Spool does for any URL).

Spool’s full javascript stack


We run an unconventional technology stack at Spool. Our full stack is JavaScript.

Javascript stack with Node.js, Redis, and only JSON

How the Spool technology stack works

  • When a user interacts with one of our clients (Android, iOS, browser add-ons, HTML5 webapp), data requests are sent to our servers.
  • On the server, Node.js (a JavaScript web server built on V8) receives this request and, if it needs to read data from our datastore, sends that request to Redis.
  • In Redis, we save key-value pairs. The values are all JSON strings, so Redis just returns JSON to Node.js.
  • Node.js receives this JSON and passes it down to the client that requested it.
  • The HTML5 web client uses backbone.js, so we keep the JSON data structures we passed down and quickly and easily generate views from these models.


The benefits:

  • The entire stack is extremely fast because there is no difference between data and objects. Everything is JSON. We store JSON, process JSON, and transmit JSON from our REST API. There is no ORM layer and no conversion from Ruby/PHP/Python objects to JSON (or vice versa).
  • It’s easy to jump between frontend and backend because the data structures being used are the same JSON.
  • Our APIs are extremely flexible and we can iterate on them quickly. We just change what we store and we’ll return the right thing, because we return the JSON we store. If we need to add or remove fields from what we’re returning, we can make those changes quickly.


The drawbacks:

  • Analyzing data requires writing scripts to extract and manipulate JSON.
  • Unlike SQL engines, most engineers, analysts, and product managers are not well versed in the Redis data model and will need to reorient themselves.
  • The ecosystem is nascent. There is little deep documentation, there are not many robust tools, and it’s entirely possible you’ll run into bugs or edge cases.

SpoolBot Updates

We make dozens of improvements to SpoolBot every week. Most of these go unnoticed because if SpoolBot is working, you get what you want. It’s only when SpoolBot doesn’t work well that someone notices. Every now and then we’ll post an update that showcases some of the bigger updates we’ve made to SpoolBot.

This is SpoolBot

Some notable improvements to SpoolBot in the last few weeks:

  • SpoolBot can now fetch PDFs!
  • SpoolBot figures out when to extract content in comments. Some sites make heavy use of comments and you want that commentary in addition to the primary content on the page.
  • SpoolBot can now close lightboxes and popups

In total we’ve made hundreds of improvements to SpoolBot and have hundreds more coming. The more you use Spool the smarter SpoolBot gets.

Spool for iPhone and iPad!

We’re very glad to announce that both the iPhone and iPad versions of Spool are now available. Go to the iTunes Store and get Spool now!

With this new application you can easily cache any web content for later. For example, if you come across a long blog post and want to save it to read later, Spool does exactly that. Better still, you can read saved content without connecting to the net: save now, read offline anytime.

Go ahead and install Spool now!