Approximating row counts

Caching count(*) queries without a cache

Written by Andreas Thomas

Unkey allows users to create an unlimited number of API keys for their applications. Counting these for our dashboard or API has become a growing issue for us.

Most APIs have fewer than a thousand keys; however, some of our larger customers have hundreds of thousands. And those customers are also the ones hitting our API the most.

Schema

CREATE TABLE `key_space` (
	`id` varchar(256) NOT NULL,
	`workspace_id` varchar(256) NOT NULL,
	# ... omitted
)

CREATE TABLE `keys` (
	`id` varchar(256) NOT NULL,
	`hash` varchar(256) NOT NULL,
	`workspace_id` varchar(256) NOT NULL,
	`key_space_id` varchar(256) NOT NULL,
	# ... omitted
)

As you can see, many keys belong to a single key_space, and our query in question is:

SELECT count(*) FROM keys WHERE key_space_id = ?

Options

We looked at a few options for how to fix this:

  1. Caching the count as part of a larger query
  2. Caching the count(*) query separately in our tiered cache using SWR semantics (sketched below).
  3. Adding two new columns for storing approximated counts.
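
For comparison, here is a minimal sketch of what option 2 could have looked like: a small in-memory stale-while-revalidate wrapper around the count query. This is illustrative only, not our actual tiered cache; cachedCount, countFn, and the Map are made up for the example.

type Entry = { value: number; staleAt: number };

// Illustrative in-memory cache, standing in for our real tiered cache.
const countCache = new Map<string, Entry>();

async function cachedCount(
  keySpaceId: string,
  countFn: (id: string) => Promise<number>, // runs the real count(*) query
  ttlMs = 60_000,
): Promise<number> {
  const hit = countCache.get(keySpaceId);
  if (hit) {
    if (hit.staleAt < Date.now()) {
      // Stale: serve the old value immediately and revalidate in the background.
      void countFn(keySpaceId).then((value) =>
        countCache.set(keySpaceId, { value, staleAt: Date.now() + ttlMs }),
      );
    }
    return hit.value;
  }
  // Cold cache: no value at all, so this request has to wait for a full count(*).
  const value = await countFn(keySpaceId);
  countCache.set(keySpaceId, { value, staleAt: Date.now() + ttlMs });
  return value;
}

The cold-cache branch at the bottom is the part we wanted to avoid.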

Solution

We went with the third option, mainly because we never run into a cold cache where we don't have a value at all, and it doesn't depend on another component. We can use it in our dashboard just as easily as in our API, and it behaves the same in both.

We added two columns: one to store the approximated count and one to store a timestamp of when we last updated it.

ALTER TABLE `key_space`
  ADD COLUMN `size_approx` int NOT NULL DEFAULT '0',
  ADD COLUMN `size_last_updated_at` bigint NOT NULL DEFAULT '0'

By storing the count on the key_space table, we get it essentially for free because we're already loading that row and don't need an extra query. To keep it up to date, we check the size_last_updated_at timestamp after every read, and if it's too old (60s in our case), we refresh the count asynchronously.

Here's how we do it with Drizzle:

import { and, eq, isNull, sql } from "drizzle-orm";

const keySpace = await db.query.keySpace.findFirst({ where: ... });

// Refresh the approximation if it is older than 60 seconds.
if (keySpace.sizeLastUpdatedAt < Date.now() - 60_000) {
  const count = await db
    .select({ count: sql<string>`count(*)` })
    .from(schema.keys)
    .where(and(eq(schema.keys.keySpaceId, keySpace.id), isNull(schema.keys.deletedAt)));

  keySpace.sizeApprox = Number.parseInt(count.at(0)?.count ?? "0");
  keySpace.sizeLastUpdatedAt = Date.now();

  // Persist the new count asynchronously so the response isn't blocked on the write.
  c.executionCtx.waitUntil(
    db.primary
      .update(schema.keySpace)
      .set({
        sizeApprox: keySpace.sizeApprox,
        sizeLastUpdatedAt: keySpace.sizeLastUpdatedAt,
      })
      .where(eq(schema.keySpace.id, keySpace.id)),
  );
}

We first load the keySpace, and if the data is too old, we kick off a second query to count all keys. This might trigger many refresh queries if a lot of requests come in at the same time, but that's also true of our current system, where we always count all rows.
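
If that ever becomes a problem, one way to bound it would be to collapse concurrent refreshes for the same key space into a single query. We don't do this today; the sketch below is just the idea, with refreshCount and the in-flight map invented for illustration.

// Sketch: deduplicate concurrent refreshes so only one count(*) runs per key space.
const inflight = new Map<string, Promise<number>>();

function refreshCount(
  keySpaceId: string,
  countFn: (id: string) => Promise<number>, // runs the real count(*) query
): Promise<number> {
  const existing = inflight.get(keySpaceId);
  if (existing) {
    // Another request is already counting this key space; piggyback on it.
    return existing;
  }
  const pending = countFn(keySpaceId).finally(() => inflight.delete(keySpaceId));
  inflight.set(keySpaceId, pending);
  return pending;
}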

In the future we might want to run a cron job to refresh counts in the background and remove the manual refresh, but we haven't needed that yet.
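
If we do, the job might look roughly like the sketch below, reusing the db and schema handles from the snippet above; refreshStaleCounts, the 60s threshold, and the batch size are all illustrative.

import { and, eq, isNull, lt, sql } from "drizzle-orm";

// Hypothetical background job: refresh stale counts without any request involved.
async function refreshStaleCounts(thresholdMs = 60_000, batchSize = 100): Promise<void> {
  // Find key spaces whose approximation is older than the threshold.
  const stale = await db
    .select({ id: schema.keySpace.id })
    .from(schema.keySpace)
    .where(lt(schema.keySpace.sizeLastUpdatedAt, Date.now() - thresholdMs))
    .limit(batchSize);

  for (const { id } of stale) {
    const rows = await db
      .select({ count: sql<string>`count(*)` })
      .from(schema.keys)
      .where(and(eq(schema.keys.keySpaceId, id), isNull(schema.keys.deletedAt)));

    await db.primary
      .update(schema.keySpace)
      .set({
        sizeApprox: Number.parseInt(rows.at(0)?.count ?? "0"),
        sizeLastUpdatedAt: Date.now(),
      })
      .where(eq(schema.keySpace.id, id));
  }
}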
