
t87s Cloud

Sometimes, when you vacation in Florida, a glitzy salesman corners you with free coffee and a timeshare pitch. In t87s, that time is now.

Look, we built the open-source library because we love caching. We built the cloud because we love money. But also because self-hosting cache infrastructure is a pain in the ass and we’re genuinely good at it.

Our cloud is:

Fast. Sub-millisecond response times. We’re talking 0.3ms p50, 0.8ms p99.

Scalable. We’re built on Cloudflare using their globally distributed network.

Smart. Real-time cache optimization via AI analysis. We analyze your cache patterns and auto-tune TTLs so you don’t have to guess.

Simple. One API key. No Redis cluster to babysit.

Adapter      p50      p99      Ops/sec
Memory       0.01ms   0.05ms   ∞ (local)
Redis        1.2ms    4.5ms    50k
t87s Cloud   0.3ms    0.8ms    500k

Yes, we’re faster than Redis for a globally distributed app. No, we didn’t stack the deck in our favor. Yes, the benchmarks are open source, and you can reproduce them at your leisure.

Query your cache analytics with SQL via POST /v1/query.

curl -X POST https://api.t87s.dev/v1/query \
-H "Authorization: Bearer $T87S_API_KEY" \
-H "Content-Type: application/json" \
-d '{"sql": "SELECT * FROM verifications LIMIT 10"}'
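The same request can be issued from application code. A minimal sketch using the built-in fetch (Node 18+); `buildQueryRequest` and `queryAnalytics` are hypothetical helper names for illustration, not part of the t87s SDK:

```typescript
// Sketch: calling POST /v1/query with the built-in fetch (Node 18+).
// `buildQueryRequest` / `queryAnalytics` are hypothetical names, not SDK API.
function buildQueryRequest(sql: string, apiKey: string) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ sql }), // same payload shape as the curl example
  };
}

async function queryAnalytics(sql: string, apiKey: string): Promise<unknown> {
  const res = await fetch("https://api.t87s.dev/v1/query", buildQueryRequest(sql, apiKey));
  if (!res.ok) throw new Error(`query failed: ${res.status}`);
  return res.json();
}
```

Splitting out `buildQueryRequest` keeps the request shape testable without hitting the network.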

Four tables are available:

  • verifications: key, cached_hash, fresh_hash, is_stale, timestamp
  • cache_operations: type, key, timestamp
  • invalidations: id, tag, exact, timestamp
  • invalidated: key, invalidation_id, matched_tag, timestamp

Staleness by Key (24h) — Shows cache keys ranked by verification sample count. “stale” = times the cached value differed from a fresh fetch. High stale_pct suggests TTL is too long or invalidation is missing.

SELECT key, COUNT(*) as samples, SUM(is_stale) as stale,
  ROUND(SUM(is_stale) * 100.0 / COUNT(*), 1) as stale_pct
FROM verifications
WHERE timestamp > (strftime('%s', 'now') - 86400) * 1000
GROUP BY key
ORDER BY samples DESC
LIMIT 100

Potential Issues — Keys that may need attention: at least 10 samples AND >10% stale rate. These are candidates for shorter TTLs or adding invalidation triggers. Empty results = your cache is healthy!

SELECT key, COUNT(*) as samples, SUM(is_stale) as stale,
  ROUND(SUM(is_stale) * 100.0 / COUNT(*), 1) as stale_pct
FROM verifications
WHERE timestamp > (strftime('%s', 'now') - 86400) * 1000
GROUP BY key
HAVING COUNT(*) >= 10 AND (SUM(is_stale) * 100.0 / COUNT(*)) > 10
ORDER BY stale_pct DESC

Cache Hit Rate (24h) — Cache performance breakdown: hits, misses, and sets. hit_rate_pct = hits / (hits + misses). Higher is better. Many misses? Check if invalidations are too aggressive.

SELECT
  SUM(CASE WHEN type = 'hit' THEN 1 ELSE 0 END) as hits,
  SUM(CASE WHEN type = 'miss' THEN 1 ELSE 0 END) as misses,
  SUM(CASE WHEN type = 'set' THEN 1 ELSE 0 END) as sets,
  ROUND(SUM(CASE WHEN type = 'hit' THEN 1 ELSE 0 END) * 100.0 /
    NULLIF(SUM(CASE WHEN type IN ('hit', 'miss') THEN 1 ELSE 0 END), 0), 1) as hit_rate_pct
FROM cache_operations
WHERE timestamp > (strftime('%s', 'now') - 86400) * 1000

Recent Invalidations — Tag invalidations in the last 24 hours. “exact” = true invalidates only that exact tag; false performs a prefix match and also invalidates child tags. “affected_keys” = the number of cache entries invalidated.

SELECT i.tag, i.exact, datetime(i.timestamp/1000, 'unixepoch') as time,
  COUNT(inv.id) as affected_keys
FROM invalidations i
LEFT JOIN invalidated inv ON inv.invalidation_id = i.id
WHERE i.timestamp > (strftime('%s', 'now') - 86400) * 1000
GROUP BY i.id
ORDER BY i.timestamp DESC
LIMIT 50

Blast Radius (24h) — Which invalidations affected the most cache entries? Large blast radius may indicate overly broad tags. Consider more specific tags to reduce unnecessary cache misses.

SELECT i.tag, i.exact, datetime(i.timestamp/1000, 'unixepoch') as time,
  COUNT(inv.id) as affected_keys
FROM invalidations i
LEFT JOIN invalidated inv ON inv.invalidation_id = i.id
WHERE i.timestamp > (strftime('%s', 'now') - 86400) * 1000
GROUP BY i.id
ORDER BY affected_keys DESC
LIMIT 20

Hourly Verifications — Verification samples per hour (last 24 hours). Verifications randomly re-fetch to detect stale cache entries. Spikes in “stale” column indicate periods of cache inconsistency.

SELECT strftime('%Y-%m-%d %H:00', timestamp/1000, 'unixepoch') as hour,
  COUNT(*) as total, SUM(is_stale) as stale
FROM verifications
WHERE timestamp > (strftime('%s', 'now') - 86400) * 1000
GROUP BY hour
ORDER BY hour DESC
LIMIT 24

The Query Explorer includes a “Prompt for LLMs” button that copies your API key and documentation to your clipboard. Paste it into Claude, ChatGPT, Cursor, or any AI assistant to analyze your cache patterns.

Ask questions like:

  • “What cache issues do I have?”
  • “Which keys have the highest stale rate?”
  • “Should I increase the TTL on user:* keys?”

For the supremely lazy and crafty dev, one popular approach is the “dumb cache”—wrap everything in a cache call without any tags, deploy to production, and let it run for 48-72 hours. Then, use the “Prompt for LLMs” button and your favorite AI assistant to analyze your actual traffic patterns and tell you exactly which operations benefit from caching and what TTLs make sense.
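The “dumb cache” itself can be as simple as an untagged memoizer. A generic sketch of the idea (not the t87s API, just the shape of it):

```typescript
// Generic sketch of a "dumb cache": wrap every async operation in a keyed
// cache with no tags and no invalidation, then let analytics tell you what
// actually deserves tags and tuned TTLs. Not the t87s API.
const store = new Map<string, unknown>();

async function dumb<T>(key: string, fn: () => Promise<T>): Promise<T> {
  if (store.has(key)) return store.get(key) as T; // hit: skip the fetch
  const value = await fn(); // miss: fetch once and remember
  store.set(key, value);
  return value;
}
```

In practice you would route this through the CloudAdapter shown below, so every hit and miss lands in cache_operations for the “Prompt for LLMs” analysis to chew on.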

Free tier: 10k operations/month. No credit card required.

After that: $0.001 per operation.

Let’s say you have a small SaaS with 50 daily active users. Each user averages 3 sessions per day, and each session hits your cache 10 times. That’s 50 × 3 × 10 = 1,500 operations per day, or about 45k per month. Subtract the 10k free tier, and you’re at 35k × $0.001 = $35/month.
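That back-of-envelope math, as a sketch (the constants mirror the free tier and per-operation price above):

```typescript
// Pricing sketch: first 10k operations/month are free, $0.001 per op after.
const FREE_OPS = 10_000;
const PRICE_PER_OP = 0.001;

function monthlyCost(ops: number): number {
  return Math.max(0, ops - FREE_OPS) * PRICE_PER_OP;
}

// 50 users × 3 sessions × 10 ops × 30 days = 45,000 ops ≈ $35/month
```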

import { QueryCache, at, wild, CloudAdapter } from '@t87s/core';

const cache = QueryCache({
  schema: at('users', () => wild),
  adapter: new CloudAdapter({ apiKey: process.env.T87S_API_KEY! }),
  queries: (tags) => ({
    getUser: (id: string) => ({
      tags: [tags.users(id)],
      fn: () => db.users.findById(id),
    }),
  }),
});

That’s it. Same API you already know. Just faster, smarter, and with someone else waking up at 3am.

Sign up for free