t87s Cloud
Sometimes, when you vacation in Florida, a glitzy salesman corners you with free coffee and a timeshare pitch. In t87s, that time is now.
Why Cloud?
Look, we built the open-source library because we love caching. We built the cloud because we love money. But also because self-hosting cache infrastructure is a pain in the ass and we’re genuinely good at it.
Our cloud is:
- Fast. Sub-millisecond response times. We’re talking 0.3ms p50, 0.8ms p99.
- Scalable. We’re built on Cloudflare using their globally distributed network.
- Smart. Real-time cache optimization via AI analysis. We analyze your cache patterns and auto-tune TTLs so you don’t have to guess.
- Simple. One API key. No Redis cluster to babysit.
Benchmarks
| Adapter | p50 | p99 | Ops/sec |
|---|---|---|---|
| Memory | 0.01ms | 0.05ms | ∞ (local) |
| Redis | 1.2ms | 4.5ms | 50k |
| t87s Cloud | 0.3ms | 0.8ms | 500k |
Yes, we’re faster than Redis for a globally distributed app. No, we didn’t stack the deck in our favor. Yes, the benchmarks are open source, and you can reproduce them at your leisure.
The API
Query your cache analytics with SQL via POST /v1/query.
```sh
curl -X POST https://api.t87s.dev/v1/query \
  -H "Authorization: Bearer $T87S_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT * FROM verifications LIMIT 10"}'
```

Tables
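If you’d rather script it than shell out, the same call is a few lines of stdlib Python. A minimal sketch — the URL and headers come from the curl example above; the response shape isn’t documented here, so we just return whatever JSON comes back:

```python
import json
import os
import urllib.request

API_URL = "https://api.t87s.dev/v1/query"

def query_request(sql: str, api_key: str) -> urllib.request.Request:
    """Build the POST /v1/query request from the curl example."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"sql": sql}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def run_query(sql: str) -> dict:
    """Send the query and parse the (undocumented-shape) JSON response."""
    req = query_request(sql, os.environ["T87S_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```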
| Table | Columns |
|---|---|
| verifications | key, cached_hash, fresh_hash, is_stale, timestamp |
| cache_operations | type, key, timestamp |
| invalidations | id, tag, exact, timestamp |
| invalidated | key, invalidation_id, matched_tag, timestamp |
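The column lists above map straight onto a local SQLite mirror, which is handy for trying queries offline before spending API calls. A sketch using Python’s sqlite3 — the column types are assumptions, since the docs only name the columns:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE verifications    (key TEXT, cached_hash TEXT, fresh_hash TEXT,
                                   is_stale INTEGER, timestamp INTEGER);
    CREATE TABLE cache_operations (type TEXT, key TEXT, timestamp INTEGER);
    CREATE TABLE invalidations    (id INTEGER PRIMARY KEY, tag TEXT,
                                   exact INTEGER, timestamp INTEGER);
    CREATE TABLE invalidated      (key TEXT, invalidation_id INTEGER,
                                   matched_tag TEXT, timestamp INTEGER);
""")

# Fake some traffic: 8 hits and 2 misses, timestamped now (milliseconds).
now_ms = int(time.time() * 1000)
rows = [("hit", f"user:{i % 4}", now_ms) for i in range(8)]
rows += [("miss", "user:9", now_ms), ("miss", "user:10", now_ms)]
conn.executemany("INSERT INTO cache_operations VALUES (?, ?, ?)", rows)

# Run the hit-rate query from the Example Queries section below, unchanged.
hits, misses, sets, hit_rate_pct = conn.execute("""
    SELECT SUM(CASE WHEN type = 'hit' THEN 1 ELSE 0 END) as hits,
           SUM(CASE WHEN type = 'miss' THEN 1 ELSE 0 END) as misses,
           SUM(CASE WHEN type = 'set' THEN 1 ELSE 0 END) as sets,
           ROUND(SUM(CASE WHEN type = 'hit' THEN 1 ELSE 0 END) * 100.0 /
                 NULLIF(SUM(CASE WHEN type IN ('hit', 'miss') THEN 1 ELSE 0 END), 0), 1)
    FROM cache_operations
    WHERE timestamp > (strftime('%s', 'now') - 86400) * 1000
""").fetchone()
print(hits, misses, hit_rate_pct)  # 8 2 80.0
```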
Example Queries
Staleness by Key (24h) — Shows cache keys ranked by verification sample count. “stale” = times the cached value differed from a fresh fetch. High stale_pct suggests TTL is too long or invalidation is missing.
```sql
SELECT key,
  COUNT(*) as samples,
  SUM(is_stale) as stale,
  ROUND(SUM(is_stale) * 100.0 / COUNT(*), 1) as stale_pct
FROM verifications
WHERE timestamp > (strftime('%s', 'now') - 86400) * 1000
GROUP BY key
ORDER BY samples DESC
LIMIT 100
```

Potential Issues — Keys that may need attention: at least 10 samples AND >10% stale rate. These are candidates for shorter TTLs or adding invalidation triggers. Empty results = your cache is healthy!
```sql
SELECT key,
  COUNT(*) as samples,
  SUM(is_stale) as stale,
  ROUND(SUM(is_stale) * 100.0 / COUNT(*), 1) as stale_pct
FROM verifications
WHERE timestamp > (strftime('%s', 'now') - 86400) * 1000
GROUP BY key
HAVING COUNT(*) >= 10 AND (SUM(is_stale) * 100.0 / COUNT(*)) > 10
ORDER BY stale_pct DESC
```

Cache Hit Rate (24h) — Cache performance breakdown: hits, misses, and sets. hit_rate_pct = hits / (hits + misses), as a percentage. Higher is better. Many misses? Check whether your invalidations are too aggressive.
```sql
SELECT
  SUM(CASE WHEN type = 'hit' THEN 1 ELSE 0 END) as hits,
  SUM(CASE WHEN type = 'miss' THEN 1 ELSE 0 END) as misses,
  SUM(CASE WHEN type = 'set' THEN 1 ELSE 0 END) as sets,
  ROUND(SUM(CASE WHEN type = 'hit' THEN 1 ELSE 0 END) * 100.0 /
    NULLIF(SUM(CASE WHEN type IN ('hit', 'miss') THEN 1 ELSE 0 END), 0), 1) as hit_rate_pct
FROM cache_operations
WHERE timestamp > (strftime('%s', 'now') - 86400) * 1000
```

Recent Invalidations — Tag invalidations in the last 24 hours. exact = true invalidates only that exact tag; false does a prefix match (invalidates children). “affected_keys” = cache entries invalidated.
```sql
SELECT i.tag,
  i.exact,
  datetime(i.timestamp/1000, 'unixepoch') as time,
  COUNT(inv.invalidation_id) as affected_keys
FROM invalidations i
LEFT JOIN invalidated inv ON inv.invalidation_id = i.id
WHERE i.timestamp > (strftime('%s', 'now') - 86400) * 1000
GROUP BY i.id
ORDER BY i.timestamp DESC
LIMIT 50
```

Blast Radius (24h) — Which invalidations affected the most cache entries? Large blast radius may indicate overly broad tags. Consider more specific tags to reduce unnecessary cache misses.
```sql
SELECT i.tag,
  i.exact,
  datetime(i.timestamp/1000, 'unixepoch') as time,
  COUNT(inv.invalidation_id) as affected_keys
FROM invalidations i
LEFT JOIN invalidated inv ON inv.invalidation_id = i.id
WHERE i.timestamp > (strftime('%s', 'now') - 86400) * 1000
GROUP BY i.id
ORDER BY affected_keys DESC
LIMIT 20
```

Hourly Verifications — Verification samples per hour (last 24 hours). Verifications randomly re-fetch to detect stale cache entries. Spikes in the “stale” column indicate periods of cache inconsistency.
```sql
SELECT strftime('%Y-%m-%d %H:00', timestamp/1000, 'unixepoch') as hour,
  COUNT(*) as total,
  SUM(is_stale) as stale
FROM verifications
GROUP BY hour
ORDER BY hour DESC
LIMIT 24
```

Using with LLMs
The Query Explorer includes a “Prompt for LLMs” button that copies your API key and documentation to your clipboard. Paste it into Claude, ChatGPT, Cursor, or any AI assistant to analyze your cache patterns.
Ask questions like:
- “What cache issues do I have?”
- “Which keys have the highest stale rate?”
- “Should I increase the TTL on user:* keys?”
For the supremely lazy and crafty dev, one popular approach is the “dumb cache”—wrap everything in a cache call without any tags, deploy to production, and let it run for 48-72 hours. Then, use the “Prompt for LLMs” button and your favorite AI assistant to analyze your actual traffic patterns and tell you exactly which operations benefit from caching and what TTLs make sense.
Pricing
Free tier: 10k operations/month. No credit card required.
After that: $0.001 per operation.
Let’s say you have a small SaaS with 50 daily active users. Each user averages 3 sessions per day, and each session hits your cache 10 times. That’s 50 × 3 × 10 = 1,500 operations per day, or about 45k per month. Subtract the 10k free tier, and you’re at 35k × $0.001 = $35/month.
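That back-of-the-envelope math folds into a one-liner you can reuse for your own numbers. A sketch — the $0.001/op price and 10k free tier come from above; the 30-day month is an assumption:

```python
FREE_OPS_PER_MONTH = 10_000
PRICE_PER_OP = 0.001  # dollars

def monthly_cost(daily_users: int, sessions_per_day: int,
                 ops_per_session: int, days: int = 30) -> float:
    """Estimated monthly bill in dollars, after the free tier."""
    total_ops = daily_users * sessions_per_day * ops_per_session * days
    billable = max(0, total_ops - FREE_OPS_PER_MONTH)
    return round(billable * PRICE_PER_OP, 2)

print(monthly_cost(50, 3, 10))  # 35.0 — the $35/month example above
```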
Get Started
```typescript
import { QueryCache, at, wild, CloudAdapter } from '@t87s/core';

const cache = QueryCache({
  schema: at('users', () => wild),
  adapter: new CloudAdapter({ apiKey: process.env.T87S_API_KEY! }),
  queries: (tags) => ({
    getUser: (id: string) => ({
      tags: [tags.users(id)],
      fn: () => db.users.findById(id),
    }),
  }),
});
```

```python
import os

from t87s import QueryCache, TagSchema, Wild, cached
from t87s.adapters import AsyncCloudAdapter

class Tags(TagSchema):
    users: Wild[TagSchema]

class Cache(QueryCache[Tags]):
    @cached(Tags.users())
    async def get_user(self, id: str):
        return await db.users.find_by_id(id)

cache = Cache(adapter=AsyncCloudAdapter(api_key=os.environ["T87S_API_KEY"]))
```

That’s it. Same API you already know. Just faster, smarter, and with someone else waking up at 3am.