Caching is one of the most effective ways to improve application performance. Redis, with its sub-millisecond latency and rich data structures, is the go-to choice for application caching. But implementing caching correctly requires understanding the tradeoffs.
Why Cache with Redis?
| Data Source | Typical Latency | Throughput |
|---|---|---|
| Redis (local network) | < 1ms | 100K+ ops/sec |
| PostgreSQL (simple query) | 1-10ms | 1-10K queries/sec |
| External API | 50-500ms | Rate limited |
A well-implemented cache can reduce database load by 80-90% and cut response times dramatically.
Caching Patterns
1. Cache-Aside (Lazy Loading)
The most common pattern: the application manages the cache explicitly:
async function getUser(userId) {
  // 1. Check cache first
  const cached = await redis.get(`user:${userId}`);
  if (cached) {
    return JSON.parse(cached);
  }

  // 2. Cache miss - fetch from database
  const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);

  // 3. Store in cache for next time (skip if the user doesn't exist,
  //    so we don't cache an empty result)
  if (user) {
    await redis.setex(`user:${userId}`, 3600, JSON.stringify(user));
  }
  return user;
}
Pros:
- Only caches data that's actually requested
- Cache failures don't break the application
- Simple to understand and implement
Cons:
- First request always hits the database (cache miss)
- Cache can become stale
2. Write-Through
The cache is updated synchronously with the database:
async function updateUser(userId, data) {
  // 1. Update database
  await db.query('UPDATE users SET ? WHERE id = ?', [data, userId]);

  // 2. Re-read the row and update the cache immediately
  const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
  await redis.setex(`user:${userId}`, 3600, JSON.stringify(user));
  return user;
}
Pros:
- Cache is always up-to-date
- No stale reads after writes
Cons:
- Write latency increases (two operations)
- May cache data that's never read
3. Write-Behind (Write-Back)
Writes go to the cache first, then asynchronously to the database:
async function updateUserAsync(userId, data) {
  // 1. Update cache immediately
  await redis.setex(`user:${userId}`, 3600, JSON.stringify(data));

  // 2. Queue the database write for later (a worker sketch follows below)
  await queue.add('update-user', { userId, data });
  return data;
}
Pros:
- Fastest write performance
- Good for write-heavy workloads
Cons:
- Risk of data loss if cache fails before database write
- Complex consistency handling
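The queued write still needs a consumer that drains the queue into the database. A minimal sketch of that worker, assuming a BullMQ-style queue (the `queue` object above and the connection details here are illustrative; adapt to whatever queue library you actually use):

const { Worker } = require('bullmq');

// Processes the 'update-user' jobs queued by updateUserAsync above.
// Assumes the same payload shape ({ userId, data }).
const worker = new Worker('update-user', async (job) => {
  const { userId, data } = job.data;
  await db.query('UPDATE users SET ? WHERE id = ?', [data, userId]);
}, { connection: { host: 'localhost', port: 6379 } });

worker.on('failed', (job, err) => {
  // At this point the cache and database have diverged - alert on it
  console.error('update-user job failed:', err);
});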
Cache Invalidation Strategies
"There are only two hard things in Computer Science: cache invalidation and naming things." — Phil Karlton
Time-Based Expiration (TTL)
Simplest approach—data expires after a set time:
// Cache for 1 hour
await redis.setex('popular-products', 3600, JSON.stringify(products));
Best for: Data that can tolerate staleness (product listings, blog posts)
Event-Based Invalidation
Delete cache when underlying data changes:
async function updateProduct(productId, data) {
  await db.query('UPDATE products SET ? WHERE id = ?', [data, productId]);

  // Invalidate all related caches in a single round trip
  await redis.del(
    `product:${productId}`,
    'popular-products',
    `category:${data.categoryId}:products`
  );
}
Best for: Data requiring consistency (user profiles, inventory counts)
Tag-Based Invalidation
Group related cache keys with tags for bulk invalidation:
// When caching, track tags
await redis.sadd('tag:products', 'product:1', 'product:2', 'popular-products');
// Invalidate all product caches at once
async function invalidateProductCaches() {
  const keys = await redis.smembers('tag:products');
  if (keys.length > 0) {
    await redis.del(...keys);
    await redis.del('tag:products');
  }
}
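For this to work, every cache write has to register its key under the right tags. A small helper keeps the two writes together (the helper name and tag scheme here are illustrative, not a library API):

// Hypothetical helper: cache a value and record its key under each tag.
// Note: the tag set itself never expires, so it may reference keys whose
// TTL has already lapsed - DEL on missing keys is harmless.
async function setWithTags(key, ttlSeconds, value, tags) {
  await redis.setex(key, ttlSeconds, JSON.stringify(value));
  for (const tag of tags) {
    await redis.sadd(`tag:${tag}`, key);
  }
}

// Usage: this key is now cleaned up by invalidateProductCaches()
await setWithTags('product:1', 3600, product, ['products']);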
What to Cache
High-Value Cache Targets
| Data Type | TTL | Invalidation |
|---|---|---|
| Session data | 24 hours | On logout |
| User profiles | 1 hour | On update |
| Configuration | 5 minutes | On change |
| API responses | Varies | TTL only |
| Computed aggregates | 15 minutes | TTL + event |
| Rate limit counters | 1 minute | Automatic |
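The rate-limit row deserves a closer look, since its "automatic" invalidation is just a short TTL doing the work. A minimal fixed-window limiter sketch (the key naming and limit are illustrative):

// Allow `limit` requests per user per 60-second window
async function isRateLimited(userId, limit = 100) {
  const key = `ratelimit:${userId}`;
  const count = await redis.incr(key);
  if (count === 1) {
    // First request in this window: start the 60s countdown that
    // "automatically" invalidates the counter
    await redis.expire(key, 60);
  }
  return count > limit;
}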
Things NOT to Cache
- Highly dynamic data: Real-time prices, live scores
- Sensitive data: Passwords, payment info (security risk)
- Large blobs: Images, files (use CDN instead)
- User-specific data at scale: 1M users × 10KB = 10GB cache
Redis Data Structures for Caching
Strings
Simple key-value, most common:
SET user:123 '{"name":"John"}'
GET user:123
Hashes
When you need to update individual fields:
HSET user:123 name "John" email "john@example.com"
HGET user:123 name
HINCRBY user:123 login_count 1
Sorted Sets
For leaderboards and ranked data:
ZADD leaderboard 1000 "player:1" 950 "player:2"
ZRANGE leaderboard 0 9 REV WITHSCORES  # REV requires Redis 6.2+; use ZREVRANGE on older versions
Lists
For recent items, activity feeds:
LPUSH recent:user:123 "viewed product:456"
LTRIM recent:user:123 0 99 # Keep last 100
LRANGE recent:user:123 0 9
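In application code, the push and the trim belong together so the list can't grow unbounded between them. A sketch using a MULTI transaction (ioredis-style chaining assumed):

// Record an event and cap the feed at 100 entries atomically
async function recordActivity(userId, event) {
  await redis.multi()
    .lpush(`recent:user:${userId}`, event)
    .ltrim(`recent:user:${userId}`, 0, 99)
    .exec();
}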
Cache Warming
Pre-populate cache to avoid cold-start latency:
// On application startup or schedule
async function warmCache() {
  const popularProducts = await db.query(
    'SELECT * FROM products ORDER BY views DESC LIMIT 100'
  );
  for (const product of popularProducts) {
    await redis.setex(
      `product:${product.id}`,
      3600,
      JSON.stringify(product)
    );
  }
}
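Awaiting each SETEX individually costs a network round trip per key. If your client supports pipelining (ioredis-style calls assumed here), batch the writes into one round trip:

async function warmCachePipelined() {
  const popularProducts = await db.query(
    'SELECT * FROM products ORDER BY views DESC LIMIT 100'
  );
  // Queue all SETEX commands locally, then flush them in one round trip
  const pipeline = redis.pipeline();
  for (const product of popularProducts) {
    pipeline.setex(`product:${product.id}`, 3600, JSON.stringify(product));
  }
  await pipeline.exec();
}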
Monitoring Cache Effectiveness
Track these metrics:
- Hit rate: % of requests served from cache (target: >90%)
- Miss rate: % of requests hitting the database
- Latency: Cache response time (should be <1ms)
- Memory usage: Are you approaching limits?
- Evictions: Keys removed due to memory pressure
# Redis INFO stats
redis-cli INFO stats | grep -E "keyspace_hits|keyspace_misses"
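Redis reports the raw counters but not the ratio, so compute the hit rate yourself. A quick sketch (ioredis-style `info()` call assumed); note the counters are cumulative since server start, so diff two samples if you want a current-window rate:

async function cacheHitRate() {
  const stats = await redis.info('stats');
  const hits = Number(stats.match(/keyspace_hits:(\d+)/)[1]);
  const misses = Number(stats.match(/keyspace_misses:(\d+)/)[1]);
  const total = hits + misses;
  return total > 0 ? hits / total : 0;
}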
Common Pitfalls
1. Cache Stampede
Many requests hit the database at once when a popular key expires. Solution: staggered TTLs or locking:
const TTL = 3600;
const JITTER = Math.floor(Math.random() * 300); // 0-5 minutes, in whole seconds
await redis.setex(key, TTL + JITTER, value); // SETEX requires an integer TTL
2. Thundering Herd
Multiple processes try to rebuild the same cache entry simultaneously. Solution: a distributed lock:
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function getWithLock(key, fetchFn) {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const lockKey = `lock:${key}`;
  const acquired = await redis.set(lockKey, '1', 'NX', 'EX', 10);
  if (acquired) {
    try {
      const data = await fetchFn();
      await redis.setex(key, 3600, JSON.stringify(data));
      return data;
    } finally {
      // Release even if fetchFn throws. Caveat: if the fetch outlives the
      // 10s lock expiry, this can delete another process's lock; store a
      // unique token and verify it before deleting for a stricter version.
      await redis.del(lockKey);
    }
  }

  // Another process holds the lock - wait briefly, then retry
  await sleep(100);
  return getWithLock(key, fetchFn);
}
3. Storing Too Much
Cache what you need, not entire objects:
// Bad: caching entire user object with relations
await redis.set('user:123', JSON.stringify(userWithAllRelations));

// Good: cache only what's frequently needed
await redis.set('user:123:profile', JSON.stringify({
  id: user.id,
  name: user.name,
  avatar: user.avatar
}));
Conclusion
Effective caching requires understanding your data access patterns and choosing the right strategy. Start with cache-aside for simplicity, implement proper invalidation, and monitor your hit rates.
Ready to implement caching? Deploy a managed Redis instance and start optimizing your application performance.