Redis Managed Cache
Redis is an open-source, in-memory data structure store used as a database, cache, and message broker. DanubeData offers fully managed Redis instances with automatic failover, persistence, and easy scaling.
Overview
Redis provides:
- High Performance: Sub-millisecond response times
- Versatile Data Structures: Strings, hashes, lists, sets, sorted sets, and more
- Persistence: Optional data persistence to disk
- Replication: Master-replica architecture for high availability
- Pub/Sub: Real-time messaging capabilities
- Lua Scripting: Server-side scripting support
Supported Redis Versions
DanubeData supports the following Redis versions:
- Redis 7.2 (Latest - Recommended)
- Redis 7.0
- Redis 6.2 (LTS)
Recommendation: Use Redis 7.2 for new instances to get the latest features and performance improvements.
Creating a Redis Instance
Via Dashboard
- Navigate to Cache in the main menu
- Click Create Cache Instance
- Select Redis as the engine
- Choose Redis version
- Select a resource profile (see below)
- Choose a data center location
- Configure optional settings:
- Instance name
- Enable replicas for high availability
- Persistence settings (AOF, RDB, or both)
- Eviction policy
- Click Create Instance
Your Redis instance will be provisioned within 2-3 minutes.
Resource Profiles
Redis instances are available in multiple memory-optimized profiles:
| Profile | vCPUs | Memory | Max Connections | Price/Month |
|---|---|---|---|---|
| Cache-Micro | 1 | 1 GB | 10,000 | $15 |
| Cache-Small | 2 | 2 GB | 10,000 | $30 |
| Cache-Medium | 2 | 4 GB | 10,000 | $60 |
| Cache-Large | 4 | 8 GB | 10,000 | $120 |
| Cache-XLarge | 8 | 16 GB | 10,000 | $240 |
| Cache-2XLarge | 16 | 32 GB | 10,000 | $480 |
All profiles include:
- NVMe SSD for persistence
- Automatic failover with replicas
- Hourly billing with monthly cap
- TLS/SSL encryption
Connecting to Redis
Connection Details
After creation, you'll receive connection details:
Host: redis-123456.danubedata.com
Port: 6379
Password: [secure_password]
TLS: Required
Redis CLI
Connect using redis-cli:
redis-cli -h redis-123456.danubedata.com -p 6379 -a your_password --tls
Connection from Application
PHP (Predis)
<?php
require 'vendor/autoload.php';
use Predis\Client;
$client = new Client([
'scheme' => 'tls',
'host' => 'redis-123456.danubedata.com',
'port' => 6379,
'password' => 'your_password',
]);
// Set a value
$client->set('user:1000', 'John Doe');
// Get a value
$name = $client->get('user:1000');
// Set with expiration
$client->setex('session:abc123', 3600, json_encode(['user_id' => 1000]));
Python (redis-py)
import redis
import ssl
r = redis.Redis(
host='redis-123456.danubedata.com',
port=6379,
password='your_password',
ssl=True,
ssl_cert_reqs=ssl.CERT_REQUIRED,
decode_responses=True
)
# Set a value
r.set('user:1000', 'John Doe')
# Get a value
name = r.get('user:1000')
# Set with expiration
r.setex('session:abc123', 3600, '{"user_id": 1000}')
# Hash operations
r.hset('user:1000:profile', mapping={'name': 'John', 'age': 30})
r.hget('user:1000:profile', 'name')
Node.js (ioredis)
const Redis = require('ioredis');
const redis = new Redis({
host: 'redis-123456.danubedata.com',
port: 6379,
password: 'your_password',
tls: {
rejectUnauthorized: true
}
});
// Set a value
await redis.set('user:1000', 'John Doe');
// Get a value
const name = await redis.get('user:1000');
// Set with expiration
await redis.setex('session:abc123', 3600, JSON.stringify({user_id: 1000}));
// Hash operations
await redis.hset('user:1000:profile', 'name', 'John', 'age', 30); // HSET accepts multiple field-value pairs; HMSET is deprecated
const userName = await redis.hget('user:1000:profile', 'name');
Laravel
// config/database.php
'redis' => [
'client' => env('REDIS_CLIENT', 'phpredis'),
'default' => [
'url' => env('REDIS_URL'),
'host' => env('REDIS_HOST', 'redis-123456.danubedata.com'),
'password' => env('REDIS_PASSWORD'),
'port' => env('REDIS_PORT', '6379'),
'database' => env('REDIS_DB', '0'),
'scheme' => 'tls',
],
],
// Usage in application
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Redis;
// Set a value
Redis::set('user:1000', 'John Doe');
// Get a value
$name = Redis::get('user:1000');
// Cache usage
Cache::store('redis')->put('key', 'value', $seconds);
Redis Data Structures
Strings
Basic key-value storage:
# Set and get
SET user:1000:name "John Doe"
GET user:1000:name
# Increment counter
SET views:homepage 0
INCR views:homepage
INCRBY views:homepage 10
# Set with expiration
SETEX session:abc123 3600 "user_data"
# Set if not exists
SETNX lock:resource:1 "locked"
Hashes
Store objects as field-value pairs:
# Set hash fields
HSET user:1000 name "John Doe" email "john@example.com" age 30
# Get single field
HGET user:1000 name
# Get all fields
HGETALL user:1000
# Get multiple fields
HMGET user:1000 name email
# Increment field
HINCRBY user:1000 age 1
Lists
Ordered collections:
# Add to list
LPUSH notifications:user:1000 "New message"
RPUSH queue:emails "email1@example.com"
# Get from list
LPOP notifications:user:1000
RPOP queue:emails
# Get range
LRANGE queue:emails 0 10
# List length
LLEN queue:emails
# Blocking pop (for queues)
BLPOP queue:emails 0
Sets
Unordered collections of unique elements:
# Add to set
SADD tags:article:100 "redis" "caching" "performance"
# Check membership
SISMEMBER tags:article:100 "redis"
# Get all members
SMEMBERS tags:article:100
# Set operations
SINTER tags:article:100 tags:article:101 # Intersection
SUNION tags:article:100 tags:article:101 # Union
SDIFF tags:article:100 tags:article:101 # Difference
Sorted Sets
Ordered sets with scores:
# Add members with scores
ZADD leaderboard 1000 "player1" 950 "player2" 1200 "player3"
# Get by rank
ZRANGE leaderboard 0 10 WITHSCORES
# Get by score
ZRANGEBYSCORE leaderboard 1000 2000
# Get rank
ZRANK leaderboard "player1"
# Increment score
ZINCRBY leaderboard 50 "player1"
Common Use Cases
Session Storage
import redis
import json
r = redis.Redis(host='redis-123456.danubedata.com', ...)
def create_session(session_id, user_data, ttl=3600):
"""Create a user session with TTL"""
r.setex(f'session:{session_id}', ttl, json.dumps(user_data))
def get_session(session_id):
"""Retrieve session data"""
data = r.get(f'session:{session_id}')
return json.loads(data) if data else None
def update_session(session_id, updates):
"""Update session and refresh TTL"""
data = get_session(session_id)
if data:
data.update(updates)
create_session(session_id, data)
Caching
function get_user($user_id) {
// Try cache first
$cached = Redis::get("user:{$user_id}");
if ($cached) {
return json_decode($cached, true);
}
// Cache miss - fetch from database
$user = User::find($user_id);
// Store in cache for 1 hour
Redis::setex("user:{$user_id}", 3600, json_encode($user));
return $user;
}
function invalidate_user_cache($user_id) {
Redis::del("user:{$user_id}");
}
Rate Limiting
def is_rate_limited(user_id, limit=100, window=60):
"""Check if user exceeded rate limit
Args:
user_id: User identifier
limit: Max requests per window
window: Time window in seconds
Returns:
bool: True if rate limited
"""
key = f'rate_limit:{user_id}'
current = r.incr(key)
if current == 1:
r.expire(key, window)
return current > limit
Leaderboard
class Leaderboard {
constructor(redis, name) {
this.redis = redis;
this.key = `leaderboard:${name}`;
}
async addScore(player, score) {
await this.redis.zadd(this.key, score, player);
}
async getTop(n = 10) {
return await this.redis.zrevrange(this.key, 0, n-1, 'WITHSCORES');
}
async getRank(player) {
return await this.redis.zrevrank(this.key, player);
}
async getScore(player) {
return await this.redis.zscore(this.key, player);
}
}
Distributed Locking
import uuid
def acquire_lock(resource_id, timeout=10):
"""Acquire distributed lock"""
lock_key = f'lock:{resource_id}'
lock_value = str(uuid.uuid4())
# Try to acquire lock
acquired = r.set(lock_key, lock_value, nx=True, ex=timeout)
return lock_value if acquired else None
def release_lock(resource_id, lock_value):
"""Release distributed lock"""
lock_key = f'lock:{resource_id}'
# Lua script for atomic check-and-delete
lua_script = """
if redis.call("get", KEYS[1]) == ARGV[1] then
return redis.call("del", KEYS[1])
else
return 0
end
"""
return r.eval(lua_script, 1, lock_key, lock_value)
Pub/Sub Messaging
# Publisher
def publish_message(channel, message):
r.publish(channel, json.dumps(message))
# Subscriber
def subscribe_to_channel(channel):
pubsub = r.pubsub()
pubsub.subscribe(channel)
for message in pubsub.listen():
if message['type'] == 'message':
data = json.loads(message['data'])
process_message(data)
Performance Optimization
Pipelining
Batch multiple commands for better performance:
# Without pipelining (3 round trips)
r.set('key1', 'value1')
r.set('key2', 'value2')
r.set('key3', 'value3')
# With pipelining (1 round trip)
pipe = r.pipeline()
pipe.set('key1', 'value1')
pipe.set('key2', 'value2')
pipe.set('key3', 'value3')
pipe.execute()
Connection Pooling
Reuse connections efficiently:
import redis
pool = redis.ConnectionPool(
host='redis-123456.danubedata.com',
port=6379,
password='your_password',
max_connections=50,
connection_class=redis.SSLConnection  # TLS connection class; ConnectionPool has no ssl flag
)
# Use pool for all connections
r = redis.Redis(connection_pool=pool)
Key Naming
Use consistent, hierarchical key naming:
user:1000:profile
user:1000:sessions
user:1000:cart
article:500:views
article:500:comments
cache:homepage:en
cache:homepage:es
Expiration
Always set expiration on temporary data:
# Absolute expiration
r.setex('temp:data', 3600, 'value') # Expires in 1 hour
# Set expiration on existing key
r.expire('existing:key', 7200) # Expires in 2 hours
# Check time to live
ttl = r.ttl('existing:key')
Persistence
Persistence Options
DanubeData Redis offers two persistence methods:
RDB (Point-in-time Snapshots)
- How it works: Periodic snapshots of dataset
- Pros: Compact, fast restart, good for backups
- Cons: Potential data loss between snapshots
- Use case: Can tolerate some data loss
AOF (Append-Only File)
- How it works: Logs every write operation
- Pros: More durable, minimal data loss
- Cons: Larger files, slower restart
- Use case: Need maximum durability
Both (Recommended)
- Use RDB for backups and fast restarts
- Use AOF for durability
- Best of both worlds
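If you want to confirm which persistence modes are active from a client, the INFO persistence section reports this. A minimal redis-py sketch, assuming the r connection from the earlier examples (read-only, so it is safe to run against a managed instance):
# Check persistence status
info = r.info('persistence')
print('AOF enabled:', info['aof_enabled'])                           # 1 if AOF is on
print('Last RDB snapshot (unix time):', info['rdb_last_save_time'])
print('Last RDB snapshot status:', info['rdb_last_bgsave_status'])   # 'ok' if the last snapshot succeeded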
Configuring Persistence
- Navigate to your Redis instance
- Click Settings > Persistence
- Select persistence method:
- None (cache-only, no persistence)
- RDB only
- AOF only
- RDB + AOF (recommended)
- Click Save
Note: Persistence settings can be changed at any time, but changing them requires an instance restart.
Eviction Policies
When Redis reaches max memory, it can evict keys based on policy:
Available Policies
- noeviction: Return errors on writes when the memory limit is reached (default)
- allkeys-lru: Evict the least recently used keys across all keys
- volatile-lru: Evict the least recently used keys among those with a TTL set
- allkeys-random: Evict random keys across all keys
- volatile-random: Evict random keys among those with a TTL set
- volatile-ttl: Evict keys with the shortest remaining TTL
Choosing Eviction Policy
For Caching (recommended):
allkeys-lru
Evicts least recently used keys to make room for new data.
For Mixed Workload:
volatile-lru
Only evicts keys with an expiration set, preserving keys without one.
For Guaranteed Writes:
noeviction
Returns errors on writes when memory is full; no keys are ever evicted.
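To check which policy is active and whether evictions are actually happening, you can query the instance from a client. A minimal redis-py sketch, assuming the r connection from the earlier examples; note that CONFIG commands may be restricted on some managed plans:
# Current eviction policy (CONFIG may be unavailable on managed instances)
policy = r.config_get('maxmemory-policy')
print('Eviction policy:', policy.get('maxmemory-policy'))
# Keys evicted since the server started (from INFO stats)
print('Evicted keys:', r.info('stats')['evicted_keys'])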
Setting Eviction Policy
- Navigate to your Redis instance
- Click Settings > Configuration
- Select Eviction Policy
- Click Save (no restart required)
Monitoring and Maintenance
Key Metrics
Monitor these metrics in the dashboard:
- Memory Usage: Current memory consumption
- Hit Rate: Cache hit ratio (should be > 90%; see the sketch after this list for computing it from INFO stats)
- Commands/sec: Operations per second
- Connected Clients: Active connections
- Evicted Keys: Keys evicted due to memory pressure
- Expired Keys: Keys expired naturally
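Hit rate is not exposed as a single field; it is derived from keyspace_hits and keyspace_misses in INFO stats. A minimal redis-py sketch, assuming the r connection from the earlier examples:
# Compute the cache hit rate from INFO stats
stats = r.info('stats')
hits, misses = stats['keyspace_hits'], stats['keyspace_misses']
total = hits + misses
hit_rate = (hits / total * 100) if total else 0.0
print(f'Hit rate: {hit_rate:.1f}%')  # aim for > 90% on cache workloads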
Redis Commands for Monitoring
# Memory info
INFO memory
# Stats
INFO stats
# Key space info
INFO keyspace
# Slow log
SLOWLOG GET 10
# Connected clients
CLIENT LIST
# Check specific key
TYPE mykey
TTL mykey
MEMORY USAGE mykey
Best Practices
Key Management
- Use meaningful, hierarchical key names
- Set expiration on temporary data
- Avoid storing very large values (> 1 MB) under a single key
- Use hashes for objects instead of serialized strings
- Regularly clean up unused keys
Performance
- Use pipelining for batch operations
- Implement connection pooling
- Avoid KEYS command in production; use SCAN instead (see the sketch after this list)
- Monitor slow log regularly
- Keep values reasonably sized (< 100KB ideal)
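As noted above, KEYS blocks the server while it walks the entire keyspace, while SCAN iterates incrementally in small batches. A minimal redis-py sketch, assuming the r connection from the earlier examples; handle_key is a hypothetical placeholder for your own logic:
# Iterate keys matching a pattern without blocking the server
for key in r.scan_iter(match='user:*', count=100):
    handle_key(key)  # hypothetical handler; replace with your own logic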
Security
- Always use TLS/SSL connections
- Use strong passwords
- Limit Redis access via firewalls
- Don't expose Redis directly to internet
- Regularly rotate passwords
High Availability
- Enable replicas for production instances
- Configure automatic failover
- Monitor replication lag
- Test failover procedures
- Use DNS endpoints (not IP addresses)
Troubleshooting
High Memory Usage
Symptoms: Redis using more memory than expected
Solutions:
- Review the key space with INFO keyspace
- Check for keys without expiration (see the sketch after this list)
- Implement eviction policy
- Consider scaling to larger profile
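To find keys that never expire (as suggested above), combine SCAN with TTL: a TTL of -1 means the key exists but has no expiration. A minimal redis-py sketch, assuming the r connection from the earlier examples:
# Count keys that have no expiration set
no_expiry = 0
for key in r.scan_iter(count=500):
    if r.ttl(key) == -1:  # -1 means no TTL set on this key
        no_expiry += 1
print('Keys without expiration:', no_expiry)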
Low Hit Rate
Symptoms: Cache hit rate below 80%
Solutions:
- Review cache key structure
- Increase TTL for stable data
- Pre-warm cache with common queries
- Review access patterns
Connection Errors
Symptoms: Cannot connect to Redis
Solutions:
- Verify connection details
- Check TLS/SSL configuration
- Ensure firewall allows your IP
- Test with redis-cli
- Check Redis instance status
Slow Performance
Symptoms: High latency for Redis operations
Solutions:
- Check slow log for expensive commands
- Review network latency
- Use pipelining for batch operations
- Consider upgrading instance profile
- Monitor CPU usage