Cache Instances Overview
DanubeData provides fully managed Redis cache instances with persistence, replication, and high availability.
What are Cache Instances?
Cache instances are managed Redis servers that provide:
- High Performance: In-memory data storage
- Persistence: Optional data persistence to disk
- Replication: Read replicas for scaling
- Monitoring: Real-time performance metrics
- Security: Firewall integration and authentication
Redis Features
Key Features
- In-Memory Storage: Lightning-fast read/write operations
- Data Structures: Strings, hashes, lists, sets, sorted sets
- Pub/Sub: Message broker functionality
- Lua Scripting: Server-side script execution
- Transactions: MULTI/EXEC support
- Persistence: RDB and AOF options
Use Cases
- Session Storage: Store user sessions
- Caching: Cache database queries and API responses
- Real-time Analytics: Count and track events
- Message Queue: Task queue and pub/sub messaging
- Leaderboards: Sorted sets for rankings
Getting Started
Create a Cache Instance
- Navigate to Cache Instances in the main menu
- Click Create Cache Instance
- Enter a name for your instance
- Select a resource profile
- Configure persistence settings
- Click Create Cache
Your Redis instance will be ready in 1-2 minutes!
Connect to Your Cache
Once your cache is running, you'll receive:
- Hostname: Connection endpoint
- Port: 6379 (default Redis port)
- Password: Secure password
Example Connection:

```bash
redis-cli -h your-cache-hostname -p 6379 -a your-password
```
Example with Code:

```python
import redis

r = redis.Redis(
    host='your-cache-hostname',
    port=6379,
    password='your-password'
)
r.set('key', 'value')
print(r.get('key'))  # b'value' (responses are bytes by default)
```
Resource Profiles
| Profile | vCPU | RAM | Price/hour |
|---|---|---|---|
| Micro | 1 | 1 GB | $0.008 |
| Small | 2 | 2 GB | $0.016 |
| Medium | 4 | 4 GB | $0.032 |
| Large | 8 | 8 GB | $0.064 |
| XL | 16 | 16 GB | $0.128 |
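The hourly rates above translate to monthly cost roughly as follows (a sketch assuming a ~730-hour average month; actual billing may differ):

```python
# Hourly rates from the table above (USD)
PROFILES = {"Micro": 0.008, "Small": 0.016, "Medium": 0.032, "Large": 0.064, "XL": 0.128}

HOURS_PER_MONTH = 730  # assumption: average month length


def monthly_cost(profile: str) -> float:
    """Estimated monthly cost for a single instance of the given profile."""
    return round(PROFILES[profile] * HOURS_PER_MONTH, 2)


print(monthly_cost("Small"))  # 11.68
```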
Persistence Options
No Persistence
- Fastest performance
- Data lost on restart
- Best for: Temporary caching
RDB (Redis Database)
- Point-in-time snapshots
- Low performance impact
- Best for: General caching with occasional persistence
AOF (Append-Only File)
- Every write operation logged
- Maximum durability
- Best for: Session storage, critical data
RDB + AOF
- Best of both worlds
- Highest durability
- Best for: Production workloads requiring persistence
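For reference, the persistence modes above correspond to standard Redis configuration directives (shown as plain `redis.conf` lines; on a managed instance these are set through the dashboard rather than edited by hand):

```conf
# RDB: snapshot if at least 1 key changed in 900s, 10 in 300s, or 10000 in 60s
save 900 1
save 300 10
save 60 10000

# AOF: log every write, fsync to disk once per second
appendonly yes
appendfsync everysec
```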
Replication
Scale read operations with replicas:
Add a Replica
- Go to your cache instance page
- Click Add Replica
- Select a node
- Click Create Replica
Benefits
- Read Scaling: Offload reads to replicas
- High Availability: Automatic failover
- Geographic Distribution: Place replicas closer to users
Monitoring
Track cache performance:
Key Metrics
- CPU Usage: Monitor compute resources
- Memory Usage: Track memory utilization
- Connections: Active client connections
- Operations per Second: Command throughput
- Hit Rate: Cache hit/miss ratio
- Network I/O: Bandwidth usage
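The hit rate above can be derived from the `keyspace_hits` and `keyspace_misses` counters that Redis reports via `INFO stats`. A small sketch:

```python
def hit_rate(keyspace_hits: int, keyspace_misses: int) -> float:
    """Cache hit rate as a fraction between 0 and 1."""
    total = keyspace_hits + keyspace_misses
    return keyspace_hits / total if total else 0.0


# With a live client: stats = r.info('stats')
# hit_rate(stats['keyspace_hits'], stats['keyspace_misses'])
print(hit_rate(9000, 1000))  # 0.9
```

A hit rate well below ~0.8 usually means keys are being evicted too early or the cache is missing frequently requested data.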
Memory Management
- Monitor memory usage to prevent OOM
- Set eviction policies (LRU, LFU, etc.)
- Configure max memory limit
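The limits and policies above map to standard Redis settings (shown as plain config directives for reference; a managed instance typically exposes these through the dashboard):

```conf
# Cap memory usage below the instance's RAM
maxmemory 768mb

# Evict least-recently-used keys when the limit is reached
maxmemory-policy allkeys-lru
```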
Snapshots
Create point-in-time snapshots:
- Go to your cache instance page
- Click Snapshots tab
- Click Create Snapshot
- Enter a name
- Click Create
Use Cases
- Backup before major changes
- Clone cache instances
- Migrate to different regions
Best Practices
Performance
- Use connection pooling
- Batch commands with pipelining
- Use appropriate data structures
- Monitor memory usage
- Set appropriate TTLs
Security
- Use strong passwords
- Enable firewalls
- Use private networks
- Disable dangerous commands
- Regular security updates
Reliability
- Enable persistence for important data
- Use replicas for high availability
- Take regular snapshots
- Monitor replication lag
- Test failover procedures
Memory Management
- Set max memory limits
- Configure eviction policies
- Monitor memory usage
- Remove expired keys regularly
- Use appropriate TTLs
Common Operations
Session Storage
```python
# Store session for one hour (3600 seconds)
r.setex(f'session:{user_id}', 3600, session_data)

# Get session (returns None if expired or missing)
session = r.get(f'session:{user_id}')
```
Caching
```python
# Check cache first
cached = r.get('api:users:list')
if cached:
    return cached

# Fetch from database
data = fetch_from_database()

# Cache for 5 minutes
r.setex('api:users:list', 300, data)
return data
```
Leaderboard
```python
# Add or update a score
r.zadd('leaderboard', {user_id: score})

# Get top 10, highest score first
top_users = r.zrevrange('leaderboard', 0, 9, withscores=True)
```
Pub/Sub
```python
# Publisher
r.publish('notifications', 'New message')

# Subscriber (listen() blocks while waiting for messages)
pubsub = r.pubsub()
pubsub.subscribe('notifications')
for message in pubsub.listen():
    print(message)
```
Troubleshooting
High Memory Usage
- Check memory stats with `INFO memory`
- Review eviction policy
- Set appropriate TTLs
- Consider upgrading to a larger resource profile
Slow Performance
- Check CPU usage
- Review slow log
- Use pipelining for multiple commands
- Add read replicas for read-heavy workloads
Connection Issues
- Verify firewall rules
- Check connection limits
- Use connection pooling
- Review network latency
Next Steps
Need help? Contact our support team through the dashboard.