OpenClaw has taken the developer world by storm. The open-source personal AI assistant surpassed 100,000 GitHub stars within weeks of launch, and for good reason: it gives you a fully autonomous AI agent that runs on your hardware, uses your API keys, and keeps your data private. No subscriptions, no vendor lock-in.
But running OpenClaw on your laptop means it's only available when your laptop is on. The real power move? Self-hosting OpenClaw on a VPS so it's always online, always reachable via Telegram, and always ready to work.
This guide walks you through a complete production setup on a DanubeData VPS, including Docker deployment, optional local LLM inference with Ollama, Telegram bot integration, and security hardening.
What Is OpenClaw?
OpenClaw is a free and open-source autonomous AI agent created by Peter Steinberger. Originally released as "Clawdbot" in November 2025, it was renamed to OpenClaw in January 2026 after trademark discussions. When Steinberger joined OpenAI in February 2026, the project transitioned to an open-source foundation, where it continues to thrive.
What makes OpenClaw different from a simple chatbot:
- Autonomous execution: It doesn't just answer questions — it executes tasks on your machine (file management, code execution, system administration)
- Persistent memory: It remembers context across conversations, building up knowledge about your projects and preferences
- Messaging-first interface: Designed to be used through Telegram, Discord, or other messaging platforms rather than a web UI
- Local-first: Runs entirely on your infrastructure with bring-your-own API keys (OpenAI, Anthropic, or local models via Ollama)
- Plugin ecosystem: Extensible with community plugins for Git, Docker, databases, and more
Why Self-Host on a VPS?
Running OpenClaw locally is fine for experimentation, but a VPS deployment unlocks real productivity:
| Feature | Local | VPS |
|---|---|---|
| Availability | Only when laptop is on | 24/7 always-on |
| Telegram/Discord access | Requires port forwarding | Direct public IP |
| Local LLM inference | Drains laptop battery | Dedicated resources |
| Team access | Single user only | Multiple users via messaging |
| Data persistence | Tied to your machine | Survives reboots with volumes |
Prerequisites
Before we start, you'll need:
- A DanubeData VPS — We recommend the CX21 plan (2 vCPUs, 4 GB RAM, 40 GB NVMe) at €5.99/month for OpenClaw with cloud API keys, or the CX41 plan (4 vCPUs, 16 GB RAM, 160 GB NVMe) at €19.99/month if you want to run local models with Ollama
- Ubuntu 24.04 (recommended) or Debian 12
- An API key from OpenAI, Anthropic, or another supported provider (or plan to use Ollama for local inference)
- A Telegram account (for the bot interface)
- Basic familiarity with SSH and the Linux command line
Step 1: Initial Server Setup
After creating your VPS in the DanubeData dashboard, SSH in and perform basic setup:
# Update the system
sudo apt update && sudo apt upgrade -y
# Install essential packages
sudo apt install -y curl git ufw
# Configure firewall — only allow SSH
sudo ufw allow OpenSSH
sudo ufw enable
If you haven't already, follow our SSH hardening checklist to secure your server.
Step 2: Install Docker
Docker is the recommended way to deploy OpenClaw. It provides isolation, easy updates, and reproducible deployments:
# Install Docker using the official script
curl -fsSL https://get.docker.com | sh
# Add your user to the docker group
sudo usermod -aG docker $USER
# Log out and back in, then verify
docker --version
docker compose version
Step 3: Install Node.js and Bun
OpenClaw requires Node.js 22 or later. Some plugins (including Telegram channel integration) also use Bun:
# Install Node.js 22 from NodeSource
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs
# Install Bun runtime
curl -fsSL https://bun.sh/install | bash
source ~/.bashrc
# Verify installations
node --version # Should be v22.x+
bun --version # Should be v1.x+
Step 4: Deploy OpenClaw with Docker Compose
Create a directory for your OpenClaw deployment and set up Docker Compose:
# Create the deployment directory
mkdir -p ~/openclaw && cd ~/openclaw
# Create the Docker Compose file
cat > docker-compose.yml << 'EOF'
services:
  openclaw:
    image: ghcr.io/openclaw/openclaw:latest
    container_name: openclaw
    restart: unless-stopped
    ports:
      - "127.0.0.1:18789:18789"
    volumes:
      - openclaw-data:/app/data
      - openclaw-config:/app/config
    env_file:
      - .env
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:18789/health"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  openclaw-data:
  openclaw-config:
EOF
Create the environment file with your API keys:
cat > .env << 'EOF'
# Choose your LLM provider (uncomment one)
ANTHROPIC_API_KEY=sk-ant-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# Telegram bot token (we'll create this next)
TELEGRAM_BOT_TOKEN=your-bot-token-here
# Security: restrict access to your Telegram user ID
ALLOWED_TELEGRAM_USERS=your-telegram-user-id
# OpenClaw settings
OPENCLAW_DATA_DIR=/app/data
OPENCLAW_LOG_LEVEL=info
EOF
# Secure the env file
chmod 600 .env
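A missing or half-edited `.env` is the most common cause of a failed first start. Here is a minimal pre-flight check, sketched as a standalone script — the placeholder strings it greps for are the ones from the file above, and nothing about the script is OpenClaw-specific:

```shell
#!/usr/bin/env sh
# check-env.sh — warn if .env is missing or still contains placeholder values.
# Local convenience script, not part of OpenClaw.

env_ready() {
  # Usage: env_ready path/to/.env
  [ -f "$1" ] || return 1                                   # file must exist
  ! grep -Eq 'your-(key|bot-token|telegram-user-id)' "$1"   # no placeholders left
}

if env_ready .env; then
  echo ".env looks ready"
else
  echo ".env is missing or still has placeholder values"
fi
```

Run it from `~/openclaw` before `docker compose up -d`.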
Important: Never expose port 18789 to the public internet. The Docker Compose file above binds it to 127.0.0.1 only. If you need external HTTP access, put Nginx in front of it (covered in Step 7).
Step 5: Create a Telegram Bot
Telegram is the recommended interface for VPS-hosted OpenClaw because it doesn't require port forwarding or a public URL (it uses Telegram's Bot API polling):
- Open Telegram and search for @BotFather
- Send `/newbot` and follow the prompts to name your bot
- Copy the bot token and add it to your `.env` file as `TELEGRAM_BOT_TOKEN`
- Send `/setprivacy` to @BotFather, select your bot, and choose Disable (so the bot can read all messages in group chats, if desired)
To find your Telegram user ID (for the allowlist), message @userinfobot on Telegram — it will reply with your numeric ID.
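Tokens from @BotFather follow a predictable shape: a numeric bot ID, a colon, then a secret. A quick shape check before pasting one into `.env` can catch copy-paste accidents — the helper name and sample token below are ours, not part of any API:

```shell
# Sanity-check a token's shape before committing it to .env.
# This only checks the general shape, not whether the token is valid.

looks_like_bot_token() {
  printf '%s' "$1" | grep -Eq '^[0-9]+:[A-Za-z0-9_-]+$'
}

if looks_like_bot_token "123456789:AAE-fake-sample_token"; then
  echo "token shape OK"
fi

# To verify a real token against the API itself:
# curl -s "https://api.telegram.org/bot<your-token>/getMe"
```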
Step 6: Start OpenClaw
# Start the container
cd ~/openclaw
docker compose up -d
# Check the logs
docker compose logs -f openclaw
# You should see:
# ✓ OpenClaw gateway running on port 18789
# ✓ Telegram bot connected
# ✓ Ready to receive messages
Now open Telegram, find your bot, and send it a message. You should get a response within seconds. Try something like:
Hey OpenClaw, what's the current disk usage on this server?
Step 7: (Optional) Add Ollama for Local LLM Inference
If you want to run models locally instead of paying for API calls, add Ollama to your stack. This is ideal for privacy-sensitive workloads or if you want to experiment with open-source models like Llama 3, Mistral, or Phi-3:
# Update docker-compose.yml to add Ollama
cat > docker-compose.yml << 'EOF'
services:
  openclaw:
    image: ghcr.io/openclaw/openclaw:latest
    container_name: openclaw
    restart: unless-stopped
    ports:
      - "127.0.0.1:18789:18789"
    volumes:
      - openclaw-data:/app/data
      - openclaw-config:/app/config
    env_file:
      - .env
    depends_on:
      - ollama
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:18789/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    volumes:
      - ollama-models:/root/.ollama
    ports:
      - "127.0.0.1:11434:11434"

volumes:
  openclaw-data:
  openclaw-config:
  ollama-models:
EOF
Add the Ollama configuration to your .env:
# Add to .env
echo 'OLLAMA_BASE_URL=http://ollama:11434' >> .env
echo 'OLLAMA_MODEL=llama3.1:8b' >> .env
Pull a model and restart:
# Restart the stack
docker compose up -d
# Pull a model (this downloads several GB — be patient)
docker exec ollama ollama pull llama3.1:8b
# For a lighter model on smaller VPS plans:
# docker exec ollama ollama pull phi3:mini
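Before wiring the model into OpenClaw, you can hit Ollama directly from the host. `/api/generate` is Ollama's standard completion endpoint; the prompt here is arbitrary, and `"stream": false` asks for one JSON object instead of a chunk stream:

```shell
# Smoke-test the model through Ollama's HTTP API (the same endpoint OpenClaw uses).
PAYLOAD='{"model": "llama3.1:8b", "prompt": "Reply with one word: ready", "stream": false}'
curl -s http://127.0.0.1:11434/api/generate -d "$PAYLOAD" \
  || echo "Ollama is not reachable on 127.0.0.1:11434"
```

A JSON reply containing a `response` field means inference works end to end.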
Which Model Should You Choose?
| Model | Size | RAM Needed | Best For | Recommended VPS |
|---|---|---|---|---|
| Phi-3 Mini | 2.3 GB | 4 GB | Quick tasks, summarisation | CX21 (4 GB RAM) |
| Llama 3.1 8B | 4.7 GB | 8 GB | General assistant, coding | CX31 (8 GB RAM) |
| Mistral 7B | 4.1 GB | 8 GB | Instruction following | CX31 (8 GB RAM) |
| Llama 3.1 70B (Q4) | 40 GB | 48 GB | Complex reasoning | Dedicated CPU plan |
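As a rough way to sanity-check the RAM column yourself: Q4 quantization stores about half a byte per parameter, plus a couple of GB of overhead for the runtime and context cache. A back-of-envelope sketch — our approximation, not an Ollama formula:

```shell
# Rough RAM estimate for a quantized model.
est_gb() {
  # $1 = parameters in billions, $2 = bytes per parameter x10 (5 for Q4)
  # weights + ~2 GB overhead for runtime and KV cache
  echo $(( $1 * $2 / 10 + 2 ))
}

echo "Llama 3.1 8B (Q4): ~$(est_gb 8 5) GB"
echo "Llama 3.1 70B (Q4): ~$(est_gb 70 5) GB"
```

It lands in the same ballpark as the table; always round up, since long contexts and concurrent requests push real usage higher.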
Step 8: Set Up Nginx Reverse Proxy (Optional)
If you want to access OpenClaw's web dashboard or API over HTTPS, set up Nginx with a Let's Encrypt certificate:
# Install Nginx and Certbot
sudo apt install -y nginx certbot python3-certbot-nginx
# Create Nginx config
sudo tee /etc/nginx/sites-available/openclaw << 'EOF'
server {
    listen 80;
    server_name openclaw.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:18789;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
# Enable the site
sudo ln -s /etc/nginx/sites-available/openclaw /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
# Open HTTP/HTTPS in the firewall first (Certbot's HTTP challenge needs port 80)
sudo ufw allow 'Nginx Full'
# Get the SSL certificate
sudo certbot --nginx -d openclaw.yourdomain.com
Step 9: Run as a systemd Service
Ensure OpenClaw starts automatically on boot and restarts on failure:
sudo tee /etc/systemd/system/openclaw.service << 'EOF'
[Unit]
Description=OpenClaw AI Assistant
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/your-user/openclaw
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable openclaw
sudo systemctl start openclaw
Step 10: Security Hardening
Your OpenClaw instance has significant access to your server, so security is critical:
Restrict Telegram Access
The ALLOWED_TELEGRAM_USERS environment variable ensures only your Telegram account can send commands. Strangers DMing the bot will be ignored. Add multiple user IDs separated by commas if you want team access.
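For example, with two accounts allowed (the IDs below are made up):

```
# .env — use the numeric IDs that @userinfobot reports
ALLOWED_TELEGRAM_USERS=123456789,987654321
```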
Limit Docker Capabilities
Add security constraints to the Docker Compose file:
# Add under the openclaw service (same level as image:)
security_opt:
  - no-new-privileges:true
read_only: true   # note: all writable paths must then be volumes or tmpfs
tmpfs:
  - /tmp
Enable Automatic Updates
# Install Watchtower for automatic container updates
docker run -d \
  --name watchtower \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --interval 86400 \
  openclaw ollama
Monitor Resource Usage
# Check OpenClaw resource consumption
docker stats openclaw ollama
# Set up log rotation
sudo tee /etc/docker/daemon.json << 'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
sudo systemctl restart docker
Troubleshooting
OpenClaw won't start
# Check logs for errors
docker compose logs openclaw
# Common fix: ensure .env file exists and has valid API keys
cat .env
Telegram bot not responding
- Verify the bot token is correct in `.env`
- Check that your Telegram user ID is in `ALLOWED_TELEGRAM_USERS`
- Ensure the container can reach Telegram's API:
docker exec openclaw curl -s https://api.telegram.org
Ollama out of memory
- Use a smaller model (e.g., `phi3:mini` instead of `llama3.1:8b`)
- Upgrade to a larger VPS plan with more RAM
- Set `OLLAMA_NUM_PARALLEL=1` to limit concurrent inference
What Can You Do With a Self-Hosted OpenClaw?
Once your instance is running, the possibilities are vast. Here are some popular use cases:
- DevOps assistant: "Check if the Nginx logs show any 5xx errors in the last hour"
- Code reviews on the go: "Review the latest commit in my project repo and flag any issues"
- Server monitoring: "Alert me on Telegram if disk usage exceeds 80%"
- Research assistant: "Summarise this PDF and extract the key findings"
- Automation: "Every morning at 9am, pull the latest data from the API and generate a report"
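The disk-usage alert can also be sketched as a plain cron script, independent of OpenClaw. Everything here is illustrative: `sendMessage` is the standard Telegram Bot API call, while the threshold, `CHAT_ID`, and script name are placeholders:

```shell
#!/usr/bin/env sh
# disk-alert.sh — hypothetical cron job: ping yourself on Telegram when
# the root filesystem passes a threshold. Not an OpenClaw built-in.
THRESHOLD=80

disk_usage_pct() {
  # Current usage of / as a bare number, e.g. "42"
  df --output=pcent / | tail -1 | tr -dc '0-9'
}

usage=$(disk_usage_pct)
if [ "$usage" -ge "$THRESHOLD" ]; then
  # TELEGRAM_BOT_TOKEN and CHAT_ID would come from your environment
  curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
    -d chat_id="${CHAT_ID}" \
    -d text="Disk usage on $(hostname) is at ${usage}%" > /dev/null \
    || echo "could not send alert"
fi
echo "root filesystem at ${usage}%"
```

Schedule it with cron, e.g. `*/15 * * * * /path/to/disk-alert.sh`, and it stays quiet until the threshold is crossed.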
Cost Comparison
| Setup | Monthly Cost | Notes |
|---|---|---|
| DanubeData CX21 + Cloud API keys | €5.99 + API usage | Best value for most users |
| DanubeData CX31 + Ollama (local LLM) | €11.99 | No API costs, fully private |
| ChatGPT Plus subscription | $20/month | No server access, no autonomy |
| Claude Pro subscription | $20/month | No self-hosting, no local execution |
Self-hosting OpenClaw on a DanubeData VPS gives you a fully autonomous AI assistant for a fraction of what you'd pay for commercial AI subscriptions — and you own the entire stack.
Conclusion
OpenClaw represents a shift in how developers interact with AI: instead of visiting a website to chat, you have a persistent, autonomous agent that lives on your server and is always a Telegram message away. Running it on a VPS means it's always on, always secure, and always under your control.
The setup takes about 15 minutes on a DanubeData VPS, and the result is a personal AI assistant that can do everything from managing your server to reviewing code — all from your phone.
Ready to get started? Create a DanubeData account and spin up a VPS in under 60 seconds. Your AI assistant is waiting.
Questions? Contact our team