Every production application breaks. Users hit null pointers, background jobs throw unhandled exceptions, and a subtle race condition turns into a midnight page. Without an error tracker, you're stitching together stack traces from log files and hoping the customer who reported the bug remembers what they clicked.
Sentry is the de facto standard for error tracking and application performance monitoring (APM). It supports dozens of languages, has world-class stack trace symbolication, and ships a polished UI. GlitchTip is a lighter, Django-based alternative that speaks the same SDK wire protocol, so you can keep the Sentry client libraries you already use and point them at your own server.
Both are open source. Both can be self-hosted. This guide walks through when to pick each, the realistic resource and cost footprint, a full Docker Compose deployment for each, and how to integrate them with your applications.
Why Self-Host Error Tracking at All?
Sentry.io is a great product. For small teams it's cheap. The problem is the business model: you pay per event, not per user or per project. An event is any error, transaction, replay, or attachment Sentry ingests. A single noisy deploy can eat a month's quota in a weekend.
Let's look at the numbers. Sentry's Team plan starts at $26/month but includes only 50,000 errors and 100,000 performance events. Once you exceed the quota, additional events cost extra — roughly $0.00029 per error and $0.000015 per transaction. Those fractions of a cent add up fast:
- A mid-traffic SaaS with 2 million transactions/month and 200k errors on the Team plan pays roughly $26 base + ~$43 errors overage + ~$28 transactions overage = ~$97/month.
- Scale that to 10 million transactions and 1 million errors and the Team plan becomes untenable. You move to Business ($80+ base) and still face overages approaching $300–$500/month.
- Add Session Replay (50 included on Team, $0.003 each after): a busy app with 50k replays/month adds another $150.
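Those per-event fractions are easier to reason about in code. Here is a quick sketch of the Team-plan arithmetic, using the approximate rates and quotas quoted above (illustrative figures, not an official price list):

```shell
# Sketch: reproduce the Team-plan overage arithmetic. The base price,
# included quotas, and per-event rates are the approximate figures
# quoted in this article, not Sentry's official price list.
estimate_bill() {
  awk -v errors="$1" -v tx="$2" 'BEGIN {
    base = 26                          # Team plan base, USD/month
    inc_err = 50000;  inc_tx = 100000  # included quota
    err_rate = 0.00029; tx_rate = 0.000015
    over_err = (errors > inc_err) ? errors - inc_err : 0
    over_tx  = (tx > inc_tx)      ? tx - inc_tx      : 0
    printf "$%.2f\n", base + over_err * err_rate + over_tx * tx_rate
  }'
}
estimate_bill 200000 2000000   # the mid-traffic SaaS example
```

The mid-traffic example lands at $98.00, within a dollar of the ~$97 estimate above; the difference is rounding in the quoted rates.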
There's also the data-residency angle. Sentry.io runs on AWS in the United States and the EU. For a lot of European teams — especially those subject to Schrems II scrutiny — keeping crash data (which routinely contains user emails, IP addresses, and request payloads) on infrastructure you control is a compliance win, not just a cost optimization.
Self-hosting flips the equation: you pay a flat VPS bill and the event count is whatever your disk can hold. No surprise invoices, no rate-limiting your own dashboards during an incident.
Sentry vs GlitchTip: Which Do You Actually Need?
Before deploying anything, be honest about your feature requirements. Sentry is a much bigger product than GlitchTip. That breadth is a virtue when you need it and a tax when you don't.
| Capability | Sentry self-hosted | GlitchTip self-hosted |
|---|---|---|
| Error / crash tracking | Full | Full |
| Sentry SDK compatibility | Native | Drop-in (same DSN format) |
| Source maps / symbolication | Advanced (debug files, proguard, dSYM) | JavaScript source maps only |
| Performance / tracing | Full APM with spans + waterfalls | Basic transaction support |
| Session Replay | Yes | No |
| Profiling | Yes (continuous) | No |
| Cron monitoring / heartbeats | Yes | Yes (uptime + cron) |
| Alerting (Slack, email, webhooks) | Rich rules engine | Basic rules + webhooks |
| Services required | Postgres, Kafka, Redis, ClickHouse, Snuba, Relay, Symbolicator, Vroom, ~20 containers | Postgres, Redis, 2 app containers |
| Minimum RAM (production) | ~16 GB | ~1–2 GB |
| Disk footprint (baseline) | 40+ GB | 5–10 GB |
| License | FSL (non-compete, converts to Apache after 2y) | MIT |
Rule of thumb:
- Pick GlitchTip if you mainly want crash reports with stack traces, release tracking, and basic alerting. Teams up to ~50 engineers, mostly web/mobile apps, who already use Sentry SDKs and don't need Replay or deep APM.
- Pick self-hosted Sentry if you need the full observability stack: tracing waterfalls across services, Session Replay, profiling, native crash symbolication (Android, iOS, C++), or feature flag integration.
Recommended DanubeData Plans
Both tools are network-light (they don't stream a lot of media); the bottleneck is CPU and RAM for event ingestion and query workloads.
| Workload | Events/month | Recommended VPS | Notes |
|---|---|---|---|
| GlitchTip (small) | < 500k | DD Small + managed Postgres | ~€12.49 + €19.99 = €32.48/mo |
| GlitchTip (medium) | 500k – 5M | DD Medium + managed Postgres | ~€24.99 + €19.99 = €44.98/mo |
| Sentry self-hosted | up to ~5M | DD Large (16 vCPU / 32 GB) | €49.99/mo, everything on one node |
| Sentry (high volume) | > 10M | DD Large + dedicated Postgres + Redis | ~€49.99 + €19.99 + €4.99 = €74.97/mo |
All DanubeData plans include 20 TB of traffic (plenty for an internal observability stack) and €50 of signup credit, meaning the first month is effectively free for any of the above configurations.
Total Cost of Ownership: Sentry.io vs Self-Hosted
The table below is a realistic comparison at three workload levels. Sentry.io pricing is the 2026 published pricing, including typical overages for a mixed error + performance workload. Self-hosted rows assume no transaction charges — you already paid for the VPS.
| Monthly Events | Sentry.io Business | Sentry self-hosted (DanubeData) | GlitchTip self-hosted |
|---|---|---|---|
| 100k errors + 500k tx | $80–$120/mo | €49.99/mo (DD Large) | €32.48/mo (DD Small + Postgres) |
| 1M errors + 5M tx | $400–$600/mo | €49.99/mo (DD Large) | €44.98/mo (DD Medium + Postgres) |
| 10M errors + 50M tx | $2,000–$3,500/mo | €74.97/mo (DD Large + managed DB) | Not recommended at this scale |
The inflection point is around 1–2 million events per month. Below that, the convenience of SaaS may be worth the premium. Above it, self-hosting pays for itself in weeks.
Part 1: Deploy GlitchTip (the fast path)
GlitchTip is the right starting point if you're unsure. It takes 15 minutes to deploy, it speaks the Sentry wire protocol, and if you later decide you need the real thing you can swap DSNs without touching application code.
Step 1: Provision the VPS
- Create a DD Small VPS on DanubeData with Ubuntu 24.04 LTS.
- Note the IPv4 address and add an A record for `errors.yourdomain.com`.
- SSH in and update.
ssh root@YOUR_SERVER_IP
apt update && apt upgrade -y
apt install -y curl ufw
# Firewall
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw --force enable
# Docker
curl -fsSL https://get.docker.com | sh
apt install -y docker-compose-plugin
systemctl enable --now docker
Step 2: Provision Managed Postgres
GlitchTip stores everything in Postgres (no ClickHouse, no Kafka). You can run Postgres in Docker alongside the app, but for production it's cleaner to use a managed instance — backups, replicas, and failover are handled for you.
- Create a Managed PostgreSQL 17 instance on DanubeData (€19.99/mo).
- Create a database named `glitchtip` and a user `glitchtip`.
- Copy the connection string; it looks like `postgres://glitchtip:PASSWORD@pg-xyz.danubedata.ro:5432/glitchtip`.
Step 3: Compose File
mkdir -p /opt/glitchtip && cd /opt/glitchtip
# Unquoted EOF so $(openssl ...) expands to a real key here,
# not the literal text of the command
cat > .env << EOF
SECRET_KEY=$(openssl rand -hex 48)
DATABASE_URL=postgres://glitchtip:PASSWORD@pg-xyz.danubedata.ro:5432/glitchtip
REDIS_URL=redis://redis:6379/0
DEFAULT_FROM_EMAIL=errors@yourdomain.com
GLITCHTIP_DOMAIN=https://errors.yourdomain.com
EMAIL_URL=smtp://user:pass@smtp.eu.mailgun.org:587
CELERY_WORKER_AUTOSCALE=1,3
CELERY_WORKER_MAX_TASKS_PER_CHILD=10000
ENABLE_OBSERVABILITY_API=true
EOF
cat > docker-compose.yml << 'EOF'
services:
  redis:
    image: redis:7-alpine
    restart: always
    volumes:
      - redis-data:/data
  web:
    image: glitchtip/glitchtip:v4.1
    restart: always
    env_file: .env
    depends_on:
      - redis
    ports:
      - "127.0.0.1:8000:8000"
    volumes:
      - uploads:/code/uploads
  worker:
    image: glitchtip/glitchtip:v4.1
    restart: always
    command: ./bin/run-celery-with-beat.sh
    env_file: .env
    depends_on:
      - redis
    volumes:
      - uploads:/code/uploads
  migrate:
    image: glitchtip/glitchtip:v4.1
    depends_on:
      - redis
    command: "./manage.py migrate"
    env_file: .env
volumes:
  redis-data:
  uploads:
EOF
Step 4: Reverse Proxy with Caddy
apt install -y caddy
cat > /etc/caddy/Caddyfile << 'EOF'
errors.yourdomain.com {
    reverse_proxy 127.0.0.1:8000
    encode gzip zstd
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Content-Type-Options nosniff
        Referrer-Policy strict-origin-when-cross-origin
    }
}
EOF
systemctl reload caddy
Step 5: Launch
cd /opt/glitchtip
docker compose run --rm migrate
docker compose up -d
docker compose logs -f web
Open https://errors.yourdomain.com, register the first user (it becomes the superuser), then set `ENABLE_USER_REGISTRATION=false` in `.env` and run `docker compose restart web worker` to lock down signups.
Step 6: Point an SDK at GlitchTip
Create a project in the GlitchTip UI; you'll be given a DSN like https://abc123@errors.yourdomain.com/1. Use it with the standard Sentry SDK — no special client needed:
// JavaScript (Node or browser)
import * as Sentry from "@sentry/node";
Sentry.init({
  dsn: "https://abc123@errors.yourdomain.com/1",
  environment: process.env.NODE_ENV,
  release: process.env.GIT_SHA,
  tracesSampleRate: 0.1, // GlitchTip supports basic transactions
  sendDefaultPii: false,
});
# Python / Django
import os

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://abc123@errors.yourdomain.com/1",
    integrations=[DjangoIntegration()],
    traces_sample_rate=0.1,
    send_default_pii=False,
    release=os.environ.get("GIT_SHA"),
)
// PHP / Laravel — config/sentry.php
'dsn' => env('SENTRY_LARAVEL_DSN', 'https://abc123@errors.yourdomain.com/1'),
'traces_sample_rate' => 0.1,
'send_default_pii' => false,
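Before wiring up a whole SDK, you can smoke-test ingestion with plain curl. This sketch builds a minimal event envelope for the example DSN above; the `/api/<project>/envelope/` endpoint and `X-Sentry-Auth` header come from Sentry's ingestion protocol, which GlitchTip also implements. The key, host, and project id are the placeholders from the example DSN:

```shell
# Build a minimal envelope for DSN https://abc123@errors.yourdomain.com/1.
# KEY/HOST/PROJECT are the placeholder values from the example DSN above.
KEY=abc123
HOST=https://errors.yourdomain.com
PROJECT=1
EVENT_ID=$(openssl rand -hex 16)            # 32 hex chars, the required format
SENT_AT=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# Envelope = header line, item header line, payload line
ENVELOPE=$(printf '{"event_id":"%s","sent_at":"%s"}\n{"type":"event"}\n{"message":"smoke test from curl","level":"info"}' \
  "$EVENT_ID" "$SENT_AT")
echo "$ENVELOPE"
# Uncomment to actually send it to your instance:
# curl -sf -X POST "$HOST/api/$PROJECT/envelope/" \
#   -H "X-Sentry-Auth: Sentry sentry_version=7, sentry_key=$KEY" \
#   -H "Content-Type: application/x-sentry-envelope" \
#   --data-binary "$ENVELOPE"
```

If the event doesn't appear in the project's issue stream within a few seconds, check the Caddy access log and the web container logs before blaming the SDK.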
Part 2: Deploy Full Self-Hosted Sentry
Sentry's official self-hosted stack lives at github.com/getsentry/self-hosted. It's a curated Docker Compose bundle that wires up ~20 containers: web, workers, relay, ingest-consumer, symbolicator, vroom, Postgres, Redis, Kafka, Zookeeper, ClickHouse, Snuba, memcached, and a few cron workers.
Minimum requirements (the installer will refuse to run otherwise):
- Docker 24+
- Docker Compose v2.19+
- 8 GB RAM minimum, 16 GB recommended (plan for 32 GB on DD Large if you enable Replay and profiling)
- Python 3 on the host (only for the installer script)
- At least 40 GB of free disk for the baseline; expect 100 GB+ within a month of real traffic
Step 1: Host Setup
ssh root@YOUR_SERVER_IP
apt update && apt upgrade -y
apt install -y curl python3 git ufw
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw --force enable
curl -fsSL https://get.docker.com | sh
apt install -y docker-compose-plugin
# Sentry expects an unprivileged user by default
useradd -m -s /bin/bash sentry
usermod -aG docker sentry
Also bump vm.max_map_count (ClickHouse + Kafka need it) and raise file descriptor limits:
cat >> /etc/sysctl.conf << 'EOF'
vm.max_map_count=262144
fs.file-max=1048576
EOF
sysctl -p
cat >> /etc/security/limits.conf << 'EOF'
* soft nofile 1048576
* hard nofile 1048576
EOF
Step 2: Clone and Install
su - sentry
git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
# Check out the latest tag (avoid the master branch in production)
git checkout 25.4.0 # or whatever the latest release is
./install.sh --skip-user-creation
The installer generates sentry/config.yml, sentry/sentry.conf.py, and .env. Review them. Key settings to change:
# sentry/config.yml
system.url-prefix: 'https://sentry.yourdomain.com'
system.admin-email: 'ops@yourdomain.com'
mail.from: 'sentry@yourdomain.com'
mail.host: 'smtp.eu.mailgun.org'
mail.port: 587
mail.username: 'postmaster@mg.yourdomain.com'
mail.password: 'your_smtp_key'
mail.use-tls: true
# Data retention (90 days default; drop to 30 on small VPS)
system.event-retention-days: 30
Step 3: Create the First User
docker compose run --rm web createuser --email ops@yourdomain.com --password 'strong-password' --superuser
Step 4: Start Sentry
docker compose up -d
docker compose ps
First start takes several minutes — Snuba has to build ClickHouse schemas, and Kafka topics need to materialize. Watch `docker compose logs -f web worker` until the web container reports it's listening on :9000.
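Rather than eyeballing logs, a deploy script can block until the stack answers. A small polling helper (a sketch; it assumes the `/_health/` endpoint that the self-hosted bundle serves once the web container is up — adjust if your version differs):

```shell
# Poll a health URL until it responds, or give up after N tries.
wait_for_health() {
  url=$1; tries=${2:-120}   # default: 120 tries x 5s = 10 minutes max
  i=0
  until curl -fsS "$url" > /dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && { echo "timed out waiting for $url"; return 1; }
    sleep 5
  done
  echo "up: $url"
}
# After `docker compose up -d`:
# wait_for_health http://127.0.0.1:9000/_health/
```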
Step 5: Reverse Proxy with Caddy
Sentry listens on 127.0.0.1:9000. Caddy handles HTTPS and WebSockets (needed for Session Replay):
cat > /etc/caddy/Caddyfile << 'EOF'
sentry.yourdomain.com {
    reverse_proxy 127.0.0.1:9000 {
        header_up X-Forwarded-Proto https
        header_up X-Forwarded-For {remote_host}
    }
    # Relay (event ingestion) and WebSockets pass through automatically
    encode zstd gzip
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Content-Type-Options nosniff
    }
    # Large requests (source maps, attachments)
    request_body {
        max_size 100MB
    }
}
EOF
systemctl reload caddy
Step 6: Plug In the SDK
Identical to GlitchTip: create a project, copy the DSN, drop it into your application. For Sentry you can now enable the full feature set:
import * as Sentry from "@sentry/react";
import { browserTracingIntegration, replayIntegration } from "@sentry/react";
Sentry.init({
  dsn: "https://public@sentry.yourdomain.com/2",
  environment: import.meta.env.MODE,
  release: import.meta.env.VITE_GIT_SHA,
  integrations: [
    browserTracingIntegration(),
    replayIntegration({
      maskAllText: true, // GDPR: never record typed content
      blockAllMedia: true,
    }),
  ],
  tracesSampleRate: 0.2,
  replaysSessionSampleRate: 0.05, // 5% of sessions
  replaysOnErrorSampleRate: 1.0, // 100% of sessions that error
  sendDefaultPii: false,
});
Source Maps and Release Tracking
Minified production JavaScript is useless without source maps. Both Sentry and GlitchTip accept them via the sentry-cli tool — use the same CLI for both, just change the URL:
npm install -g @sentry/cli
# One-time: configure the CLI
sentry-cli login --url https://errors.yourdomain.com # or sentry.yourdomain.com
# In your CI pipeline, after `npm run build`:
export SENTRY_URL=https://errors.yourdomain.com
export SENTRY_ORG=your-org-slug
export SENTRY_PROJECT=frontend
export SENTRY_AUTH_TOKEN=$CI_SENTRY_TOKEN
export VERSION=$(git rev-parse --short HEAD)
sentry-cli releases new "$VERSION"
sentry-cli releases files "$VERSION" upload-sourcemaps ./dist --url-prefix "~/static/"
sentry-cli releases finalize "$VERSION"
sentry-cli releases deploys "$VERSION" new -e production
Sentry additionally supports native debug files (dSYM for iOS, Proguard mappings for Android, breakpad symbols for C++). Those go through sentry-cli upload-dif.
Performance Monitoring (Tracing)
Both products accept the `traces_sample_rate` option on the SDK; enabling it makes your app emit distributed traces.
On Sentry self-hosted: you get the full waterfall UI, N+1 query detection, frontend + backend trace stitching, and slow transaction alerts. It works out of the box; make sure you set the sample rate conservatively (0.1 = 10% is typical) and use tracesSampler to exclude health checks and static assets.
On GlitchTip: transactions are stored and summarized per endpoint, but there's no detailed waterfall. Treat it as a "slowest endpoints" list — useful, but don't expect the full Sentry APM experience.
Sentry.init({
  // ...
  tracesSampler: (samplingContext) => {
    const url = samplingContext.request?.url || '';
    if (url.includes('/health') || url.includes('/metrics')) return 0;
    if (samplingContext.parentSampled) return 1.0;
    return 0.1;
  },
});
GDPR / PII Scrubbing
Running in the EU buys you data residency, but not automatic compliance. Both Sentry and GlitchTip capture request bodies, headers, and stack-trace local variables by default. Some of that will be personal data.
Mitigations to enable from day one:
- `sendDefaultPii: false` on every SDK — stops IPs, cookies, and usernames from being sent.
- Scrubbing rules at the server level. Sentry has a Data Scrubbing pane (Project → Security & Privacy) where you define patterns like `[Filter]:password`, `[Filter]:authorization`, `[Filter]:card_number`. Apply them before data is stored.
- A `beforeSend` hook on the SDK — redact user objects client-side so they never leave the browser.
- Session Replay masking — `maskAllText: true`, `blockAllMedia: true`, and the CSS class `.sentry-mask` on any form field.
- Retention — drop `system.event-retention-days` to the minimum your runbook needs (usually 30–60 days).
- Access logs — keep a processing record showing who inspected crash data; Sentry's audit log covers this.
// Scrub user PII before it leaves the browser
Sentry.init({
  // ...
  beforeSend(event) {
    if (event.user) {
      event.user = { id: event.user.id }; // keep only the id
    }
    if (event.request?.headers) {
      delete event.request.headers['cookie'];
      delete event.request.headers['authorization'];
    }
    return event;
  },
});
Backups
Backing up Sentry is straightforward if you treat Postgres as your source of truth (it is) and remember that ClickHouse event data is, by design, re-creatable from raw ingest if you keep Kafka retention long enough. For day-to-day operations, back up Postgres and the upload volume.
#!/bin/bash
# /opt/sentry/backup.sh
set -euo pipefail
DATE=$(date +%Y-%m-%d_%H-%M-%S)
DIR=/opt/sentry/backups
S3=danubedata:sentry-backups
mkdir -p "$DIR"
echo "[backup] pg_dump"
docker compose exec -T postgres pg_dump -U postgres postgres \
  | gzip > "$DIR/sentry-pg-$DATE.sql.gz"
echo "[backup] clickhouse schemas (not data — data is huge, events are ephemeral)"
docker compose exec -T clickhouse clickhouse-client \
  --query "SHOW CREATE TABLE default.sentry_local" \
  > "$DIR/sentry-clickhouse-schema-$DATE.sql"
echo "[backup] config"
tar -czf "$DIR/sentry-config-$DATE.tar.gz" \
  ./sentry ./relay ./geoip .env docker-compose.yml
echo "[backup] upload to S3"
rclone copy "$DIR/sentry-pg-$DATE.sql.gz" "$S3/postgres/"
rclone copy "$DIR/sentry-config-$DATE.tar.gz" "$S3/config/"
find "$DIR" -type f -mtime +7 -delete
echo "[backup] done"
chmod +x /opt/sentry/backup.sh
(crontab -l; echo "30 2 * * * /opt/sentry/backup.sh >> /var/log/sentry-backup.log 2>&1") | crontab -
GlitchTip is simpler — only Postgres needs backing up (Redis is a cache). If you use DanubeData's managed Postgres, backups are already scheduled; just keep a copy of /opt/glitchtip/.env and docker-compose.yml off-box.
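Whichever path you take, verify the dumps: a backup that has never been restore-tested is a hope, not a backup. Here is a minimal integrity check you can wire into the same cron (a sketch; `verify_latest_backup` and the demo directory are our own names):

```shell
# Check that the newest dump in a directory exists and gunzips cleanly.
verify_latest_backup() {
  dir=$1
  latest=$(ls -t "$dir"/*.sql.gz 2>/dev/null | head -n 1)
  [ -n "$latest" ] || { echo "FAIL: no dumps in $dir"; return 1; }
  gzip -t "$latest" || { echo "FAIL: $latest is corrupt"; return 1; }
  echo "OK: $latest"
}
# Demo against a throwaway dump:
mkdir -p /tmp/bk-demo
echo "SELECT 1;" | gzip > /tmp/bk-demo/sentry-pg-demo.sql.gz
verify_latest_backup /tmp/bk-demo
```

In production you'd point it at `/opt/sentry/backups` and alert when it fails; a full restore drill into a scratch database every quarter is still worth doing.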
Monitoring the Monitor
A broken error tracker is worse than no error tracker — your app is silently broken and you don't know. Three safety nets:
- Heartbeats into itself. Send a cron event every 15 minutes from a lambda or another VPS to your Sentry/GlitchTip instance. If the heartbeat is missed, an upstream monitor (UptimeRobot, Uptime Kuma, Grafana) pages you.
- Rate-limit SDKs. Set `sample_rate` and enable client-side deduplication so a runaway error loop doesn't fill your disk.
- Disk alerts. ClickHouse will happily fill your whole NVMe drive. Run `docker exec clickhouse du -sh /var/lib/clickhouse` as a Prometheus textfile target and alert at 80%.
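The disk-alert idea can be sketched as a node_exporter textfile collector script. The metric name and the textfile directory here are assumptions — match whatever your node_exporter's `--collector.textfile.directory` flag points at. Note it uses `du -sb` (raw bytes) rather than the human-readable `-sh`, since Prometheus wants a bare number:

```shell
# Format one Prometheus textfile metric line (name is our assumption).
emit_metric() {
  printf 'sentry_clickhouse_disk_bytes %s\n' "$1"
}
TEXTFILE_DIR=${TEXTFILE_DIR:-/var/lib/node_exporter/textfile}
# Only collect when the clickhouse container is actually reachable
if docker exec clickhouse true > /dev/null 2>&1; then
  bytes=$(docker exec clickhouse du -sb /var/lib/clickhouse | awk '{print $1}')
  mkdir -p "$TEXTFILE_DIR"
  emit_metric "$bytes" > "$TEXTFILE_DIR/clickhouse.prom"
fi
```

Run it from cron every few minutes and alert in Prometheus when the metric exceeds 80% of the disk size.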
Common Pitfalls
- "Swap is disabled on my VPS." Sentry's ClickHouse occasionally wants to spill; if it can't, it OOMs. On DanubeData's dedicated plans you have the full RAM to yourself — no swap needed. On shared plans, add a 4 GB swap file.
- Session Replay is huge. Each replay is typically 50–500 KB. A busy site can add 10 GB/day to your ClickHouse disk. Sample aggressively (`replaysSessionSampleRate: 0.01`) and drop retention for the replay table.
- Kafka retention keeps growing. Sentry defaults to 7 days. On a small VPS, drop it to 24 hours in `.env`: `KAFKA_LOG_RETENTION_HOURS=24`.
- Caddy timeouts on large source maps. Bump `request_body max_size` and add `transport http { response_header_timeout 300s }` if uploads fail.
- GlitchTip auth emails going to spam. Use SPF/DKIM for `errors.yourdomain.com`; it matters even for internal tools because password resets fail silently otherwise.
- SDK noise. Set `ignoreErrors` for known third-party spam: `ResizeObserver loop limit exceeded`, adblocker errors, and third-party analytics scripts tend to dominate your dashboard if you don't filter them.
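Several of these fixes boil down to "set a key in `.env` and recreate the containers". A small idempotent helper keeps duplicate keys from creeping in (a sketch; the `set_env` name is ours, and it assumes GNU `sed`):

```shell
# Set key=val in an env file, replacing an existing line or appending.
set_env() {
  key=$1; val=$2; file=${3:-.env}
  if grep -q "^${key}=" "$file" 2>/dev/null; then
    sed -i "s|^${key}=.*|${key}=${val}|" "$file"   # replace in place
  else
    echo "${key}=${val}" >> "$file"                # append if missing
  fi
}
# e.g. from the self-hosted checkout directory, tame Kafka retention:
set_env KAFKA_LOG_RETENTION_HOURS 24 .env
```

Running it twice leaves exactly one `KAFKA_LOG_RETENTION_HOURS=` line, so it's safe to keep in a provisioning script.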
FAQ
Can I migrate from Sentry.io to GlitchTip or self-hosted Sentry?
Yes. The SDK wire protocol is the same. Change the DSN in your applications, redeploy, and new events land in the new instance. Historical data doesn't migrate automatically — if you need it, export via the Sentry.io API (issues and events JSON endpoints). Most teams simply run both in parallel for a week and cut over.
Are Sentry SDKs actually compatible with GlitchTip?
Yes — this is GlitchTip's core value proposition. The official @sentry/* packages work unmodified. The one gotcha is feature parity: if you use features GlitchTip doesn't implement (Session Replay, Profiling, feedback widgets), they fail silently. Everything core — errors, breadcrumbs, releases, environments, basic transactions, user context — works.
Does GlitchTip scale?
Comfortably to a few million events per month on a single node. Beyond that you hit Postgres write saturation; the remedy is a bigger Postgres (DanubeData Medium or Large managed DB) and multiple Celery worker containers. At ~20 million events/month, self-hosted Sentry (with ClickHouse as the event store) becomes the better fit.
Is self-hosted Sentry open source?
Since 2023, Sentry uses the Functional Source License (FSL): free to run, modify, and redistribute for any purpose except competing commercial products. After two years each release converts to Apache 2.0. For in-house use this changes nothing. GlitchTip remains MIT.
How much does Sentry self-hosted actually cost at 1M events/month?
A DD Large VPS on DanubeData (€49.99/mo) comfortably handles 1M errors and several million transactions, assuming 30-day retention and modest Session Replay sampling. Add a managed Postgres (€19.99/mo) if you want failover and automated backups. Total: ~€70/month flat, versus $400–$600 on Sentry.io Business with overages.
Do I need to worry about GDPR if I self-host?
You become the data controller instead of a processor. That means documenting your retention policy, responding to subject access requests (searching by user email in both Sentry and GlitchTip works), and ensuring scrubbing rules remove card numbers, tokens, and passwords before storage. Self-hosting in a German data center like DanubeData's Falkenstein site removes the Schrems II transfer issue entirely.
Can I run both at once?
Yes, and it's a common pattern during migrations. Point half your services' DSN at GlitchTip, half at Sentry, and compare. Or run GlitchTip in staging and Sentry in production. Nothing in the SDK prevents parallel targets — you can even fan out with multi-client configs.
What breaks when I upgrade?
Sentry ships roughly monthly; upgrades are git pull && ./install.sh. The installer runs migrations, reloads Snuba, and prints a changelog. Do it in off-hours and always back up Postgres first. Major-version bumps occasionally change Kafka topic names — read the release notes. GlitchTip upgrades are docker compose pull && docker compose up -d; migrations run automatically.
Which One Would I Pick?
For the median team — 5–30 engineers, a web app and maybe a mobile app, mostly wanting crash reports and release-based regression tracking — GlitchTip on DanubeData DD Small + Managed Postgres is the right call. €32.48/month, 15 minutes to deploy, drop-in SDK compatibility, and enough headroom to handle multi-million event volumes before you need to think about it.
For teams that live in their error tracker — frontend-heavy apps that need Session Replay, mobile apps that need native symbolication, backend services that need distributed tracing waterfalls — self-hosted Sentry on DanubeData DD Large is worth the extra operational overhead. €49.99/month for a plan that on Sentry.io would run you $500+ at your volume is hard to argue with.
Either way, self-hosting on a European VPS gives you three things SaaS won't: a flat predictable bill, full data ownership inside GDPR-friendly borders, and the ability to turn the noise dial up during incidents without worrying about the meter running.
Ready to Deploy?
- Spin up a DanubeData VPS — DD Small for GlitchTip, DD Large for Sentry.
- Provision a managed PostgreSQL instance (optional but recommended).
- Follow the Compose files above.
- Point your existing Sentry SDKs at the new DSN.
New DanubeData accounts get €50 in credit, which covers the first month of any plan in this guide. German data centers, 20 TB of included traffic, and NVMe storage for every instance.
Stuck on Kafka retention or ClickHouse disk growth? Talk to our team — we run the same stack internally and are happy to share what worked.