Deploying TouchBase

A small, opinionated runbook for deploying TouchBase to a single self-hosted host. v1 assumes one practice, one host, no orchestration. When that stops being enough, the path forward is straightforward — Postgres goes managed, the app+worker images go to a container registry, and the same compose file or any orchestrator runs them.

Prerequisites

  • A Linux host with Docker and Docker Compose v2+ (the standalone `docker-compose` binary or the `docker compose` plugin)
  • A real domain pointed at the host's public IP (e.g. book.your-practice.com)
  • Ports 80 and 443 open inbound (Caddy needs both for ACME)
  • Outbound to: Postgres (if external), Resend SMTP (smtp.resend.com:587), Stripe API (api.stripe.com:443)
  • git, pnpm, and node 22+ for migrations on first install (only needed once)
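Before starting, it's worth confirming that the domain actually resolves from the host, since Caddy's ACME issuance cannot succeed otherwise. A minimal sketch; the function name is made up for illustration, and `getent` is assumed present (it ships with glibc-based distros):

```shell
# dns_resolves <hostname>
# Succeeds when the name resolves (a prerequisite for Caddy's ACME challenge).
dns_resolves() {
  getent hosts "$1" >/dev/null 2>&1
}
```

Run it as `dns_resolves book.your-practice.com || echo "fix DNS first"` before bringing up the prod stack.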

What runs in production

| Service  | Image                | Command      | Port             | Notes                        |
|----------|----------------------|--------------|------------------|------------------------------|
| postgres | postgres:16-alpine   | (default)    | 5432 (internal)  | Volume `pgdata`              |
| app      | touchbase/app:latest | `pnpm start` | 3000 (internal)  | Healthchecked at /api/health |
| worker   | touchbase/app:latest | `pnpm worker`| n/a              | Long-lived; pg-boss handlers |
| caddy    | caddy:2-alpine       | (default)    | 80, 443 (host)   | Auto-TLS via ACME            |

Mailpit is not in the prod profile — production uses real SMTP (Resend). Stop the dev mailpit container if you brought it up.

First-time setup

# On the host
git clone <your fork> /opt/touchbase
cd /opt/touchbase

# 1. Create a real .env (see "Required env vars" below)
cp .env.example .env
$EDITOR .env

# 2. Bring up just Postgres so we can run migrations
docker-compose up -d postgres
./scripts/db-bootstrap.sh   # idempotent: ensures touchbase_test exists + extensions
                            # in prod you only need the prod DB; trim the script if desired

# 3. Run migrations from the host (one-time)
pnpm install --frozen-lockfile
pnpm exec prisma migrate deploy

# 4. (Optional) seed an admin user — replace the seed entirely for prod, or
#    do it manually via psql / a one-off script.
#    Minimal admin row:
docker exec -i touchbase-postgres-1 psql -U touchbase -d touchbase_dev <<SQL
INSERT INTO "User" (id, email, name, role, "createdAt", "updatedAt")
VALUES (gen_random_uuid()::text, 'admin@your-practice.com', 'Admin', 'ADMIN', now(), now());
SQL

# 5. Point Caddy at your domain
$EDITOR caddy/Caddyfile      # set the site address (replace `localhost`)
export APP_DOMAIN=book.your-practice.com
export ACME_EMAIL=you@your-practice.com

# 6. Build and start the prod stack
docker-compose --profile prod build
docker-compose --profile prod up -d

# 7. Verify
curl -sf https://book.your-practice.com/api/health
docker-compose --profile prod logs -f app worker
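On slower hosts the app container can take a little while to pass its first healthcheck, so the curl in step 7 may race the startup. A small polling helper, sketched under the assumption that plain `curl` is available; the function name and retry cadence are illustrative:

```shell
# wait_for_health <url> <max_attempts>
# Polls the health endpoint every 2s until it returns 2xx,
# giving up (non-zero exit) after <max_attempts> failures.
wait_for_health() {
  attempts=0
  until curl -sf "$1" >/dev/null 2>&1; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$2" ]; then
      return 1
    fi
    sleep 2
  done
  return 0
}
```

Usage: `wait_for_health https://book.your-practice.com/api/health 30 && echo up`.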

Required env vars

For production these must all be set (in .env or as host env vars):

| Var | Example | Notes |
|-----|---------|-------|
| DATABASE_URL | `postgresql://touchbase:STRONG_PW@postgres:5432/touchbase_dev?schema=public` | Inside compose, the host is the `postgres` service. For external/managed Postgres, point at the real host. Use a strong password. |
| APP_URL | `https://book.your-practice.com` | Used in email links and Stripe `return_url` |
| APP_TZ | `America/Detroit` | All WorkingHours math uses this |
| AUTH_SECRET | (random 32 bytes, base64) | Generate with `openssl rand -base64 32`. Must differ from dev. |
| SMTP_HOST | `smtp.resend.com` | |
| SMTP_PORT | `587` | |
| SMTP_USER | `resend` | The literal username, per Resend's docs |
| SMTP_PASS | `re_…` | Resend API key |
| SMTP_FROM | `TouchBase <bookings@your-practice.com>` | Must use a verified Resend sender domain |

Optional (only when payments are wired):

| Var | Notes |
|-----|-------|
| STRIPE_SECRET_KEY | `sk_live_…` for prod, `sk_test_…` for staging |
| STRIPE_PUBLISHABLE_KEY | Must come from the same environment (live/test) as the secret key |
| STRIPE_WEBHOOK_SECRET | From the webhook endpoint configured in the Stripe Dashboard, not the CLI |

Optional (tweakable):

| Var | Default | Notes |
|-----|---------|-------|
| REMINDER_LEAD_MIN | `1440` (24 h) | Minutes before each appointment to send the reminder email. Set it on both the app and worker containers: the producer uses it when scheduling the job, and the handler fires when the job comes due. |

If the STRIPE_* vars are absent, the app skips the deposit branch entirely: bookings proceed straight to CONFIRMED with no payment. If you intend to require deposits, verify these are set before launch.
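A cheap way to avoid launching with a half-configured .env is a preflight check that refuses to proceed while anything required is unset. A sketch; the helper name is hypothetical, and you would extend the variable list with STRIPE_* when deposits are required:

```shell
# check_required_vars VAR1 VAR2 …
# Prints "ok" when every named variable is set and non-empty;
# otherwise prints the missing names and returns non-zero.
check_required_vars() {
  missing=""
  for var in "$@"; do
    eval "val=\"\${$var:-}\""
    if [ -z "$val" ]; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}
```

Call it from whatever wraps `docker-compose --profile prod up -d`, e.g. `check_required_vars DATABASE_URL APP_URL APP_TZ AUTH_SECRET SMTP_HOST SMTP_PORT SMTP_USER SMTP_PASS SMTP_FROM || exit 1`.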

Migrations

Migrations live in prisma/migrations/. Apply on each deploy that adds them:

docker-compose --profile prod exec app pnpm exec prisma migrate deploy

Prisma's migrate deploy is non-destructive (no resets, no prompts). It applies any unapplied migrations in order and is safe to run on every deploy.

Where things log

Production stdout/stderr from each container is captured by Docker:

docker-compose --profile prod logs -f app
docker-compose --profile prod logs -f worker
docker-compose --profile prod logs -f caddy
docker-compose --profile prod logs -f postgres

For long-term retention, point Docker at a logging driver (json-file with rotation, journald, or a remote sink). Out of scope here.
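If you settle on json-file, rotation can be configured host-wide in /etc/docker/daemon.json. A sketch with arbitrary sizes (restart the Docker daemon after changing it):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
```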

Backups

# Daily, e.g. via host cron:
docker exec touchbase-postgres-1 pg_dump -U touchbase touchbase_dev \
  | gzip > /var/backups/touchbase-$(date +%F).sql.gz

# Encrypted off-host (recommended):
... | age -r <recipient> | aws s3 cp - s3://your-backups/touchbase-$(date +%F).sql.gz.age

Test restores quarterly. The exclusion-constraint migration depends on the btree_gist extension; ensure your restore target has it (the db/init script installs it, along with pgcrypto).
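Daily dumps accumulate forever without a retention sweep. A sketch that assumes the touchbase-YYYY-MM-DD.sql.gz naming from the cron line above; the helper name and 14-day window are arbitrary:

```shell
# prune_backups <dir> <retention_days>
# Deletes touchbase-*.sql.gz dumps older than <retention_days> days.
prune_backups() {
  find "$1" -name 'touchbase-*.sql.gz' -type f -mtime +"$2" -delete
}
```

Run it after the nightly dump, e.g. `prune_backups /var/backups 14`.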

Rollback

If a deploy breaks production:

# Roll the app + worker back to the previous image tag
docker-compose --profile prod stop app worker
git checkout <previous good commit>
docker-compose --profile prod build app worker
docker-compose --profile prod up -d app worker

Postgres data is unaffected (it's in a volume). Migrations are not auto-rolled-back — Prisma doesn't generate down-migrations. If a migration is the breaking change, write a corrective migration in code and apply it forward; only resort to manual SQL for incidents.

Healthcheck

curl -sf https://book.your-practice.com/api/health | jq
# {
#   "status": "ok",
#   "version": "dev",
#   "time": "2026-…",
#   "checks": { "app": "ok", "db": "ok" }
# }

A 503 with checks.db failing means the app can't reach Postgres. The Docker HEALTHCHECK on the app service polls this endpoint every 30s.

Common operations

| Task | Command |
|------|---------|
| See all bookings (admin) | https://book.your-practice.com/admin/bookings |
| Run a one-off SQL query | `docker exec -it touchbase-postgres-1 psql -U touchbase -d touchbase_dev` |
| Check the pg-boss queue | `docker exec touchbase-postgres-1 psql -U touchbase -d touchbase_dev -c "SELECT name, state, COUNT(*) FROM pgboss.job GROUP BY 1, 2;"` |
| Force a reminder to fire now | `UPDATE pgboss.job SET start_after = now() WHERE name='booking-reminder' AND state='created';` |
| Make a user admin | `UPDATE "User" SET role='ADMIN' WHERE email='someone@example.com';` |
| Restart just the worker | `docker-compose --profile prod restart worker` |

What's not yet automated

  • Image registry: touchbase/app:latest is local-build only. For multi-host or CI deploys, push to a registry (GHCR, ECR, etc.) and pin tags by commit SHA in compose.
  • Secret management: .env on the host is fine for one-host. Beyond that, use Docker secrets, SOPS-encrypted env files, or your platform's secret store.
  • Observability: stdout logs only. Add Sentry/GlitchTip + Pino structured logs when the practice has appetite.
  • CI: there isn't one. Add a GitHub Actions workflow to run pnpm test, pnpm lint, pnpm exec tsc --noEmit, and docker build on every PR; tag-based release builds.

These are all "next step" items, not v1 blockers.