Add OpenAI-compatible voice servers (faster-whisper + Kokoro)

Path B from VoiceModels.md — adds two new compose stacks alongside the
Wyoming pair so OpenWebUI/Conduit get voice without a Wyoming shim:

- compose/faster-whisper.yml — fedirz/faster-whisper-server CPU image,
  large-v3-turbo by default, OpenAI /v1/audio/transcriptions on :8001.
  Built-in web UI for ad-hoc transcription.
- compose/kokoro.yml — ghcr.io/remsky/kokoro-fastapi-cpu, Kokoro-82M,
  OpenAI /v1/audio/speech on :8880.
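
A minimal sketch of what the two stacks might look like, shown here as one
compose file for brevity (the images and host ports come from the stacks
above; service names, the `WHISPER__MODEL` env var, internal ports, and the
volume path are assumptions for illustration):

```yaml
# Sketch only — the repo splits these into compose/faster-whisper.yml
# and compose/kokoro.yml.
services:
  faster-whisper:
    image: fedirz/faster-whisper-server:latest-cpu
    ports:
      - "8001:8000"   # OpenAI-style /v1/audio/transcriptions + built-in web UI
    environment:
      # Assumed env var name for pinning large-v3-turbo
      - WHISPER__MODEL=Systran/faster-whisper-large-v3-turbo
    volumes:
      - whisper-models:/root/.cache/huggingface   # assumed cache path
    restart: unless-stopped

  kokoro:
    image: ghcr.io/remsky/kokoro-fastapi-cpu:latest
    ports:
      - "8880:8880"   # OpenAI-style /v1/audio/speech, Kokoro-82M
    restart: unless-stopped

volumes:
  whisper-models:
```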

Both run alongside (not instead of) Wyoming Whisper + Piper — Wyoming
keeps serving HA Assist, while the OpenAI-compatible pair serves
OpenWebUI / Conduit. The memory budget on Strix Halo accommodates both
voice stacks plus Qwen3-Coder loaded concurrently, with plenty of
headroom.

Homepage gets dedicated tiles for both. README documents the
OpenWebUI Audio configuration that wires the new endpoints. Conduit
inherits voice via OpenWebUI without app-side setup.
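
The OpenWebUI side of that wiring can be sketched as environment
variables on the OpenWebUI container (variable names follow OpenWebUI's
audio settings; the hostnames, API-key placeholder, model IDs, and voice
name are illustrative for this deployment):

```yaml
# Sketch: pointing OpenWebUI's Audio settings at the new endpoints.
environment:
  - AUDIO_STT_ENGINE=openai
  - AUDIO_STT_OPENAI_API_BASE_URL=http://faster-whisper:8000/v1  # assumed hostname/port
  - AUDIO_STT_OPENAI_API_KEY=none                                # servers run unauthenticated
  - AUDIO_STT_MODEL=Systran/faster-whisper-large-v3-turbo
  - AUDIO_TTS_ENGINE=openai
  - AUDIO_TTS_OPENAI_API_BASE_URL=http://kokoro:8880/v1
  - AUDIO_TTS_OPENAI_API_KEY=none
  - AUDIO_TTS_MODEL=kokoro
  - AUDIO_TTS_VOICE=af_sky   # assumed Kokoro voice name
```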

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
commit 6db46d8f6a (parent 36b8cfe835)
Date: 2026-05-08 14:42:45 -04:00
7 changed files with 157 additions and 27 deletions


@@ -45,9 +45,11 @@ Tailscale. Coding agents, monitoring, voice — all self-hosted.
 | `3030` | OpenHands | Autonomous agent + sandbox runtime — Tailscale-only by design |
 | `4317` | Phoenix OTLP/gRPC | Trace ingestion |
 | `6006` | Phoenix UI / OTLP/HTTP | Per-trace agent waterfall (also `:6006/v1/traces`) |
+| `8001` | faster-whisper | STT (OpenAI API) — large-v3-turbo, for OpenWebUI/Conduit |
 | `8090` | Beszel | Host + container + AMD GPU dashboard |
-| `10200` | Piper | TTS (Wyoming protocol) — used by Home Assistant Assist |
-| `10300` | Whisper | STT (Wyoming protocol) — used by Home Assistant Assist |
+| `8880` | Kokoro | TTS (OpenAI API) — Kokoro-82M, for OpenWebUI/Conduit |
+| `10200` | Piper | TTS (Wyoming protocol) — for Home Assistant Assist |
+| `10300` | Whisper | STT (Wyoming protocol) — for Home Assistant Assist |
 ## Quick start