Containerized local LLM stack for the Framework Desktop / Strix Halo,
plus the OpenCode harness on the Mac side.
- pyinfra/framework/: pyinfra deploy targeting the box (a minimal deploy sketch follows after this list)
  - llama.cpp (Vulkan), vLLM (ROCm), Ollama (ROCm with HSA override for gfx1151), OpenWebUI
  - Beszel (host + container + AMD GPU dashboard via sysfs)
  - OpenLIT (LLM fleet metrics)
  - Phoenix (per-trace agent waterfall)
  - OpenHands (autonomous agent in a Docker sandbox)
- opencode/: OpenCode config + Phoenix bridge plugin (OTel exporter; a conceptual export sketch follows below)
  - install.sh deploys to ~/.config/opencode/
- StrixHaloSetup.md / StrixHaloMemory.md / Roadmap.md / TODO.md: documentation and planning
- testing/qwen3-coder-30b/: small evaluation harness
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
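A hedged sketch of what one pyinfra deploy step for a service like this could look like; the file name (deploy.py), local compose path, and remote path are assumptions for illustration, not taken from pyinfra/framework/.

# Hypothetical pyinfra deploy step (sketch only): push a compose file to the box
# and bring the service up. All paths and names here are assumed, not the repo's own.
from pyinfra.operations import files, server

files.put(
    name="Upload wyoming-whisper compose file",
    src="files/whisper-compose.yaml",            # assumed local path
    dest="/opt/llm-stack/whisper/compose.yaml",  # assumed remote path
)

server.shell(
    name="Start wyoming-whisper",
    commands=["docker compose -f /opt/llm-stack/whisper/compose.yaml up -d"],
    _sudo=True,  # run the compose command with sudo on the target host
)

Invoked the usual pyinfra way, e.g. pyinfra <inventory> deploy.py against the box's inventory.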
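The Phoenix bridge plugin itself ships in opencode/; purely as an illustration of the OTel-to-Phoenix path (in Python rather than the plugin's own code), exporting a span to a default local Phoenix install over OTLP HTTP might look like the following. The endpoint, service name, and span attributes are assumptions.

# Illustration only: send one span to Phoenix's default OTLP HTTP endpoint
# (localhost:6006). Service name and attributes are made up for the example.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "opencode"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
)
trace.set_tracer_provider(provider)

# One example span; a per-trace waterfall in Phoenix is built from spans like this.
with trace.get_tracer("phoenix-bridge-example").start_as_current_span("agent.step") as span:
    span.set_attribute("llm.model", "qwen3-coder-30b")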
services:
  whisper:
    image: rhasspy/wyoming-whisper:latest
    container_name: wyoming-whisper
    restart: unless-stopped
    ports:
      - "10300:10300"
    volumes:
      - ./whisper-data:/data
    command:
      - --model
      - tiny-int8
      - --language
      - en
      - --beam-size
      - "1"
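After docker compose up -d, a quick sanity check that the Wyoming endpoint is reachable on the published port (host and port are the defaults from the compose file above; adjust if remapped):

# Minimal reachability check for the Wyoming endpoint published above.
import socket

with socket.create_connection(("localhost", 10300), timeout=5):
    print("wyoming-whisper is accepting connections on :10300")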