A snapshot of where opencode + Qwen3-Coder + MCPs + Kimi-Linear + voice
+ Phoenix tracing stand today, plus in-flight work (oc-tree, the
kimi-linear context ramp) and upcoming items (ComfyUI), with pointers
to each project's NEXT_STEPS.md guide.
The OpenLIT secondary exporter regressed tool-call parsing in OpenCode:
OpenLIT's image doesn't currently run an OTLP receiver on 4328, so the
exporter's retries failed silently and the failures cascaded into the
AI SDK's telemetry pipeline. Symptom: model output came through as raw
Qwen3-Coder XML tool-call text instead of being parsed into actual tool
invocations.
Re-add the exporter once openlit.yml gains an otel-collector sidecar
that actually listens on the receiver ports.
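One cheap guard before re-enabling the secondary exporter: probe the receiver port at startup and only wire the exporter if something answers. This is a sketch, not OpenCode's actual plugin code; the host and port in the demo line are assumptions standing in for the compose service.

```python
import socket

def otlp_receiver_up(host: str, port: int, timeout: float = 1.0) -> bool:
    # Best-effort TCP probe: a receiver that isn't listening fails this
    # check immediately, instead of letting exporter retries fail
    # silently deep inside the telemetry pipeline.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage: in the compose network this would be the openlit
# service; locally it's just whatever is (not) bound on 4328.
secondary_enabled = otlp_receiver_up("127.0.0.1", 4328)
```

Gating the OpenLIT span processor on this check means a missing receiver degrades to Phoenix-only tracing instead of poisoning tool-call parsing.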
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Homepage as the front door: a single page at framework:7575 with one
tile per service, live widgets where the upstream supports them (Ollama
loaded models, container state via docker.sock, etc.), and bookmarks
for reference docs. Config files are pyinfra-managed; the source of
truth lives in compose/homepage/, so sync by editing there and
re-running ./run.sh.
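A minimal sketch of what one tile plus a docker.sock-backed status widget might look like in those configs. The category, service name, and the `local-docker` key are illustrative assumptions; the actual files live in compose/homepage/.

```yaml
# compose/homepage/services.yaml (sketch)
- Observability:
    - Phoenix:
        href: http://framework:6006
        description: per-trace waterfall view
        server: local-docker     # enables live container state
        container: phoenix

# compose/homepage/docker.yaml (sketch)
local-docker:
  socket: /var/run/docker.sock
```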
OpenCode plugin now dual-exports spans to Phoenix and OpenLIT in
parallel. Phoenix remains the per-trace waterfall view; OpenLIT picks
up the same data for fleet-level metrics. Each destination has its own
batch processor so a hiccup at one doesn't block the other.
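The isolation property above (independent batch processors, so one slow destination can't stall the other) can be sketched in miniature. This is a stand-in for OTel's BatchSpanProcessor, not the plugin's real code; the class and function names are hypothetical.

```python
import queue
import threading

class BatchProcessor:
    """Toy batch processor: each destination owns its queue and worker
    thread, so a stalled or failing exporter only backs up its own
    buffer."""
    def __init__(self, export, maxsize=1000):
        self.q = queue.Queue(maxsize=maxsize)
        self.export = export
        threading.Thread(target=self._drain, daemon=True).start()

    def on_span(self, span):
        try:
            self.q.put_nowait(span)  # drop rather than block the caller
        except queue.Full:
            pass

    def _drain(self):
        while True:
            batch = [self.q.get()]
            while not self.q.empty() and len(batch) < 100:
                batch.append(self.q.get_nowait())
            try:
                self.export(batch)
            except Exception:
                pass  # a failing destination must not crash the pipeline

# Fan each span out to every processor; Phoenix and OpenLIT are peers.
processors = []

def emit(span):
    for p in processors:
        p.on_span(span)
```

With two processors registered, an exporter that raises on every batch leaves the healthy destination's spans untouched.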