Containerized local LLM stack for the Framework Desktop / Strix Halo,
plus the OpenCode harness on the Mac side.
- pyinfra/framework/: pyinfra deploy targeting the box
- llama.cpp (Vulkan), vLLM (ROCm), Ollama (ROCm with HSA override
for gfx1151), OpenWebUI
- Beszel (host + container + AMD GPU dashboard via sysfs)
- OpenLIT (LLM fleet metrics)
- Phoenix (per-trace agent waterfall)
- OpenHands (autonomous agent in a Docker sandbox)
- opencode/: OpenCode config + Phoenix bridge plugin (OTel exporter)
- install.sh deploys to ~/.config/opencode/
- StrixHaloSetup.md / StrixHaloMemory.md / Roadmap.md / TODO.md:
documentation and planning
- testing/qwen3-coder-30b/: small evaluation harness
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
{
  "name": "localgenai-opencode-plugins",
  "version": "0.1.0",
  "private": true,
  "type": "module",
  "description": "OpenCode plugins for the localgenai stack. Run `npm install` here once.",
  "dependencies": {
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/exporter-trace-otlp-proto": "^0.205.0",
    "@opentelemetry/sdk-node": "^0.205.0",
    "@opentelemetry/sdk-trace-base": "^2.0.0"
  }
}
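The dependencies above support the Phoenix bridge plugin's OTel export. A minimal sketch of the event-to-span-attribute mapping such a bridge might perform; the `ChatEvent` shape and `toSpanAttributes` helper are hypothetical illustrations (not the plugin's actual API), while the attribute keys follow the OpenInference semantic conventions that Phoenix renders in its trace waterfall:

```typescript
// Hypothetical event shape for a completed LLM call inside OpenCode.
interface ChatEvent {
  model: string;
  promptTokens: number;
  completionTokens: number;
  durationMs: number;
}

// Map the event onto OpenInference-style span attributes for Phoenix.
function toSpanAttributes(ev: ChatEvent): Record<string, string | number> {
  return {
    "llm.model_name": ev.model,
    "llm.token_count.prompt": ev.promptTokens,
    "llm.token_count.completion": ev.completionTokens,
    "llm.token_count.total": ev.promptTokens + ev.completionTokens,
    "llm.latency_ms": ev.durationMs, // illustrative key, not a standard convention
  };
}

const attrs = toSpanAttributes({
  model: "qwen3-coder-30b",
  promptTokens: 1200,
  completionTokens: 300,
  durationMs: 4500,
});
console.log(attrs["llm.token_count.total"]); // 1500
```

In the real plugin these attributes would be attached to a span created via `@opentelemetry/api` and shipped to Phoenix's OTLP endpoint by `@opentelemetry/exporter-trace-otlp-proto`.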