Containerized local LLM stack for the Framework Desktop / Strix Halo,
plus the OpenCode harness on the Mac side.
- pyinfra/framework/: pyinfra deploy targeting the box
- llama.cpp (Vulkan), vLLM (ROCm), Ollama (ROCm with HSA override
for gfx1151), OpenWebUI
- Beszel (host + container + AMD GPU dashboard via sysfs)
- OpenLIT (LLM fleet metrics)
- Phoenix (per-trace agent waterfall)
- OpenHands (autonomous agent in a Docker sandbox)
- opencode/: OpenCode config + Phoenix bridge plugin (OTel exporter)
- install.sh deploys to ~/.config/opencode/
- StrixHaloSetup.md / StrixHaloMemory.md / Roadmap.md / TODO.md:
documentation and planning
- testing/qwen3-coder-30b/: small evaluation harness
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# OpenHands 1.7 (May 2026) — autonomous agent in a Docker sandbox.
# https://docs.openhands.dev — repo: github.com/OpenHands/OpenHands
#
# Architecture: this container is a thin orchestrator. Per conversation
# it spawns a separate `agent-server` container on the host Docker daemon
# (that's what the docker.sock mount is for) and talks to it over REST.
# AGENT_SERVER_IMAGE_TAG below pins the per-session sandbox image.
#
# Complements OpenCode: OpenCode is the interactive terminal driver,
# OpenHands is for autonomous loops (write code, run tests, browse the
# web in a sandbox, report back).
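#
# To watch the per-conversation sandboxes the orchestrator spawns, a
# hedged sketch (the tag matches the AGENT_SERVER_IMAGE_* pin below):
#   docker ps --filter ancestor=ghcr.io/openhands/agent-server:1.19.1-python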
services:
  openhands:
    # Org rebranded All-Hands-AI → OpenHands at v1.0 (Dec 2025); the old
    # docker.all-hands.dev/all-hands-ai/openhands image is gone.
    image: docker.openhands.dev/openhands/openhands:1.7
    container_name: openhands
    restart: unless-stopped

    # 3030 host-side because :3000 is OpenWebUI and :3001 is OpenLIT.
    # Loopback-only — reach via SSH tunnel or Tailscale, don't expose
    # this directly.
    ports:
      - "127.0.0.1:3030:3000"
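
    # Example tunnel from the Mac side (the `framework` hostname is
    # illustrative; use whatever your SSH config calls the box):
    #   ssh -N -L 3030:127.0.0.1:3030 framework
    # then open http://localhost:3030 locally.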

    volumes:
      # Required: orchestrator spawns sandbox containers via the host daemon.
      - /var/run/docker.sock:/var/run/docker.sock
      # State, settings, conversation history, MCP config, secrets.
      # Pre-0.44 used ~/.openhands-state — N/A on a fresh install.
      - /srv/docker/openhands/state:/.openhands
      # Workspace the sandbox reads/writes. The host path on the LEFT must
      # match SANDBOX_VOLUMES below — the sandbox container is spawned by
      # the host daemon, so its bind mount is resolved on the host, not
      # via this container's filesystem.
      - /srv/docker/openhands/workspace:/srv/docker/openhands/workspace

    # Linux Docker doesn't auto-provide host.docker.internal; this fixes it.
    extra_hosts:
      - "host.docker.internal:host-gateway"
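    # Hedged sanity check that the gateway alias resolves from inside the
    # container (assumes the image ships getent):
    #   docker compose exec openhands getent hosts host.docker.internal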

    environment:
      # ---- Sandbox / agent-server image pin ----
      # Replaces the V0.x SANDBOX_RUNTIME_CONTAINER_IMAGE. 1.19.1-python is
      # the agent-server tag the 1.7 main image expects; bumping the main
      # image will likely want a newer agent-server tag — check the
      # upstream docker-compose.yml on each upgrade.
      AGENT_SERVER_IMAGE_REPOSITORY: ghcr.io/openhands/agent-server
      AGENT_SERVER_IMAGE_TAG: 1.19.1-python

      # ---- Workspace mount into the per-session sandbox ----
      # SANDBOX_VOLUMES is the V1 replacement for the deprecated
      # WORKSPACE_BASE / WORKSPACE_MOUNT_PATH variables.
      SANDBOX_VOLUMES: /srv/docker/openhands/workspace:/workspace:rw
      # Match the host's `noise` UID so files the agent writes aren't
      # owned by root.
      SANDBOX_USER_ID: "1000"
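      # SANDBOX_VOLUMES also takes a comma-separated list if the sandbox
      # needs extra mounts; a hedged example (second path is illustrative):
      #   SANDBOX_VOLUMES: /srv/docker/openhands/workspace:/workspace:rw,/srv/models:/models:ro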

      # ---- LLM: host Ollama via OpenAI-compatible endpoint ----
      # Per the official local-llms doc, the recommended path is the
      # /v1 OpenAI-compatible endpoint with the `openai/` LiteLLM prefix
      # — NOT `ollama/...`, which has worse tool-call behaviour.
      LLM_MODEL: "openai/qwen3-coder:30b"
      LLM_BASE_URL: "http://host.docker.internal:11434/v1"
      LLM_API_KEY: "ollama"  # any non-empty string; Ollama doesn't auth.
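      # Hedged pre-flight from the host: confirm the /v1 shim answers and
      # the model tag exists before pointing OpenHands at it. Note the
      # model name here has no `openai/` prefix; that prefix is LiteLLM
      # routing, not part of the Ollama tag.
      #   curl -s http://localhost:11434/v1/models
      #   curl -s http://localhost:11434/v1/chat/completions \
      #     -H 'Content-Type: application/json' \
      #     -d '{"model":"qwen3-coder:30b","messages":[{"role":"user","content":"hi"}]}'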

      # Default tool-calling renderer mismatches Qwen3-Coder's training
      # format and produces malformed calls (issue #8140). Forcing false
      # falls back to OpenHands' prompt-based protocol — costs some token
      # efficiency, gains reliability with local models.
      LLM_NATIVE_TOOL_CALLING: "false"

      LOG_ALL_EVENTS: "true"

      # ---- Optional: ship traces to Phoenix on :4318 ----
      # OpenHands V1 uses LiteLLM + OpenTelemetry; standard OTLP env vars
      # are honoured. Comment out to disable.
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://host.docker.internal:4318"
      OTEL_EXPORTER_OTLP_PROTOCOL: "http/protobuf"
      OTEL_SERVICE_NAME: "openhands"
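      # Hedged check that something is listening on the OTLP/HTTP port: an
      # empty POST should come back as a 4xx, not a connection error.
      #   curl -s -o /dev/null -w '%{http_code}\n' -X POST http://localhost:4318/v1/traces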

    # Per-session agent-server containers spawn headless chromium for
    # browser tasks; default 64 MB shm causes silent crashes.
    shm_size: "2gb"

    # Playwright/chromium needs higher fd limits than Docker's default.
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
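    # Hedged way to confirm both settings landed on the running container:
    #   docker inspect openhands --format '{{.HostConfig.ShmSize}} {{.HostConfig.Ulimits}}'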

    # Bridge networking is correct here. Don't switch to network_mode: host
    # — the spawned sandbox containers reach this orchestrator via Docker
    # bridge DNS, which only works on a bridge network.
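
# Bring-up sketch (standard compose commands; LOG_ALL_EVENTS above makes
# the log stream verbose enough to watch agent steps):
#   docker compose up -d openhands
#   docker compose logs -f openhands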