Open OpenHands UI to all interfaces

Was bound to 127.0.0.1:3030 — overcautious on a Tailscale-only box
where Phoenix/Beszel/OpenWebUI are all reached the same way. Updated
the homepage tile description and added a security note in the README
for the case where the host ever leaves the tailnet.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 12:04:35 -04:00
parent 178d7d3c0f
commit 5d3fce22a1
4 changed files with 23 additions and 22 deletions


@@ -169,25 +169,23 @@ based on how you like to drive the agent:
   Bring-up: `cd /srv/docker/openwebui && docker compose up -d`.
-- **OpenHands** (`/srv/docker/openhands`, http://framework:3030,
-  loopback-only) — autonomous agent in a Docker sandbox. Spawns a
-  per-conversation `agent-server` container that can write code, run
-  tests, browse the web. Pre-configured for Ollama at
-  `openai/qwen3-coder:30b` over the OpenAI-compatible endpoint;
-  ships traces to Phoenix.
+- **OpenHands** (`/srv/docker/openhands`, http://framework:3030) —
+  autonomous agent in a Docker sandbox. Spawns a per-conversation
+  `agent-server` container that can write code, run tests, browse the
+  web. Pre-configured for Ollama at `openai/qwen3-coder:30b` over the
+  OpenAI-compatible endpoint; ships traces to Phoenix.
-  Bring-up:
-  ```sh
-  cd /srv/docker/openhands && docker compose up -d
-  # Tunnel the loopback-bound UI from your laptop:
-  ssh -L 3030:127.0.0.1:3030 noise@framework
-  open http://localhost:3030
-  ```
+  Bring-up: `cd /srv/docker/openhands && docker compose up -d`. First
+  run pulls the agent-server image (~2 GB) lazily on first conversation,
+  not at startup, so the orchestrator comes up fast but your first
+  message takes 30–60 s. Pre-0.44 state path was `~/.openhands-state`;
+  not relevant on a fresh install.
-  First run pulls the agent-server image (~2 GB) lazily on first
-  conversation, not at startup, so the orchestrator comes up fast but
-  your first message takes 30–60 s. Pre-0.44 state path was
-  `~/.openhands-state`; not relevant on a fresh install.
+  > **Security note:** the orchestrator container has docker-socket
+  > access and spawns code-running sandboxes. Fine to expose on a
+  > Tailscale-only box; **change the compose port mapping back to
+  > `127.0.0.1:3030:3000` and tunnel in** if this host ever sees LAN
+  > or internet traffic.
   Tool-call quality with local models is much better when Ollama's
   context is bumped — the compose at `/srv/docker/ollama` already sets


@@ -47,7 +47,7 @@
     - OpenHands:
         icon: mdi-robot
         href: http://framework:3030
-        description: Autonomous coding agent (loopback — needs SSH tunnel)
+        description: Autonomous coding agent in a Docker sandbox
         server: localhost-docker
         container: openhands


@@ -18,10 +18,13 @@ services:
     restart: unless-stopped
     # 3030 host-side because :3000 is OpenWebUI and :3001 is OpenLIT.
-    # Loopback-only — reach via SSH tunnel or Tailscale, don't expose
-    # this directly.
+    # Bound to all interfaces — fine on a Tailscale-only box where every
+    # other service is reached the same way. If you ever expose this
+    # host to the LAN/internet, change this to "127.0.0.1:3030:3000"
+    # and tunnel in (this orchestrator has docker.sock access and
+    # spawns code-running sandboxes — not something you want public).
     ports:
-      - "127.0.0.1:3030:3000"
+      - "3030:3000"
     volumes:
       # Required: orchestrator spawns sandbox containers via the host daemon.
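The effect of the mapping change can be sanity-checked without Docker; a minimal sketch (assuming only `python3` on the host) of how the two bind addresses behave:

```shell
# The two compose mappings bind the host port differently:
#   "3030:3000"           -> 0.0.0.0   (all interfaces, tailnet-reachable)
#   "127.0.0.1:3030:3000" -> 127.0.0.1 (loopback only, needs a tunnel)
python3 - <<'EOF'
import socket

def local_addr(host):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))  # ephemeral port, as a stand-in for 3030
    addr = s.getsockname()[0]
    s.close()
    return addr

print(local_addr("0.0.0.0"))     # prints 0.0.0.0
print(local_addr("127.0.0.1"))   # prints 127.0.0.1
EOF
```

On the live box, `ss -ltn` shows which local address port 3030 actually landed on after `docker compose up -d`.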