apt's btop on 24.04 is 1.3.x, which has no AMD GPU monitoring. 1.4+
adds it but requires C++23, which gcc-13 (the 24.04 default) doesn't
fully support.

Plan:
- Add the ubuntu-toolchain-r/test PPA, install g++-14 (C++23-capable).
- Add librocm-smi-dev to the ROCm host diagnostics — btop dlopens
  librocm_smi64 at runtime; the headers are needed at compile time.
- Drop btop from the apt list; build from a pinned BTOP_VERSION tag with
  GPU_SUPPORT=true CXX=g++-14 -j; install to /usr/local/bin.
- Idempotent — only rebuilds if the installed version doesn't match.

After deploy: btop → Esc → Options → "show_gpu_info" → On to enable the
GPU panel.

Also clean up TODO.md — the box is on 24.04 (noble), not 26.04. The
libxml2 ABI mismatch / "ROCm gap" section was stale.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
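The idempotence gate described above can be sketched roughly as follows. This is a hedged illustration, not the actual deploy code: the function name, the assumed `v1.4.0` tag, and the exact `btop --version` output format are all assumptions.

```shell
# Hedged sketch of the rebuild gate; names and the pinned tag are
# assumptions, not the actual deploy logic.
BTOP_VERSION="v1.4.0"   # assumed pinned tag

# Compare the pinned tag against `btop --version` output
# (e.g. "btop version: 1.4.0"); succeed when a rebuild is needed.
needs_rebuild() {
  pinned="$1"; installed="$2"
  ver="${installed##* }"          # keep the last field: the version number
  [ "v$ver" != "$pinned" ]
}

if needs_rebuild "$BTOP_VERSION" "$(btop --version 2>/dev/null || echo none)"; then
  echo "rebuild required"
  # real deploy: git clone --depth 1 --branch "$BTOP_VERSION", then
  # make GPU_SUPPORT=true CXX=g++-14 -j && make install PREFIX=/usr/local
fi
```

A missing or unparseable `btop` binary falls through to "rebuild required", which is the safe default for a deploy step.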
# TODO
## ROCm / vLLM on Strix Halo (gfx1151)
The Framework Desktop runs Ubuntu 24.04 LTS (noble), which aligns
with AMD's ROCm 7.x packaging. The deploy installs rocminfo and
librocm-smi-dev host-side; heavier ROCm bits (full HIP toolchain,
device-mapped libraries) still run inside containers that ship their
own ROCm stack. The host stays slim by design.
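A quick way to confirm the slim host-side diagnostics are in place (a hedged sketch; `rocminfo` and librocm-smi-dev come from the deploy, and the `ldconfig` grep pattern is an assumption about the library's soname):

```shell
# Host-side sanity check for the slim ROCm diagnostics.
rocminfo | grep -m1 gfx       # agent ISA; expect gfx1151 on Strix Halo
ldconfig -p | grep rocm_smi   # the library btop dlopens at runtime
```

Both commands run against the host only; they say nothing about what a container's own ROCm stack will see.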
### Open questions
- Does `rocm/vllm:latest` actually run on Strix Halo's iGPU? vLLM's AMD
  support officially targets datacenter cards (MI300X / gfx942). gfx1151
  (RDNA 3.5 consumer) is a different ISA. If the stock image doesn't
  initialize the device, try `rocm/vllm-dev:nightly` or build from source
  against ROCm 7.x with `-DAMDGPU_TARGETS=gfx1151`.
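One way to probe this without committing to a build: run `rocminfo` inside the stock image. A hedged sketch — the device flags are the standard ROCm container mappings, and whether the image actually initializes gfx1151 is exactly the open question:

```shell
# Standard ROCm container device mappings; image tag from the note above.
docker run --rm \
  --device=/dev/kfd --device=/dev/dri \
  --security-opt seccomp=unconfined \
  --group-add video \
  rocm/vllm:latest \
  rocminfo | grep -i gfx
```

If gfx1151 shows up as an agent ISA, the container runtime at least sees the iGPU; an empty result points at the stock image, not the host.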
### If you ever want full host-side ROCm
For native ROCm work on the host (compiling HIP kernels, full toolchain):
- Bump `ROCM_VERSION` and `AMDGPU_INSTALL_DEB` in
  `pyinfra/framework/deploy.py` to the latest release.
- Add a step that runs `amdgpu-install -y --usecase=rocm --no-dkms`
  (currently avoided to stay slim — ~25 GB toolchain).
- Re-run `./run.sh`.
For container-only workflows (current default), no action is needed — container images update independently of the host.
## Pick a coding model (StrixHaloSetup Phase 6)
Open question — research current Strix Halo benchmarks before
committing. Candidates: Qwen3-Coder, DeepSeek-Coder-V3.x, GLM-4.6,
Devstral, Kimi-K2. Track Kimi Linear separately via the weekly routine
referenced in StrixHaloSetup.md.