
First run on your Spark

3 min read
Justin Goheen
AI/ML Engineer

DGX Lab expects to run on the DGX Spark (or at least on a box where `nvidia-smi` and your model cache match how the tools query the system). This is not a hosted product: you clone, you run, you own the outcome.

Prerequisites

| Requirement | Notes |
| --- | --- |
| Python 3.12+ | Backend is pinned to a modern CPython; use uv to match the lockfile. |
| uv | Installs and syncs `backend/pyproject.toml` + `uv.lock`. |
| Bun 1.3+ | Frontend monorepo and `make dev` use Bun workspaces. |
| Docker + Compose | Only for `make build` / `make up` production-style runs. |
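
Before installing, it is worth confirming the versions on the Spark itself; all four tools expose standard version flags:

```bash
python3 --version                             # want 3.12 or newer
uv --version
bun --version                                 # want 1.3 or newer
docker --version && docker compose version    # only needed for make build / make up
```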

If you are following along from a laptop that is not the Spark, you still need the repo on the Spark for GPU-backed tools and local paths. SSH in, clone there, open the UI from the Spark browser or from your Mac via LAN/Tailscale (see the remote access post).
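
For the remote case, a minimal sketch, assuming the Spark is reachable over SSH (the user and hostname below are placeholders; substitute your own LAN or Tailscale name):

```bash
# From your laptop: get a shell on the Spark.
ssh you@spark.local

# On the Spark: clone and install exactly as in the Install section below.
```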

Install

From the repo root on the machine that will host the app:

```bash
git clone https://github.com/jxtngx/dgx-lab.git ~/dgx-lab
cd ~/dgx-lab

cd backend && uv sync && cd ..
cd frontend && bun install && cd ..
```

`uv sync` respects `backend/uv.lock`. `bun install` uses the workspace `package.json` under `frontend/`.
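
If you want a quick sanity check that both installs took, one hedged option (assuming FastAPI is in the backend lockfile, as the dev workflow below implies):

```bash
cd backend && uv run python -c "import fastapi; print(fastapi.__version__)" && cd ..
cd frontend && bun pm ls | head && cd ..    # lists installed workspace packages
```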

Development

```bash
make dev
```

That brings up FastAPI on port 8000 and Next.js on port 3000. The dev frontend proxies `/api/*` to the backend, so you usually open:

http://localhost:3000

From another device on the same LAN, use the Spark’s IP: http://<spark-ip>:3000.
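
If you do not know the Spark's address, `hostname -I` on the Spark prints its IPs; use the LAN one:

```bash
hostname -I    # e.g. 192.168.1.42 -> open http://192.168.1.42:3000
```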

TCP listeners (local dev vs Docker)

In dev, you open Next.js on port 3000 and the browser reaches the backend through the Next proxy at `/api/*`. In Docker, nginx listens on :80 and routes `/` to the frontend and `/api/` to FastAPI.
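
To confirm both dev listeners are actually up, check from a shell on the Spark:

```bash
# Expect one LISTEN line each for Next.js (:3000) and FastAPI (:8000).
ss -ltn | grep -E ':(3000|8000)'
```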

Production-style Docker

```bash
make build
make up
```

nginx listens on port 80 and routes `/` to the frontend container and `/api/` to FastAPI. Use `make down`, `make rebuild`, and `make logs` as needed; the README summarizes each target.
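
A few hedged checks after `make up` (the `backend` service name is an assumption here; `docker compose ps` shows the real names):

```bash
curl -I http://localhost/                  # nginx should answer on :80
docker compose ps                          # all containers Up?
docker compose exec backend nvidia-smi     # GPU visible inside the backend container?
```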

When something fails

| Symptom | Likely check |
| --- | --- |
| Backend import errors | Re-run `cd backend && uv sync` after pulls; lockfile drift is the usual cause. |
| Frontend won't start | `cd frontend && bun install`; ensure Bun meets the minimum version. |
| Monitor shows no GPU | Confirm `nvidia-smi` works on the host. If you use Docker, the compose file must grant GPU access to the backend container. |
| Empty Control model list | The default model dir is `~/.cache/huggingface/hub`. Pull a model or set `DGX_LAB_MODELS_DIR`. |
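
For the empty-model-list case specifically, a quick sketch; the model id below is only an example (pulling it requires the Hugging Face CLI), and `DGX_LAB_MODELS_DIR` comes from the table above:

```bash
ls ~/.cache/huggingface/hub                          # anything cached at the default location?
huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct  # pull a small example model
export DGX_LAB_MODELS_DIR=/data/models               # or point at a non-default dir (hypothetical path)
```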

The setup guide mirrors these steps. The codebase and `docs/` are the source of truth; there is no ticket queue.

Expectations

Per `CONTRIBUTING.md`, this repo is a personal project: issues and PRs are not triaged. Forks are encouraged if you need different defaults or tools. The modular layout (one router per tool under `backend/app/routers/`, one route under `frontend/apps/web/app/(tools)/`) exists so you can adapt without waiting on a maintainer.

```bash
make dev
```

If that command succeeds and http://localhost:3000 loads, you are past the hardest part.