William Gill 97547c783c Make ai-stack the only deployment shape
Drops the duplicate standalone compose / .env.example / SETUP.md at the
repo root. Bring-up content folded into deployments/ai-stack/README.md
so there's exactly one set of deployment instructions, sitting next to
the files it describes. Root README is now just the repo overview and a
pointer at the deployment.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 10:45:23 -05:00

# comfyui-nvidia

ComfyUI image-generation backend, NVIDIA-accelerated, fronted by Open WebUI
for multi-user chat and image generation/editing.
Built from the official ComfyUI [manual install for
NVIDIA](https://docs.comfy.org/installation/manual_install#nvidia) — no
third-party base image. CI publishes the image to
`git.anomalous.dev/alphacentri/comfyui-nvidia` on every `v*` tag (see
[.gitea/workflows/release.yml](.gitea/workflows/release.yml)).
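A published image can be pulled and run directly. The tag below is illustrative (use a real `v*` release tag), and the `--gpus all` flag assumes the NVIDIA Container Toolkit is installed on the host; port 8188 is ComfyUI's default listen port.

```shell
# Pull a tagged release image (v1.0.0 is a placeholder — substitute a real tag)
docker pull git.anomalous.dev/alphacentri/comfyui-nvidia:v1.0.0

# Run standalone with GPU access, exposing ComfyUI's default port 8188
docker run --rm --gpus all -p 8188:8188 \
  git.anomalous.dev/alphacentri/comfyui-nvidia:v1.0.0
```

For the full multi-service stack, use the compose deployment under `deployments/ai-stack/` instead of running the image standalone.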

## Repository layout

| Path | What |
| -------------------------- | ----------------------------------------------------- |
| `Dockerfile` | ComfyUI on NVIDIA, manual-install pattern |
| `workflows/` | txt2img + img2img workflow JSONs and node mappings |
| `deployments/ai-stack/` | The deployment — compose, Caddyfile, env, model preseed |
| `.gitea/workflows/` | Release pipeline (build & push image on tag) |

## Deploy

The full stack — Caddy + Ollama + ComfyUI + Open WebUI (+ optional
Anubis) — lives under [`deployments/ai-stack/`](deployments/ai-stack/).
Bring-up steps, host prerequisites, Open WebUI workflow wiring, and
gotchas are in [`deployments/ai-stack/README.md`](deployments/ai-stack/README.md).
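A minimal bring-up sketch, assuming a typical compose layout (the env-template filename `.env.example` is an assumption — check the deployment README for the actual names and prerequisites):

```shell
cd deployments/ai-stack

# Copy and fill in the environment file (filename assumed; see the
# deployment README for the real template and required variables)
cp .env.example .env

# Start the full stack in the background
docker compose up -d

# Tail logs to confirm the services come up cleanly
docker compose logs -f
```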

## Replaces

This repo supersedes the previous figment + segment + Forge stack.
ComfyUI's node graph covers everything those services provided
(txt2img, img2img, inpaint, mask generation via SAM/GroundingDINO custom
nodes), and Open WebUI talks to it natively.