# comfyui-nvidia

ComfyUI image-generation backend, NVIDIA-accelerated, fronted by Open WebUI for multi-user chat and image generation/editing. Built from the official ComfyUI [manual install for NVIDIA](https://docs.comfy.org/installation/manual_install#nvidia) — no third-party base image.

CI publishes the image to `git.anomalous.dev/alphacentri/comfyui-nvidia` on every `v*` tag (see [.gitea/workflows/release.yml](.gitea/workflows/release.yml)).

## Repository layout

| Path                       | What                                                    |
| -------------------------- | ------------------------------------------------------- |
| `Dockerfile`               | ComfyUI on NVIDIA, manual-install pattern               |
| `workflows/`               | txt2img + img2img workflow JSONs and node mappings      |
| `deployments/ai-stack/`    | The deployment — compose, Caddyfile, env, model preseed |
| `.gitea/workflows/`        | Release pipeline (build & push image on tag)            |

## Deploy

The full stack — Caddy + Ollama + ComfyUI + Open WebUI (+ optional Anubis) — lives under [`deployments/ai-stack/`](deployments/ai-stack/). Bring-up steps, host prerequisites, Open WebUI workflow wiring, and gotchas are in [`deployments/ai-stack/README.md`](deployments/ai-stack/README.md).

## Replaces

This repo supersedes the previous figment + segment + Forge stack. ComfyUI's node graph covers everything those services provided (txt2img, img2img, inpaint, mask generation via SAM/GroundingDINO custom nodes), and Open WebUI talks to it natively.
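Since the release pipeline fires on every `v*` tag, cutting a release is just tagging and pushing. A minimal sketch (the version number below is hypothetical):

```shell
# Pushing a v* tag triggers .gitea/workflows/release.yml, which builds the
# Dockerfile and pushes the image to git.anomalous.dev/alphacentri/comfyui-nvidia.
git tag v0.1.0
git push origin v0.1.0
```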