William Gill d935e24624
Add text-targeted inpainting via GroundingDINO+SAM (mask_text param)
Five pieces:

1. Dockerfile installs storyicon/comfyui_segment_anything (GroundingDINO
   + SAM-HQ in one bundle) into custom_nodes and pip-installs its
   requirements at build time. Model weights auto-download to the
   comfyui-models volume on first inpaint (~3 GB one-time cost).
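A minimal sketch of what the Dockerfile addition can look like, assuming the stock /comfyui install path (pin a specific commit in practice for reproducible builds):

```dockerfile
# Clone the bundled GroundingDINO + SAM-HQ node pack into custom_nodes
# and install its Python deps at build time; the ~3 GB of model weights
# still download to the comfyui-models volume on first inpaint.
RUN git clone --depth 1 https://github.com/storyicon/comfyui_segment_anything.git \
        /comfyui/custom_nodes/comfyui_segment_anything && \
    pip install -r /comfyui/custom_nodes/comfyui_segment_anything/requirements.txt
```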

2. install-custom-node-deps.sh — entrypoint wrapper that pip-installs
   requirements.txt for any custom_node present at startup. Lets users
   add custom nodes via ComfyUI-Manager (or by git-cloning into the
   volume) and have the deps picked up on the next restart, without
   editing the Dockerfile.
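The wrapper's core is one loop; a sketch under assumed paths (the script name comes from the commit, the variable names and pip-command injection are illustrative):

```shell
#!/bin/sh
# install-custom-node-deps.sh (sketch): before handing off to ComfyUI,
# pip-install requirements.txt for every custom node present in the
# mounted volume. The pip command is injectable so the loop can be
# exercised without touching a real environment.
install_custom_node_deps() {
    nodes_dir=$1
    pip_cmd=${2:-pip}
    for req in "$nodes_dir"/*/requirements.txt; do
        [ -f "$req" ] || continue          # glob matched nothing
        echo "installing deps for $(basename "$(dirname "$req")")"
        $pip_cmd install -r "$req" || echo "warning: deps failed for $req" >&2
    done
}

install_custom_node_deps "${CUSTOM_NODES_DIR:-/comfyui/custom_nodes}"
# exec /comfyui/entrypoint.sh "$@"         # then hand off to the real entrypoint
```

A failed install is logged but non-fatal, so one broken custom node can't keep the whole container from starting.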

3. smart_image_gen v0.6: edit_image gains a `mask_text` param. When
   set, it builds an inpainting workflow (LoadImage →
   GroundingDinoSAMSegment → SetLatentNoiseMask → KSampler) so only
   the named region is repainted. When unset, it falls through to the
   existing img2img path. The denoise default switches: 1.0 with
   mask_text (full repaint within the mask), 0.7 without.
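The branching in piece 3 can be sketched as a small graph builder. Node wiring and function names here are assumptions based on the commit message, and real ComfyUI API graphs also carry model/VAE/conditioning inputs omitted for brevity:

```python
def build_edit_workflow(image, prompt, mask_text=None, denoise=None):
    """Sketch of edit_image's workflow selection (names illustrative)."""
    if denoise is None:
        # full repaint within the mask vs. gentle whole-image edit
        denoise = 1.0 if mask_text else 0.7
    graph = {
        "load": {"class_type": "LoadImage", "inputs": {"image": image}},
        "encode": {"class_type": "VAEEncode", "inputs": {"pixels": ["load", 0]}},
    }
    latent = ["encode", 0]
    if mask_text:
        # LOCAL edit: segment the named region, confine noise to it
        graph["segment"] = {
            "class_type": "GroundingDinoSAMSegment",
            "inputs": {"image": ["load", 0], "prompt": mask_text},
        }
        graph["mask"] = {
            "class_type": "SetLatentNoiseMask",
            "inputs": {"samples": latent, "mask": ["segment", 1]},
        }
        latent = ["mask", 0]
    graph["sample"] = {
        "class_type": "KSampler",
        "inputs": {"latent_image": latent, "denoise": denoise, "text": prompt},
    }
    return graph
```

With `mask_text` unset the segment/mask nodes never enter the graph, so the GLOBAL path is byte-for-byte the old img2img workflow.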

4. Image Studio system prompt teaches the LLM the LOCAL vs GLOBAL
   distinction — set mask_text whenever the user names a specific
   object/region ('the ball', 'the dog', 'the sky'); leave it unset
   only for whole-image style/lighting transformations.

5. Deployment README documents the new mode + the first-inpaint
   weight-download caveat.

Image rebuild required — bump tag to pick up the Dockerfile change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 14:43:52 -05:00

comfyui-nvidia

ComfyUI image-generation backend, NVIDIA-accelerated, fronted by Open WebUI for multi-user chat and image generation/editing.

Built from the official ComfyUI manual install for NVIDIA — no third-party base image. CI publishes the image to git.anomalous.dev/alphacentri/comfyui-nvidia on every v* tag (see .gitea/workflows/release.yml).

Repository layout

Path                   What
Dockerfile             ComfyUI on NVIDIA, manual-install pattern
workflows/             txt2img + img2img workflow JSONs and node mappings
deployments/ai-stack/  The deployment — compose, Caddyfile, env, model preseed
.gitea/workflows/      Release pipeline (build & push image on tag)

Deploy

The full stack — Caddy + Ollama + ComfyUI + Open WebUI (+ optional Anubis) — lives under deployments/ai-stack/. Bring-up steps, host prerequisites, Open WebUI workflow wiring, and gotchas are in deployments/ai-stack/README.md.

Replaces

This repo supersedes the previous figment + segment + Forge stack. ComfyUI's node graph covers everything those services provided (txt2img, img2img, inpaint, mask generation via SAM/GroundingDINO custom nodes), and Open WebUI talks to it natively.
