comfyui-nvidia/deployments/ai-stack/init-models.sh
William Gill b1c9bff15f Match init-models.sh to the live preseed list
Five models from the production GPU host's current pull set. Picks up
the idempotency-checking loop pattern from the source script so re-runs
print "already present" instead of re-pulling.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 10:41:29 -05:00


#!/bin/sh
# Preseed Ollama with the models the stack should have available at startup.
# Runs once via the model-init service (see docker-compose.yml). Safe to
# re-run — already-present models are skipped.
#
# Add or remove tags to taste. The host needs enough disk for everything
# listed; check sizes at https://ollama.com/library before adding.
set -e
MODELS="dolphin3:8b llama3.1:8b ministral-3:8b mistral-nemo:12b qwen3.6:latest"
for model in $MODELS; do
    if ollama list | awk 'NR>1 {print $1}' | grep -qx "$model"; then
        echo "$model already present"
    else
        echo "→ Pulling $model"
        ollama pull "$model"
    fi
done
echo "Done."
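
The script's header references a `model-init` service in `docker-compose.yml`. That file isn't shown here, so the following is only a minimal sketch of how such a one-shot preseed service could be wired up — the service names, image tags, volume names, and port are assumptions, not taken from the actual stack:

```yaml
# Hypothetical excerpt — the real docker-compose.yml may differ.
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-data:/root/.ollama

  model-init:
    # One-shot job: runs the preseed script once, then exits.
    image: ollama/ollama:latest
    entrypoint: ["/bin/sh", "/init-models.sh"]
    volumes:
      - ./init-models.sh:/init-models.sh:ro
    environment:
      # Point the ollama CLI at the server container (name assumed above);
      # 11434 is Ollama's default port.
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama
    restart: "no"

volumes:
  ollama-data:
```

Because the script skips already-present models, re-running `docker compose run --rm model-init` after adding a tag to `MODELS` pulls only the new model.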