# ATCR AppView

> The registry frontend component of ATCR (ATProto Container Registry)

## Overview

**AppView** is the frontend server component of ATCR. It serves as the OCI-compliant registry API endpoint and web interface that Docker clients interact with when pushing and pulling container images.

AppView is the orchestration layer that:

- **Serves the OCI Distribution API V2** - Compatible with Docker, containerd, podman, and all OCI clients
- **Resolves ATProto identities** - Converts handles (`alice.bsky.social`) and DIDs (`did:plc:xyz123`) to PDS endpoints
- **Routes manifests** - Stores container image manifests as ATProto records in users' Personal Data Servers
- **Routes blobs** - Proxies blob (layer) operations to hold services for S3-compatible storage
- **Provides web UI** - Browse repositories, search images, view tags, track pull counts, manage stars, and view vulnerability scan results
- **Manages authentication** - ATProto OAuth with device authorization flow; issues registry JWTs to Docker clients

### The ATCR Ecosystem

AppView is the **frontend** of a multi-component architecture:

1. **AppView** (this component) - Registry API + web interface
2. **[Hold Service](hold.md)** - Storage backend with embedded PDS for blob storage
3. **Credential Helper** - Client-side tool for ATProto OAuth authentication

**Data flow:**

```
Docker Client → AppView (resolves identity) → User's PDS (stores manifest)
                    ↓
              Hold Service (stores blobs in S3/Storj/etc.)
```

Manifests (small JSON metadata) live in users' ATProto PDSes, while blobs (large binary layers) live in hold services. AppView orchestrates the routing between these components.

## When to Run Your Own AppView

Most users can simply use **https://atcr.io** - you don't need to run your own AppView.
**Run your own AppView if you want to:**

- Host a private/organizational container registry with ATProto authentication
- Run a public registry for a specific community
- Customize the registry UI or policies
- Maintain full control over registry infrastructure

**Prerequisites:**

- A running [Hold service](hold.md) (required for blob storage)
- (Optional) Domain name with SSL/TLS certificates for production
- (Optional) Access to ATProto Jetstream for real-time indexing

## Quick Start

### 1. Build the Docker image

```bash
docker build -t atcr-appview:latest -f Dockerfile.appview .
```

This produces a ~30MB scratch image with a statically-linked binary.

### 2. Generate a config file

```bash
docker run --rm atcr-appview config init > config-appview.yaml
```

This creates a fully-commented YAML file with all available options and their defaults. You can also generate it from a local binary:

```bash
./bin/atcr-appview config init config-appview.yaml
```

### 3. Set the required field

Edit `config-appview.yaml` and set `server.default_hold_did` to your hold service's DID:

```yaml
server:
  default_hold_did: "did:web:127.0.0.1:8080"        # local dev
  # default_hold_did: "did:web:hold01.example.com"  # production
```

This is the **only required configuration field**. To find a hold's DID, visit its `/.well-known/did.json` endpoint.

For production, also set your public URL:

```yaml
server:
  base_url: "https://registry.example.com"
  default_hold_did: "did:web:hold01.example.com"
```

### 4. Run

```bash
docker run -d \
  -v ./config-appview.yaml:/config.yaml:ro \
  -v atcr-data:/var/lib/atcr \
  -p 5000:5000 \
  atcr-appview serve --config /config.yaml
```

### 5. Verify

```bash
curl http://localhost:5000/v2/      # Should return: {}
curl http://localhost:5000/health   # Should return: {"status":"ok"}
```

## Configuration

AppView uses YAML configuration with environment variable overrides.
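Step 3 of the Quick Start notes that a hold's DID document lives at its `/.well-known/did.json` endpoint; for `did:web` identities, that URL follows mechanically from the DID itself. A sketch of the mapping, assuming a bare `host[:port]` identifier (the helper name is illustrative; path-based `did:web` DIDs are not covered):

```python
from urllib.parse import unquote

def did_json_url(did: str, scheme: str = "https") -> str:
    # Map a bare-host did:web DID to its DID document URL per the standard
    # did:web mapping. A port may appear spec-encoded as %3A or as a literal
    # colon, as in the local-dev example above.
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError(f"not a did:web DID: {did}")
    host = unquote(did[len(prefix):])
    return f"{scheme}://{host}/.well-known/did.json"

print(did_json_url("did:web:hold01.example.com"))
# https://hold01.example.com/.well-known/did.json
print(did_json_url("did:web:127.0.0.1:8080", scheme="http"))
# http://127.0.0.1:8080/.well-known/did.json
```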
The generated `config-appview.yaml` is the canonical reference — every field is commented inline with its purpose and default value.

### Config loading priority (highest wins)

1. Environment variables (`ATCR_` prefix)
2. YAML config file (`--config`)
3. Built-in defaults

### Environment variable convention

YAML paths map to env vars with the `ATCR_` prefix and `_` separators:

```
server.default_hold_did    → ATCR_SERVER_DEFAULT_HOLD_DID
server.base_url            → ATCR_SERVER_BASE_URL
ui.database_path           → ATCR_UI_DATABASE_PATH
jetstream.backfill_enabled → ATCR_JETSTREAM_BACKFILL_ENABLED
```

### Config sections overview

| Section | Purpose | Notes |
|---------|---------|-------|
| `server` | Listen address, public URL, hold DID, OAuth key, branding | Only `default_hold_did` is required |
| `ui` | Database path, theme, libSQL sync | All have defaults; auto-creates DB on first run |
| `auth` | JWT signing key/cert paths | Auto-generated on first run |
| `jetstream` | Real-time ATProto event streaming, backfill sync | Runs automatically; backfill enabled by default |
| `health` | Hold health check interval and cache TTL | Sensible defaults (15m) |
| `log_shipper` | Remote log shipping (Victoria, OpenSearch, Loki) | Disabled by default |
| `legal` | Terms/privacy page customization | Optional |
| `credential_helper` | Credential helper download source | Optional |

### Auto-generated files

On first run, AppView auto-generates these under `/var/lib/atcr/`:

| File | Purpose |
|------|---------|
| `ui.db` | SQLite database (OAuth sessions, stars, pull counts, device approvals) |
| `auth/private-key.pem` | RSA private key for signing registry JWTs |
| `auth/private-key.crt` | X.509 certificate for JWT verification |
| `oauth/client.key` | P-256 private key for OAuth client authentication |

**Persist `/var/lib/atcr/` across restarts.** Losing the auth keys invalidates all active sessions; losing the database loses OAuth state and UI data.
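The environment variable convention above reduces to a mechanical transformation. A sketch (the helper name is illustrative, not part of AppView):

```python
def env_override(yaml_path: str) -> str:
    # Derive the ATCR_ environment variable that overrides a YAML config
    # path: dots become underscores, everything is upper-cased.
    return "ATCR_" + yaml_path.replace(".", "_").upper()

for path in ("server.default_hold_did", "ui.database_path", "jetstream.backfill_enabled"):
    print(f"{path} -> {env_override(path)}")
# server.default_hold_did -> ATCR_SERVER_DEFAULT_HOLD_DID
# ui.database_path -> ATCR_UI_DATABASE_PATH
# jetstream.backfill_enabled -> ATCR_JETSTREAM_BACKFILL_ENABLED
```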
## Deployment

### Docker (recommended)

`Dockerfile.appview` builds a minimal scratch image (~30MB) containing:

- Static `atcr-appview` binary (CGO-enabled with embedded SQLite)
- `healthcheck` binary for container health checks
- CA certificates and timezone data

**Port:** `5000` (HTTP)
**Volume:** `/var/lib/atcr` (auth keys, database, OAuth keys)
**Health check:** `GET /health` returns `{"status":"ok"}`

```bash
docker run -d \
  --name atcr-appview \
  -v ./config-appview.yaml:/config.yaml:ro \
  -v atcr-data:/var/lib/atcr \
  -p 5000:5000 \
  --health-cmd '/healthcheck http://localhost:5000/health' \
  --health-interval 30s \
  --restart unless-stopped \
  atcr-appview serve --config /config.yaml
```

### Production with reverse proxy

AppView serves HTTP on port 5000. For production, put a reverse proxy in front for HTTPS termination. The repository includes a working Caddy + Docker Compose setup at [`deploy/docker-compose.prod.yml`](../deploy/docker-compose.prod.yml) that runs AppView, Hold, and Caddy together with automatic TLS.

A minimal production compose override:

```yaml
services:
  atcr-appview:
    image: atcr-appview:latest
    command: ["serve", "--config", "/config.yaml"]
    environment:
      ATCR_SERVER_BASE_URL: https://registry.example.com
      ATCR_SERVER_DEFAULT_HOLD_DID: did:web:hold.example.com
    volumes:
      - ./config-appview.yaml:/config.yaml:ro
      - atcr-appview-data:/var/lib/atcr
    healthcheck:
      test: ["CMD", "/healthcheck", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  atcr-appview-data:
```

### Systemd (bare metal)

For non-Docker deployments, see the systemd service templates in [`deploy/upcloud/`](../deploy/upcloud/), which include security hardening (dedicated user, filesystem protection, private tmp).
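A minimal unit along those lines, as an illustrative sketch only (the binary path, config path, and user name are assumptions; the templates in [`deploy/upcloud/`](../deploy/upcloud/) remain the authoritative reference):

```ini
# /etc/systemd/system/atcr-appview.service (illustrative sketch)
[Unit]
Description=ATCR AppView registry frontend
After=network-online.target
Wants=network-online.target

[Service]
User=atcr
Group=atcr
ExecStart=/usr/local/bin/atcr-appview serve --config /etc/atcr/config-appview.yaml
Restart=on-failure
# Hardening in the spirit of the shipped templates:
# dedicated user, filesystem protection, private tmp.
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
NoNewPrivileges=true
ReadWritePaths=/var/lib/atcr

[Install]
WantedBy=multi-user.target
```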
## Deployment Scenarios

### Public Registry

Open to all ATProto users:

```yaml
# config-appview.yaml
server:
  base_url: "https://registry.example.com"
  default_hold_did: "did:web:hold01.example.com"

jetstream:
  backfill_enabled: true
```

The linked hold service should have `server.public: true` and `registration.allow_all_crew: true`.

### Private Organizational Registry

Restricted to crew members only:

```yaml
# config-appview.yaml
server:
  base_url: "https://registry.internal.example.com"
  default_hold_did: "did:web:hold.internal.example.com"
```

The linked hold service should have `server.public: false` and `registration.allow_all_crew: false`, with an explicit `registration.owner_did` set to the organization's DID.

### Local Development

```yaml
# config-appview.yaml
log_level: debug

server:
  default_hold_did: "did:web:127.0.0.1:8080"
  test_mode: true  # allows HTTP for DID resolution
```

Run a hold service locally with MinIO for S3-compatible storage. See [hold.md](hold.md) for hold setup.

## Web Interface

The AppView web UI provides:

- **Home page** - Featured repositories and recent pushes
- **Repository pages** - Tags, manifests, pull instructions, health status, vulnerability scan results
- **Search** - Find repositories by owner handle or repository name
- **User profiles** - View a user's repositories and starred images
- **Stars** - Favorite repositories (requires login)
- **Pull counts** - Image pull statistics
- **Multi-arch support** - Platform-specific manifests (linux/amd64, linux/arm64, etc.)
- **Health indicators** - Real-time hold service reachability
- **Device management** - Approve and revoke Docker credential helper pairings
- **Settings** - Choose default hold, view crew memberships, storage usage