ATCR AppView
The registry frontend component of ATCR (ATProto Container Registry)
Overview
AppView is the frontend server component of ATCR. It serves as the OCI-compliant registry API endpoint and web interface that Docker clients interact with when pushing and pulling container images.
AppView is the orchestration layer that:
- Serves the OCI Distribution API V2 - Compatible with Docker, containerd, podman, and all OCI clients
- Resolves ATProto identities - Converts handles (alice.bsky.social) and DIDs (did:plc:xyz123) to PDS endpoints
- Routes manifests - Stores container image manifests as ATProto records in users' Personal Data Servers
- Routes blobs - Proxies blob (layer) operations to hold services for S3-compatible storage
- Provides web UI - Browse repositories, search images, view tags, track pull counts, manage stars, and view vulnerability scan results
- Manages authentication - ATProto OAuth with device authorization flow, issues registry JWTs to Docker clients
The ATCR Ecosystem
AppView is the frontend of a multi-component architecture:
- AppView (this component) - Registry API + web interface
- Hold Service - Storage backend with embedded PDS for blob storage
- Credential Helper - Client-side tool for ATProto OAuth authentication
Data flow:
Docker Client → AppView (resolves identity) → User's PDS (stores manifest)
↓
Hold Service (stores blobs in S3/Storj/etc.)
Manifests (small JSON metadata) live in users' ATProto PDS, while blobs (large binary layers) live in hold services. AppView orchestrates the routing between these components.
When to Run Your Own AppView
Most users can simply use https://atcr.io - you don't need to run your own AppView.
Run your own AppView if you want to:
- Host a private/organizational container registry with ATProto authentication
- Run a public registry for a specific community
- Customize the registry UI or policies
- Maintain full control over registry infrastructure
Prerequisites:
- A running Hold service (required for blob storage)
- (Optional) Domain name with SSL/TLS certificates for production
- (Optional) Access to ATProto Jetstream for real-time indexing
Quick Start
1. Build the Docker image
docker build -t atcr-appview:latest -f Dockerfile.appview .
This produces a ~30MB scratch image with a statically-linked binary.
2. Generate a config file
docker run --rm atcr-appview config init > config-appview.yaml
This creates a fully-commented YAML file with all available options and their defaults. You can also generate it from a local binary:
./bin/atcr-appview config init config-appview.yaml
3. Set the required field
Edit config-appview.yaml and set server.default_hold_did to your hold service's DID:
server:
default_hold_did: "did:web:127.0.0.1:8080" # local dev
# default_hold_did: "did:web:hold01.example.com" # production
This is the only required configuration field. To find a hold's DID, visit its /.well-known/did.json endpoint.
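For example, the DID is the "id" field of that document. A minimal sketch for extracting it (hold01.example.com is a placeholder hostname; curl and python3 are assumed available):

```shell
# did_from_json reads a did.json document on stdin and prints its DID ("id" field)
did_from_json() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["id"])'
}

# In practice, pipe the well-known endpoint through it:
#   curl -s https://hold01.example.com/.well-known/did.json | did_from_json
```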
For production, also set your public URL:
server:
base_url: "https://registry.example.com"
default_hold_did: "did:web:hold01.example.com"
4. Run
docker run -d \
-v ./config-appview.yaml:/config.yaml:ro \
-v atcr-data:/var/lib/atcr \
-p 5000:5000 \
atcr-appview serve --config /config.yaml
5. Verify
curl http://localhost:5000/v2/
# Should return: {}
curl http://localhost:5000/health
# Should return: {"status":"ok"}
Configuration
AppView uses YAML configuration with environment variable overrides. The generated config-appview.yaml is the canonical reference — every field is commented inline with its purpose and default value.
Config loading priority (highest wins)
- Environment variables (ATCR_ prefix)
- YAML config file (--config)
- Built-in defaults
Environment variable convention
YAML paths map to env vars with ATCR_ prefix and _ separators:
server.default_hold_did → ATCR_SERVER_DEFAULT_HOLD_DID
server.base_url → ATCR_SERVER_BASE_URL
ui.database_path → ATCR_UI_DATABASE_PATH
jetstream.backfill_enabled → ATCR_JETSTREAM_BACKFILL_ENABLED
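The mapping is mechanical: uppercase the path, replace dots with underscores, and prefix ATCR_. A small sketch of that convention (not code from the project):

```shell
# Convert a YAML config path to its ATCR_ environment variable name.
# Dots become underscores, lowercase letters become uppercase.
yaml_to_env() {
  printf 'ATCR_%s\n' "$(echo "$1" | tr '.a-z' '_A-Z')"
}

yaml_to_env server.default_hold_did    # ATCR_SERVER_DEFAULT_HOLD_DID
yaml_to_env jetstream.backfill_enabled # ATCR_JETSTREAM_BACKFILL_ENABLED
```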
Config sections overview
| Section | Purpose | Notes |
|---|---|---|
| server | Listen address, public URL, hold DID, OAuth key, branding | Only default_hold_did is required |
| ui | Database path, theme, libSQL sync | All have defaults; auto-creates DB on first run |
| auth | JWT signing key/cert paths | Auto-generated on first run |
| jetstream | Real-time ATProto event streaming, backfill sync | Runs automatically; backfill enabled by default |
| health | Hold health check interval and cache TTL | Sensible defaults (15m) |
| log_shipper | Remote log shipping (Victoria, OpenSearch, Loki) | Disabled by default |
| legal | Terms/privacy page customization | Optional |
| credential_helper | Credential helper download source | Optional |
Auto-generated files
On first run, AppView auto-generates these under /var/lib/atcr/:
| File | Purpose |
|---|---|
| ui.db | SQLite database (OAuth sessions, stars, pull counts, device approvals) |
| auth/private-key.pem | RSA private key for signing registry JWTs |
| auth/private-key.crt | X.509 certificate for JWT verification |
| oauth/client.key | P-256 private key for OAuth client authentication |
Persist /var/lib/atcr/ across restarts. Losing the auth keys invalidates all active sessions; losing the database loses OAuth state and UI data.
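Before upgrades, it is worth snapshotting that directory. A minimal backup sketch (STATE_DIR and the archive name are illustrative; adjust to where the volume is mounted on your host):

```shell
# Archive the AppView state directory (auth keys + database) for backup.
# STATE_DIR defaults to the in-container path; override for your host layout.
STATE_DIR=${STATE_DIR:-/var/lib/atcr}
if [ -d "$STATE_DIR" ]; then
  tar czf "atcr-state-$(date +%Y%m%d).tgz" -C "$STATE_DIR" .
fi
```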
Deployment
Docker (recommended)
Dockerfile.appview builds a minimal scratch image (~30MB) containing:
- Static atcr-appview binary (CGO-enabled with embedded SQLite)
- healthcheck binary for container health checks
- CA certificates and timezone data
Port: 5000 (HTTP)
Volume: /var/lib/atcr (auth keys, database, OAuth keys)
Health check: GET /health returns {"status":"ok"}
docker run -d \
--name atcr-appview \
-v ./config-appview.yaml:/config.yaml:ro \
-v atcr-data:/var/lib/atcr \
-p 5000:5000 \
--health-cmd '/healthcheck http://localhost:5000/health' \
--health-interval 30s \
--restart unless-stopped \
atcr-appview serve --config /config.yaml
Production with reverse proxy
AppView serves HTTP on port 5000. For production, put a reverse proxy in front for HTTPS termination. The repository includes a working Caddy + Docker Compose setup at deploy/docker-compose.prod.yml that runs AppView, Hold, and Caddy together with automatic TLS.
A minimal production compose override:
services:
atcr-appview:
image: atcr-appview:latest
command: ["serve", "--config", "/config.yaml"]
environment:
ATCR_SERVER_BASE_URL: https://registry.example.com
ATCR_SERVER_DEFAULT_HOLD_DID: did:web:hold.example.com
volumes:
- ./config-appview.yaml:/config.yaml:ro
- atcr-appview-data:/var/lib/atcr
healthcheck:
test: ["CMD", "/healthcheck", "http://localhost:5000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
atcr-appview-data:
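On the proxy side, TLS termination can be as small as a two-line Caddyfile sketch (hostname and upstream service name are placeholders; the repository's deploy/docker-compose.prod.yml is the canonical setup):

```
registry.example.com {
    reverse_proxy atcr-appview:5000
}
```

Caddy obtains and renews the certificate for registry.example.com automatically.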
Systemd (bare metal)
For non-Docker deployments, see the systemd service templates in deploy/upcloud/ which include security hardening (dedicated user, filesystem protection, private tmp).
Deployment Scenarios
Public Registry
Open to all ATProto users:
# config-appview.yaml
server:
base_url: "https://registry.example.com"
default_hold_did: "did:web:hold01.example.com"
jetstream:
backfill_enabled: true
The linked hold service should have server.public: true and registration.allow_all_crew: true.
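Expressed as a hold-side config fragment (field paths taken from the sentence above; see the hold documentation for the full schema):

```yaml
# hold config fragment for a public registry (sketch, not a complete file)
server:
  public: true
registration:
  allow_all_crew: true
```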
Private Organizational Registry
Restricted to crew members only:
# config-appview.yaml
server:
base_url: "https://registry.internal.example.com"
default_hold_did: "did:web:hold.internal.example.com"
The linked hold service should have server.public: false and registration.allow_all_crew: false, with an explicit registration.owner_did set to the organization's DID.
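As a hold-side config fragment (field paths from the sentence above; the owner DID value is a placeholder):

```yaml
# hold config fragment for a private registry (sketch, not a complete file)
server:
  public: false
registration:
  allow_all_crew: false
  owner_did: "did:plc:exampleorg123"  # your organization's DID
```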
Local Development
# config-appview.yaml
log_level: debug
server:
default_hold_did: "did:web:127.0.0.1:8080"
test_mode: true # allows HTTP for DID resolution
Run a hold service locally with Minio for S3-compatible storage. See hold.md for hold setup.
Web Interface
The AppView web UI provides:
- Home page - Featured repositories and recent pushes
- Repository pages - Tags, manifests, pull instructions, health status, vulnerability scan results
- Search - Find repositories by owner handle or repository name
- User profiles - View a user's repositories and starred images
- Stars - Favorite repositories (requires login)
- Pull counts - Image pull statistics
- Multi-arch support - Platform-specific manifests (linux/amd64, linux/arm64, etc.)
- Health indicators - Real-time hold service reachability
- Device management - Approve and revoke Docker credential helper pairings
- Settings - Choose default hold, view crew memberships, storage usage