# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview

ATCR (ATProto Container Registry) is an OCI-compliant container registry that uses the AT Protocol for manifest storage and S3 for blob storage. This creates a decentralized container registry where manifests are stored in users' Personal Data Servers (PDS) while layers are stored in S3.
## Build Commands

```bash
# Build all binaries into the bin/ directory
go build -o bin/atcr-appview ./cmd/appview
go build -o bin/atcr-hold ./cmd/hold
go build -o bin/docker-credential-atcr ./cmd/credential-helper

# Run tests
go test ./...

# Run with race detector
go test -race ./...

# Update dependencies
go mod tidy

# Build Docker images
docker build -t atcr.io/appview:latest .
docker build -f Dockerfile.hold -t atcr.io/hold:latest .

# Or use docker-compose
docker-compose up -d

# Run locally (AppView) - configure via env vars (see .env.appview.example)
export ATCR_HTTP_ADDR=:5000
export ATCR_DEFAULT_HOLD=http://127.0.0.1:8080
./bin/atcr-appview serve

# Or use a .env file:
cp .env.appview.example .env.appview
# Edit .env.appview with your settings
source .env.appview
./bin/atcr-appview serve

# Legacy mode (still supported):
# ./bin/atcr-appview serve config/config.yml

# Run the hold service (configure via env vars - see .env.hold.example)
export HOLD_PUBLIC_URL=http://127.0.0.1:8080
export STORAGE_DRIVER=filesystem
export STORAGE_ROOT_DIR=/tmp/atcr-hold
export HOLD_OWNER=did:plc:your-did-here
./bin/atcr-hold
# Check logs for the OAuth URL, then visit it in a browser to complete registration
```
## Architecture Overview

### Core Design

ATCR uses distribution/distribution as a library and extends it through middleware to route different types of content to different storage backends:

- Manifests → ATProto PDS (small JSON metadata, stored as `io.atcr.manifest` records)
- Blobs/Layers → S3 or user-deployed storage (large binary data)
- Authentication → ATProto OAuth with DPoP + Docker credential helpers
### Three-Component Architecture

1. **AppView** (`cmd/appview`) - OCI Distribution API server
   - Resolves identities (handle/DID → PDS endpoint)
   - Routes manifests to the user's PDS
   - Routes blobs to a storage endpoint (default or BYOS)
   - Validates OAuth tokens via the PDS
   - Issues registry JWTs

2. **Hold Service** (`cmd/hold`) - Optional BYOS component
   - Lightweight HTTP server for presigned URLs
   - Supports S3, Storj, MinIO, filesystem, etc.
   - Authorization based on PDS records (hold.public, crew records)
   - Auto-registration via OAuth
   - Configured entirely via environment variables

3. **Credential Helper** (`cmd/credential-helper`) - Client-side OAuth
   - Implements the Docker credential helper protocol
   - ATProto OAuth flow with DPoP
   - Token caching and refresh
   - Exchanges an OAuth token for a registry JWT
### Request Flow

#### Push with Default Storage

1. Client: `docker push atcr.io/alice/myapp:latest`
2. HTTP request → `/v2/alice/myapp/manifests/latest`
3. Registry middleware (`pkg/appview/middleware/registry.go`)
   - Resolves "alice" to a DID and PDS endpoint
   - Queries alice's sailor profile for `defaultHold`
   - If not set, checks alice's `io.atcr.hold` records
   - Falls back to the AppView's `default_storage_endpoint`
   - Stores the DID/PDS/storage endpoint in context
4. Routing repository (`pkg/appview/storage/routing_repository.go`)
   - Creates a `RoutingRepository`
   - Returns the ATProto `ManifestStore` for manifests
   - Returns a `ProxyBlobStore` for blobs
5. Blob PUT → resolved hold service (redirects to S3/storage)
6. Manifest PUT → alice's PDS as an `io.atcr.manifest` record (includes `holdEndpoint`)
#### Push with BYOS (Bring Your Own Storage)

1. Client: `docker push atcr.io/alice/myapp:latest`
2. Registry middleware resolves alice → `did:plc:alice123`
3. Hold discovery via `findStorageEndpoint()`:
   a. Check alice's sailor profile for `defaultHold`
   b. If not set, check alice's `io.atcr.hold` records
   c. Fall back to the AppView's `default_storage_endpoint`
4. Found: alice's profile has `defaultHold = "https://alice-storage.fly.dev"`
5. Routing repository returns `ProxyBlobStore(alice-storage.fly.dev)`
6. `ProxyBlobStore` calls alice-storage.fly.dev for a presigned URL
7. Storage service validates alice's DID and generates an S3 presigned URL
8. Client is redirected to upload the blob directly to alice's S3/Storj
9. Manifest stored in alice's PDS with `holdEndpoint = "https://alice-storage.fly.dev"`
#### Pull Flow

1. Client: `docker pull atcr.io/alice/myapp:latest`
2. `GET /v2/alice/myapp/manifests/latest`
3. AppView fetches the manifest from alice's PDS
4. Manifest contains `holdEndpoint = "https://alice-storage.fly.dev"`
5. Hold endpoint cached: (alice's DID, "myapp") → "https://alice-storage.fly.dev"
6. Client requests blobs: `GET /v2/alice/myapp/blobs/sha256:abc123`
7. AppView checks the cache and routes to the hold from the manifest (not re-discovered)
8. `ProxyBlobStore` calls alice-storage.fly.dev for a presigned download URL
9. Client is redirected to download the blob directly from alice's S3

Key insight: pull uses the historical `holdEndpoint` from the manifest, ensuring blobs are fetched from the hold where they were originally pushed, even if alice later changes her default hold.
### Name Resolution

Names follow the pattern: `atcr.io/<identity>/<image>:<tag>`

Where `<identity>` can be:

- Handle: `alice.bsky.social` → resolved via `.well-known/atproto-did`
- DID: `did:plc:xyz123` → resolved via the PLC directory

Resolution happens in `pkg/atproto/resolver.go`:

- Handle → DID (via DNS/HTTPS)
- DID → PDS endpoint (via DID document)
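The two identity forms above can be told apart before any network resolution happens. A minimal sketch of that split (the helper name `classifyIdentity` is illustrative, not a function from the repo):

```go
package main

import (
	"fmt"
	"strings"
)

// classifyIdentity is a hypothetical helper showing how the <identity>
// segment splits into the two forms ATCR resolves differently:
// DIDs go to the PLC directory, handles to .well-known/atproto-did.
func classifyIdentity(identity string) string {
	if strings.HasPrefix(identity, "did:") {
		return "did" // e.g. did:plc:xyz123 → PLC directory lookup
	}
	return "handle" // e.g. alice.bsky.social → DNS/HTTPS lookup
}

func main() {
	fmt.Println(classifyIdentity("alice.bsky.social")) // handle
	fmt.Println(classifyIdentity("did:plc:xyz123"))    // did
}
```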
### Middleware System

ATCR uses middleware and routing to handle requests:

1. **Registry Middleware** (`pkg/appview/middleware/registry.go`)
   - Wraps `distribution.Namespace`
   - Intercepts `Repository(name)` calls
   - Performs name resolution (alice → did:plc:xyz → pds.example.com)
   - Queries the PDS for `io.atcr.hold` records to find the storage endpoint
   - Stores the resolved identity and storage endpoint in context

2. **Auth Middleware** (`pkg/appview/middleware/auth.go`)
   - Validates JWT tokens from Docker clients
   - Extracts the DID from token claims
   - Injects the authenticated identity into context

3. **Routing Repository** (`pkg/appview/storage/routing_repository.go`)
   - Implements `distribution.Repository`
   - Returns custom `Manifests()` and `Blobs()` implementations
   - Routes manifests to ATProto, blobs to S3 or BYOS
### Authentication Architecture

#### ATProto OAuth with DPoP

ATCR implements the full ATProto OAuth specification with mandatory security features.

Required components:

- DPoP (RFC 9449) - cryptographic proof-of-possession for every request
- PAR (RFC 9126) - Pushed Authorization Requests for server-to-server parameter exchange
- PKCE (RFC 7636) - Proof Key for Code Exchange to prevent authorization code interception

Key components (`pkg/auth/oauth/`):

- **Client** (`client.go`) - Core OAuth client with encapsulated configuration
  - `NewClient(baseURL)` - accepts a base URL, derives the client ID and redirect URI
  - `NewClientWithKey(baseURL, dpopKey)` - for token refresh with a stored DPoP key
  - `ClientID()` - computes localhost vs production client ID dynamically
  - `RedirectURI()` - returns `baseURL + "/auth/oauth/callback"`
  - `GetDefaultScopes()` - returns the ATCR registry scopes
  - All OAuth flows (authorization, token exchange, refresh) in one place
- **DPoP Transport** (`transport.go`) - HTTP RoundTripper that auto-adds DPoP headers
- **Token Storage** (`tokenstorage.go`) - Persists refresh tokens and DPoP keys for the AppView
  - File-based storage in `/var/lib/atcr/refresh-tokens.json` (AppView)
  - The credential helper uses `~/.atcr/oauth-token.json`
- **Refresher** (`refresher.go`) - Token refresh manager for the AppView
  - Caches access tokens with automatic refresh
  - Per-DID locking prevents concurrent refresh races
  - Uses Client methods for consistency
- **Server** (`server.go`) - OAuth authorization endpoints for the AppView
  - `GET /auth/oauth/authorize` - starts the OAuth flow
  - `GET /auth/oauth/callback` - handles the OAuth callback
  - Uses Client methods for authorization and token exchange
- **Interactive Flow** (`flow.go`) - Reusable OAuth flow for CLI tools
  - Used by the credential helper and hold service registration
  - Two-phase callback setup ensures PAR metadata availability
Authentication flow:

1. User configures Docker to use the credential helper (adds it to config.json)
2. On the first `docker push`/`pull`, Docker calls the credential helper
3. Credential helper opens a browser → AppView OAuth page
4. AppView handles the OAuth flow:
   - Resolves handle → DID → PDS endpoint
   - Discovers OAuth server metadata from the PDS
   - PAR request with DPoP header → gets `request_uri`
   - User authorizes in the browser
   - AppView exchanges the code for an OAuth token with a DPoP proof
   - AppView stores: OAuth token, refresh token, DPoP key, DID, handle
5. AppView shows the device approval page: "Can [device] push to your account?"
6. User approves the device
7. AppView issues a registry JWT with the validated DID
8. AppView returns a JSON token to the credential helper (via callback or browser display)
9. Credential helper saves the registry JWT locally
10. Helper returns the registry JWT to Docker

Later (subsequent `docker push`):

11. Docker calls the credential helper
12. Helper returns the cached registry JWT (or re-authenticates if expired)
Key distinction: the credential helper never manages OAuth tokens or DPoP keys directly. The AppView owns the OAuth session and issues registry JWTs to the credential helper. This means the AppView holds user OAuth tokens and DPoP keys, which it needs for:

- Writing manifests to the user's PDS
- Validating user sessions
- Delegating access to hold services

Security:

- Tokens validated against the authoritative source (the user's PDS)
- No trust in client-provided identity information
- DPoP binds tokens to a specific client key
- 15-minute token expiry for registry JWTs
## Key Components

### ATProto Integration (`pkg/atproto/`)

`resolver.go`: DID and handle resolution

- `ResolveIdentity()`: alice → did:plc:xyz → pds.example.com
- `ResolveHandle()`: uses `.well-known/atproto-did`
- `ResolvePDS()`: parses the DID document for the PDS endpoint

`client.go`: ATProto PDS client

- `PutRecord()`: store a manifest as an ATProto record
- `GetRecord()`: retrieve a manifest from the PDS
- `DeleteRecord()`: remove a manifest
- Uses the XRPC protocol (`com.atproto.repo.*`)

`lexicon.go`: ATProto record schemas

- `ManifestRecord`: OCI manifest stored as an ATProto record (includes the `holdEndpoint` field)
- `TagRecord`: tag pointing to a manifest digest
- `HoldRecord`: storage hold definition (for BYOS)
- `HoldCrewRecord`: hold crew membership/permissions
- `SailorProfileRecord`: user profile with the `defaultHold` preference
- Collections: `io.atcr.manifest`, `io.atcr.tag`, `io.atcr.hold`, `io.atcr.hold.crew`, `io.atcr.sailor.profile`

`profile.go`: sailor profile management

- `EnsureProfile()`: creates a profile with the default hold on first authentication
- `GetProfile()`: retrieves the user's profile from the PDS
- `UpdateProfile()`: updates the user's profile

`manifest_store.go`: implements `distribution.ManifestService`

- Stores OCI manifests as ATProto records
- Digest-based addressing (sha256:abc123 → record key)
- Converts between OCI and ATProto formats
### Storage Layer (`pkg/appview/storage/`)

`routing_repository.go`: routes content by type

- `Manifests()` → returns the ATProto ManifestStore (caches the instance for hold endpoint extraction)
- `Blobs()` → checks the hold cache for pull, uses discovery for push
  - Pull: uses the cached `holdEndpoint` from the manifest (historical reference)
  - Push: uses the discovery-based endpoint from `findStorageEndpoint()`
  - Always returns a `ProxyBlobStore` (routes to the hold service)
- Implements the `distribution.Repository` interface

`hold_cache.go`: in-memory hold endpoint cache

- Caches `(DID, repository) → holdEndpoint` for pull operations
- TTL: 10 minutes (covers typical pull operations)
- Cleanup: a background goroutine runs every 5 minutes
- NOTE: simple in-memory cache for the MVP; for production, use Redis or similar
- Prevents expensive ATProto lookups on every blob request

`proxy_blob_store.go`: external storage proxy

- Calls the user's storage service for presigned URLs
- Issues HTTP redirects for blob uploads/downloads
- Implements the full `distribution.BlobStore` interface
- Supports multipart uploads for large blobs
- Used when the user has an `io.atcr.hold` record
### AppView Web UI (`pkg/appview/`)

The AppView includes a web interface for browsing the registry.

Features:

- Repository browsing and search
- Star/favorite repositories
- Pull count tracking
- User profiles and settings
- OAuth-based authentication for web users

Database layer (`pkg/appview/db/`):

- SQLite database for metadata (stars, pulls, repository info)
- Schema migrations via SQL files in `pkg/appview/db/schema.go`
- Stores: OAuth sessions, device flows, repository metadata
- NOTE: simple SQLite for the MVP; for production multi-instance deployments, use PostgreSQL

Jetstream integration (`pkg/appview/jetstream/`):

- Consumes the ATProto Jetstream for real-time updates
- Backfills repository records from the PDS
- Indexes manifests, tags, and repository metadata
- A worker processes incoming events

Web handlers (`pkg/appview/handlers/`):

- `home.go` - landing page
- `repository.go` - repository detail pages
- `search.go` - search functionality
- `auth.go` - OAuth login/logout for the web
- `settings.go` - user settings management
- `api.go` - JSON API endpoints

Static assets (`pkg/appview/static/`, `pkg/appview/templates/`):

- Templates use Go `html/template`
- JavaScript in `static/js/app.js`
- Minimal CSS for a clean UI
### Hold Service (`cmd/hold/`)

Lightweight standalone service for BYOS (Bring Your Own Storage).

Architecture:

- Reuses distribution's storage driver factory
- Supports all distribution drivers: S3, Storj, MinIO, Azure, GCS, filesystem
- Authorization follows ATProto's public-by-default model
- Generates presigned URLs (15-minute expiry) or proxies uploads/downloads

Authorization model:

Read access:

- Public hold (`HOLD_PUBLIC=true`): anonymous + all authenticated users
- Private hold (`HOLD_PUBLIC=false`): authenticated users only (any ATCR user)

Write access:

- Hold owner or crew members only
- Verified via `io.atcr.hold.crew` records in the owner's PDS

Key insight: "private" gates anonymous access, not authenticated access. This reflects ATProto's current limitation (no private PDS records yet).

Endpoints:

- `POST /get-presigned-url` - get a download URL for a blob
- `POST /put-presigned-url` - get an upload URL for a blob
- `GET /blobs/{digest}` - proxy download (fallback if presigned URLs are unsupported)
- `PUT /blobs/{digest}` - proxy upload (fallback)
- `POST /register` - manual registration endpoint
- `GET /health` - health check

Configuration: environment variables (see .env.example)

- `HOLD_PUBLIC_URL` - public URL of the hold service (required)
- `STORAGE_DRIVER` - storage driver type (s3, filesystem)
- `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` - S3 credentials
- `S3_BUCKET`, `S3_ENDPOINT` - S3 configuration
- `HOLD_PUBLIC` - allow public reads (default: false)
- `HOLD_OWNER` - DID for auto-registration (optional)

Deployment: can run on Fly.io, Railway, Docker, Kubernetes, etc.
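The read/write rules above reduce to a small decision function. The following is a hypothetical condensation for illustration, not the hold service's actual authorization code:

```go
package main

import "fmt"

// authorize condenses the documented rules: reads are gated only for
// anonymous callers on private holds; writes require the hold owner or
// a crew member (per io.atcr.hold.crew records in the owner's PDS).
// An empty callerDID stands for an anonymous request.
func authorize(op string, public bool, callerDID, ownerDID string, crew map[string]bool) bool {
	authenticated := callerDID != ""
	switch op {
	case "read":
		return public || authenticated
	case "write":
		return callerDID == ownerDID || crew[callerDID]
	}
	return false
}

func main() {
	crew := map[string]bool{"did:plc:bob": true}
	fmt.Println(authorize("read", false, "", "did:plc:alice", crew))              // false: anonymous on a private hold
	fmt.Println(authorize("read", false, "did:plc:carol", "did:plc:alice", crew)) // true: any authenticated user
	fmt.Println(authorize("write", true, "did:plc:bob", "did:plc:alice", crew))   // true: crew member
}
```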
## ATProto Storage Model

Manifests are stored as records with this structure:

```json
{
  "$type": "io.atcr.manifest",
  "repository": "myapp",
  "digest": "sha256:abc123...",
  "holdEndpoint": "https://hold1.alice.com",
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": { "digest": "sha256:...", "size": 1234 },
  "layers": [
    { "digest": "sha256:...", "size": 5678 }
  ],
  "createdAt": "2025-09-30T..."
}
```

Record key = manifest digest (without the algorithm prefix)
Collection = `io.atcr.manifest`
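The record-key rule above (digest minus its algorithm prefix) is a one-line transform. A sketch, with `recordKey` as an illustrative name rather than a function from the repo:

```go
package main

import (
	"fmt"
	"strings"
)

// recordKey derives an ATProto record key from an OCI digest by
// stripping the algorithm prefix (everything up to the first colon),
// per the storage model above.
func recordKey(digest string) string {
	if i := strings.IndexByte(digest, ':'); i >= 0 {
		return digest[i+1:]
	}
	return digest // already bare; returned unchanged
}

func main() {
	fmt.Println(recordKey("sha256:abc123")) // abc123
}
```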
## Sailor Profile System

ATCR uses a "sailor profile" to manage user preferences for hold (storage) selection. The nautical theme reflects the architecture:

- Sailors = registry users
- Captains = hold owners
- Crew = hold members with access
- Holds = storage endpoints (BYOS)

Profile record (`io.atcr.sailor.profile`):

```json
{
  "$type": "io.atcr.sailor.profile",
  "defaultHold": "https://hold1.alice.com",
  "createdAt": "2025-10-02T...",
  "updatedAt": "2025-10-02T..."
}
```
Profile management:

- Created automatically on first authentication (OAuth or Basic Auth)
- If the AppView has `default_storage_endpoint` configured, the profile gets that as its `defaultHold`
- Users can update their profile to change the default hold (future: via UI)
- Setting `defaultHold` to null opts out of defaults (use own holds or the AppView default)
Hold resolution priority (in `findStorageEndpoint()`):

1. Profile's `defaultHold` - the user's explicit preference
2. User's `io.atcr.hold` records - the user's own holds
3. AppView's `default_storage_endpoint` - fallback default

This ensures:

- Users can join shared holds by setting their profile's `defaultHold`
- Users can opt out of defaults (set `defaultHold` to null)
- The URL structure remains `atcr.io/<owner>/<image>` (ownership-based, not hold-based)
- Hold choice is transparent infrastructure (like choosing an S3 region)
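The three-step priority above is a first-non-empty fallback chain. A minimal sketch (the function and parameter names are illustrative, not `findStorageEndpoint()` itself):

```go
package main

import "fmt"

// resolveHold mirrors the documented priority order: the profile's
// defaultHold, then the user's own io.atcr.hold records, then the
// AppView-wide default.
func resolveHold(profileDefault string, ownHolds []string, appViewDefault string) string {
	if profileDefault != "" {
		return profileDefault // explicit user preference wins
	}
	if len(ownHolds) > 0 {
		return ownHolds[0] // fall back to the user's own holds
	}
	return appViewDefault // last resort: AppView default
}

func main() {
	fmt.Println(resolveHold("https://hold1.alice.com", nil, "https://default.atcr.io"))
	fmt.Println(resolveHold("", []string{"https://alice-storage.fly.dev"}, "https://default.atcr.io"))
	fmt.Println(resolveHold("", nil, "https://default.atcr.io"))
}
```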
## Key Design Decisions

- No fork of distribution: uses distribution as a library, extends it via middleware
- Hybrid storage: manifests in ATProto (small, federated), blobs in S3 or BYOS (cheap, scalable)
- Content addressing: manifests stored by digest, blobs deduplicated globally
- ATProto-native: manifests are first-class ATProto records, discoverable via the AT Protocol
- OCI compliant: fully compatible with Docker/containerd/podman
- Account-agnostic AppView: the server validates any user's token and queries their PDS for config
- BYOS architecture: users can deploy their own storage service; the AppView just routes
- OAuth with DPoP: full ATProto OAuth implementation with mandatory DPoP proofs
- Sailor profile system: user preferences for hold selection, transparent to image ownership
- Historical hold references: manifests store `holdEndpoint` for immutable blob location tracking
## Configuration

### AppView configuration (environment variables)

Both the AppView and the hold service follow the same pattern: zero config files, all configuration via environment variables.

See .env.appview.example for all available options. Key environment variables:

Server:

- `ATCR_HTTP_ADDR` - HTTP listen address (default: `:5000`)
- `ATCR_BASE_URL` - public URL for the OAuth/JWT realm (auto-detected in dev)
- `ATCR_DEFAULT_HOLD` - default hold endpoint for blob storage (REQUIRED)

Authentication:

- `ATCR_AUTH_KEY_PATH` - JWT signing key path (default: `/var/lib/atcr/auth/private-key.pem`)
- `ATCR_TOKEN_EXPIRATION` - JWT expiration in seconds (default: 300)

UI:

- `ATCR_UI_ENABLED` - enable the web interface (default: true)
- `ATCR_UI_DATABASE_PATH` - SQLite database path (default: `/var/lib/atcr/ui.db`)

Jetstream:

- `JETSTREAM_URL` - ATProto event stream URL
- `ATCR_BACKFILL_ENABLED` - enable periodic sync (default: false)

Legacy: `config/config.yml` is still supported but deprecated. Use environment variables instead.
### Hold Service configuration (environment variables)

See .env.hold.example for all available options. Key environment variables:

- `HOLD_PUBLIC_URL` - public URL of the hold service (REQUIRED)
- `STORAGE_DRIVER` - storage backend (s3, filesystem)
- `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` - S3 credentials
- `S3_BUCKET`, `S3_ENDPOINT` - S3 configuration
- `HOLD_PUBLIC` - allow public reads (default: false)
- `HOLD_OWNER` - DID for auto-registration (optional)

### Credential Helper

- Token storage: `~/.atcr/credential-helper-token.json` (or Docker's credential store)
- Contains: the registry JWT issued by the AppView (NOT OAuth tokens)
- OAuth session managed entirely by the AppView
## Development Notes

General:

- Middleware lives in `pkg/appview/middleware/` (auth.go, registry.go)
- Storage routing lives in `pkg/appview/storage/` (routing_repository.go, proxy_blob_store.go, hold_cache.go)
- Storage drivers are imported as `_ "github.com/distribution/distribution/v3/registry/storage/driver/s3-aws"`
- The hold service reuses distribution's driver factory for multi-backend support

OAuth implementation:

- The Client (`pkg/auth/oauth/client.go`) encapsulates all OAuth configuration
- Token validation via `com.atproto.server.getSession` ensures no trust in client-provided identity
- All ATCR components use the standardized `/auth/oauth/callback` path
- Client ID generation (localhost query-based vs production metadata URL) is handled internally
## Testing Strategy

When writing tests:

- Mock the ATProto client for manifest operations
- Mock the S3 driver for blob operations
- Test name resolution independently
- Integration tests require a real PDS + S3
## Common Tasks

Adding a new ATProto record type:

- Define the schema in `pkg/atproto/lexicon.go`
- Add a collection constant (e.g., `MyCollection = "io.atcr.my-type"`)
- Add a constructor function (e.g., `NewMyRecord()`)
- Update client methods if needed

Modifying storage routing:

- Edit `pkg/appview/storage/routing_repository.go`
- Update the `Blobs()` method to change the routing logic
- Consider the context values: `storage.endpoint`, `atproto.did`

Changing name resolution:

- Modify `pkg/atproto/resolver.go` for DID/handle resolution
- Update `pkg/appview/middleware/registry.go` if changing the routing logic
- Remember: `findStorageEndpoint()` queries the PDS for `io.atcr.hold` records

Working with the OAuth client:

- The Client is self-contained: pass `baseURL` and it handles the client ID/redirect URI/scopes
- For the AppView server/refresher: use `NewClient(baseURL)` or `NewClientWithKey(baseURL, storedKey)`
- For custom scopes: call `client.SetScopes(customScopes)` after initialization
- Standard callback path: `/auth/oauth/callback` (used by all ATCR components)
- Client methods are consistent across authorization, token exchange, and refresh flows

Adding BYOS support for a user:

- The user sets environment variables (storage credentials, public URL)
- The user runs the hold service with `HOLD_OWNER` set - auto-registration via OAuth
- The hold service creates `io.atcr.hold` + `io.atcr.hold.crew` records in the PDS
- The AppView automatically queries the PDS and routes blobs to the user's storage
- No AppView changes needed - fully decentralized

Supporting a new storage backend:

- Ensure the driver is registered in the `cmd/hold/main.go` imports
- Distribution supports: S3, Azure, GCS, Swift, filesystem, OSS
- For custom drivers: implement the `storagedriver.StorageDriver` interface
- Add a case to `buildStorageConfig()` in `cmd/hold/main.go`
- Update `.env.example` with the new driver's env vars

Working with the database:

- Schema defined in `pkg/appview/db/schema.go`
- Queries in `pkg/appview/db/queries.go`
- Stores for OAuth, devices, and sessions live in separate files
- Migrations run automatically on startup
- Database path configurable via the `ATCR_UI_DATABASE_PATH` env var

Adding web UI features:

- Add a handler in `pkg/appview/handlers/`
- Register the route in `cmd/appview/serve.go`
- Create a template in `pkg/appview/templates/pages/`
- Use the existing auth middleware for protected routes
- API endpoints return JSON, pages return HTML
## Important Context Values

When working with the codebase, these context values are used for routing:

- `atproto.did` - resolved DID for the user (e.g., `did:plc:alice123`)
- `atproto.pds` - the user's PDS endpoint (e.g., `https://bsky.social`)
- `atproto.identity` - the original identity string (handle or DID)
- `storage.endpoint` - storage service URL (if the user has an `io.atcr.hold` record)
- `auth.did` - authenticated DID from the validated token
## Documentation References

- BYOS architecture: see `docs/BYOS.md` for complete BYOS documentation
- OAuth implementation: see `docs/OAUTH.md` for OAuth/DPoP flow details
- ATProto spec: https://atproto.com/specs/oauth
- OCI Distribution spec: https://github.com/opencontainers/distribution-spec
- DPoP RFC: https://datatracker.ietf.org/doc/html/rfc9449
- PAR RFC: https://datatracker.ietf.org/doc/html/rfc9126
- PKCE RFC: https://datatracker.ietf.org/doc/html/rfc7636