ATProto Container Registry (atcr.io) Implementation Plan
Project Structure
/home/data/atcr.io/
├── cmd/
│ └── registry/
│ └── main.go # Entrypoint that imports distribution
├── pkg/
│ ├── atproto/
│ │ ├── client.go # ATProto client wrapper (using indigo)
│ │ ├── manifest_store.go # Implements distribution.ManifestService
│ │ ├── resolver.go # DID/handle resolution (alice → did:plc:...)
│ │ └── lexicon.go # ATProto record schemas for manifests
│ ├── storage/
│ │ ├── s3_blob_store.go # Wraps distribution's S3 driver for blobs
│ │ └── routing_repository.go # Routes manifests→ATProto, blobs→S3
│ ├── middleware/
│ │ ├── repository.go # Repository middleware registration
│ │ └── registry.go # Registry middleware for name resolution
│ └── server/
│ └── handler.go # HTTP wrapper for custom name resolution
├── config/
│ └── config.yml # Registry configuration
├── go.mod
├── go.sum
├── Dockerfile
├── README.md
└── CLAUDE.md # Updated with architecture docs
Implementation Steps
Phase 1: Project Setup
1. Initialize Go module with github.com/distribution/distribution/v3 and github.com/bluesky-social/indigo
2. Create basic project structure
3. Set up cmd/registry/main.go that imports distribution and registers middleware
Phase 2: Core ATProto Integration
4. Implement DID/handle resolver (pkg/atproto/resolver.go)
- Resolve handles to DIDs (alice.bsky.social → did:plc:xyz)
- Discover PDS endpoints from DID documents
5. Create ATProto client wrapper (pkg/atproto/client.go)
- Wrap indigo SDK for manifest storage
- Handle authentication with PDS
6. Design ATProto lexicon for manifest records (pkg/atproto/lexicon.go)
- Define schema for storing OCI manifests as ATProto records
Phase 3: Storage Layer
7. Implement ATProto manifest store (pkg/atproto/manifest_store.go)
- Implements distribution.ManifestService
- Stores/retrieves manifests from PDS
8. Implement S3 blob store wrapper (pkg/storage/s3_blob_store.go)
- Wraps distribution's built-in S3 driver
9. Create routing repository (pkg/storage/routing_repository.go)
- Returns ATProto store for Manifests()
- Returns S3 store for Blobs()
Phase 4: Middleware Layer
10. Implement repository middleware (pkg/middleware/repository.go)
- Registers routing repository
- Configurable via YAML
11. Implement registry/namespace middleware (pkg/middleware/registry.go)
- Intercepts Repository(name) calls
- Performs name resolution before repository creation
Phase 5: HTTP Layer (if needed)
12. Create custom HTTP handler (pkg/server/handler.go)
- Wraps distribution's HTTP handlers
- Performs early name resolution: atcr.io/alice/myimage → resolve alice
- Delegates to distribution handlers
Phase 6: Configuration & Deployment
13. Create registry configuration (config/config.yml)
14. Create Dockerfile for building atcr-appview binary
15. Write README.md with usage instructions
Phase 7: Documentation
16. Update CLAUDE.md with:
- Architecture overview (ATProto for manifests, S3 for blobs)
- Build/run/test commands
- How name resolution works
- Middleware registration flow
- Key design decisions
Key Technical Decisions
ATProto Storage Design:
- Manifests stored as ATProto records in user's PDS
- Each image manifest is a record in an io.atcr.manifest collection
- Record key = image digest (sha256:...)
Name Resolution:
- atcr.io/alice/myimage → resolve alice to DID → discover PDS
- Support both handles (alice.bsky.social) and DIDs (did:plc:xyz)
Blob Storage:
- All layers/blobs in S3 (content-addressable by digest)
- Manifests reference S3-stored blobs by digest
- S3 provides cheap, durable blob storage
Middleware Strategy:
- Repository middleware for storage routing
- Registry middleware (or HTTP wrapper) for name resolution
- No fork of distribution core needed
To match Docker Hub/ghcr.io/gcr.io, here's what we need:
● Implementation Plan (Drop-in replacement for Docker Hub/ghcr.io)
Flow 1: Standard Token Auth (Like Docker Hub) - PRIMARY
User experience
docker login atcr.io -u alice.bsky.social
docker push atcr.io/alice/myapp:latest
Behind the scenes
- docker login stores credentials locally
- docker push → Registry returns 401 with WWW-Authenticate: Bearer realm="https://atcr.io/auth/token"...
- Docker auto-calls /auth/token with Basic auth (alice.bsky.social:app-password)
- Auth service validates against ATProto createSession
- Returns JWT token with scope for alice/myapp
- Docker uses JWT for manifest/blob uploads
- Registry validates JWT signature and scope
Components:
- /auth/token endpoint (standalone service or embedded)
- ATProto session validator (username/password → validate via PDS)
- JWT issuer/signer
- JWT validator middleware for registry
Flow 2: Credential Helper (Like gcr.io) - ADVANCED
User experience
docker-credential-atcr configure
Opens browser for ATProto OAuth
docker push atcr.io/alice/myapp:latest
No manual login needed
Behind the scenes
- Helper does OAuth flow → gets ATProto access token
- Caches token securely
- When Docker needs credentials, calls helper via stdin/stdout
- Helper exchanges ATProto token for registry JWT at /auth/exchange
- Returns JWT to Docker
- Docker uses JWT for requests
Components:
- cmd/credential-helper/main.go - Standalone binary
- ATProto OAuth client
- Token exchange endpoint (/auth/exchange)
- Secure token cache
Architecture:
pkg/auth/
├── token/
│   ├── service.go     # HTTP handler for /auth/token
│   ├── claims.go      # JWT claims structure
│   ├── issuer.go      # Signs JWTs
│   └── validator.go   # Validates JWTs (middleware for registry)
├── atproto/
│   ├── session.go     # Validates username/password via ATProto
│   └── oauth.go       # OAuth flow implementation
├── exchange/
│   └── handler.go     # /auth/exchange endpoint (OAuth → JWT)
└── scope.go           # Parses/validates Docker scopes
cmd/
├── registry/main.go   # Registry server (existing)
├── auth/main.go       # Standalone auth service (optional)
└── credential-helper/
    └── main.go        # docker-credential-atcr binary
Config:
auth:
  token:
    realm: https://atcr.io/auth/token   # Where Docker gets tokens
    service: atcr.io
    issuer: atcr.io
    rootcertbundle: /etc/atcr/token-signing.crt
    privatekey: /etc/atcr/token-signing.pem
    expiration: 300
atproto:
# Used by auth service to validate credentials
pds_endpoint: https://bsky.social
client_id: atcr-appview
oauth_redirect: http://localhost:8888/callback
ATProto OAuth Implementation Plan
Architecture
Dependencies:
- authelia.com/client/oauth2 - OAuth + PAR support
- github.com/AxisCommunications/go-dpop - DPoP proof generation (handles JWK automatically)
- github.com/golang-jwt/jwt/v5 - JWT library (transitive via go-dpop)
- Our existing pkg/atproto/resolver.go - ATProto identity resolution
Implementation Components
1. OAuth Client (pkg/auth/oauth/client.go) - ~100 lines
type Client struct {
	config      *oauth2.Config
	dpopKey     *ecdsa.PrivateKey
	resolver    *atproto.Resolver
	clientID    string // URL to our metadata document
	redirectURI string
	dpopNonce   string // Server-provided nonce
}
func NewClient(clientID, redirectURI string) (*Client, error)
func (c *Client) AuthorizeURL(handle string, scopes []string) (string, error)
func (c *Client) Exchange(code string) (*Token, error)
func (c *Client) addDPoPHeader(req *http.Request, method, url string) error
Flow:
1. Generate ECDSA P-256 key for DPoP
2. Discover authorization server from handle/DID
3. Use authelia's PushedAuth() for PAR with DPoP header
4. Exchange code for token with DPoP proof
2. Authorization Server Discovery (pkg/auth/oauth/discovery.go) - ~30 lines
type AuthServerMetadata struct {
	Issuer                             string   `json:"issuer"`
	AuthorizationEndpoint              string   `json:"authorization_endpoint"`
	TokenEndpoint                      string   `json:"token_endpoint"`
	PushedAuthorizationRequestEndpoint string   `json:"pushed_authorization_request_endpoint"`
	DPoPSigningAlgValuesSupported      []string `json:"dpop_signing_alg_values_supported"`
}
func DiscoverAuthServer(pdsEndpoint string) (*AuthServerMetadata, error)
Implementation:
- GET {pds}/.well-known/oauth-authorization-server
- Parse JSON metadata
- Validate required endpoints exist
3. Client Metadata Server (pkg/auth/oauth/metadata.go) - ~40 lines
type ClientMetadata struct {
	ClientID              string   `json:"client_id"`
	RedirectURIs          []string `json:"redirect_uris"`
	GrantTypes            []string `json:"grant_types"`
	ResponseTypes         []string `json:"response_types"`
	Scope                 string   `json:"scope"`
	DPoPBoundAccessTokens bool     `json:"dpop_bound_access_tokens"`
}
func ServeMetadata(clientID string, redirectURIs []string) http.Handler
Serves: https://atcr.io/oauth/client-metadata.json
4. Token Storage (pkg/auth/oauth/storage.go) - ~50 lines
type TokenStore struct {
	AccessToken  string
	RefreshToken string
	DPoPKey      *ecdsa.PrivateKey // Persist for refresh
	ExpiresAt    time.Time
}
func (s *TokenStore) Save(path string) error
func LoadTokenStore(path string) (*TokenStore, error)
Storage location: ~/.atcr/oauth-tokens.json
5. Credential Helper (cmd/credential-helper/main.go) - ~80 lines
// Docker credential helper protocol
// Reads JSON from stdin, writes to stdout
func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "get":
		handleGet() // Return credentials for registry
	case "store":
		handleStore() // Store credentials
	case "erase":
		handleErase() // Remove credentials
	}
}
func handleGet() {
	var request struct {
		ServerURL string `json:"ServerURL"`
	}
	json.NewDecoder(os.Stdin).Decode(&request)
	// Load token from storage
	// Exchange for registry JWT if needed
	// Output: {"Username": "oauth2", "Secret": "<jwt>"}
}
6. OAuth Flow (cmd/credential-helper/oauth.go) - ~60 lines
func RunOAuthFlow(handle string) (*TokenStore, error) {
	// 1. Start local HTTP server on :8888
	// 2. Open browser to authorization URL
	// 3. Wait for callback with code
	// 4. Exchange code for token
	// 5. Save token store
	// 6. Return token
}
func startCallbackServer() (chan string, *http.Server)
Complete Flow Example
User runs:
docker-credential-atcr configure
What happens:
1. Generate DPoP key (client.go)
dpopKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
2. Resolve handle → DID → PDS (using our resolver)
did, pds, _ := resolver.ResolveIdentity(ctx, "alice.bsky.social")
3. Discover auth server (discovery.go)
metadata, _ := DiscoverAuthServer(pds)
// Returns: PAR endpoint, token endpoint, etc.
4. Create PAR request with DPoP (client.go + go-dpop)
// Generate DPoP proof for PAR endpoint
claims := &dpop.ProofTokenClaims{
	Method: dpop.POST,
	URL:    metadata.PushedAuthorizationRequestEndpoint,
	RegisteredClaims: &jwt.RegisteredClaims{
		IssuedAt: jwt.NewNumericDate(time.Now()),
	},
}
dpopProof, _ := dpop.Create(jwt.SigningMethodES256, claims, dpopKey)
// Use authelia for PAR
config := &oauth2.Config{
	ClientID: "https://atcr.io/oauth/client-metadata.json",
	Endpoint: oauth2.Endpoint{
		AuthURL:  metadata.AuthorizationEndpoint,
		TokenURL: metadata.TokenEndpoint,
	},
}
// Create custom HTTP client that adds DPoP header
client := &http.Client{
	Transport: &dpopTransport{
		base:    http.DefaultTransport,
		dpopKey: dpopKey,
	},
}
ctx := context.WithValue(context.Background(), oauth2.HTTPClient, client)
// PAR request (authelia handles this)
authURL, parResp, _ := config.PushedAuth(ctx, state,
	oauth2.SetAuthURLParam("code_challenge", pkceChallenge),
	oauth2.SetAuthURLParam("code_challenge_method", "S256"),
)
5. Open browser, get code (oauth.go)
exec.Command("open", authURL).Run() // macOS; use xdg-open on Linux
// User authorizes
// Callback: http://localhost:8888?code=xyz&state=abc
6. Exchange code for token with DPoP (client.go + go-dpop)
// Generate DPoP proof for token endpoint
claims := &dpop.ProofTokenClaims{
	Method: dpop.POST,
	URL:    metadata.TokenEndpoint,
	RegisteredClaims: &jwt.RegisteredClaims{
		IssuedAt: jwt.NewNumericDate(time.Now()),
	},
}
dpopProof, _ := dpop.Create(jwt.SigningMethodES256, claims, dpopKey)
// Exchange (with DPoP header added by our transport)
token, _ := config.Exchange(ctx, code,
	oauth2.SetAuthURLParam("code_verifier", pkceVerifier),
)
7. Save token + DPoP key (storage.go)
store := &TokenStore{
	AccessToken:  token.AccessToken,
	RefreshToken: token.RefreshToken,
	DPoPKey:      dpopKey,
	ExpiresAt:    token.Expiry,
}
store.Save("~/.atcr/oauth-tokens.json")
Later, when docker push happens:
docker push atcr.io/alice/myapp:latest
1. Docker calls credential helper: docker-credential-atcr get
2. Helper loads stored token
3. Helper calls /auth/exchange with OAuth token → gets registry JWT
4. Returns JWT to Docker
5. Docker uses JWT for push
Directory Structure
pkg/auth/oauth/
├── client.go # OAuth client with DPoP integration
├── discovery.go # Authorization server discovery
├── metadata.go # Client metadata server
├── storage.go # Token persistence
└── transport.go # HTTP transport that adds DPoP headers
cmd/credential-helper/
├── main.go # Docker credential helper protocol
├── oauth.go # OAuth flow (browser, callback)
└── config.go # Configuration
go.mod additions:
authelia.com/client/oauth2 v0.25.0
github.com/AxisCommunications/go-dpop v1.1.2
Unified Model
Every hold service requires HOLD_OWNER:
- Owner's PDS has the io.atcr.hold record
- Owner's PDS has all io.atcr.hold.crew records
- Authorization is always governed by PDS records
For "public" hold (like Tangled's public knot):
- Owner creates hold with public: true
- Anyone can push/pull without being crew
- Owner can add crew records for special privileges/tracking if desired
Config has emergency override:
auth:
  # Emergency freeze: ignore public setting, restrict to crew only
  # Use this to stop abuse without changing PDS records
  freeze: false
Authorization logic:
- Check freeze in config → if true, skip to crew check
- Query owner's PDS for io.atcr.hold record
- If public: true → allow all operations (unless frozen)
- If public: false OR frozen → query io.atcr.hold.crew records, check membership
Remove from config:
- allow_all (replaced by public: true in PDS)
- allowed_dids (replaced by crew records in PDS)
This way the hold owner at atcr.io can run a public hold at hold1.atcr.io that anyone can use, but can freeze it instantly if needed without touching PDS records.