Compare commits: `v0.0.2...label-serv` (335 commits)
```
8bf3e15ca2 d6816fd00e 385f8987fe 8adbc7505f cdca30f346 29ef8138aa 7d8e195189 e886192aeb 8fb69497e3 347e7ac80b
11a8be1413 fcc5fa78bc b235e4a7dc 7d74e76772 0827219716 7c064ba8b0 136c0a0ecc dc31ca2f35 1e04c91507 e6c2099a0f
5249c9eaab 2b9ea997ac 356f9d529a f90a46e0a4 33548ecf32 76383ec764 200d8a7bb9 5b722b3c73 0d00de76c6 22b2d69cb3
5615dd4132 27cf78158b dba201998e cd4986c0c8 6b87539ef8 2df5377541 10b35642a5 abefcfd1ed 0d723cb708 f307d6ea85
3085fc726b cecf6d4b7c f340158a79 e3843db9d8 83e5c82ca4 ec2063ef52 8048921f5e de02e1f046 434a5f1eee 07bc924a60
24c7b03ce5 c0cf3fb94f 92c31835e2 8d39daa09d ac32a98104 150975a9fa 22d5396589 8e45b2eee5 9723de0bcd 914328dbf1
b251c8857f 4ac2b97c33 53de92e5d3 aad9ebfc8b 7ba42080c5 fbe7338492 bc034e3465 4d9452bb75 cd47945301 ef0161fb0e
834bb8d36c 2c39a78ac2 73109641e8 d6114cf549 9c9c808eea 35f7a47af3 5d3b6c2047 6a52175d70 34f342f637 ca56a7c309
57593a8683 3b7455a299 865c597188 536fa416d4 d8b0305ce8 f79d6027ad 0358e2e5ad faf63d8344 26f049fcbe ebb107ebec
d0843323fe b7ed0e7d5b dbe0efd949 2d7d2fd5ca c48a763529 a7d3292624 b99ae53755 57d44389b9 8f3d992ce4 6272273588
950b1f94d0 908e124917 eb3eed5f7a 055b34af71 23a9b52619 4c0f20a32e b1767cfb6b ac5821593f fa9abc28b9 3155f91e3a
9e600649a6 64cdb66957 51f6917444 f27e2e0d93 263ec4b7af ab7e7c7abc 3409af6c67 d4b88b5105 56dd522218 9704fe091d
c82dad81f7 2d5039d33c e0a2dda1af 482d921cc8 c80b5b2941 f5979b8f08 f35bf2bcde a448e8257b 487fc8a47e e5e59fdcbf
af815fbc7d efef46b15a fbcaf56fce 680e4bdfe2 a7175f9e3e aa4b32bbd6 53e196a261 f74bc3018a 6dd612e157 84866f5e74
e6bd4c122e 7dcef54d28 506d8b002b 647c33e164 1f0705a218 347db5c391 e97e51a59c 045aeb2de5 74c90697a7 cd6928ec4a
88998904d6 1df1bb57a4 f19dfa2716 af99929aa3 7f2d780b0a 8956568ed2 c1f2ae0f7a 012a14c4ee 4cda163099 41bcee4a59
24d6b49481 363c12e6bf 2a60a47fd5 34c2b8b17c 8d0cff63fb d11356cd18 79d1126726 8e31137c62 023efb05aa b18e4c3996
24b265bf12 e8e375639d 5a208de4c9 104eb86c04 509a1c0306 8d64efe229 23303c2187 e872b71d63 bd55783d8e 3b343c9fdb
a9704143f0 96e29a548d 5f19213e32 afbc039751 044d408cf8 4063544cdf 111cc4cc18 cefe0038fc 82dd0d6a9b 02fabc4a41
5dff759064 c4a9e4bf00 a09453c60d 4a4a7b4258 ec08cec050 ed0f35e841 5f1eb05a96 66037c332e 08b8bcf295 88df0c4ae5
fb7ddd0d53 ecf84ed8bc 3bdc0da90b 628f8b7c62 15d3684cf6 4667d34b46 4d5182e2b2 65d155f74f 92d794415a 270fe15e1e
7285dd44f3 9bd49b9e49 6b56f18715 e296971c47 d7eba25f66 7a0050235d ff7bc131b2 2d720e4154 e6b1264269 15d2be9210
5a41f876ff d4b9d84df1 f07376c3d0 2f2b8c8275 9af56daa34 55afa99efa 6793ba6a50 c7fdb748ae 5a3b3f3372 9d773d484a
6ef2aaf709 b0799cd94d 93b1d0d4ba e62ebdaa53 4cfe6f221d 0cf03109be 0b22082f89 1727801df3 6bc929f2dc 6024953571
28ee948d0f c831d3f735 162d91d079 d75a27557a c79d0ac3ab bf93dfba03 e17600db28 35ba417a96 8d1040b0d7 ba97e19ef3
771cd4390a 8201d9977d 2026780e11 2f27f22650 2b0501a437 e2d65c627f f75d9ceafb 0c4d1cae8f 2a795ed5cd ec90f43d3e
d7e9580aa1 9eb69e2ea7 dd79b8a0ee a8815737fd 751fa1a3f0 220022c9c5 957b216c79 b5a0e19843 97d1b3cdd5 30ea5256f3
aff5d7248c 3809bcab25 1b1400a6fb 0e4dd9af20 26c1b4e28e fde8421dac 3e9a496a5d a118904cb8 9daf364d61 c966fab53e
16f354b7b9 0404ea025b 2708af614a c37abe377f 61479d15ed 78207ba65a 7cde02bf02 1f72d90726 abf48407cc 08fb8abb41
ce7160cdca 5d52007104 4ca90fc3af b155534d1b 965e73881b 7228b532ba 1b3a4eea47 fa931aca3b 90ef4e90e5 1658a53cad
b4e1a0869f 6f3c1fc0ba f4b84ca75f 80b65ee619 606c8a842a d41686c340 48414be75d 50d5eea4a5 0db35bacad 003dab263d
7cf6da09f9 963786f7cc 29ccb15e54 0dc2294c87 70e802764b 08086e5afc fade86abaa a271d3d8e3 2bd7db16a4 379f23283c
8a3f88a104 74f665f9e0 6b897fe23b bd7d8c62b0 4c930e8ae5 21e6d08f75 12935490d4 c0f1011ed6 4221985b90 d726e464a6
764642d271 18fe0684d3 2ee8bd8786 46c75ab44a f450d910c7
```
`.air.hold.toml` (new file, 26 lines):

```toml
root = "."
tmp_dir = "tmp"

[build]
pre_cmd = ["go generate ./pkg/hold/..."]
cmd = "go build -buildvcs=false -o ./tmp/atcr-hold ./cmd/hold"
entrypoint = ["./tmp/atcr-hold", "serve", "--config", "config-hold.example.yaml"]
include_ext = ["go", "html", "css", "js"]
exclude_dir = ["bin", "tmp", "vendor", "deploy", "docs", ".git", "dist", "pkg/appview", "node_modules"]
exclude_regex = ["_test\\.go$", "cbor_gen\\.go$", "\\.min\\.js$", "public/css/style\\.css$", "public/icons\\.svg$"]
delay = 3000
stop_on_error = true
send_interrupt = true
kill_delay = 500

[log]
time = false

[color]
main = "blue"
watcher = "magenta"
build = "yellow"
runner = "green"

[misc]
clean_on_exit = true
```
`.air.toml` (new file, 30 lines):

```toml
root = "."
tmp_dir = "tmp"

[build]
# Use polling for Docker volume mounts (inotify doesn't work across mounts)
poll = true
poll_interval = 500
# Pre-build: generate assets if missing (each string is a shell command)
pre_cmd = ["go generate ./pkg/appview/..."]
cmd = "go build -tags billing -buildvcs=false -o ./tmp/atcr-appview ./cmd/appview"
entrypoint = ["./tmp/atcr-appview", "serve", "--config", "config-appview.example.yaml"]
include_ext = ["go", "html", "css", "js"]
exclude_dir = ["bin", "tmp", "vendor", "deploy", "docs", ".git", "dist", "node_modules", "pkg/hold"]
exclude_regex = ["_test\\.go$", "cbor_gen\\.go$", "\\.min\\.js$", "public/css/style\\.css$", "public/icons\\.svg$"]
delay = 3000
stop_on_error = true
send_interrupt = true
kill_delay = 3000

[log]
time = false

[color]
main = "cyan"
watcher = "magenta"
build = "yellow"
runner = "green"

[misc]
clean_on_exit = true
```
`.claudeignore` (new file, 3 lines):

```
# Generated files
pkg/appview/public/css/style.css
pkg/appview/public/js/bundle.min.js
```
Deleted file, `@@ -1,90 +0,0 @@`:

```shell
# ATCR AppView Configuration
# Copy this file to .env.appview and fill in your values
# Load with: source .env.appview && ./bin/atcr-appview serve

# ==============================================================================
# Server Configuration
# ==============================================================================

# HTTP listen address (default: :5000)
ATCR_HTTP_ADDR=:5000

# Debug listen address (default: :5001)
# ATCR_DEBUG_ADDR=:5001

# Base URL for the AppView service (REQUIRED for production)
# Used to generate OAuth redirect URIs and JWT realms
# Development: Auto-detected from ATCR_HTTP_ADDR (e.g., http://127.0.0.1:5000)
# Production: Set to your public URL (e.g., https://atcr.io)
# ATCR_BASE_URL=http://127.0.0.1:5000

# Service name (used for JWT service/issuer fields)
# Default: Derived from base URL hostname, or "atcr.io"
# ATCR_SERVICE_NAME=atcr.io

# ==============================================================================
# Storage Configuration
# ==============================================================================

# Default hold service endpoint for users without their own storage (REQUIRED)
# Users with a sailor profile defaultHold setting will override this
# Docker: Use container name (http://atcr-hold:8080)
# Local dev: Use localhost (http://127.0.0.1:8080)
ATCR_DEFAULT_HOLD=http://127.0.0.1:8080

# ==============================================================================
# Authentication Configuration
# ==============================================================================

# Path to JWT signing private key (auto-generated if missing)
# Default: /var/lib/atcr/auth/private-key.pem
# ATCR_AUTH_KEY_PATH=/var/lib/atcr/auth/private-key.pem

# Path to JWT signing certificate (auto-generated if missing)
# Default: /var/lib/atcr/auth/private-key.crt
# ATCR_AUTH_CERT_PATH=/var/lib/atcr/auth/private-key.crt

# JWT token expiration in seconds (default: 300 = 5 minutes)
# ATCR_TOKEN_EXPIRATION=300

# ==============================================================================
# UI Configuration
# ==============================================================================

# Enable web UI (default: true)
# Set to "false" to disable web interface and run registry-only
ATCR_UI_ENABLED=true

# SQLite database path for UI data (sessions, stars, pull counts, etc.)
# Default: /var/lib/atcr/ui.db
# ATCR_UI_DATABASE_PATH=/var/lib/atcr/ui.db

# ==============================================================================
# Logging Configuration
# ==============================================================================

# Log level: debug, info, warn, error (default: info)
# ATCR_LOG_LEVEL=info

# Log formatter: text, json (default: text)
# ATCR_LOG_FORMATTER=text

# ==============================================================================
# Jetstream Configuration (ATProto event streaming)
# ==============================================================================

# Jetstream WebSocket URL for real-time ATProto events
# Default: wss://jetstream2.us-west.bsky.network/subscribe
# JETSTREAM_URL=wss://jetstream2.us-west.bsky.network/subscribe

# Enable backfill worker to sync historical records (default: false)
# Set to "true" to enable periodic syncing of ATProto records
# ATCR_BACKFILL_ENABLED=true

# ATProto relay endpoint for backfill sync API
# Default: https://relay1.us-east.bsky.network
# ATCR_RELAY_ENDPOINT=https://relay1.us-east.bsky.network

# Backfill interval (default: 1h)
# Examples: 30m, 1h, 2h, 24h
# ATCR_BACKFILL_INTERVAL=1h
```
Deleted file, `@@ -1,69 +0,0 @@`:

```shell
# ATCR Hold Service Configuration
# Copy this file to .env and fill in your values

# ==============================================================================
# Required Configuration
# ==============================================================================

# Hold service public URL (REQUIRED)
# The hostname becomes the hold name/record key
# Examples: https://hold1.atcr.io, http://127.0.0.1:8080
HOLD_PUBLIC_URL=http://127.0.0.1:8080

# ==============================================================================
# Storage Configuration
# ==============================================================================

# Storage driver type (s3, filesystem)
# Default: s3
#
# S3 Presigned URLs:
# When using S3 storage, presigned URLs are automatically enabled for direct
# client ↔ S3 transfers. This eliminates the hold service as a bandwidth
# bottleneck, reducing hold bandwidth by ~99% for push/pull operations.
# Falls back to proxy mode automatically for non-S3 drivers.
STORAGE_DRIVER=filesystem

# For S3/Storj/Minio:
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
S3_BUCKET=atcr-blobs

# For Storj/Minio (optional - custom S3 endpoint):
# S3_ENDPOINT=https://gateway.storjshare.io

# For filesystem driver:
# STORAGE_DRIVER=filesystem
# STORAGE_ROOT_DIR=/var/lib/atcr/hold

# ==============================================================================
# Server Configuration
# ==============================================================================

# Server listen address (default: :8080)
# HOLD_SERVER_ADDR=:8080

# Allow public blob reads (pulls) without authentication
# Writes (pushes) always require crew membership via PDS
# Default: false
HOLD_PUBLIC=false

# ==============================================================================
# Registration (REQUIRED)
# ==============================================================================

# Your ATProto DID (REQUIRED for registration)
# Get your DID: https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=yourhandle.bsky.social
#
# On first run with HOLD_OWNER set:
# 1. Hold service will print an OAuth URL to the logs
# 2. Visit the URL in your browser to authorize
# 3. Hold service creates hold + crew records in your PDS
# 4. Registration complete!
#
# On subsequent runs:
# - Hold service checks if already registered
# - Skips OAuth if records exist
#
HOLD_OWNER=did:plc:your-did-here
```
`.gitignore` (vendored, 15 changes):

```
@@ -1,6 +1,9 @@
# Binaries
bin/
dist/
tmp/
./appview
./hold

# Test artifacts
.atcr-pids
@@ -11,7 +14,18 @@ dist/
# Environment configuration
.env

# Deploy state (contains server UUIDs and IPs)
deploy/upcloud/state.json

# Generated assets (run go generate to rebuild)
pkg/appview/licenses/spdx-licenses.json
pkg/appview/public/css/style.css
pkg/appview/public/js/htmx.min.js
pkg/appview/public/js/lucide.min.js
pkg/hold/admin/public/css/style.css

# IDE
.zed/
.claude/
.vscode/
.idea/
@@ -21,3 +35,4 @@ dist/
# OS
.DS_Store
Thumbs.db
node_modules
```
`.golangci.yml` (new file, 40 lines):

```yaml
# golangci-lint configuration for ATCR
# See: https://golangci-lint.run/usage/configuration/
version: "2"

issues:
  fix: true

linters:
  settings:
    staticcheck:
      checks:
        - "all"
        - "-SA1019" # Ignore deprecated package warnings for github.com/ipfs/go-ipfs-blockstore
                    # Cannot upgrade to github.com/ipfs/boxo/blockstore due to opentelemetry
                    # dependency conflicts with distribution/distribution
    errcheck:
      exclude-functions:
        - (github.com/distribution/distribution/v3/registry/storage/driver.FileWriter).Cancel
        - (github.com/distribution/distribution/v3.BlobWriter).Cancel
        - (*database/sql.Tx).Rollback
        - (*database/sql.Rows).Close
        - (*net/http.Server).Shutdown

  exclusions:
    presets:
      - std-error-handling
    rules:
      - path: _test\.go
        linters:
          - errcheck

formatters:
  enable:
    - gofmt
    - goimports
  settings:
    gofmt:
      rewrite-rules:
        - pattern: 'interface{}'
          replacement: 'any'
```
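The `gofmt` rewrite rule above swaps `interface{}` for `any`. The rewrite is purely cosmetic because `any` is a type alias for `interface{}`; a minimal check (illustrative, not part of the repo):

```go
package main

import "fmt"

func main() {
	// `any` has been an alias for `interface{}` since Go 1.18,
	// so the rewrite rule changes spelling, not behavior.
	var a interface{} = 42
	var b any = 42
	fmt.Println(a == b) // → true: same dynamic type (int) and value
}
```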
```
@@ -6,6 +6,7 @@ version: 2
before:
  hooks:
    - go mod tidy
    - go generate ./...

builds:
  # Credential helper - cross-platform native binary distribution
```
`.tangled/workflows/lint.yaml` (new file, 24 lines):

```yaml
when:
  - event: ["push"]
    branch: ["*"]
  - event: ["pull_request"]
    branch: ["main"]

engine: kubernetes
image: golang:1.25-trixie
architecture: amd64

steps:
  - name: Download and Generate
    environment:
      CGO_ENABLED: 1
    command: |
      go mod download
      go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.7.2
      go generate ./...

  - name: Run Linter
    environment:
      CGO_ENABLED: 1
    command: |
      golangci-lint run ./...
```
`.tangled/workflows/release-credential-helper.yml` (new file, 155 lines):

```yaml
# Tangled Workflow: Release Credential Helper
#
# This workflow builds cross-platform binaries for the credential helper.
# Creates tarballs for curl/bash installation and provides instructions
# for updating the Homebrew formula.
#
# Triggers on version tags (v*) pushed to the repository.

when:
  - event: ["manual"]
    tag: ["v*"]

engine: "nixery"

dependencies:
  nixpkgs:
    - go_1_24     # Go 1.24+ for building
    - goreleaser  # For building multi-platform binaries
    - curl        # Required by go generate for downloading vendor assets
    - gnugrep     # Required for tag detection
    - gnutar      # Required for creating tarballs
    - gzip        # Required for compressing tarballs
    - coreutils   # Required for sha256sum

environment:
  CGO_ENABLED: "0"  # Build static binaries

steps:
  - name: Get tag for current commit
    command: |
      # Fetch tags (shallow clone doesn't include them by default)
      git fetch --tags

      # Find the tag that points to the current commit
      TAG=$(git tag --points-at HEAD | grep -E '^v[0-9]' | head -n1)

      if [ -z "$TAG" ]; then
        echo "Error: No version tag found for current commit"
        echo "Available tags:"
        git tag
        echo "Current commit:"
        git rev-parse HEAD
        exit 1
      fi

      echo "Building version: $TAG"
      echo "$TAG" > .version

      # Also get the commit hash for reference
      COMMIT_HASH=$(git rev-parse HEAD)
      echo "Commit: $COMMIT_HASH"

  - name: Build binaries with GoReleaser
    command: |
      VERSION=$(cat .version)
      export VERSION

      # Build for all platforms using GoReleaser
      goreleaser build --clean --snapshot --config .goreleaser.yaml

      # List what was built
      echo "Built artifacts:"
      if [ -d "dist" ]; then
        ls -lh dist/
      else
        echo "Error: dist/ directory was not created by GoReleaser"
        exit 1
      fi

  - name: Package artifacts
    command: |
      VERSION=$(cat .version)
      VERSION_NO_V=${VERSION#v}  # Remove 'v' prefix for filenames

      cd dist

      # Create tarballs for each platform
      # GoReleaser creates directories like: credential-helper_{os}_{arch}_v{goversion}

      # Darwin x86_64
      if [ -d "credential-helper_darwin_amd64_v1" ]; then
        tar czf "docker-credential-atcr_${VERSION_NO_V}_Darwin_x86_64.tar.gz" \
          -C credential-helper_darwin_amd64_v1 docker-credential-atcr
        echo "Created: docker-credential-atcr_${VERSION_NO_V}_Darwin_x86_64.tar.gz"
      fi

      # Darwin arm64
      for dir in credential-helper_darwin_arm64*; do
        if [ -d "$dir" ]; then
          tar czf "docker-credential-atcr_${VERSION_NO_V}_Darwin_arm64.tar.gz" \
            -C "$dir" docker-credential-atcr
          echo "Created: docker-credential-atcr_${VERSION_NO_V}_Darwin_arm64.tar.gz"
          break
        fi
      done

      # Linux x86_64
      if [ -d "credential-helper_linux_amd64_v1" ]; then
        tar czf "docker-credential-atcr_${VERSION_NO_V}_Linux_x86_64.tar.gz" \
          -C credential-helper_linux_amd64_v1 docker-credential-atcr
        echo "Created: docker-credential-atcr_${VERSION_NO_V}_Linux_x86_64.tar.gz"
      fi

      # Linux arm64
      for dir in credential-helper_linux_arm64*; do
        if [ -d "$dir" ]; then
          tar czf "docker-credential-atcr_${VERSION_NO_V}_Linux_arm64.tar.gz" \
            -C "$dir" docker-credential-atcr
          echo "Created: docker-credential-atcr_${VERSION_NO_V}_Linux_arm64.tar.gz"
          break
        fi
      done

      echo ""
      echo "Tarballs ready:"
      ls -lh *.tar.gz 2>/dev/null || echo "Warning: No tarballs created"

  - name: Generate checksums
    command: |
      VERSION=$(cat .version)
      VERSION_NO_V=${VERSION#v}

      cd dist

      echo ""
      echo "=========================================="
      echo "SHA256 Checksums"
      echo "=========================================="
      echo ""

      # Generate checksums file
      sha256sum docker-credential-atcr_${VERSION_NO_V}_*.tar.gz 2>/dev/null | tee checksums.txt || echo "No checksums generated"

  - name: Next steps
    command: |
      VERSION=$(cat .version)

      echo ""
      echo "=========================================="
      echo "Release $VERSION is ready!"
      echo "=========================================="
      echo ""
      echo "Distribution tarballs are in: dist/"
      echo ""
      echo "Next steps:"
      echo ""
      echo "1. Upload tarballs to your hosting/CDN (or GitHub releases)"
      echo ""
      echo "2. For Homebrew users, update the formula:"
      echo "   ./scripts/update-homebrew-formula.sh $VERSION"
      echo "   # Then update Formula/docker-credential-atcr.rb and push to homebrew-tap"
      echo ""
      echo "3. For curl/bash installation, users can download directly:"
      echo "   curl -L <your-cdn>/docker-credential-atcr_<version>_<os>_<arch>.tar.gz | tar xz"
      echo "   sudo mv docker-credential-atcr /usr/local/bin/"
```
Modified release workflow, `@@ -1,55 +1,44 @@` (old and new lines interleaved in the compare view):

```
# ATCR Release Pipeline for Tangled.org
# Triggers on version tags and builds cross-platform binaries using GoReleaser
# Triggers on version tags and builds cross-platform binaries using buildah

when:
  - event: ["push", "manual"]
    # TODO: Trigger only on version tags (v1.0.0, v2.1.3, etc.)
    branch: ["main"]
  - event: ["push"]
    tag: ["v*"]

engine: "nixery"
engine: kubernetes
image: quay.io/buildah/stable:latest
architecture: amd64

dependencies:
  nixpkgs:
    - git
    - go
    #- goreleaser
    - podman
environment:
  IMAGE_REGISTRY: atcr.io
  IMAGE_USER: atcr.io

steps:
  - name: Fetch git tags
    command: git fetch --tags --force

  - name: Checkout tag for current commit
  - name: Login to registry
    command: |
      CURRENT_COMMIT=$(git rev-parse HEAD)
      export TAG=$(git tag --points-at $CURRENT_COMMIT --sort=-version:refname | head -n1)
      if [ -z "$TAG" ]; then
        echo "Error: No tag found for commit $CURRENT_COMMIT"
        exit 1
      fi
      echo "Found tag $TAG for commit $CURRENT_COMMIT"
      git checkout $TAG
      echo "${APP_PASSWORD}" | buildah login \
        -u "${IMAGE_USER}" \
        --password-stdin \
        ${IMAGE_REGISTRY}

  - name: Build AppView Docker image
  - name: Build and push AppView image
    command: |
      TAG=$(git describe --tags --exact-match 2>/dev/null || git tag --points-at HEAD | head -n1)
      podman login atcr.io -u evan.jarrett.net -p ${APP_PASSWORD}
      podman build -f Dockerfile.appview -t atcr.io/evan.jarrett.net/atcr-appview:${TAG} .
      podman push atcr.io/evan.jarrett.net/atcr-appview:${TAG}
      buildah bud \
        --tag ${IMAGE_REGISTRY}/${IMAGE_USER}/appview:${TANGLED_REF_NAME} \
        --tag ${IMAGE_REGISTRY}/${IMAGE_USER}/appview:latest \
        --file ./Dockerfile.appview \
        .

  - name: Build Hold Docker image
      buildah push \
        ${IMAGE_REGISTRY}/${IMAGE_USER}/appview:latest

  - name: Build and push Hold image
    command: |
      TAG=$(git describe --tags --exact-match 2>/dev/null || git tag --points-at HEAD | head -n1)
      podman login atcr.io -u evan.jarrett.net -p ${APP_PASSWORD}
      podman build -f Dockerfile.hold -t atcr.io/evan.jarrett.net/atcr-hold:${TAG} .
      podman push atcr.io/evan.jarrett.net/atcr-hold:${TAG}

      # disable for now
      # - name: Tidy Go modules
      #   command: go mod tidy
      buildah bud \
        --tag ${IMAGE_REGISTRY}/${IMAGE_USER}/hold:${TANGLED_REF_NAME} \
        --tag ${IMAGE_REGISTRY}/${IMAGE_USER}/hold:latest \
        --file ./Dockerfile.hold \
        .

      # - name: Install Goat
      #   command: go install github.com/bluesky-social/goat@latest

      # - name: Run GoReleaser
      #   command: goreleaser release --clean
      buildah push \
        ${IMAGE_REGISTRY}/${IMAGE_USER}/hold:latest
```
`.tangled/workflows/tests.yml` (new file, 23 lines):

```yaml
when:
  - event: ["push"]
    branch: ["*"]
  - event: ["pull_request"]
    branch: ["main"]

engine: kubernetes
image: golang:1.25-trixie
architecture: amd64

steps:
  - name: Download and Generate
    environment:
      CGO_ENABLED: 1
    command: |
      go mod download
      go generate ./...

  - name: Run Tests
    environment:
      CGO_ENABLED: 1
    command: |
      go test -cover ./...
```
671
CLAUDE.md
671
CLAUDE.md
@@ -4,575 +4,260 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
|
||||
|
||||
## Project Overview
|
||||
|
||||
ATCR (ATProto Container Registry) is an OCI-compliant container registry that uses the AT Protocol for manifest storage and S3 for blob storage. This creates a decentralized container registry where manifests are stored in users' Personal Data Servers (PDS) while layers are stored in S3.
|
||||
ATCR (ATProto Container Registry) is an OCI-compliant container registry that uses the AT Protocol for manifest storage and S3 for blob storage. Manifests are stored in users' Personal Data Servers (PDS) while layers are stored in S3.
|
||||
|
||||
## Go Workspace
|
||||
|
||||
The project uses a Go workspace (`go.work`) with two modules:
|
||||
- `atcr.io` — Main module (appview, hold, credential-helper, oauth-helper)
|
||||
- `atcr.io/scanner` — Scanner module (separate to isolate heavy Syft/Grype dependencies)
|
||||
|
||||
## Build Commands
|
||||
|
||||
Always build into the `bin/` directory (`-o bin/...`), not the project root.
|
||||
|
||||
```bash
|
||||
# Build all binaries
|
||||
# create go builds in the bin/ directory
|
||||
# Build main binaries
|
||||
go build -o bin/atcr-appview ./cmd/appview
|
||||
go build -o bin/atcr-hold ./cmd/hold
|
||||
go build -o bin/docker-credential-atcr ./cmd/credential-helper
|
||||
go build -o bin/oauth-helper ./cmd/oauth-helper
|
||||
|
||||
# Run tests
|
||||
go test ./...
|
||||
# Build scanner (separate module)
|
||||
cd scanner && go build -o ../bin/atcr-scanner ./cmd/scanner && cd ..
|
||||
|
||||
# Tests
go test ./...                                     # all tests
go test ./pkg/atproto/...                         # specific package
go test -run TestManifestStore ./pkg/atproto/...  # specific test
go test -race ./...                               # race detector

# Build hold with billing support (optional build tag)
go build -tags billing -o bin/atcr-hold ./cmd/hold

# Update dependencies
go mod tidy

# Docker
docker build -f Dockerfile.appview -t atcr.io/appview:latest .
docker build -f Dockerfile.hold -t atcr.io/hold:latest .
docker build -f Dockerfile.scanner -t atcr.io/scanner:latest .

# Or use docker-compose
docker-compose up -d

# Generate & run with config
./bin/atcr-appview config init config-appview.yaml
./bin/atcr-hold config init config-hold.yaml
./bin/atcr-appview serve --config config-appview.yaml
./bin/atcr-hold serve --config config-hold.yaml

# Or use a .env file:
cp .env.appview.example .env.appview
# Edit .env.appview with your settings
source .env.appview
./bin/atcr-appview serve

# Legacy mode (still supported):
# ./bin/atcr-appview serve config/config.yml

# Scanner (env vars only, no YAML)
SCANNER_HOLD_URL=ws://localhost:8080 SCANNER_SHARED_SECRET=secret ./bin/atcr-scanner serve

# Usage report
go run ./cmd/usage-report --hold https://hold01.atcr.io
go run ./cmd/usage-report --hold https://hold01.atcr.io --from-manifests

# Run hold service (configure via env vars - see .env.hold.example)
export HOLD_PUBLIC_URL=http://127.0.0.1:8080
export STORAGE_DRIVER=filesystem
export STORAGE_ROOT_DIR=/tmp/atcr-hold
export HOLD_OWNER=did:plc:your-did-here
./bin/atcr-hold
# Check logs for the OAuth URL, visit it in a browser to complete registration

# Utilities
go run ./cmd/db-migrate --help     # SQLite → libsql migration
go run ./cmd/record-query --help   # Query ATProto relay by collection
go run ./cmd/s3-test               # S3 connectivity test
go run ./cmd/healthcheck <url>     # HTTP health check (for Docker)
```

## Architecture Overview

### Core Design

ATCR uses **distribution/distribution** as a library and extends it through middleware to route different types of content to different storage backends:

- **Manifests** → ATProto PDS (small JSON, stored as `io.atcr.manifest` records)
- **Blobs/Layers** → S3 via hold service (presigned URLs for direct client-to-S3 transfers)
- **Authentication** → ATProto OAuth with DPoP + Docker credential helpers

### Four Components

1. **AppView** (`cmd/appview`) — OCI Distribution API server. Resolves identities, routes manifests to the PDS, routes blobs to the hold service, validates OAuth, issues registry JWTs. Includes a web UI for browsing.

2. **Hold Service** (`cmd/hold`) — BYOS blob storage. Embedded PDS with captain/crew/stats/scan records (all ATProto records in a CAR store), S3-compatible storage, presigned URLs. Supports did:web (default) or did:plc identity with auto-recovery. Optional subsystems: admin UI, quotas, billing (Stripe), GC, scan dispatch, Bluesky status posts.

3. **Scanner** (`scanner/cmd/scanner`) — Vulnerability scanning. Connects to the hold via WebSocket, generates SBOMs (Syft), scans for vulnerabilities (Grype). Priority queue with tier-based scheduling.

4. **Credential Helper** (`cmd/credential-helper`) — Docker credential helper implementing the ATProto OAuth flow; exchanges the OAuth token for a registry JWT.

### Request Flow

**Push:** A client pushes to `atcr.io/<identity>/<image>:<tag>`. Registry middleware resolves identity → DID → PDS, then discovers the hold DID (sailor profile `defaultHold` → legacy `io.atcr.hold` records → AppView default). Blobs go to the hold via XRPC multipart upload (presigned S3 URLs). Manifests are stored in the user's PDS as `io.atcr.manifest` records with a `holdDid` reference.

**Pull:** The AppView fetches the manifest from the user's PDS. The manifest's `holdDid` field records where the blobs were stored, and they are fetched from that hold via presigned download URLs. Pull always uses the historical hold from the manifest, even if the user changed their default since pushing.

#### Push with Default Storage

```
1. Client: docker push atcr.io/alice/myapp:latest
2. HTTP Request → /v2/alice/myapp/manifests/latest
3. Registry Middleware (pkg/appview/middleware/registry.go)
   → Resolves "alice" to DID and PDS endpoint
   → Queries alice's sailor profile for defaultHold
   → If not set, checks alice's io.atcr.hold records
   → Falls back to the AppView's default hold
   → Stores DID/PDS/hold in context
4. Routing Repository (pkg/appview/storage/routing_repository.go)
   → Creates RoutingRepository
   → Returns ATProto ManifestStore for manifests
   → Returns ProxyBlobStore for blobs
5. Blob PUT → Resolved hold service (redirects to S3/storage)
6. Manifest PUT → alice's PDS as io.atcr.manifest record (includes hold reference)
```

#### Push with BYOS (Bring Your Own Storage)

```
1. Client: docker push atcr.io/alice/myapp:latest
2. Registry Middleware resolves alice → did:plc:alice123
3. Hold discovery:
   a. Check alice's sailor profile for defaultHold
   b. If not set, check alice's io.atcr.hold records
   c. Fall back to the AppView's default hold
4. Found: alice's profile has defaultHold = "https://alice-storage.fly.dev"
5. Routing Repository returns ProxyBlobStore(alice-storage.fly.dev)
6. ProxyBlobStore calls alice-storage.fly.dev for a presigned URL
7. The hold validates alice's DID and generates an S3 presigned URL
8. Client is redirected to upload the blob directly to alice's S3/Storj
9. Manifest stored in alice's PDS with a reference to "https://alice-storage.fly.dev"
```

#### Pull Flow

```
1. Client: docker pull atcr.io/alice/myapp:latest
2. GET /v2/alice/myapp/manifests/latest
3. AppView fetches the manifest from alice's PDS
4. The manifest references hold "https://alice-storage.fly.dev"
5. Client requests blobs: GET /v2/alice/myapp/blobs/sha256:abc123
6. AppView routes to the hold recorded in the manifest (not re-discovered)
7. ProxyBlobStore calls alice-storage.fly.dev for a presigned download URL
8. Client is redirected to download the blob directly from alice's S3
```

**Key insight:** Pull uses the historical hold reference from the manifest, ensuring blobs are fetched from the hold where they were originally pushed, even if alice later changes her default hold.

**Hold discovery priority** (in `findHoldDID()`, `pkg/appview/middleware/registry.go`):

1. Sailor profile's `defaultHold` (user preference)
2. User's `io.atcr.hold` records (legacy)
3. AppView's `default_hold_did` (fallback)

### Name Resolution

Names follow the pattern `atcr.io/<identity>/<image>:<tag>`, where `<identity>` can be:

- **Handle**: `alice.bsky.social` → resolved via `.well-known/atproto-did`
- **DID**: `did:plc:xyz123` → resolved via the PLC directory

Resolution happens in `pkg/atproto/resolver.go`: Handle → DID (via DNS/HTTPS), then DID → PDS endpoint (via the DID document).

### Nautical Terminology

- **Sailors** = registry users, **Captains** = hold owners, **Crew** = hold members
- **Holds** = storage endpoints (BYOS); **Quartermaster/Bosun/Deckhand** = crew tiers

### Middleware System

ATCR uses middleware and routing to handle requests:

#### 1. Registry Middleware (`pkg/appview/middleware/registry.go`)

- Wraps `distribution.Namespace`
- Intercepts `Repository(name)` calls
- Performs name resolution (alice → did:plc:xyz → pds.example.com)
- Discovers the user's hold and stores the resolved identity and hold in the request context

#### 2. Auth Middleware (`pkg/appview/middleware/auth.go`)

- Validates JWT tokens from Docker clients
- Extracts the DID from token claims
- Injects the authenticated identity into the context

#### 3. Routing Repository (`pkg/appview/storage/routing_repository.go`)

- Implements `distribution.Repository`
- Returns custom `Manifests()` and `Blobs()` implementations
- Routes manifests to ATProto and blobs to the hold service

### Hold Embedded PDS Records

The hold's embedded PDS stores all operational data as ATProto records in a CAR store (not SQLite). SQLite holds only the records index and events.

| Collection | Cardinality | Description |
|---|---|---|
| `io.atcr.hold.captain` | Singleton | Hold identity, owner DID, settings |
| `io.atcr.hold.crew` | Per-member | Crew membership + permissions |
| `io.atcr.hold.layer` | Per-layer | Layer metadata (digest, size, media type) |
| `io.atcr.hold.stats` | Per-repo | Push/pull counts per owner+repository |
| `io.atcr.hold.scan` | Per-scan | Vulnerability scan results |
| `io.atcr.hold.image.config` | Per-manifest | OCI image config (history, env, entrypoint, labels) |
| `app.bsky.feed.post` | Status posts | Online/offline status, push notifications |
| `sh.tangled.actor.profile` | Singleton | Hold profile (name, description, avatar) |

## Authentication

### Authentication Architecture

Three token types flow through the system:

| Token | Issued By | Used For | Lifetime |
|-------|-----------|----------|----------|
| OAuth (access+refresh) | User's PDS | AppView → PDS communication | ~2h / ~90d |
| Registry JWT | AppView | Docker client → AppView | 5 min |
| Service Token | User's PDS | AppView → Hold service | 60s (cached 50s) |

#### ATProto OAuth with DPoP

ATCR implements the full ATProto OAuth specification with mandatory security features:

**Required Components:**

- **DPoP** (RFC 9449) - Cryptographic proof-of-possession for every request
- **PAR** (RFC 9126) - Pushed Authorization Requests for server-to-server parameter exchange
- **PKCE** (RFC 7636) - Proof Key for Code Exchange to prevent authorization code interception

**Key Components** (`pkg/auth/oauth/`):

1. **Client** (`client.go`) - Core OAuth client with encapsulated configuration
   - Constructor: `NewClient(baseURL)` - accepts a base URL, derives the client ID/redirect URI
   - `NewClientWithKey(baseURL, dpopKey)` - for token refresh with a stored DPoP key
   - `ClientID()` - computes the localhost vs production client ID dynamically
   - `RedirectURI()` - returns `baseURL + "/auth/oauth/callback"`
   - `GetDefaultScopes()` - returns the ATCR registry scopes
   - All OAuth flows (authorization, token exchange, refresh) in one place

2. **DPoP Transport** (`transport.go`) - HTTP RoundTripper that auto-adds DPoP headers

3. **Token Storage** (`tokenstorage.go`) - Persists refresh tokens and DPoP keys
   - File-based storage in `/var/lib/atcr/refresh-tokens.json` (AppView)
   - The credential helper uses `~/.atcr/oauth-token.json`

4. **Refresher** (`refresher.go`) - Token refresh manager for the AppView
   - Caches access tokens with automatic refresh
   - Per-DID locking prevents concurrent refresh races
   - Uses Client methods for consistency

5. **Server** (`server.go`) - OAuth authorization endpoints for the AppView
   - `GET /auth/oauth/authorize` - starts the OAuth flow
   - `GET /auth/oauth/callback` - handles the OAuth callback
   - Uses Client methods for authorization and token exchange

6. **Interactive Flow** (`flow.go`) - Reusable OAuth flow for CLI tools
   - Used by the credential helper and hold service registration
   - Two-phase callback setup ensures PAR metadata availability

**Authentication Flow:**

```
1. User configures Docker to use the credential helper (adds it to config.json)
2. On first docker push/pull, the helper generates an ECDSA P-256 DPoP key
3. Resolve handle → DID → PDS endpoint
4. Discover OAuth server metadata from the PDS
5. PAR request with DPoP header → get request_uri
6. Open browser for user authorization
7. Exchange code for token with DPoP proof
8. Save: access token, refresh token, DPoP key, DID, handle

Later (subsequent docker push):
9. Docker calls the credential helper
10. Helper loads the token, refreshes if needed
11. Helper calls /auth/exchange with OAuth token + handle
12. AppView validates the token via PDS getSession
13. AppView ensures a sailor profile exists (creates it with defaultHold on first login)
14. AppView issues a registry JWT with the validated DID
15. Helper returns the JWT to Docker

Docker Client ──Registry JWT──→ AppView ──OAuth──→ User's PDS ──Service Token──→ Hold
```

**Security:**

- Tokens are validated against the authoritative source (the user's PDS)
- No trust in client-provided identity information
- DPoP binds tokens to a specific client key
- Short-lived registry JWTs (5-minute expiry by default)

See `docs/OAUTH.md` for full OAuth/DPoP implementation details.

## Hold Authorization

- **Public hold**: Anonymous reads allowed. Writes require the captain or crew with `blob:write`.
- **Private hold**: Reads require crew with `blob:read` or `blob:write`. Writes require `blob:write`.
- `blob:write` implicitly grants `blob:read`.
- The captain has all permissions implicitly.
- See `docs/BYOS.md` for the full authorization model and permission matrix.

## Key File Locations

| Responsibility | Files |
|---|---|
| ATProto records & collections | `pkg/atproto/lexicon.go` |
| DID/handle resolution | `pkg/atproto/resolver.go` |
| PDS client (XRPC) | `pkg/atproto/client.go` |
| Manifest ↔ ATProto storage | `pkg/atproto/manifest_store.go` |
| Sailor profiles | `pkg/atproto/profile.go` |
| Registry middleware (identity resolution, hold discovery) | `pkg/appview/middleware/registry.go` |
| Auth middleware (JWT validation) | `pkg/appview/middleware/auth.go` |
| Content routing (manifests vs blobs) | `pkg/appview/storage/routing_repository.go` |
| Blob proxy to hold (presigned URLs) | `pkg/appview/storage/proxy_blob_store.go` |
| Request context struct | `pkg/appview/storage/context.go` |
| Database queries | `pkg/appview/db/queries.go` |
| Database schema | `pkg/appview/db/schema.sql` |
| OAuth client & session refresher | `pkg/auth/oauth/client.go` |
| OAuth P-256 key management | `pkg/auth/oauth/keys.go` |
| Hold PDS endpoints & auth | `pkg/hold/pds/xrpc.go`, `pkg/hold/pds/auth.go` |
| Hold DID management (did:web, did:plc, PLC recovery) | `pkg/hold/pds/did.go` |
| Hold captain records | `pkg/hold/pds/captain.go` |
| Hold crew management | `pkg/hold/pds/crew.go` |
| Hold push/pull stats (ATProto records in CAR store) | `pkg/hold/pds/stats.go` |
| Hold layer records | `pkg/hold/pds/layer.go` |
| Hold scan records & scanner integration | `pkg/hold/pds/scan.go`, `pkg/hold/pds/scan_broadcaster.go` |
| Hold Bluesky status posts | `pkg/hold/pds/status.go` |
| Hold OCI upload endpoints | `pkg/hold/oci/xrpc.go` |
| Hold config | `pkg/hold/config.go` |
| AppView config | `pkg/appview/config.go` |
| Config marshaling (commented YAML) | `pkg/config/marshal.go` |
| Scanner config (env-only) | `scanner/internal/config/config.go` |

## Configuration

ATCR uses **Viper** for configuration. YAML is primary; environment variables override. Generate defaults with `config init`.

**Env var convention:** prefix + YAML path with `_` separators:

- AppView: `ATCR_` (e.g., `ATCR_SERVER_DEFAULT_HOLD_DID`)
- Hold: `HOLD_` (e.g., `HOLD_SERVER_PUBLIC_URL`)
- S3: standard AWS names (`AWS_ACCESS_KEY_ID`, `S3_BUCKET`, `S3_ENDPOINT`)
- Scanner: `SCANNER_` prefix (env-only, no Viper)

See `config-appview.example.yaml` and `config-hold.example.yaml` for all options. Config structs use `comment` struct tags for auto-generating commented YAML via `MarshalCommentedYAML()` in `pkg/config/marshal.go`.

**Credential helper:** tokens are stored in `~/.atcr/oauth-token.json` (access token, refresh token, DPoP key (PEM), DID, handle).

## Development Gotchas

- **Do NOT run `npm run css:build` or `npm run js:build` manually** — Air handles these on file change.
- **Do NOT edit `icons.svg` directly** — the SVG icon sprite sheets (`pkg/appview/public/icons.svg`, `pkg/hold/admin/public/icons.svg`) are auto-generated from template icon references during the build. Just reference icons by name in templates and the build will include them.
- **RoutingRepository is created fresh on EVERY request** (no caching). Previous caching caused stale OAuth sessions and "invalid refresh token" errors. The OAuth refresher already caches efficiently (in-memory + DB).
- **Storage driver import**: `_ "github.com/distribution/distribution/v3/registry/storage/driver/s3-aws"` — the blank import is required.
- **Hold DID lookups use the database** (`manifests` table), not an in-memory cache — persistent across restarts.
- **Context keys** (`auth.method`, `puller.did`) exist because `Repository()` receives a `context.Context` from the distribution library interface — context values are the only way to pass data from HTTP middleware into the distribution middleware layer. Both are copied into `RegistryContext` inside `Repository()`.
- **OAuth key types**: the AppView uses P-256 (ES256) for OAuth, not K-256 like PDS keys.
- **Confidential vs public clients**: production uses a P-256 key at `/var/lib/atcr/oauth/client.key` (auto-generated); localhost is always a public client.
- **Hold stats are ATProto records in the CAR store** — `io.atcr.hold.stats` records are stored via `repomgr.PutRecord()`, not in SQLite, and are lost if the CAR store is lost without a backup.
- **PLC auto-update on boot** — when using did:plc, `LoadOrCreateDID()` calls `EnsurePLCCurrent()` on every startup. If the local signing key or URL doesn't match plc.directory, it auto-updates (requires the rotation key on disk).
- **The hold CAR store is the source of truth** — captain, crew, layer, stats, scan records, Bluesky posts, and profiles are all ATProto records in the CAR store. SQLite holds only the records index and events.

## AppView Web UI (`pkg/appview/`)

The AppView includes a web interface for browsing the registry:

**Features:**
- Repository browsing and search
- Star/favorite repositories
- Pull count tracking
- User profiles and settings
- OAuth-based authentication for web users

**Database Layer** (`pkg/appview/db/`):
- SQLite database for metadata (stars, pulls, repository info)
- Schema migrations via SQL files in `pkg/appview/db/schema.go`
- Stores: OAuth sessions, device flows, repository metadata
- **NOTE:** Simple SQLite for the MVP. For production multi-instance deployments, use PostgreSQL.

**Jetstream Integration** (`pkg/appview/jetstream/`):
- Consumes the ATProto Jetstream for real-time updates
- Backfills repository records from PDSes
- Indexes manifests, tags, and repository metadata
- A worker processes incoming events

**Web Handlers** (`pkg/appview/handlers/`):
- `home.go` - Landing page
- `repository.go` - Repository detail pages
- `search.go` - Search functionality
- `auth.go` - OAuth login/logout for the web
- `settings.go` - User settings management
- `api.go` - JSON API endpoints

**Static Assets** (`pkg/appview/static/`, `pkg/appview/templates/`):
- Templates use Go html/template
- JavaScript in `static/js/app.js`
- Minimal CSS for a clean UI

## ATProto Storage Model

Manifests are stored as records with this structure:

```json
{
  "$type": "io.atcr.manifest",
  "repository": "myapp",
  "digest": "sha256:abc123...",
  "holdEndpoint": "https://hold1.alice.com",
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": { "digest": "sha256:...", "size": 1234 },
  "layers": [
    { "digest": "sha256:...", "size": 5678 }
  ],
  "createdAt": "2025-09-30T..."
}
```

Record key = manifest digest (without the algorithm prefix); collection = `io.atcr.manifest`.

## Sailor Profile System

ATCR uses a "sailor profile" to manage user preferences for hold (storage) selection.

**Profile Record** (`io.atcr.sailor.profile`):

```json
{
  "$type": "io.atcr.sailor.profile",
  "defaultHold": "https://hold1.alice.com",
  "createdAt": "2025-10-02T...",
  "updatedAt": "2025-10-02T..."
}
```

**Profile Management:**
- Created automatically on first authentication (OAuth or Basic Auth)
- If the AppView has a default hold configured, the profile gets it as `defaultHold`
- Users can update their profile to change the default hold (future: via the UI)
- Setting `defaultHold` to null opts out of defaults (use your own holds or the AppView default)

**Hold Resolution Priority:**
1. **Profile's `defaultHold`** - the user's explicit preference
2. **User's `io.atcr.hold` records** - the user's own holds (legacy)
3. **AppView's default hold** - fallback

This ensures:
- Users can join shared holds by setting their profile's `defaultHold`
- Users can opt out of defaults (set `defaultHold` to null)
- The URL structure remains `atcr.io/<owner>/<image>` (ownership-based, not hold-based)
- Hold choice is transparent infrastructure (like choosing an S3 region)

## Key Design Decisions

1. **No fork of distribution**: uses distribution as a library, extended via middleware
2. **Hybrid storage**: manifests in ATProto (small, federated), blobs in S3 or BYOS (cheap, scalable)
3. **Content addressing**: manifests stored by digest, blobs deduplicated globally
4. **ATProto-native**: manifests are first-class ATProto records, discoverable via the AT Protocol
5. **OCI compliant**: fully compatible with Docker/containerd/podman
6. **Account-agnostic AppView**: the server validates any user's token and queries their PDS for config
7. **BYOS architecture**: users can deploy their own storage service; the AppView just routes
8. **OAuth with DPoP**: full ATProto OAuth implementation with mandatory DPoP proofs
9. **Sailor profile system**: user preferences for hold selection, transparent to image ownership
10. **Historical hold references**: manifests record the hold where blobs were pushed, for immutable blob location tracking

## Testing Strategy

When writing tests:
- Mock the ATProto client for manifest operations
- Mock the S3 driver for blob operations
- Test name resolution independently
- Integration tests require a real PDS + S3

## Common Tasks

**Adding a new ATProto record type:**
1. Define the schema in `pkg/atproto/lexicon.go`
2. Add a collection constant (e.g., `MyCollection = "io.atcr.my-type"`)
3. Add a constructor function (e.g., `NewMyRecord()`)
4. Update client methods if needed

**Modifying storage routing:**

1. Edit `pkg/appview/storage/routing_repository.go`
2. Update the `Blobs()` or `Manifests()` method
3. Context is passed via the `RegistryContext` struct (`pkg/appview/storage/context.go`)

**Changing name resolution:**

1. Modify `pkg/atproto/resolver.go` for DID/handle resolution
2. Update `pkg/appview/middleware/registry.go` if changing routing
3. `findHoldDID()` checks: sailor profile → `io.atcr.hold` records (legacy) → default hold DID

**Working with the OAuth client:**

- Self-contained: pass `baseURL`; the client handles client ID, redirect URI, and scopes
- Standard callback path: `/auth/oauth/callback` (used by all ATCR components)
- See `pkg/auth/oauth/client.go` for `NewClientApp()` and refresher setup

**Adding BYOS support for a user:**

1. User configures the hold YAML (storage credentials, public URL, owner DID)
2. User runs the hold service, which creates captain + crew records in its embedded PDS
3. User sets the sailor profile's `defaultHold` to the hold's DID
4. AppView automatically routes blobs to the user's storage; no AppView changes are needed

**Supporting a new storage backend:**

1. Ensure the driver is registered in the `cmd/hold/main.go` imports
2. Distribution supports: S3, Azure, GCS, Swift, filesystem, OSS
3. For custom drivers: implement the `storagedriver.StorageDriver` interface
4. Add a case to `buildStorageConfig()` in `cmd/hold/main.go`
5. Update `.env.example` with the new driver's env vars

**Working with the database:**

- **Base schema**: `pkg/appview/db/schema.sql` — source of truth for fresh installs
- **Migrations**: `pkg/appview/db/migrations/*.yaml` — only for ALTER/UPDATE/DELETE on existing DBs
- **Adding new tables**: Add to `schema.sql` only (no migration needed)
- **Altering tables**: Create a migration AND update `schema.sql` to keep them in sync

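For a fresh install, the flow above reduces to applying the base schema directly; a runnable sketch (the inline `CREATE TABLE` is a stand-in for the real `schema.sql`, which is not reproduced here):

```shell
# Fresh-install path: apply the base schema, no migrations involved.
# The table below is a stand-in schema for illustration only.
DB="$(mktemp -d)/atcr.db"
sqlite3 "$DB" 'CREATE TABLE users (did TEXT PRIMARY KEY, handle TEXT);'
sqlite3 "$DB" "INSERT INTO users VALUES ('did:plc:test', 'test.user');"
sqlite3 "$DB" "SELECT handle FROM users WHERE did = 'did:plc:test';"
# → test.user
```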
**Hold DID recovery/migration (did:plc):**

1. Back up `rotation.key` and the DID string (from `did.txt` or plc.directory)
2. Set `database.did_method: plc` and `database.did: "did:plc:..."` in config
3. Provide `rotation_key` (multibase K-256 private key) — the signing key auto-generates if missing
4. On boot: `LoadOrCreateDID()` adopts the DID, and `EnsurePLCCurrent()` auto-updates the PLC directory if keys/URL changed
5. Without the rotation key: the hold boots but logs a warning about the PLC mismatch

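Steps 2–3 translate into a config fragment along these lines; the DID and key values are placeholders, and nesting `rotation_key` under `database` is an assumption about the config layout:

```yaml
# Sketch only — placeholder values; verify key placement against your config
database:
  did_method: plc
  did: "did:plc:exampleonly"
  # Multibase K-256 private key backed up from the original deployment
  rotation_key: "z3uExamplePlaceholder"
```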
**Adding web UI features:**

- Add a handler in `pkg/appview/handlers/`
- Register the route in `pkg/appview/routes/routes.go`
- Create a template in `pkg/appview/templates/pages/`
- Use the existing auth middleware for protected routes
- API endpoints return JSON; pages return HTML

## Important Context Values

When working with the codebase, these context values are used for routing:

- `atproto.did` - Resolved DID for the user (e.g., `did:plc:alice123`)
- `atproto.pds` - User's PDS endpoint (e.g., `https://bsky.social`)
- `atproto.identity` - Original identity string (handle or DID)
- `storage.endpoint` - Storage service URL (if the user has an `io.atcr.registry` record)
- `auth.did` - Authenticated DID from the validated token

## Documentation References

- **BYOS Architecture**: `docs/BYOS.md`
- **OAuth Implementation**: `docs/OAUTH.md`
- **Hold Service**: `docs/hold.md`
- **AppView**: `docs/appview.md`
- **Hold XRPC Endpoints**: `docs/HOLD_XRPC_ENDPOINTS.md`
- **Development Guide**: `docs/DEVELOPMENT.md`
- **Billing/Quotas**: `docs/BILLING.md`, `docs/QUOTAS.md`
- **Scanning**: `docs/SBOM_SCANNING.md`
- **ATProto OAuth Spec**: https://atproto.com/specs/oauth
- **OCI Distribution Spec**: https://github.com/opencontainers/distribution-spec
- **DPoP RFC**: https://datatracker.ietf.org/doc/html/rfc9449
- **PAR RFC**: https://datatracker.ietf.org/doc/html/rfc9126
- **PKCE RFC**: https://datatracker.ietf.org/doc/html/rfc7636

Dockerfile.appview

@@ -1,45 +1,53 @@

# Production build for ATCR AppView
# Result: ~30MB scratch image with static binary
FROM docker.io/golang:1.25.7-trixie AS builder

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -y --no-install-recommends libsqlite3-dev nodejs npm && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY go.mod go.sum ./
RUN go mod download

COPY . .

RUN npm ci
RUN go generate ./...

RUN CGO_ENABLED=1 go build \
    -ldflags="-s -w -linkmode external -extldflags '-static'" \
    -tags sqlite_omit_load_extension \
    -trimpath \
    -o atcr-appview ./cmd/appview

RUN CGO_ENABLED=0 go build \
    -ldflags="-s -w" \
    -trimpath \
    -o healthcheck ./cmd/healthcheck

# Minimal runtime
FROM scratch

# Copy CA certificates for HTTPS (PDS, Jetstream, relay connections)
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy timezone data for timestamp formatting
COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo
COPY --from=builder /app/atcr-appview /atcr-appview
COPY --from=builder /app/healthcheck /healthcheck

# Expose ports
EXPOSE 5000

# OCI image annotations
LABEL org.opencontainers.image.title="ATCR AppView" \
      org.opencontainers.image.description="ATProto Container Registry - OCI-compliant registry using AT Protocol for manifest storage" \
      org.opencontainers.image.authors="ATCR Contributors" \
      org.opencontainers.image.source="https://tangled.org/evan.jarrett.net/at-container-registry" \
      org.opencontainers.image.documentation="https://tangled.org/evan.jarrett.net/at-container-registry" \
      org.opencontainers.image.licenses="MIT" \
      org.opencontainers.image.version="0.1.0" \
      io.atcr.icon="https://imgs.blue/evan.jarrett.net/1TpTNrRelfloN2emuWZDrWmPT0o93bAjEnozjD6UPgoVV9m4" \
      io.atcr.readme="https://tangled.org/evan.jarrett.net/at-container-registry/raw/main/docs/appview.md"

ENTRYPOINT ["/atcr-appview"]
CMD ["serve"]

Dockerfile.dev (new file, 23 lines)
@@ -0,0 +1,23 @@

# Development image with Air hot reload
# Build: docker build -f Dockerfile.dev -t atcr-dev .
# Run: docker run -v $(pwd):/app -p 5000:5000 atcr-dev
FROM docker.io/golang:1.25.7-trixie

ARG AIR_CONFIG=.air.toml

ENV DEBIAN_FRONTEND=noninteractive
ENV AIR_CONFIG=${AIR_CONFIG}

RUN apt-get update && \
    apt-get install -y --no-install-recommends sqlite3 libsqlite3-dev curl nodejs npm && \
    rm -rf /var/lib/apt/lists/* && \
    go install github.com/air-verse/air@latest

WORKDIR /app

# Copy go.mod first for layer caching
COPY go.mod go.sum ./
RUN go mod download

# For development: source mounted as volume, Air handles builds
CMD ["sh", "-c", "air -c ${AIR_CONFIG}"]

@@ -1,4 +1,14 @@

FROM docker.io/golang:1.25.7-trixie AS builder

# Build argument to enable Stripe billing integration
# Usage: docker build --build-arg BILLING_ENABLED=true -f Dockerfile.hold .
ARG BILLING_ENABLED=false

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -y --no-install-recommends sqlite3 libsqlite3-dev nodejs npm && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /build

@@ -7,10 +17,31 @@ RUN go mod download

COPY . .

# Build frontend assets (Tailwind CSS, JS bundle, SVG icons)
RUN npm ci
RUN go generate ./...

# Conditionally add billing tag based on build arg
RUN if [ "$BILLING_ENABLED" = "true" ]; then \
        echo "Building with Stripe billing support"; \
        CGO_ENABLED=1 go build \
            -ldflags="-s -w -linkmode external -extldflags '-static'" \
            -tags "sqlite_omit_load_extension,billing" \
            -trimpath \
            -o atcr-hold ./cmd/hold; \
    else \
        echo "Building without billing support"; \
        CGO_ENABLED=1 go build \
            -ldflags="-s -w -linkmode external -extldflags '-static'" \
            -tags sqlite_omit_load_extension \
            -trimpath \
            -o atcr-hold ./cmd/hold; \
    fi

RUN CGO_ENABLED=0 go build \
    -ldflags="-s -w" \
    -trimpath \
    -o healthcheck ./cmd/healthcheck

# ==========================================
# Stage 2: Minimal FROM scratch runtime
# ==========================================
@@ -21,8 +52,9 @@ FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy timezone data for timestamp formatting
COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo
# Copy optimized binary (SQLite embedded)
COPY --from=builder /build/atcr-hold /atcr-hold
COPY --from=builder /build/healthcheck /healthcheck

# Expose default port
EXPOSE 8080

@@ -31,10 +63,12 @@ EXPOSE 8080

LABEL org.opencontainers.image.title="ATCR Hold Service" \
      org.opencontainers.image.description="ATCR Hold Service - Bring Your Own Storage component for ATCR" \
      org.opencontainers.image.authors="ATCR Contributors" \
      org.opencontainers.image.source="https://tangled.org/evan.jarrett.net/at-container-registry" \
      org.opencontainers.image.documentation="https://tangled.org/evan.jarrett.net/at-container-registry" \
      org.opencontainers.image.licenses="MIT" \
      org.opencontainers.image.version="0.1.0" \
      io.atcr.icon="https://imgs.blue/evan.jarrett.net/1TpTOdtS60GdJWBYEqtK22y688jajbQ9a5kbYRFtwuqrkBAE" \
      io.atcr.readme="https://tangled.org/evan.jarrett.net/at-container-registry/raw/main/docs/hold.md"

ENTRYPOINT ["/atcr-hold"]
CMD ["serve"]

Dockerfile.scanner (new file, 53 lines)
@@ -0,0 +1,53 @@

FROM docker.io/golang:1.25.7-trixie AS builder

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -y --no-install-recommends sqlite3 libsqlite3-dev && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /build

# Disable workspace mode — go.work references modules not in the Docker context
ENV GOWORK=off

# Copy module definitions first for layer caching
COPY go.mod go.sum ./
COPY scanner/go.mod scanner/go.sum ./scanner/

RUN cd scanner && go mod download

# Copy full source
COPY . .

RUN cd scanner && CGO_ENABLED=1 go build \
    -ldflags="-s -w -linkmode external -extldflags '-static'" \
    -trimpath \
    -o /build/atcr-scanner ./cmd/scanner

# ==========================================
# Stage 2: Minimal FROM scratch runtime
# ==========================================
FROM scratch

# Copy CA certificates for HTTPS (presigned URL downloads)
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy timezone data for timestamp formatting
COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo
# Copy binary
COPY --from=builder /build/atcr-scanner /atcr-scanner

# Expose health endpoint port
EXPOSE 9090

# OCI image annotations
LABEL org.opencontainers.image.title="ATCR Scanner" \
      org.opencontainers.image.description="ATCR Scanner - container image vulnerability scanner with Syft and Grype" \
      org.opencontainers.image.authors="ATCR Contributors" \
      org.opencontainers.image.source="https://tangled.org/evan.jarrett.net/at-container-registry" \
      org.opencontainers.image.documentation="https://tangled.org/evan.jarrett.net/at-container-registry" \
      org.opencontainers.image.licenses="MIT" \
      org.opencontainers.image.version="0.1.0"

ENTRYPOINT ["/atcr-scanner"]
CMD ["serve"]

Formula/docker-credential-atcr.rb (new file, 59 lines)
@@ -0,0 +1,59 @@

# typed: false
# frozen_string_literal: true

class DockerCredentialAtcr < Formula
  desc "Docker credential helper for ATCR (ATProto Container Registry)"
  homepage "https://atcr.io"
  version "0.0.1"
  license "MIT"

  on_macos do
    on_arm do
      url "https://tangled.org/evan.jarrett.net/at-container-registry/tags/v0.0.1/download/docker-credential-atcr_0.0.1_Darwin_arm64.tar.gz"
      sha256 "REPLACE_WITH_SHA256"
    end
    on_intel do
      url "https://tangled.org/evan.jarrett.net/at-container-registry/tags/v0.0.1/download/docker-credential-atcr_0.0.1_Darwin_x86_64.tar.gz"
      sha256 "REPLACE_WITH_SHA256"
    end
  end

  on_linux do
    on_arm do
      url "https://tangled.org/evan.jarrett.net/at-container-registry/tags/v0.0.1/download/docker-credential-atcr_0.0.1_Linux_arm64.tar.gz"
      sha256 "REPLACE_WITH_SHA256"
    end
    on_intel do
      url "https://tangled.org/evan.jarrett.net/at-container-registry/tags/v0.0.1/download/docker-credential-atcr_0.0.1_Linux_x86_64.tar.gz"
      sha256 "REPLACE_WITH_SHA256"
    end
  end

  def install
    bin.install "docker-credential-atcr"
  end

  test do
    assert_match version.to_s, shell_output("#{bin}/docker-credential-atcr version 2>&1")
  end

  def caveats
    <<~EOS
      To configure Docker to use the ATCR credential helper, add the following
      to your ~/.docker/config.json:

        {
          "credHelpers": {
            "atcr.io": "atcr"
          }
        }

      Or run: docker-credential-atcr configure-docker

      To authenticate with ATCR:
        docker push atcr.io/<your-handle>/<image>:latest

      Configuration is stored in: ~/.atcr/config.json
    EOS
  end
end

@@ -37,13 +37,22 @@ Invoke-WebRequest -Uri https://atcr.io/install.ps1 -OutFile install.ps1

.\install.ps1
```

### Using Homebrew (macOS and Linux)

```bash
# Add the ATCR tap
brew tap atcr-io/tap

# Install the credential helper
brew install docker-credential-atcr
```

The Homebrew formula supports:

- **macOS**: Intel (x86_64) and Apple Silicon (arm64)
- **Linux**: x86_64 and arm64

Homebrew will automatically download the correct binary for your platform.

### Manual Installation

1. **Download the binary** for your platform from [GitHub Releases](https://github.com/atcr-io/atcr/releases)

Makefile (new file, 135 lines)
@@ -0,0 +1,135 @@

# ATCR Makefile
# Build targets for the ATProto Container Registry

.PHONY: all build build-appview build-hold build-credential-helper build-oauth-helper \
	generate test test-race test-verbose lint lex-lint clean help install-credential-helper \
	develop develop-detached develop-down dev \
	docker docker-appview docker-hold docker-scanner

.DEFAULT_GOAL := help

help: ## Show this help message
	@echo "ATCR Build Targets:"
	@echo ""
	@awk 'BEGIN {FS = ":.*##"; printf ""} /^[a-zA-Z_-]+:.*?##/ { printf "  \033[36m%-28s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)

all: generate build ## Generate assets and build all binaries (default)

# Generated asset files
GENERATED_ASSETS = \
	pkg/appview/public/js/htmx.min.js \
	pkg/appview/public/js/lucide.min.js \
	pkg/appview/licenses/spdx-licenses.json

generate: $(GENERATED_ASSETS) ## Run go generate to download vendor assets

$(GENERATED_ASSETS):
	@echo "→ Generating vendor assets and code..."
	go generate ./...

##@ Build Targets

build: build-appview build-hold build-credential-helper ## Build all binaries

build-appview: $(GENERATED_ASSETS) ## Build appview binary only
	@echo "→ Building appview..."
	@mkdir -p bin
	go build -o bin/atcr-appview ./cmd/appview

build-hold: $(GENERATED_ASSETS) ## Build hold binary only
	@echo "→ Building hold..."
	@mkdir -p bin
	go build -o bin/atcr-hold ./cmd/hold

build-credential-helper: ## Build credential helper only
	@echo "→ Building credential helper..."
	@mkdir -p bin
	go build -o bin/docker-credential-atcr ./cmd/credential-helper

build-oauth-helper: ## Build OAuth helper only
	@echo "→ Building OAuth helper..."
	@mkdir -p bin
	go build -o bin/oauth-helper ./cmd/oauth-helper

##@ Test Targets

test: ## Run all tests
	@echo "→ Running tests..."
	go test -cover ./...

test-race: ## Run tests with race detector
	@echo "→ Running tests with race detector..."
	go test -race ./...

test-verbose: ## Run tests with verbose output
	@echo "→ Running tests with verbose output..."
	go test -v ./...

##@ Quality Targets

.PHONY: check-golangci-lint
check-golangci-lint:
	@which golangci-lint > /dev/null || (echo "→ Installing golangci-lint..." && go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest)

lint: check-golangci-lint ## Run golangci-lint
	@echo "→ Running golangci-lint..."
	golangci-lint run ./...

lex-lint: ## Lint ATProto lexicon schemas
	goat lex lint ./lexicons/

##@ Install Targets

install-credential-helper: build-credential-helper ## Install credential helper to /usr/local/sbin
	@echo "→ Installing credential helper to /usr/local/sbin..."
	install -m 755 bin/docker-credential-atcr /usr/local/sbin/docker-credential-atcr
	@echo "✓ Installed docker-credential-atcr to /usr/local/sbin/"

##@ Development Targets

dev: $(GENERATED_ASSETS) ## Run AppView locally with Air hot reload
	@which air > /dev/null || (echo "→ Installing Air..." && go install github.com/air-verse/air@latest)
	air -c .air.toml

##@ Docker Targets

docker: docker-appview docker-hold docker-scanner ## Build all Docker images

docker-appview: ## Build appview Docker image
	@echo "→ Building appview Docker image..."
	docker build -f Dockerfile.appview -t atcr.io/atcr.io/appview:latest .

docker-hold: ## Build hold Docker image
	@echo "→ Building hold Docker image..."
	docker build -f Dockerfile.hold -t atcr.io/atcr.io/hold:latest .

docker-scanner: ## Build scanner Docker image
	@echo "→ Building scanner Docker image..."
	docker build -f Dockerfile.scanner -t atcr.io/atcr.io/scanner:latest .

develop: ## Build and start docker-compose with Air hot reload
	@echo "→ Building Docker images..."
	docker-compose build
	@echo "→ Starting docker-compose with hot reload..."
	docker-compose up

develop-detached: ## Build and start docker-compose with hot reload (detached)
	@echo "→ Building Docker images..."
	docker-compose build
	@echo "→ Starting docker-compose with hot reload (detached)..."
	docker-compose up -d
	@echo "✓ Services started in background with hot reload"
	@echo "  AppView: http://localhost:5000"
	@echo "  Hold:    http://localhost:8080"

develop-down: ## Stop docker-compose services
	@echo "→ Stopping docker-compose..."
	docker-compose down

##@ Utility Targets

clean: ## Remove built binaries and generated assets
	@echo "→ Cleaning build artifacts..."
	rm -rf bin/
	rm -f pkg/appview/licenses/spdx-licenses.json
	@echo "✓ Clean complete"

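The `help` target's awk one-liner drives the self-documenting pattern above; it can be exercised standalone with sample lines (regex simplified and color codes dropped for readability):

```shell
# Extract "target  description" pairs from `##` comments, and section
# headers from `##@` lines — same idea as the Makefile's help target
printf '%s\n' \
  '##@ Build Targets' \
  'build: build-appview ## Build all binaries' \
  'test: ## Run all tests' |
awk 'BEGIN {FS = ":.*##"}
  /^##@/ { printf "\n%s\n", substr($0, 5) }
  /^[a-zA-Z_-]+:.*##/ { printf "  %-28s %s\n", $1, $2 }'
```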
README.md (82 lines changed)
@@ -1,5 +1,7 @@

# ATCR - ATProto Container Registry

## https://atcr.io

An OCI-compliant container registry that uses the AT Protocol for manifest storage and S3 for blob storage.

## What is ATCR?

@@ -19,26 +21,29 @@ atcr.io/did:plc:xyz123/myapp:latest

1. **AppView** - Registry API + web UI
   - Serves OCI Distribution API (Docker push/pull)
   - Resolves handles/DIDs to PDS endpoints
   - Routes manifests to user's PDS, blobs to hold services
   - Web interface for browsing/search

2. **Hold Service** - Storage service with embedded PDS (optional BYOS)
   - Each hold has a full ATProto PDS for access control (captain + crew records)
   - Identified by did:web (e.g., `did:web:hold01.atcr.io`)
   - Generates presigned URLs for S3/Storj/Minio/etc.
   - Users can deploy their own storage and control access via crew membership

3. **Credential Helper** - Client authentication
   - ATProto OAuth (DPoP handled transparently)
   - Automatic authentication on first push/pull

**Storage model:**

- Manifests → ATProto records in user's PDS (small JSON, includes `holdDid` reference)
- Blobs → Hold services via XRPC multipart upload (large binaries, stored in S3/etc.)
- AppView uses service tokens to communicate with holds on behalf of users

## Features

- ✅ **OCI-compliant** - Works with Docker, containerd, podman
- ✅ **Decentralized** - You own your manifest data via your PDS
- ✅ **ATProto OAuth** - Secure authentication (DPoP-compliant)
- ✅ **BYOS** - Deploy your own storage service
- ✅ **Web UI** - Browse, search, star repositories
- ✅ **Multi-backend** - S3, Storj, Minio, Azure, GCS, filesystem

@@ -72,30 +77,33 @@ See **[INSTALLATION.md](./INSTALLATION.md)** for detailed installation instructi

### Running Your Own AppView

**Using Docker Compose:**

```bash
cp .env.appview.example .env.appview
# Edit .env.appview with your configuration
docker-compose up -d
```

**Local development:**

```bash
# Build
go build -o bin/atcr-appview ./cmd/appview
go build -o bin/atcr-hold ./cmd/hold

# Generate a config file with all defaults
./bin/atcr-appview config init config-appview.yaml
# Edit config-appview.yaml — set server.default_hold_did at minimum

# Run
./bin/atcr-appview serve --config config-appview.yaml
```

**Using Docker:**

```bash
docker build -f Dockerfile.appview -t atcr-appview:latest .
docker run -d -p 5000:5000 \
  -v ./config-appview.yaml:/config.yaml:ro \
  -v atcr-data:/var/lib/atcr \
  atcr-appview:latest serve --config /config.yaml
```

See **[deploy/README.md](./deploy/README.md)** for production deployment.

### Running Your Own Hold (BYOS Storage)

See **[docs/hold.md](./docs/hold.md)** for deploying your own storage backend.

## Development

### Building from Source

@@ -117,23 +125,43 @@ go test -race ./...

cmd/
├── appview/            # Registry server + web UI
├── hold/               # Storage service (BYOS)
├── credential-helper/  # Docker credential helper
├── oauth-helper/       # OAuth debug tool
├── healthcheck/        # HTTP health check (for Docker)
├── db-migrate/         # SQLite → libsql migration
├── usage-report/       # Hold storage usage report
├── record-query/       # Query ATProto relay by collection
└── s3-test/            # S3 connectivity test

pkg/
├── appview/
│   ├── db/             # SQLite database (migrations, queries, stores)
│   ├── handlers/       # HTTP handlers (home, repo, search, auth, settings)
│   ├── holdhealth/     # Hold service health checker
│   ├── jetstream/      # ATProto Jetstream consumer
│   ├── middleware/     # Auth & registry middleware
│   ├── ogcard/         # OpenGraph image generation
│   ├── readme/         # Repository README fetcher
│   ├── routes/         # HTTP route registration
│   ├── storage/        # Storage routing (blob proxy, manifest store)
│   ├── public/         # Static assets (JS, CSS, install scripts)
│   └── templates/      # HTML templates
├── atproto/            # ATProto client, records, manifest/tag stores
├── auth/
│   ├── oauth/          # OAuth client, refresher, storage
│   ├── token/          # JWT issuer, validator, claims
│   ├── atproto/        # Session validation
│   └── holdlocal/      # Local hold authorization
├── config/             # Config marshaling (commented YAML)
├── hold/
│   ├── admin/          # Admin web UI
│   ├── billing/        # Stripe billing integration
│   ├── db/             # Vendored carstore (go-libsql)
│   ├── gc/             # Garbage collection
│   ├── oci/            # OCI upload endpoints
│   ├── pds/            # Embedded PDS (DID, captain, crew, stats, scans)
│   └── quota/          # Storage quotas
├── logging/            # Structured logging + remote shipping
└── s3/                 # S3 client utilities
```

## License

@@ -1,102 +0,0 @@
package main

import (
	"database/sql"
	"os"
	"path/filepath"
	"testing"

	"atcr.io/pkg/appview/db"
)

func TestAuthorizerBlocksSensitiveTables(t *testing.T) {
	// Create temporary database
	tmpDir := t.TempDir()
	dbPath := filepath.Join(tmpDir, "test.db")

	// Set environment for database path
	os.Setenv("ATCR_UI_DATABASE_PATH", dbPath)
	defer os.Unsetenv("ATCR_UI_DATABASE_PATH")

	// Initialize database (creates schema)
	database, err := db.InitDB(dbPath)
	if err != nil {
		t.Fatalf("Failed to initialize database: %v", err)
	}
	defer database.Close()

	// Create some test data in sensitive tables
	_, err = database.Exec(`
		INSERT INTO oauth_sessions (session_key, account_did, session_id, session_data, created_at, updated_at)
		VALUES ('test-key', 'did:plc:test', 'test-session', 'secret-token-data', datetime('now'), datetime('now'))
	`)
	if err != nil {
		t.Fatalf("Failed to insert test data: %v", err)
	}

	_, err = database.Exec(`
		INSERT INTO users (did, handle, pds_endpoint, avatar, last_seen)
		VALUES ('did:plc:test', 'test.user', 'https://pds.example.com', '', datetime('now'))
	`)
	if err != nil {
		t.Fatalf("Failed to insert test user: %v", err)
	}

	// Open read-only connection with authorizer (using our custom driver)
	readOnlyDB, err := sql.Open("sqlite3_readonly_public", "file:"+dbPath+"?mode=ro")
	if err != nil {
		t.Fatalf("Failed to open read-only database: %v", err)
	}
	defer readOnlyDB.Close()

	// Test 1: Should be able to read from public tables (users)
	t.Run("AllowPublicTableRead", func(t *testing.T) {
		var handle string
		err := readOnlyDB.QueryRow("SELECT handle FROM users WHERE did = ?", "did:plc:test").Scan(&handle)
		if err != nil {
			t.Errorf("Should be able to read from public table 'users': %v", err)
		}
		if handle != "test.user" {
			t.Errorf("Expected handle 'test.user', got '%s'", handle)
		}
	})

	// Test 2: Should NOT be able to read from sensitive tables (oauth_sessions)
	t.Run("BlockSensitiveTableRead", func(t *testing.T) {
		var sessionData string
		err := readOnlyDB.QueryRow("SELECT session_data FROM oauth_sessions WHERE session_key = ?", "test-key").Scan(&sessionData)
		if err == nil {
			t.Errorf("Should NOT be able to read from sensitive table 'oauth_sessions', but got data: %s", sessionData)
		}
		// SQLite returns a "not authorized" error when the authorizer denies access
		if err != nil && err.Error() != "not authorized" {
			t.Logf("Got expected error (but different message): %v", err)
		}
	})

	// Test 3: Should NOT be able to read from ui_sessions
	t.Run("BlockUISessionsTableRead", func(t *testing.T) {
		rows, err := readOnlyDB.Query("SELECT * FROM ui_sessions LIMIT 1")
		if err == nil {
			rows.Close()
			t.Error("Should NOT be able to read from sensitive table 'ui_sessions'")
		}
	})

	// Test 4: Should NOT be able to read from devices
	t.Run("BlockDevicesTableRead", func(t *testing.T) {
		rows, err := readOnlyDB.Query("SELECT * FROM devices LIMIT 1")
		if err == nil {
			rows.Close()
			t.Error("Should NOT be able to read from sensitive table 'devices'")
		}
	})

	// Test 5: Should NOT be able to write to any table (read-only mode + authorizer)
	t.Run("BlockAllWrites", func(t *testing.T) {
		_, err := readOnlyDB.Exec("INSERT INTO users (did, handle, pds_endpoint, avatar, last_seen) VALUES ('did:plc:test2', 'test2', 'https://pds.example.com', '', datetime('now'))")
		if err == nil {
			t.Error("Should NOT be able to write to any table in read-only mode")
		}
	})
}
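The decision the test above exercises reduces to a table-name allow/deny check. A minimal, dependency-free sketch of that logic (the `sqliteOK`/`sqliteDeny` constants here are stand-ins for go-sqlite3's `SQLITE_OK`/`SQLITE_DENY` result codes; table names come from the test):

```go
package main

import "fmt"

// Stand-ins for the go-sqlite3 authorizer result codes.
const (
	sqliteOK   = 0
	sqliteDeny = 1
)

// Tables that public read-only queries must never touch.
var sensitiveTables = map[string]bool{
	"oauth_sessions":      true,
	"ui_sessions":         true,
	"oauth_auth_requests": true,
	"devices":             true,
	"pending_device_auth": true,
}

// authorize mirrors the decision the SQLite authorizer callback makes:
// deny any statement that names a sensitive table, allow everything else.
func authorize(tableName string) int {
	if sensitiveTables[tableName] {
		return sqliteDeny
	}
	return sqliteOK
}

func main() {
	fmt.Println(authorize("users"))          // 0 (allowed)
	fmt.Println(authorize("oauth_sessions")) // 1 (denied)
}
```

The real callback additionally filters on the SQLite action code, so that only read/write actions are checked.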
@@ -1,213 +0,0 @@
package main

import (
	"fmt"
	"net/url"
	"os"
	"strconv"
	"time"

	"github.com/distribution/distribution/v3/configuration"
)

// loadConfigFromEnv builds a complete configuration from environment variables.
// This follows the same pattern as the hold service (no config files, only env vars).
func loadConfigFromEnv() (*configuration.Configuration, error) {
	config := &configuration.Configuration{}

	// Version
	config.Version = configuration.MajorMinorVersion(0, 1)

	// Logging
	config.Log = buildLogConfig()

	// HTTP server
	httpConfig, err := buildHTTPConfig()
	if err != nil {
		return nil, fmt.Errorf("failed to build HTTP config: %w", err)
	}
	config.HTTP = httpConfig

	// Storage (fake in-memory placeholder; all real storage is proxied)
	config.Storage = buildStorageConfig()

	// Middleware (ATProto resolver)
	defaultHold := os.Getenv("ATCR_DEFAULT_HOLD")
	if defaultHold == "" {
		return nil, fmt.Errorf("ATCR_DEFAULT_HOLD is required")
	}
	config.Middleware = buildMiddlewareConfig(defaultHold)

	// Auth
	baseURL := getBaseURL(httpConfig.Addr)
	authConfig, err := buildAuthConfig(baseURL)
	if err != nil {
		return nil, fmt.Errorf("failed to build auth config: %w", err)
	}
	config.Auth = authConfig

	// Health checks
	config.Health = buildHealthConfig()

	return config, nil
}

// buildLogConfig creates logging configuration from environment variables.
func buildLogConfig() configuration.Log {
	level := getEnvOrDefault("ATCR_LOG_LEVEL", "info")
	formatter := getEnvOrDefault("ATCR_LOG_FORMATTER", "text")

	return configuration.Log{
		Level:     configuration.Loglevel(level),
		Formatter: formatter,
		Fields: map[string]interface{}{
			"service": "atcr-appview",
		},
	}
}

// buildHTTPConfig creates HTTP server configuration from environment variables.
func buildHTTPConfig() (configuration.HTTP, error) {
	addr := getEnvOrDefault("ATCR_HTTP_ADDR", ":5000")
	debugAddr := getEnvOrDefault("ATCR_DEBUG_ADDR", ":5001")

	return configuration.HTTP{
		Addr: addr,
		Headers: map[string][]string{
			"X-Content-Type-Options": {"nosniff"},
		},
		Debug: configuration.Debug{
			Addr: debugAddr,
		},
	}, nil
}

// buildStorageConfig creates a fake in-memory storage config.
// This is required for distribution validation but is never actually used:
// all storage is routed through middleware to ATProto (manifests) and hold services (blobs).
func buildStorageConfig() configuration.Storage {
	storage := configuration.Storage{}

	// Use in-memory storage as a placeholder
	storage["inmemory"] = configuration.Parameters{}

	// Disable upload purging.
	// NOTE: Must use map[interface{}]interface{} for uploadpurging (not configuration.Parameters)
	// because distribution's validation code does a type assertion to map[interface{}]interface{}.
	storage["maintenance"] = configuration.Parameters{
		"uploadpurging": map[interface{}]interface{}{
			"enabled":  false,
			"age":      7 * 24 * time.Hour, // 168h
			"interval": 24 * time.Hour,     // 24h
			"dryrun":   false,
		},
	}

	return storage
}

// buildMiddlewareConfig creates middleware configuration.
func buildMiddlewareConfig(defaultHold string) map[string][]configuration.Middleware {
	return map[string][]configuration.Middleware{
		"registry": {
			{
				Name: "atproto-resolver",
				Options: configuration.Parameters{
					"default_storage_endpoint": defaultHold,
				},
			},
		},
	}
}

// buildAuthConfig creates authentication configuration from environment variables.
func buildAuthConfig(baseURL string) (configuration.Auth, error) {
	// Token configuration
	privateKeyPath := getEnvOrDefault("ATCR_AUTH_KEY_PATH", "/var/lib/atcr/auth/private-key.pem")
	certPath := getEnvOrDefault("ATCR_AUTH_CERT_PATH", "/var/lib/atcr/auth/private-key.crt")

	// Token expiration in seconds (default: 5 minutes)
	expirationStr := getEnvOrDefault("ATCR_TOKEN_EXPIRATION", "300")
	expiration, err := strconv.Atoi(expirationStr)
	if err != nil {
		return configuration.Auth{}, fmt.Errorf("invalid ATCR_TOKEN_EXPIRATION: %w", err)
	}

	// Auto-derive service name from base URL or use env var
	serviceName := getServiceName(baseURL)

	// Auto-derive realm from base URL
	realm := baseURL + "/auth/token"

	return configuration.Auth{
		"token": configuration.Parameters{
			"realm":          realm,
			"service":        serviceName,
			"issuer":         serviceName,
			"rootcertbundle": certPath,
			"privatekey":     privateKeyPath,
			"expiration":     expiration,
		},
	}, nil
}

// buildHealthConfig creates health check configuration.
func buildHealthConfig() configuration.Health {
	return configuration.Health{
		StorageDriver: configuration.StorageDriver{
			Enabled:   true,
			Interval:  10 * time.Second,
			Threshold: 3,
		},
	}
}

// getBaseURL determines the base URL for the service.
// Priority: ATCR_BASE_URL env var, then derived from the HTTP addr.
func getBaseURL(httpAddr string) string {
	baseURL := os.Getenv("ATCR_BASE_URL")
	if baseURL != "" {
		return baseURL
	}

	// Auto-detect from HTTP addr
	if httpAddr[0] == ':' {
		// Just a port, assume localhost
		return fmt.Sprintf("http://127.0.0.1%s", httpAddr)
	}

	// Full address provided
	return fmt.Sprintf("http://%s", httpAddr)
}

// getServiceName extracts the service name from the base URL or an env var.
func getServiceName(baseURL string) string {
	// Check env var first
	if serviceName := os.Getenv("ATCR_SERVICE_NAME"); serviceName != "" {
		return serviceName
	}

	// Try to extract from the base URL
	parsed, err := url.Parse(baseURL)
	if err == nil && parsed.Hostname() != "" {
		hostname := parsed.Hostname()

		// Strip localhost/127.0.0.1 and use the default
		if hostname == "localhost" || hostname == "127.0.0.1" {
			return "atcr.io"
		}

		return hostname
	}

	// Default fallback
	return "atcr.io"
}

// getEnvOrDefault gets an environment variable or returns a default value.
func getEnvOrDefault(key, defaultValue string) string {
	if val := os.Getenv(key); val != "" {
		return val
	}
	return defaultValue
}
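The `uploadpurging` NOTE above hinges on a Go type-assertion subtlety: `map[string]interface{}` and `map[interface{}]interface{}` are distinct types, so a value stored as one never type-asserts to the other. A minimal stdlib sketch (the helper name is illustrative, not from the codebase):

```go
package main

import "fmt"

// assertsAsGenericMap reports whether v type-asserts to
// map[interface{}]interface{}.
func assertsAsGenericMap(v interface{}) bool {
	_, ok := v.(map[interface{}]interface{})
	return ok
}

func main() {
	// A value stored as map[string]interface{} does NOT satisfy a type
	// assertion to map[interface{}]interface{}; the assertion's ok flag
	// is false even though both maps hold the same data.
	fmt.Println(assertsAsGenericMap(map[string]interface{}{"enabled": false})) // false

	// Declaring the keys as interface{} makes the same assertion succeed.
	fmt.Println(assertsAsGenericMap(map[interface{}]interface{}{"enabled": false})) // true
}
```

This is why the config above stores the purge options with `interface{}` keys rather than as `configuration.Parameters`.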
@@ -1,18 +1,102 @@
package main

import (
	"fmt"
	"os"

	"github.com/distribution/distribution/v3/registry"
	_ "github.com/distribution/distribution/v3/registry/auth/token"
	_ "github.com/distribution/distribution/v3/registry/storage/driver/inmemory"
	"github.com/spf13/cobra"

	"atcr.io/pkg/appview"

	// Register our custom middleware
	_ "atcr.io/pkg/appview/middleware"

	// Register built-in themes
	_ "atcr.io/themes/seamark"
)

var configFile string

var serveCmd = &cobra.Command{
	Use:   "serve",
	Short: "Start the ATCR registry server",
	Long: `Start the ATCR registry server with authentication endpoints.

Configuration is loaded in layers: defaults -> YAML file -> environment variables.
Use --config to specify a YAML configuration file.
Environment variables always override file values.`,
	Args: cobra.NoArgs,
	RunE: serveRegistry,
}

var configCmd = &cobra.Command{
	Use:   "config",
	Short: "Configuration management commands",
}

var configInitCmd = &cobra.Command{
	Use:   "init [path]",
	Short: "Generate an example configuration file",
	Long: `Generate an example YAML configuration file with all available options.
If path is provided, writes to that file. Otherwise writes to stdout.`,
	Args: cobra.MaximumNArgs(1),
	RunE: func(cmd *cobra.Command, args []string) error {
		yamlBytes, err := appview.ExampleYAML()
		if err != nil {
			return fmt.Errorf("failed to generate example config: %w", err)
		}
		if len(args) == 1 {
			if err := os.WriteFile(args[0], yamlBytes, 0644); err != nil {
				return fmt.Errorf("failed to write config file: %w", err)
			}
			fmt.Fprintf(os.Stderr, "Wrote example config to %s\n", args[0])
			return nil
		}
		fmt.Print(string(yamlBytes))
		return nil
	},
}

func init() {
	serveCmd.Flags().StringVarP(&configFile, "config", "c", "", "path to YAML configuration file")

	configCmd.AddCommand(configInitCmd)

	// Replace the default serve command with our custom one
	for i, cmd := range registry.RootCmd.Commands() {
		if cmd.Name() == "serve" {
			registry.RootCmd.Commands()[i] = serveCmd
			break
		}
	}

	registry.RootCmd.AddCommand(configCmd)
}

func serveRegistry(cmd *cobra.Command, args []string) error {
	cfg, err := appview.LoadConfig(configFile)
	if err != nil {
		return fmt.Errorf("failed to load config: %w", err)
	}

	branding, err := appview.LookupTheme(cfg.UI.Theme)
	if err != nil {
		return err
	}

	server, err := appview.NewAppViewServer(cfg, branding)
	if err != nil {
		return fmt.Errorf("failed to initialize server: %w", err)
	}

	return server.Serve()
}

func main() {
	// The serve command is registered above via init();
	// just execute the root command.
	if err := registry.RootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
@@ -1,693 +0,0 @@
package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"html/template"
	"net/http"
	"os"
	"os/signal"
	"path/filepath"
	"syscall"
	"time"

	"github.com/distribution/distribution/v3/configuration"
	"github.com/distribution/distribution/v3/registry"
	"github.com/distribution/distribution/v3/registry/handlers"
	sqlite3 "github.com/mattn/go-sqlite3"
	"github.com/spf13/cobra"

	"atcr.io/pkg/appview/middleware"
	"atcr.io/pkg/auth/oauth"
	"atcr.io/pkg/auth/token"

	// UI components
	"atcr.io/pkg/appview"
	"atcr.io/pkg/appview/db"
	uihandlers "atcr.io/pkg/appview/handlers"
	"atcr.io/pkg/appview/jetstream"
	"github.com/gorilla/mux"
)

// sensitiveTables defines tables that should never be accessible from public queries.
var sensitiveTables = map[string]bool{
	"oauth_sessions":      true, // OAuth tokens
	"ui_sessions":         true, // Session IDs
	"oauth_auth_requests": true, // OAuth state
	"devices":             true, // Device secret hashes
	"pending_device_auth": true, // Pending device secrets
}

// readOnlyAuthorizerCallback blocks access to sensitive tables.
func readOnlyAuthorizerCallback(action int, arg1, arg2, dbName string) int {
	// arg1 contains the table name for most operations
	tableName := arg1

	// Block any access to sensitive tables
	if action == sqlite3.SQLITE_READ || action == sqlite3.SQLITE_UPDATE ||
		action == sqlite3.SQLITE_INSERT || action == sqlite3.SQLITE_DELETE ||
		action == sqlite3.SQLITE_SELECT {
		if sensitiveTables[tableName] {
			fmt.Printf("SECURITY: Blocked access to sensitive table '%s' (action=%d)\n", tableName, action)
			return sqlite3.SQLITE_DENY
		}
	}

	// Allow everything else
	return sqlite3.SQLITE_OK
}

var serveCmd = &cobra.Command{
	Use:   "serve",
	Short: "Start the ATCR registry server",
	Long: `Start the ATCR registry server with authentication endpoints.

Configuration is loaded from environment variables.
See .env.appview.example for available environment variables.`,
	Args: cobra.NoArgs,
	RunE: serveRegistry,
}

func init() {
	// Register a custom SQLite driver with an authorizer for read-only public queries
	sql.Register("sqlite3_readonly_public",
		&sqlite3.SQLiteDriver{
			ConnectHook: func(conn *sqlite3.SQLiteConn) error {
				conn.RegisterAuthorizer(readOnlyAuthorizerCallback)
				return nil
			},
		})

	// Replace the default serve command with our custom one
	for i, cmd := range registry.RootCmd.Commands() {
		if cmd.Name() == "serve" {
			registry.RootCmd.Commands()[i] = serveCmd
			break
		}
	}
}

func serveRegistry(cmd *cobra.Command, args []string) error {
	// Load configuration from environment variables
	fmt.Println("Loading configuration from environment variables...")
	config, err := loadConfigFromEnv()
	if err != nil {
		return fmt.Errorf("failed to load config from environment: %w", err)
	}
	fmt.Println("Configuration loaded successfully from environment")

	// Initialize UI database first (required for all stores)
	fmt.Println("Initializing UI database...")
	uiDatabase, uiReadOnlyDB, uiSessionStore := initializeDatabase()
	if uiDatabase == nil {
		return fmt.Errorf("failed to initialize UI database - required for session storage")
	}

	// Initialize OAuth components
	fmt.Println("Initializing OAuth components...")

	// 1. Create OAuth session storage (SQLite-backed)
	oauthStore := db.NewOAuthStore(uiDatabase)
	fmt.Println("Using SQLite for OAuth session storage")

	// 2. Create device store (SQLite-backed)
	deviceStore := db.NewDeviceStore(uiDatabase)
	fmt.Println("Using SQLite for device storage")

	// 3. Get base URL from config or environment
	baseURL := os.Getenv("ATCR_BASE_URL")
	if baseURL == "" {
		// If addr is just a port (e.g., ":5000"), prepend localhost
		addr := config.HTTP.Addr
		if addr[0] == ':' {
			baseURL = fmt.Sprintf("http://127.0.0.1%s", addr)
		} else {
			baseURL = fmt.Sprintf("http://%s", addr)
		}
	}

	fmt.Printf("DEBUG: Base URL for OAuth: %s\n", baseURL)

	// 4. Create OAuth app (indigo client)
	oauthApp, err := oauth.NewApp(baseURL, oauthStore)
	if err != nil {
		return fmt.Errorf("failed to create OAuth app: %w", err)
	}
	fmt.Println("Using full OAuth scopes (including blob: scope)")

	// 5. Create refresher
	refresher := oauth.NewRefresher(oauthApp)

	// 6. Set global refresher for middleware
	middleware.SetGlobalRefresher(refresher)

	// 6.5. Set global database for pull/push metrics tracking
	metricsDB := db.NewMetricsDB(uiDatabase)
	middleware.SetGlobalDatabase(metricsDB)

	// 7. Initialize UI routes with OAuth app, refresher, and device store
	uiTemplates, uiRouter := initializeUIRoutes(uiDatabase, uiReadOnlyDB, uiSessionStore, oauthApp, refresher, baseURL, deviceStore)

	// 8. Create OAuth server
	oauthServer := oauth.NewServer(oauthApp)
	// Connect server to refresher for cache invalidation
	oauthServer.SetRefresher(refresher)
	// Connect UI session store for web login
	if uiSessionStore != nil {
		oauthServer.SetUISessionStore(uiSessionStore)
	}
	// Connect database for user avatar management
	oauthServer.SetDatabase(uiDatabase)

	// 8.5. Extract default hold endpoint and set it on the OAuth server.
	// This is used to create sailor profiles on first login.
	defaultHoldEndpoint := extractDefaultHoldEndpoint(config)
	if defaultHoldEndpoint != "" {
		oauthServer.SetDefaultHoldEndpoint(defaultHoldEndpoint)
		fmt.Printf("OAuth server will create profiles with default hold: %s\n", defaultHoldEndpoint)
	}

	// 9. Initialize auth keys and create token issuer
	var issuer *token.Issuer
	if config.Auth["token"] != nil {
		if err := initializeAuthKeys(config); err != nil {
			return fmt.Errorf("failed to initialize auth keys: %w", err)
		}

		// Create token issuer for auth handlers
		issuer, err = createTokenIssuer(config)
		if err != nil {
			return fmt.Errorf("failed to create token issuer: %w", err)
		}
	}

	// Create registry app (returns http.Handler)
	ctx := context.Background()
	app := handlers.NewApp(ctx, config)

	// Create main HTTP mux
	mux := http.NewServeMux()

	// Mount registry at /v2/
	mux.Handle("/v2/", app)

	// Mount UI routes if enabled
	if uiSessionStore != nil && uiTemplates != nil && uiRouter != nil {
		// Mount static files
		mux.Handle("/static/", http.StripPrefix("/static/", appview.StaticHandler()))

		// Mount UI routes directly at root level
		mux.Handle("/", uiRouter)

		fmt.Printf("UI enabled:\n")
		fmt.Printf("  - Home:     /\n")
		fmt.Printf("  - Settings: /settings\n")
	}

	// Mount OAuth endpoints
	mux.HandleFunc("/auth/oauth/authorize", oauthServer.ServeAuthorize)
	mux.HandleFunc("/auth/oauth/callback", oauthServer.ServeCallback)

	// OAuth client metadata endpoint
	mux.HandleFunc("/client-metadata.json", func(w http.ResponseWriter, r *http.Request) {
		config := oauth.NewClientConfig(baseURL)
		metadata := config.ClientMetadata()

		w.Header().Set("Content-Type", "application/json")
		w.Header().Set("Access-Control-Allow-Origin", "*")
		if err := json.NewEncoder(w).Encode(metadata); err != nil {
			http.Error(w, "Failed to encode metadata", http.StatusInternalServerError)
		}
	})

	// Note: Indigo handles OAuth state cleanup internally via its store

	// Mount auth endpoints if enabled
	if issuer != nil {
		// Basic Auth token endpoint (supports device secrets and app passwords).
		// Reuse defaultHoldEndpoint extracted earlier.
		tokenHandler := token.NewHandler(issuer, deviceStore, defaultHoldEndpoint)
		tokenHandler.RegisterRoutes(mux)

		// Device authorization endpoints (public)
		mux.Handle("/auth/device/code", &uihandlers.DeviceCodeHandler{
			Store:          deviceStore,
			AppViewBaseURL: baseURL,
		})
		mux.Handle("/auth/device/token", &uihandlers.DeviceTokenHandler{
			Store: deviceStore,
		})

		fmt.Printf("Auth endpoints enabled:\n")
		fmt.Printf("  - Basic Auth:  /auth/token (device secrets + app passwords)\n")
		fmt.Printf("  - Device Auth: /auth/device/code\n")
		fmt.Printf("  - Device Auth: /auth/device/token\n")
		fmt.Printf("  - OAuth:       /auth/oauth/authorize\n")
		fmt.Printf("  - OAuth:       /auth/oauth/callback\n")
		fmt.Printf("  - OAuth Meta:  /client-metadata.json\n")
	}

	// Create HTTP server
	server := &http.Server{
		Addr:    config.HTTP.Addr,
		Handler: mux,
	}

	// Handle graceful shutdown
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, os.Interrupt, syscall.SIGTERM)

	// Start server in goroutine
	errChan := make(chan error, 1)
	go func() {
		fmt.Printf("Starting registry server on %s\n", config.HTTP.Addr)
		if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			errChan <- err
		}
	}()

	// Wait for shutdown signal or error
	select {
	case <-stop:
		fmt.Println("Shutting down registry server...")
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		if err := server.Shutdown(shutdownCtx); err != nil {
			return fmt.Errorf("server shutdown error: %w", err)
		}
	case err := <-errChan:
		return fmt.Errorf("server error: %w", err)
	}

	return nil
}

// initializeAuthKeys creates the auth keys if they don't exist.
func initializeAuthKeys(config *configuration.Configuration) error {
	tokenParams, ok := config.Auth["token"]
	if !ok {
		return nil
	}

	privateKeyPath := getStringParam(tokenParams, "privatekey", "/var/lib/atcr/auth/private-key.pem")
	issuerName := getStringParam(tokenParams, "issuer", "atcr.io")
	service := getStringParam(tokenParams, "service", "atcr.io")
	expirationSecs := getIntParam(tokenParams, "expiration", 300)

	// Create issuer (this will generate the key if it doesn't exist)
	_, err := token.NewIssuer(
		privateKeyPath,
		issuerName,
		service,
		time.Duration(expirationSecs)*time.Second,
	)
	if err != nil {
		return fmt.Errorf("failed to initialize token issuer: %w", err)
	}

	fmt.Printf("Auth keys initialized at %s\n", privateKeyPath)
	return nil
}

// createTokenIssuer creates a token issuer for auth handlers.
func createTokenIssuer(config *configuration.Configuration) (*token.Issuer, error) {
	tokenParams, ok := config.Auth["token"]
	if !ok {
		return nil, fmt.Errorf("token auth not configured")
	}

	privateKeyPath := getStringParam(tokenParams, "privatekey", "/var/lib/atcr/auth/private-key.pem")
	issuerName := getStringParam(tokenParams, "issuer", "atcr.io")
	service := getStringParam(tokenParams, "service", "atcr.io")
	expirationSecs := getIntParam(tokenParams, "expiration", 300)

	return token.NewIssuer(
		privateKeyPath,
		issuerName,
		service,
		time.Duration(expirationSecs)*time.Second,
	)
}

// Helper functions to extract values from config parameters.
func getStringParam(params configuration.Parameters, key, defaultValue string) string {
	if v, ok := params[key]; ok {
		if s, ok := v.(string); ok {
			return s
		}
	}
	return defaultValue
}

func getIntParam(params configuration.Parameters, key string, defaultValue int) int {
	if v, ok := params[key]; ok {
		if i, ok := v.(int); ok {
			return i
		}
	}
	return defaultValue
}

// extractDefaultHoldEndpoint extracts the default hold endpoint from middleware config.
func extractDefaultHoldEndpoint(config *configuration.Configuration) string {
	// Navigate through: middleware.registry[].options.default_storage_endpoint
	registryMiddleware, ok := config.Middleware["registry"]
	if !ok {
		return ""
	}

	// Find the atproto-resolver middleware
	for _, mw := range registryMiddleware {
		// Check if this is the atproto-resolver
		if mw.Name != "atproto-resolver" {
			continue
		}

		// Extract options - Options is configuration.Parameters, which is map[string]any
		if mw.Options != nil {
			if endpoint, ok := mw.Options["default_storage_endpoint"].(string); ok {
				return endpoint
			}
		}
	}

	return ""
}

// initializeDatabase initializes the SQLite database and session store.
// Returns: (read-write DB, read-only DB, session store).
func initializeDatabase() (*sql.DB, *sql.DB, *db.SessionStore) {
	// Check if UI is enabled (optional configuration)
	uiEnabled := os.Getenv("ATCR_UI_ENABLED")
	if uiEnabled == "false" {
		return nil, nil, nil
	}

	// Get database path
	dbPath := os.Getenv("ATCR_UI_DATABASE_PATH")
	if dbPath == "" {
		dbPath = "/var/lib/atcr/ui.db"
	}

	// Ensure directory exists
	dbDir := filepath.Dir(dbPath)
	if err := os.MkdirAll(dbDir, 0700); err != nil {
		fmt.Printf("Warning: Failed to create UI database directory: %v\n", err)
		return nil, nil, nil
	}

	// Initialize read-write database (for writes and auth operations)
	database, err := db.InitDB(dbPath)
	if err != nil {
		fmt.Printf("Warning: Failed to initialize UI database: %v\n", err)
		return nil, nil, nil
	}

	// Open a read-only connection for public queries (search, user pages, etc.).
	// Uses the custom driver with a SQLite authorizer that blocks sensitive tables.
	// This prevents accidental writes and blocks access to sensitive tables even if SQL injection occurs.
	readOnlyDB, err := sql.Open("sqlite3_readonly_public", "file:"+dbPath+"?mode=ro")
	if err != nil {
		fmt.Printf("Warning: Failed to open read-only database connection: %v\n", err)
		return nil, nil, nil
	}

	fmt.Printf("UI database (readonly) initialized at %s\n", dbPath)

	// Create SQLite-backed session store
	sessionStore := db.NewSessionStore(database)

	// Start cleanup goroutines for all SQLite stores
	go func() {
		ticker := time.NewTicker(5 * time.Minute)
		defer ticker.Stop()
		for range ticker.C {
			ctx := context.Background()

			// Cleanup UI sessions
			sessionStore.Cleanup()

			// Cleanup OAuth sessions (older than 30 days)
			oauthStore := db.NewOAuthStore(database)
			oauthStore.CleanupOldSessions(ctx, 30*24*time.Hour)
			oauthStore.CleanupExpiredAuthRequests(ctx)

			// Cleanup device pending auths
			deviceStore := db.NewDeviceStore(database)
			deviceStore.CleanupExpired()
		}
	}()

	return database, readOnlyDB, sessionStore
}

// initializeUIRoutes initializes the web UI routes.
// database: read-write connection for auth and writes.
// readOnlyDB: read-only connection for public queries (search, user pages, etc.).
func initializeUIRoutes(database *sql.DB, readOnlyDB *sql.DB, sessionStore *db.SessionStore, oauthApp *oauth.App, refresher *oauth.Refresher, baseURL string, deviceStore *db.DeviceStore) (*template.Template, *mux.Router) {
	// Check if UI is enabled
	uiEnabled := os.Getenv("ATCR_UI_ENABLED")
	if uiEnabled == "false" {
		return nil, nil
	}

	// Load templates
	templates, err := appview.Templates()
	if err != nil {
		fmt.Printf("Warning: Failed to load UI templates: %v\n", err)
		return nil, nil
	}

	// Create router
	router := mux.NewRouter()

	// OAuth login routes (public)
	router.Handle("/auth/oauth/login", &uihandlers.LoginHandler{
		Templates: templates,
	}).Methods("GET")

	router.Handle("/auth/oauth/login", &uihandlers.LoginSubmitHandler{}).Methods("POST")

	// Public routes (with optional auth for navbar).
	// SECURITY: Public pages use the read-only DB.
	router.Handle("/", middleware.OptionalAuth(sessionStore, database)(
		&uihandlers.HomeHandler{
			DB:          readOnlyDB,
			Templates:   templates,
			RegistryURL: uihandlers.TrimRegistryURL(baseURL),
		},
	)).Methods("GET")

	router.Handle("/api/recent-pushes", middleware.OptionalAuth(sessionStore, database)(
		&uihandlers.RecentPushesHandler{
			DB:          readOnlyDB,
			Templates:   templates,
			RegistryURL: uihandlers.TrimRegistryURL(baseURL),
		},
	)).Methods("GET")

	// SECURITY: Search uses the read-only DB to prevent writes and limit access to sensitive tables
	router.Handle("/search", middleware.OptionalAuth(sessionStore, database)(
		&uihandlers.SearchHandler{
			DB:          readOnlyDB,
			Templates:   templates,
			RegistryURL: uihandlers.TrimRegistryURL(baseURL),
		},
	)).Methods("GET")

	router.Handle("/api/search-results", middleware.OptionalAuth(sessionStore, database)(
		&uihandlers.SearchResultsHandler{
			DB:          readOnlyDB,
			Templates:   templates,
			RegistryURL: uihandlers.TrimRegistryURL(baseURL),
		},
	)).Methods("GET")

	// Install page (public)
	router.Handle("/install", middleware.OptionalAuth(sessionStore, database)(
		&uihandlers.InstallHandler{
			Templates:   templates,
			RegistryURL: uihandlers.TrimRegistryURL(baseURL),
		},
	)).Methods("GET")

	// API route for repository stats (public, read-only)
	router.Handle("/api/stats/{handle}/{repository}", middleware.OptionalAuth(sessionStore, database)(
		&uihandlers.GetStatsHandler{
			DB:        readOnlyDB,
			Directory: oauthApp.Directory(),
		},
	)).Methods("GET")

	// API routes for stars (require authentication)
	router.Handle("/api/stars/{handle}/{repository}", middleware.RequireAuth(sessionStore, database)(
		&uihandlers.StarRepositoryHandler{
			DB:        database, // Needs write access
			Directory: oauthApp.Directory(),
			Refresher: refresher,
		},
	)).Methods("POST")

	router.Handle("/api/stars/{handle}/{repository}", middleware.RequireAuth(sessionStore, database)(
		&uihandlers.UnstarRepositoryHandler{
			DB:        database, // Needs write access
			Directory: oauthApp.Directory(),
			Refresher: refresher,
		},
	)).Methods("DELETE")

	router.Handle("/api/stars/{handle}/{repository}", middleware.OptionalAuth(sessionStore, database)(
		&uihandlers.CheckStarHandler{
			DB:        readOnlyDB, // Read-only check
			Directory: oauthApp.Directory(),
			Refresher: refresher,
		},
	)).Methods("GET")

	router.Handle("/u/{handle}", middleware.OptionalAuth(sessionStore, database)(
		&uihandlers.UserPageHandler{
			DB:          readOnlyDB,
			Templates:   templates,
			RegistryURL: uihandlers.TrimRegistryURL(baseURL),
		},
	)).Methods("GET")

	router.Handle("/r/{handle}/{repository}", middleware.OptionalAuth(sessionStore, database)(
		&uihandlers.RepositoryPageHandler{
			DB:          readOnlyDB,
			Templates:   templates,
			RegistryURL: uihandlers.TrimRegistryURL(baseURL),
			Directory:   oauthApp.Directory(),
			Refresher:   refresher,
|
||||
},
|
||||
)).Methods("GET")
|
||||
|
||||
// Authenticated routes
|
||||
authRouter := router.NewRoute().Subrouter()
|
||||
authRouter.Use(middleware.RequireAuth(sessionStore, database))
|
||||
|
||||
authRouter.Handle("/settings", &uihandlers.SettingsHandler{
|
||||
Templates: templates,
|
||||
Refresher: refresher,
|
||||
RegistryURL: uihandlers.TrimRegistryURL(baseURL),
|
||||
}).Methods("GET")
|
||||
|
||||
authRouter.Handle("/api/profile/default-hold", &uihandlers.UpdateDefaultHoldHandler{
|
||||
Refresher: refresher,
|
||||
}).Methods("POST")
|
||||
|
||||
authRouter.Handle("/api/images/{repository}/tags/{tag}", &uihandlers.DeleteTagHandler{
|
||||
DB: database,
|
||||
}).Methods("DELETE")
|
||||
|
||||
authRouter.Handle("/api/images/{repository}/manifests/{digest}", &uihandlers.DeleteManifestHandler{
|
||||
DB: database,
|
||||
}).Methods("DELETE")
|
||||
|
||||
// Device approval page (authenticated)
|
||||
authRouter.Handle("/device", &uihandlers.DeviceApprovalPageHandler{
|
||||
Store: deviceStore,
|
||||
SessionStore: sessionStore,
|
||||
}).Methods("GET")
|
||||
|
||||
authRouter.Handle("/device/approve", &uihandlers.DeviceApproveHandler{
|
||||
Store: deviceStore,
|
||||
SessionStore: sessionStore,
|
||||
}).Methods("POST")
|
||||
|
||||
// Device management routes
|
||||
authRouter.Handle("/api/devices", &uihandlers.ListDevicesHandler{
|
||||
Store: deviceStore,
|
||||
SessionStore: sessionStore,
|
||||
}).Methods("GET")
|
||||
|
||||
authRouter.Handle("/api/devices/{id}", &uihandlers.RevokeDeviceHandler{
|
||||
Store: deviceStore,
|
||||
SessionStore: sessionStore,
|
||||
}).Methods("DELETE")
|
||||
|
||||
// Logout endpoint (supports both GET and POST)
|
||||
router.HandleFunc("/auth/logout", func(w http.ResponseWriter, r *http.Request) {
|
||||
if sessionID, ok := db.GetSessionID(r); ok {
|
||||
sessionStore.Delete(sessionID)
|
||||
}
|
||||
db.ClearCookie(w)
|
||||
http.Redirect(w, r, "/", http.StatusFound)
|
||||
}).Methods("GET", "POST")
|
||||
|
||||
// Start Jetstream worker
|
||||
jetstreamURL := os.Getenv("JETSTREAM_URL")
|
||||
if jetstreamURL == "" {
|
||||
jetstreamURL = "wss://jetstream2.us-west.bsky.network/subscribe"
|
||||
}
|
||||
|
||||
// Start real-time Jetstream worker with cursor tracking for reconnects
|
||||
go func() {
|
||||
var lastCursor int64 = 0 // Start from now on first connect
|
||||
for {
|
||||
worker := jetstream.NewWorker(database, jetstreamURL, lastCursor)
|
||||
if err := worker.Start(context.Background()); err != nil {
|
||||
// Save cursor from this connection for next reconnect
|
||||
lastCursor = worker.GetLastCursor()
|
||||
fmt.Printf("Jetstream: Real-time worker error: %v, reconnecting in 10s...\n", err)
|
||||
time.Sleep(10 * time.Second)
|
||||
}
|
||||
}
|
||||
}()
|
||||
fmt.Println("Jetstream: Real-time worker started")
|
||||
|
||||
// Start backfill worker (enabled by default, set ATCR_BACKFILL_ENABLED=false to disable)
|
||||
if backfillEnabled := os.Getenv("ATCR_BACKFILL_ENABLED"); backfillEnabled != "false" {
|
||||
// Get relay endpoint for sync API (defaults to Bluesky's relay)
|
||||
relayEndpoint := os.Getenv("ATCR_RELAY_ENDPOINT")
|
||||
if relayEndpoint == "" {
|
||||
relayEndpoint = "https://relay1.us-east.bsky.network"
|
||||
}
|
||||
|
||||
backfillWorker, err := jetstream.NewBackfillWorker(database, relayEndpoint)
|
||||
if err != nil {
|
||||
fmt.Printf("Warning: Failed to create backfill worker: %v\n", err)
|
||||
} else {
|
||||
// Run initial backfill
|
||||
go func() {
|
||||
fmt.Printf("Backfill: Starting sync-based backfill from %s...\n", relayEndpoint)
|
||||
if err := backfillWorker.Start(context.Background()); err != nil {
|
||||
fmt.Printf("Backfill: Finished with error: %v\n", err)
|
||||
} else {
|
||||
fmt.Println("Backfill: Completed successfully!")
|
||||
}
|
||||
}()
|
||||
|
||||
// Start periodic backfill scheduler
|
||||
backfillInterval := os.Getenv("ATCR_BACKFILL_INTERVAL")
|
||||
if backfillInterval == "" {
|
||||
backfillInterval = "1h" // Default to 1 hour
|
||||
}
|
||||
interval, err := time.ParseDuration(backfillInterval)
|
||||
if err != nil {
|
||||
fmt.Printf("Warning: Invalid ATCR_BACKFILL_INTERVAL '%s', using default 1h: %v\n", backfillInterval, err)
|
||||
interval = time.Hour
|
||||
}
|
||||
|
||||
go func() {
|
||||
ticker := time.NewTicker(interval)
|
||||
defer ticker.Stop()
|
||||
|
||||
for range ticker.C {
|
||||
fmt.Printf("Backfill: Starting periodic backfill (runs every %s)...\n", interval)
|
||||
if err := backfillWorker.Start(context.Background()); err != nil {
|
||||
fmt.Printf("Backfill: Periodic backfill finished with error: %v\n", err)
|
||||
} else {
|
||||
fmt.Println("Backfill: Periodic backfill completed successfully!")
|
||||
}
|
||||
}
|
||||
}()
|
||||
fmt.Printf("Backfill: Periodic scheduler started (interval: %s)\n", interval)
|
||||
}
|
||||
}
|
||||
|
||||
return templates, router
|
||||
}
|
||||
cmd/credential-helper/cmd_configure.go (new file, 159 lines)
@@ -0,0 +1,159 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/charmbracelet/huh"
	"github.com/spf13/cobra"
)

func newConfigureDockerCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "configure-docker",
		Short: "Configure Docker to use this credential helper",
		Long:  "Adds or updates the credHelpers entry in ~/.docker/config.json\nfor all configured registries.",
		RunE:  runConfigureDocker,
	}
}

func runConfigureDocker(cmd *cobra.Command, args []string) error {
	cfg, err := loadConfig()
	if err != nil {
		return fmt.Errorf("loading config: %w", err)
	}

	if len(cfg.Registries) == 0 {
		fmt.Fprintf(os.Stderr, "No registries configured.\n")
		fmt.Fprintf(os.Stderr, "Run: docker-credential-atcr login\n")
		return nil
	}

	// Collect registry hosts
	var hosts []string
	for url := range cfg.Registries {
		host := strings.TrimPrefix(url, "https://")
		host = strings.TrimPrefix(host, "http://")
		hosts = append(hosts, host)
	}

	dockerConfigPath := getDockerConfigPath()

	// Load existing Docker config
	dockerCfg := loadDockerConfig()
	if dockerCfg == nil {
		dockerCfg = make(map[string]any)
	}

	// Get or create credHelpers
	helpers, ok := dockerCfg["credHelpers"]
	if !ok {
		helpers = make(map[string]any)
	}
	helpersMap, ok := helpers.(map[string]any)
	if !ok {
		helpersMap = make(map[string]any)
	}

	// Check what needs to change
	var toAdd []string
	for _, host := range hosts {
		current, exists := helpersMap[host]
		if !exists || current != "atcr" {
			toAdd = append(toAdd, host)
		}
	}

	if len(toAdd) == 0 {
		fmt.Printf("Docker is already configured for all registries.\n")
		return nil
	}

	fmt.Printf("Will update %s:\n", dockerConfigPath)
	for _, host := range toAdd {
		fmt.Printf("  + credHelpers[%q] = \"atcr\"\n", host)
	}
	fmt.Println()

	var confirm bool
	err = huh.NewConfirm().
		Title("Apply changes?").
		Value(&confirm).
		Run()
	if err != nil || !confirm {
		fmt.Fprintf(os.Stderr, "Cancelled.\n")
		return nil
	}

	// Apply changes
	for _, host := range toAdd {
		helpersMap[host] = "atcr"
	}
	dockerCfg["credHelpers"] = helpersMap

	// Warn if credsStore is also set; credHelpers takes precedence for these registries
	if _, hasStore := dockerCfg["credsStore"]; hasStore {
		fmt.Fprintf(os.Stderr, "Note: credsStore is set — credHelpers takes precedence for configured registries.\n")
	}

	if err := saveDockerConfig(dockerConfigPath, dockerCfg); err != nil {
		return fmt.Errorf("saving Docker config: %w", err)
	}

	fmt.Printf("Docker configured successfully.\n")
	return nil
}

// getDockerConfigPath returns the path to Docker's config.json
func getDockerConfigPath() string {
	// Check DOCKER_CONFIG env var first
	if dir := os.Getenv("DOCKER_CONFIG"); dir != "" {
		return filepath.Join(dir, "config.json")
	}

	homeDir, err := os.UserHomeDir()
	if err != nil {
		return ""
	}
	return filepath.Join(homeDir, ".docker", "config.json")
}

// loadDockerConfig loads Docker's config.json as a generic map
func loadDockerConfig() map[string]any {
	path := getDockerConfigPath()
	if path == "" {
		return nil
	}

	data, err := os.ReadFile(path)
	if err != nil {
		return nil
	}

	var config map[string]any
	if err := json.Unmarshal(data, &config); err != nil {
		return nil
	}

	return config
}

// saveDockerConfig writes Docker's config.json
func saveDockerConfig(path string, config map[string]any) error {
	// Ensure directory exists
	dir := filepath.Dir(path)
	if err := os.MkdirAll(dir, 0700); err != nil {
		return err
	}

	data, err := json.MarshalIndent(config, "", "\t")
	if err != nil {
		return err
	}
	data = append(data, '\n')

	return os.WriteFile(path, data, 0600)
}
cmd/credential-helper/cmd_login.go (new file, 181 lines)
@@ -0,0 +1,181 @@
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"

	"github.com/charmbracelet/huh"
	"github.com/charmbracelet/huh/spinner"
	"github.com/spf13/cobra"
)

func newLoginCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "login [registry]",
		Short: "Authenticate with a container registry",
		Long:  "Starts a device authorization flow to authenticate with a registry.\nDefault registry: atcr.io",
		Args:  cobra.MaximumNArgs(1),
		RunE:  runLogin,
	}
	return cmd
}

func runLogin(cmd *cobra.Command, args []string) error {
	serverURL := "atcr.io"
	if len(args) > 0 {
		serverURL = args[0]
	}

	appViewURL := buildAppViewURL(serverURL)

	cfg, err := loadConfig()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Warning: config load error: %v\n", err)
	}

	// Check if already logged in
	reg := cfg.findRegistry(appViewURL)
	if reg != nil && len(reg.Accounts) > 0 {
		var lines []string
		for _, acct := range reg.Accounts {
			lines = append(lines, acct.Handle)
		}

		var addAnother bool
		err := huh.NewConfirm().
			Title("Already logged in to " + appViewURL).
			Description("Accounts: " + strings.Join(lines, ", ")).
			Value(&addAnother).
			Affirmative("Add another account").
			Negative("Cancel").
			Run()
		if err != nil || !addAnother {
			return nil
		}
	}

	// 1. Request device code
	codeResp, resolvedURL, err := requestDeviceCode(serverURL)
	if err != nil {
		return fmt.Errorf("device authorization failed: %w", err)
	}

	verificationURL := codeResp.VerificationURI + "?user_code=" + codeResp.UserCode

	// 2. Show code and open browser
	fmt.Fprintln(os.Stderr)
	logWarning("First copy your one-time code: %s", bold(codeResp.UserCode))

	if isTerminal(os.Stdin) {
		// Interactive: wait for Enter before opening browser
		logInfof("Press Enter to open %s in your browser... ", codeResp.VerificationURI)
		reader := bufio.NewReader(os.Stdin)
		reader.ReadString('\n') //nolint:errcheck

		if err := openBrowser(verificationURL); err != nil {
			logWarning("Could not open browser automatically.")
			fmt.Fprintf(os.Stderr, "  Visit: %s\n", verificationURL)
		}
	} else {
		// Non-interactive: just print the URL
		logInfo("Visit this URL in your browser:")
		fmt.Fprintf(os.Stderr, "  %s\n", verificationURL)
	}

	// 3. Poll for authorization with spinner
	var acct *Account
	var pollErr error
	if err := spinner.New().
		Title("Waiting for authentication...").
		Action(func() {
			acct, pollErr = pollDeviceToken(resolvedURL, codeResp)
		}).
		Run(); err != nil {
		return err
	}
	if pollErr != nil {
		return fmt.Errorf("device authorization failed: %w", pollErr)
	}

	logSuccess("Authentication complete.")

	// 4. Save
	cfg.addAccount(resolvedURL, acct)
	if err := cfg.save(); err != nil {
		return fmt.Errorf("saving config: %w", err)
	}

	logSuccess("Logged in as %s on %s", bold(acct.Handle), resolvedURL)

	// 5. Offer to configure Docker if not already set up
	if isTerminal(os.Stdin) && !isDockerConfigured(serverURL) {
		fmt.Fprintf(os.Stderr, "\n")
		var configureDkr bool
		err := huh.NewConfirm().
			Title("Configure Docker to use this credential helper?").
			Description("Adds credHelpers entry to ~/.docker/config.json").
			Value(&configureDkr).
			Run()
		if err == nil && configureDkr {
			if configureErr := configureDockerForRegistry(serverURL); configureErr != nil {
				logWarning("Failed to configure Docker: %v", configureErr)
			} else {
				logSuccess("Configured Docker for %s", serverURL)
			}
		}
	}

	return nil
}

// isDockerConfigured checks if Docker's config.json has this registry in credHelpers
func isDockerConfigured(serverURL string) bool {
	dockerConfig := loadDockerConfig()
	if dockerConfig == nil {
		return false
	}

	helpers, ok := dockerConfig["credHelpers"]
	if !ok {
		return false
	}

	helpersMap, ok := helpers.(map[string]any)
	if !ok {
		return false
	}

	host := strings.TrimPrefix(serverURL, "https://")
	host = strings.TrimPrefix(host, "http://")

	_, ok = helpersMap[host]
	return ok
}

// configureDockerForRegistry adds a credHelpers entry for a single registry
func configureDockerForRegistry(serverURL string) error {
	host := strings.TrimPrefix(serverURL, "https://")
	host = strings.TrimPrefix(host, "http://")

	dockerConfigPath := getDockerConfigPath()
	dockerCfg := loadDockerConfig()
	if dockerCfg == nil {
		dockerCfg = make(map[string]any)
	}

	helpers, ok := dockerCfg["credHelpers"]
	if !ok {
		helpers = make(map[string]any)
	}
	helpersMap, ok := helpers.(map[string]any)
	if !ok {
		helpersMap = make(map[string]any)
	}

	helpersMap[host] = "atcr"
	dockerCfg["credHelpers"] = helpersMap

	return saveDockerConfig(dockerConfigPath, dockerCfg)
}
cmd/credential-helper/cmd_logout.go (new file, 93 lines)
@@ -0,0 +1,93 @@
package main

import (
	"fmt"
	"os"
	"sort"

	"github.com/charmbracelet/huh"
	"github.com/spf13/cobra"
)

func newLogoutCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "logout [registry]",
		Short: "Remove account credentials",
		Long:  "Remove stored credentials for an account.\nDefault registry: atcr.io",
		Args:  cobra.MaximumNArgs(1),
		RunE:  runLogout,
	}
}

func runLogout(cmd *cobra.Command, args []string) error {
	serverURL := "atcr.io"
	if len(args) > 0 {
		serverURL = args[0]
	}

	appViewURL := buildAppViewURL(serverURL)

	cfg, err := loadConfig()
	if err != nil {
		return fmt.Errorf("loading config: %w", err)
	}

	reg := cfg.findRegistry(appViewURL)
	if reg == nil || len(reg.Accounts) == 0 {
		fmt.Fprintf(os.Stderr, "No accounts configured for %s.\n", serverURL)
		return nil
	}

	// Determine which account to remove
	var handle string

	if len(reg.Accounts) == 1 {
		for h := range reg.Accounts {
			handle = h
		}
	} else {
		// Multiple accounts — select which to remove
		var handles []string
		for h := range reg.Accounts {
			handles = append(handles, h)
		}
		sort.Strings(handles)

		var options []huh.Option[string]
		for _, h := range handles {
			label := h
			if h == reg.Active {
				label += " (active)"
			}
			options = append(options, huh.NewOption(label, h))
		}

		err := huh.NewSelect[string]().
			Title("Which account to remove?").
			Options(options...).
			Value(&handle).
			Run()
		if err != nil {
			return err
		}
	}

	// Confirm
	var confirm bool
	err = huh.NewConfirm().
		Title(fmt.Sprintf("Remove %s from %s?", handle, serverURL)).
		Value(&confirm).
		Run()
	if err != nil || !confirm {
		fmt.Fprintf(os.Stderr, "Cancelled.\n")
		return nil
	}

	cfg.removeAccount(appViewURL, handle)
	if err := cfg.save(); err != nil {
		return fmt.Errorf("saving config: %w", err)
	}

	fmt.Printf("Removed %s from %s\n", handle, serverURL)
	return nil
}
cmd/credential-helper/cmd_status.go (new file, 65 lines)
@@ -0,0 +1,65 @@
package main

import (
	"fmt"
	"os"
	"sort"

	"github.com/spf13/cobra"
)

func newStatusCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "status",
		Short: "Show all configured accounts",
		RunE:  runStatus,
	}
}

func runStatus(cmd *cobra.Command, args []string) error {
	cfg, err := loadConfig()
	if err != nil {
		return fmt.Errorf("loading config: %w", err)
	}

	if len(cfg.Registries) == 0 {
		fmt.Fprintf(os.Stderr, "No accounts configured.\n")
		fmt.Fprintf(os.Stderr, "Run: docker-credential-atcr login\n")
		return nil
	}

	// Sort registry URLs for stable output
	var urls []string
	for url := range cfg.Registries {
		urls = append(urls, url)
	}
	sort.Strings(urls)

	for _, url := range urls {
		reg := cfg.Registries[url]
		fmt.Printf("%s\n", url)

		// Sort handles for stable output
		var handles []string
		for h := range reg.Accounts {
			handles = append(handles, h)
		}
		sort.Strings(handles)

		for _, handle := range handles {
			acct := reg.Accounts[handle]
			marker := "  "
			if handle == reg.Active {
				marker = "* "
			}
			did := ""
			if acct.DID != "" {
				did = fmt.Sprintf(" (%s)", acct.DID)
			}
			fmt.Printf("  %s%s%s\n", marker, handle, did)
		}
		fmt.Println()
	}

	return nil
}
cmd/credential-helper/cmd_switch.go (new file, 96 lines)
@@ -0,0 +1,96 @@
package main

import (
	"fmt"
	"os"
	"sort"

	"github.com/charmbracelet/huh"
	"github.com/spf13/cobra"
)

func newSwitchCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "switch [registry]",
		Short: "Switch the active account for a registry",
		Long:  "Switch the active account used for Docker operations.\nDefault registry: atcr.io",
		Args:  cobra.MaximumNArgs(1),
		RunE:  runSwitch,
	}
}

func runSwitch(cmd *cobra.Command, args []string) error {
	serverURL := "atcr.io"
	if len(args) > 0 {
		serverURL = args[0]
	}

	appViewURL := buildAppViewURL(serverURL)

	cfg, err := loadConfig()
	if err != nil {
		return fmt.Errorf("loading config: %w", err)
	}

	reg := cfg.findRegistry(appViewURL)
	if reg == nil || len(reg.Accounts) == 0 {
		fmt.Fprintf(os.Stderr, "No accounts configured for %s.\n", serverURL)
		fmt.Fprintf(os.Stderr, "Run: docker-credential-atcr login\n")
		return nil
	}

	if len(reg.Accounts) == 1 {
		for h := range reg.Accounts {
			fmt.Fprintf(os.Stderr, "Only one account (%s) — nothing to switch.\n", h)
		}
		return nil
	}

	// For exactly 2 accounts, just toggle
	if len(reg.Accounts) == 2 {
		for h := range reg.Accounts {
			if h != reg.Active {
				reg.Active = h
				if err := cfg.save(); err != nil {
					return fmt.Errorf("saving config: %w", err)
				}
				fmt.Printf("Switched to %s on %s\n", h, serverURL)
				return nil
			}
		}
	}

	// 3+ accounts: interactive select
	var handles []string
	for h := range reg.Accounts {
		handles = append(handles, h)
	}
	sort.Strings(handles)

	var options []huh.Option[string]
	for _, h := range handles {
		label := h
		if h == reg.Active {
			label += " (current)"
		}
		options = append(options, huh.NewOption(label, h))
	}

	var selected string
	err = huh.NewSelect[string]().
		Title("Select account for " + serverURL).
		Options(options...).
		Value(&selected).
		Run()
	if err != nil {
		return err
	}

	reg.Active = selected
	if err := cfg.save(); err != nil {
		return fmt.Errorf("saving config: %w", err)
	}

	fmt.Printf("Switched to %s on %s\n", selected, serverURL)
	return nil
}
cmd/credential-helper/cmd_update.go (new file, 281 lines)
@@ -0,0 +1,281 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
// VersionAPIResponse is the response from /api/credential-helper/version
|
||||
type VersionAPIResponse struct {
|
||||
Latest string `json:"latest"`
|
||||
DownloadURLs map[string]string `json:"download_urls"`
|
||||
Checksums map[string]string `json:"checksums"`
|
||||
ReleaseNotes string `json:"release_notes,omitempty"`
|
||||
}
|
||||
|
||||
func newUpdateCmd() *cobra.Command {
|
||||
cmd := &cobra.Command{
|
||||
Use: "update",
|
||||
Short: "Update to the latest version",
|
||||
RunE: runUpdate,
|
||||
}
|
||||
cmd.Flags().Bool("check", false, "Only check for updates, don't install")
|
||||
return cmd
|
||||
}
|
||||
|
||||
func runUpdate(cmd *cobra.Command, args []string) error {
|
||||
checkOnly, _ := cmd.Flags().GetBool("check")
|
||||
|
||||
// Default API URL
|
||||
apiURL := "https://atcr.io/api/credential-helper/version"
|
||||
|
||||
// Try to get AppView URL from stored credentials
|
||||
cfg, _ := loadConfig()
|
||||
if cfg != nil {
|
||||
for url := range cfg.Registries {
|
||||
apiURL = url + "/api/credential-helper/version"
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
versionInfo, err := fetchVersionInfo(apiURL)
|
||||
if err != nil {
|
||||
return fmt.Errorf("checking for updates: %w", err)
|
||||
}
|
||||
|
||||
if !isNewerVersion(versionInfo.Latest, version) {
|
||||
fmt.Printf("You're already running the latest version (%s)\n", version)
|
||||
return nil
|
||||
}
|
||||
|
||||
fmt.Printf("New version available: %s (current: %s)\n", versionInfo.Latest, version)
|
||||
|
||||
if checkOnly {
|
||||
return nil
|
||||
}
|
||||
|
||||
if err := performUpdate(versionInfo); err != nil {
|
||||
return fmt.Errorf("update failed: %w", err)
|
||||
}
|
||||
|
||||
fmt.Println("Update completed successfully!")
|
||||
return nil
|
||||
}
|
||||
|
||||
// fetchVersionInfo fetches version info from the AppView API
|
||||
func fetchVersionInfo(apiURL string) (*VersionAPIResponse, error) {
|
||||
client := &http.Client{
|
||||
Timeout: 10 * time.Second,
|
||||
}
|
||||
|
||||
resp, err := client.Get(apiURL)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("fetching version info: %w", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return nil, fmt.Errorf("version API returned status %d", resp.StatusCode)
|
||||
}
|
||||
|
||||
var versionInfo VersionAPIResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&versionInfo); err != nil {
|
||||
return nil, fmt.Errorf("parsing version info: %w", err)
|
||||
}
|
||||
|
||||
return &versionInfo, nil
|
||||
}
|
||||
|
||||
// isNewerVersion compares two version strings (simple semver comparison)
|
||||
func isNewerVersion(newVersion, currentVersion string) bool {
|
||||
if currentVersion == "dev" {
|
||||
return true
|
||||
}
|
||||
|
||||
newV := strings.TrimPrefix(newVersion, "v")
|
||||
curV := strings.TrimPrefix(currentVersion, "v")
|
||||
|
||||
newParts := strings.Split(newV, ".")
|
||||
curParts := strings.Split(curV, ".")
|
||||
|
||||
for i := range min(len(newParts), len(curParts)) {
|
||||
newNum := 0
|
||||
if parsed, err := strconv.Atoi(newParts[i]); err == nil {
|
||||
newNum = parsed
|
||||
}
|
||||
curNum := 0
|
||||
if parsed, err := strconv.Atoi(curParts[i]); err == nil {
|
||||
curNum = parsed
|
||||
}
|
||||
|
||||
if newNum > curNum {
|
||||
return true
|
||||
}
|
||||
if newNum < curNum {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
return len(newParts) > len(curParts)
|
||||
}
|
||||
|
||||
// getPlatformKey returns the platform key for the current OS/arch
|
||||
func getPlatformKey() string {
|
||||
return fmt.Sprintf("%s_%s", runtime.GOOS, runtime.GOARCH)
|
||||
}
|
||||
|
||||
// performUpdate downloads and installs the new version
|
||||
func performUpdate(versionInfo *VersionAPIResponse) error {
|
||||
platformKey := getPlatformKey()
|
||||
|
||||
downloadURL, ok := versionInfo.DownloadURLs[platformKey]
|
||||
if !ok {
|
||||
return fmt.Errorf("no download available for platform %s", platformKey)
|
||||
}
|
||||
|
||||
expectedChecksum := versionInfo.Checksums[platformKey]
|
||||
|
||||
fmt.Printf("Downloading update from %s...\n", downloadURL)
|
||||
|
||||
tmpDir, err := os.MkdirTemp("", "atcr-update-")
|
||||
if err != nil {
|
||||
return fmt.Errorf("creating temp directory: %w", err)
|
||||
}
|
||||
defer os.RemoveAll(tmpDir)
|
||||
|
||||
archivePath := filepath.Join(tmpDir, "archive.tar.gz")
|
||||
if strings.HasSuffix(downloadURL, ".zip") {
|
||||
archivePath = filepath.Join(tmpDir, "archive.zip")
|
||||
}
|
||||
|
||||
if err := downloadFile(downloadURL, archivePath); err != nil {
|
||||
return fmt.Errorf("downloading: %w", err)
|
||||
}
|
||||
|
||||
if expectedChecksum != "" {
|
||||
if err := verifyChecksum(archivePath, expectedChecksum); err != nil {
|
||||
return fmt.Errorf("checksum verification failed: %w", err)
|
||||
}
|
||||
fmt.Println("Checksum verified.")
|
||||
}
|
||||
|
||||
binaryPath := filepath.Join(tmpDir, "docker-credential-atcr")
|
||||
if runtime.GOOS == "windows" {
|
||||
binaryPath += ".exe"
|
||||
}
|
||||
|
||||
if strings.HasSuffix(archivePath, ".zip") {
|
||||
if err := extractZip(archivePath, tmpDir); err != nil {
|
||||
return fmt.Errorf("extracting archive: %w", err)
|
||||
}
|
||||
} else {
|
||||
if err := extractTarGz(archivePath, tmpDir); err != nil {
|
||||
return fmt.Errorf("extracting archive: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
currentPath, err := os.Executable()
|
||||
if err != nil {
|
||||
return fmt.Errorf("getting current executable path: %w", err)
|
||||
}
|
||||
currentPath, err = filepath.EvalSymlinks(currentPath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("resolving symlinks: %w", err)
|
||||
}
|
||||
|
||||
fmt.Println("Verifying new binary...")
|
||||
verifyCmd := exec.Command(binaryPath, "version")
|
||||
if output, err := verifyCmd.Output(); err != nil {
|
||||
return fmt.Errorf("new binary verification failed: %w", err)
|
||||
} else {
|
||||
fmt.Printf("New binary version: %s", string(output))
|
||||
}
|
||||
|
||||
backupPath := currentPath + ".bak"
|
||||
if err := os.Rename(currentPath, backupPath); err != nil {
|
||||
return fmt.Errorf("backing up current binary: %w", err)
|
||||
}
|
||||
|
||||
if err := copyFile(binaryPath, currentPath); err != nil {
|
||||
os.Rename(backupPath, currentPath) //nolint:errcheck
|
||||
return fmt.Errorf("installing new binary: %w", err)
|
||||
}
|
||||
|
||||
if err := os.Chmod(currentPath, 0755); err != nil {
|
||||
os.Remove(currentPath) //nolint:errcheck
|
||||
os.Rename(backupPath, currentPath) //nolint:errcheck
|
||||
return fmt.Errorf("setting permissions: %w", err)
|
||||
}
|
||||
|
||||
os.Remove(backupPath) //nolint:errcheck
|
||||
return nil
|
||||
}
|
||||

// downloadFile downloads a file from a URL to a local path
func downloadFile(url, destPath string) error {
	resp, err := http.Get(url) //nolint:gosec
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("download returned status %d", resp.StatusCode)
	}

	out, err := os.Create(destPath)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, resp.Body)
	return err
}

// verifyChecksum verifies the SHA256 checksum of a file.
// An empty expected value means no checksum was published, which is not an error.
// Requires "crypto/sha256" and "encoding/hex" in the import block.
func verifyChecksum(filePath, expected string) error {
	if expected == "" {
		return nil
	}
	f, err := os.Open(filePath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if actual := hex.EncodeToString(h.Sum(nil)); actual != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", actual, expected)
	}
	return nil
}

// extractTarGz extracts a .tar.gz archive
func extractTarGz(archivePath, destDir string) error {
	cmd := exec.Command("tar", "-xzf", archivePath, "-C", destDir)
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %s: %w", string(output), err)
	}
	return nil
}

// extractZip extracts a .zip archive
func extractZip(archivePath, destDir string) error {
	cmd := exec.Command("unzip", "-o", archivePath, "-d", destDir)
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("unzip failed: %s: %w", string(output), err)
	}
	return nil
}

// copyFile copies a file from src to dst
func copyFile(src, dst string) error {
	input, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	return os.WriteFile(dst, input, 0755)
}
262
cmd/credential-helper/config.go
Normal file
@@ -0,0 +1,262 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// Config is the top-level credential helper configuration (v2).
type Config struct {
	Version    int                        `json:"version"`
	Registries map[string]*RegistryConfig `json:"registries"`
}

// RegistryConfig holds accounts for a single registry.
type RegistryConfig struct {
	Active   string              `json:"active"`
	Accounts map[string]*Account `json:"accounts"`
}

// Account holds credentials for a single identity on a registry.
type Account struct {
	Handle       string `json:"handle"`
	DID          string `json:"did,omitempty"`
	DeviceSecret string `json:"device_secret"`
}

// UpdateCheckCache stores the last update check result.
type UpdateCheckCache struct {
	CheckedAt time.Time `json:"checked_at"`
	Latest    string    `json:"latest"`
	Current   string    `json:"current"`
}

// loadConfig loads the config from disk, auto-migrating old formats.
// Returns a valid Config (possibly empty) even on error.
func loadConfig() (*Config, error) {
	path := getConfigPath()
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return newConfig(), nil
		}
		return newConfig(), err
	}

	// Try v2 format first
	var cfg Config
	if err := json.Unmarshal(data, &cfg); err == nil && cfg.Version == 2 && cfg.Registries != nil {
		return &cfg, nil
	}

	// Try current multi-registry format: {"credentials": {"url": {...}}}
	var multiCreds struct {
		Credentials map[string]struct {
			Handle       string `json:"handle"`
			DID          string `json:"did"`
			DeviceSecret string `json:"device_secret"`
			AppViewURL   string `json:"appview_url"`
		} `json:"credentials"`
	}
	if err := json.Unmarshal(data, &multiCreds); err == nil && multiCreds.Credentials != nil {
		migrated := newConfig()
		for appViewURL, cred := range multiCreds.Credentials {
			handle := cred.Handle
			if handle == "" {
				continue
			}
			registryURL := appViewURL
			reg := migrated.getOrCreateRegistry(registryURL)
			reg.Accounts[handle] = &Account{
				Handle:       handle,
				DID:          cred.DID,
				DeviceSecret: cred.DeviceSecret,
			}
			if reg.Active == "" {
				reg.Active = handle
			}
		}
		if err := migrated.save(); err != nil {
			return migrated, fmt.Errorf("saving migrated config: %w", err)
		}
		return migrated, nil
	}

	// Try legacy single-device format: {"handle": "...", "device_secret": "...", "appview_url": "..."}
	var legacy struct {
		Handle       string `json:"handle"`
		DeviceSecret string `json:"device_secret"`
		AppViewURL   string `json:"appview_url"`
	}
	if err := json.Unmarshal(data, &legacy); err == nil && legacy.DeviceSecret != "" {
		migrated := newConfig()
		handle := legacy.Handle
		registryURL := legacy.AppViewURL
		if registryURL == "" {
			registryURL = "https://atcr.io"
		}
		reg := migrated.getOrCreateRegistry(registryURL)
		reg.Accounts[handle] = &Account{
			Handle:       handle,
			DeviceSecret: legacy.DeviceSecret,
		}
		reg.Active = handle
		if err := migrated.save(); err != nil {
			return migrated, fmt.Errorf("saving migrated config: %w", err)
		}
		return migrated, nil
	}

	return newConfig(), fmt.Errorf("unrecognized config format")
}


func newConfig() *Config {
	return &Config{
		Version:    2,
		Registries: make(map[string]*RegistryConfig),
	}
}

// save writes the config to disk.
func (c *Config) save() error {
	path := getConfigPath()
	data, err := json.MarshalIndent(c, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0600)
}

// getOrCreateRegistry returns (or creates) a RegistryConfig for the given URL.
func (c *Config) getOrCreateRegistry(registryURL string) *RegistryConfig {
	reg, ok := c.Registries[registryURL]
	if !ok {
		reg = &RegistryConfig{
			Accounts: make(map[string]*Account),
		}
		c.Registries[registryURL] = reg
	}
	return reg
}

// findRegistry looks up a RegistryConfig by registry URL.
func (c *Config) findRegistry(registryURL string) *RegistryConfig {
	return c.Registries[registryURL]
}


// resolveAccount determines which account to use for a given registry.
// Priority:
//  1. Identity detected from parent process command line
//  2. Active account (set by `switch`)
//  3. Sole account (if only one exists)
//  4. Error
func (c *Config) resolveAccount(registryURL, serverURL string) (*Account, error) {
	reg := c.findRegistry(registryURL)
	if reg == nil || len(reg.Accounts) == 0 {
		return nil, fmt.Errorf("no accounts configured for %s\nRun: docker-credential-atcr login", serverURL)
	}

	// 1. Try to detect identity from parent process
	ref := detectImageRef(serverURL)
	if ref != nil && ref.Identity != "" {
		if acct, ok := reg.Accounts[ref.Identity]; ok {
			return acct, nil
		}
		// Identity detected but no matching account — fall through to active
	}

	// 2. Active account
	if reg.Active != "" {
		if acct, ok := reg.Accounts[reg.Active]; ok {
			return acct, nil
		}
	}

	// 3. Sole account
	if len(reg.Accounts) == 1 {
		for _, acct := range reg.Accounts {
			return acct, nil
		}
	}

	// 4. Ambiguous
	return nil, fmt.Errorf("multiple accounts configured for %s\nRun: docker-credential-atcr switch", serverURL)
}

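The fallback order in resolveAccount (once process-tree detection has not matched) can be shown on its own. `pick` below is a hypothetical condensation of priorities 2–4 over a plain handle→secret map, not the diff's actual method:

```go
package main

import (
	"errors"
	"fmt"
)

// pick mirrors resolveAccount's fallback order after identity detection:
// the active account wins, then a sole account, otherwise it is ambiguous.
func pick(active string, accounts map[string]string) (string, error) {
	if active != "" {
		if secret, ok := accounts[active]; ok {
			return secret, nil
		}
	}
	if len(accounts) == 1 {
		for _, secret := range accounts {
			return secret, nil
		}
	}
	return "", errors.New("multiple accounts configured; run switch")
}

func main() {
	// Active account wins even when several exist.
	s, _ := pick("alice", map[string]string{"alice": "a", "bob": "b"})
	fmt.Println(s) // a

	// A sole account is used even with no active handle set.
	s, _ = pick("", map[string]string{"bob": "b"})
	fmt.Println(s) // b

	// Two accounts and nothing active: ambiguous, so an error.
	_, err := pick("", map[string]string{"alice": "a", "bob": "b"})
	fmt.Println(err != nil) // true
}
```

The same shape explains why the real function iterates `reg.Accounts` only when `len(reg.Accounts) == 1`: map iteration order is random in Go, so a single-element loop is the only deterministic case.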

// addAccount adds or updates an account in a registry and sets it active.
func (c *Config) addAccount(registryURL string, acct *Account) {
	reg := c.getOrCreateRegistry(registryURL)
	reg.Accounts[acct.Handle] = acct
	reg.Active = acct.Handle
}

// removeAccount removes an account from a registry.
// If it was the active account, clears active (or sets it to the remaining account if exactly one is left).
func (c *Config) removeAccount(registryURL, handle string) {
	reg := c.findRegistry(registryURL)
	if reg == nil {
		return
	}

	delete(reg.Accounts, handle)

	if reg.Active == handle {
		reg.Active = ""
		if len(reg.Accounts) == 1 {
			for h := range reg.Accounts {
				reg.Active = h
			}
		}
	}

	// Clean up empty registries
	if len(reg.Accounts) == 0 {
		delete(c.Registries, registryURL)
	}
}

// getUpdateCheckCachePath returns the path to the update check cache file
func getUpdateCheckCachePath() string {
	homeDir, err := os.UserHomeDir()
	if err != nil {
		return ""
	}
	return fmt.Sprintf("%s/.atcr/update-check.json", homeDir)
}

// loadUpdateCheckCache loads the update check cache from disk
func loadUpdateCheckCache() *UpdateCheckCache {
	path := getUpdateCheckCachePath()
	if path == "" {
		return nil
	}

	data, err := os.ReadFile(path)
	if err != nil {
		return nil
	}

	var cache UpdateCheckCache
	if err := json.Unmarshal(data, &cache); err != nil {
		return nil
	}

	return &cache
}

// saveUpdateCheckCache saves the update check cache to disk
func saveUpdateCheckCache(cache *UpdateCheckCache) {
	path := getUpdateCheckCachePath()
	if path == "" {
		return
	}

	data, err := json.MarshalIndent(cache, "", "  ")
	if err != nil {
		return
	}

	os.WriteFile(path, data, 0600) //nolint:errcheck
}
123
cmd/credential-helper/detect.go
Normal file
@@ -0,0 +1,123 @@
package main

import (
	"os"
	"strings"
)

// ImageRef is a parsed container image reference
type ImageRef struct {
	Host     string
	Identity string
	Repo     string
	Tag      string
	Raw      string
}

// detectImageRef walks the process tree looking for an image reference
// that matches the given registry host. It starts from the parent process
// and walks up to 5 ancestors to handle wrapper scripts (make, bash -c, etc.).
//
// Returns nil if no matching image reference is found — callers should
// fall back to the active account.
func detectImageRef(registryHost string) *ImageRef {
	// Normalize the registry host for matching
	matchHost := strings.TrimPrefix(registryHost, "https://")
	matchHost = strings.TrimPrefix(matchHost, "http://")
	matchHost = strings.TrimSuffix(matchHost, "/")

	pid := os.Getppid()
	for depth := 0; depth < 5; depth++ {
		args, err := getProcessArgs(pid)
		if err != nil {
			break
		}

		for _, arg := range args {
			if ref := parseImageRef(arg, matchHost); ref != nil {
				return ref
			}
		}

		ppid, err := getParentPID(pid)
		if err != nil || ppid == pid || ppid <= 1 {
			break
		}
		pid = ppid
	}

	return nil
}

// parseImageRef tries to parse a string as a container image reference.
// Expected format: host/identity/repo:tag or host/identity/repo
//
// Handles:
//   - docker:// and oci:// transport prefixes (skopeo)
//   - Flags (- prefix), paths (/ or . prefix), shell artifacts (|, &, ;)
//   - Optional tag (defaults to "latest")
//   - Host must look like a domain (contains ., or is localhost, or has :port)
//   - If matchHost is non-empty, only returns refs matching that host
func parseImageRef(s string, matchHost string) *ImageRef {
	// Skip flags, absolute paths, relative paths
	if strings.HasPrefix(s, "-") || strings.HasPrefix(s, "/") || strings.HasPrefix(s, ".") {
		return nil
	}

	// Strip docker:// or oci:// transport prefixes (skopeo)
	s = strings.TrimPrefix(s, "docker://")
	s = strings.TrimPrefix(s, "oci://")

	// Skip other transport schemes
	if strings.Contains(s, "://") {
		return nil
	}
	// Must contain at least one slash
	if !strings.Contains(s, "/") {
		return nil
	}
	// Skip things that look like shell commands
	if strings.ContainsAny(s, " |&;") {
		return nil
	}

	// Split off tag
	tag := "latest"
	refPart := s
	if atIdx := strings.LastIndex(s, ":"); atIdx != -1 {
		lastSlash := strings.LastIndex(s, "/")
		if atIdx > lastSlash {
			tag = s[atIdx+1:]
			refPart = s[:atIdx]
		}
	}

	parts := strings.Split(refPart, "/")

	// ATCR pattern requires host/identity/repo (3+ parts)
	if len(parts) < 3 {
		return nil
	}

	host := parts[0]
	identity := parts[1]
	repo := strings.Join(parts[2:], "/")

	// Host must look like a domain
	if !strings.Contains(host, ".") && host != "localhost" && !strings.Contains(host, ":") {
		return nil
	}

	// If a specific host was requested, enforce it
	if matchHost != "" && host != matchHost {
		return nil
	}

	return &ImageRef{
		Host:     host,
		Identity: identity,
		Repo:     repo,
		Tag:      tag,
		Raw:      s,
	}
}
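The subtle part of parseImageRef is its tag handling: a `:` only starts a tag when it appears after the last `/`, which keeps registry ports like `localhost:5000` intact. `splitTag` below is a hypothetical standalone helper that isolates just that rule:

```go
package main

import (
	"fmt"
	"strings"
)

// splitTag mirrors parseImageRef's tag logic: a ":" counts as a tag
// separator only when it falls after the last "/", so a colon inside
// the host (a registry port) is left alone.
func splitTag(ref string) (repo, tag string) {
	repo, tag = ref, "latest"
	if i := strings.LastIndex(ref, ":"); i != -1 && i > strings.LastIndex(ref, "/") {
		repo, tag = ref[:i], ref[i+1:]
	}
	return repo, tag
}

func main() {
	r, t := splitTag("atcr.io/alice/app:v1")
	fmt.Println(r, t) // atcr.io/alice/app v1

	// The colon here belongs to the port, so the tag defaults to "latest".
	r, t = splitTag("localhost:5000/alice/app")
	fmt.Println(r, t) // localhost:5000/alice/app latest
}
```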
173
cmd/credential-helper/device_auth.go
Normal file
@@ -0,0 +1,173 @@
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// Device authorization API types

type DeviceCodeRequest struct {
	DeviceName string `json:"device_name"`
}

type DeviceCodeResponse struct {
	DeviceCode      string `json:"device_code"`
	UserCode        string `json:"user_code"`
	VerificationURI string `json:"verification_uri"`
	ExpiresIn       int    `json:"expires_in"`
	Interval        int    `json:"interval"`
}

type DeviceTokenRequest struct {
	DeviceCode string `json:"device_code"`
}

type DeviceTokenResponse struct {
	DeviceSecret string `json:"device_secret,omitempty"`
	Handle       string `json:"handle,omitempty"`
	DID          string `json:"did,omitempty"`
	Error        string `json:"error,omitempty"`
}

// AuthErrorResponse is the JSON error response from /auth/token
type AuthErrorResponse struct {
	Error    string `json:"error"`
	Message  string `json:"message"`
	LoginURL string `json:"login_url,omitempty"`
}

// ValidationResult represents the result of credential validation
type ValidationResult struct {
	Valid               bool
	OAuthSessionExpired bool
	LoginURL            string
}

// requestDeviceCode requests a device code from the AppView.
// Returns the code response and resolved AppView URL.
// Does not print anything — the caller controls UX.
func requestDeviceCode(serverURL string) (*DeviceCodeResponse, string, error) {
	appViewURL := buildAppViewURL(serverURL)
	deviceName := hostname()

	reqBody, _ := json.Marshal(DeviceCodeRequest{DeviceName: deviceName})
	resp, err := http.Post(appViewURL+"/auth/device/code", "application/json", bytes.NewReader(reqBody))
	if err != nil {
		return nil, appViewURL, fmt.Errorf("failed to request device code: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body)
		return nil, appViewURL, fmt.Errorf("device code request failed: %s", string(body))
	}

	var codeResp DeviceCodeResponse
	if err := json.NewDecoder(resp.Body).Decode(&codeResp); err != nil {
		return nil, appViewURL, fmt.Errorf("failed to decode device code response: %w", err)
	}

	return &codeResp, appViewURL, nil
}

// pollDeviceToken polls the token endpoint until authorization completes.
// Does not print anything — the caller controls UX.
// Returns the account on success, or an error on timeout/failure.
func pollDeviceToken(appViewURL string, codeResp *DeviceCodeResponse) (*Account, error) {
	pollInterval := time.Duration(codeResp.Interval) * time.Second
	timeout := time.Duration(codeResp.ExpiresIn) * time.Second
	deadline := time.Now().Add(timeout)

	for time.Now().Before(deadline) {
		time.Sleep(pollInterval)

		tokenReqBody, _ := json.Marshal(DeviceTokenRequest{DeviceCode: codeResp.DeviceCode})
		tokenResp, err := http.Post(appViewURL+"/auth/device/token", "application/json", bytes.NewReader(tokenReqBody))
		if err != nil {
			continue
		}

		var tokenResult DeviceTokenResponse
		if err := json.NewDecoder(tokenResp.Body).Decode(&tokenResult); err != nil {
			tokenResp.Body.Close()
			continue
		}
		tokenResp.Body.Close()

		if tokenResult.Error == "authorization_pending" {
			continue
		}

		if tokenResult.Error != "" {
			return nil, fmt.Errorf("authorization failed: %s", tokenResult.Error)
		}

		return &Account{
			Handle:       tokenResult.Handle,
			DID:          tokenResult.DID,
			DeviceSecret: tokenResult.DeviceSecret,
		}, nil
	}

	return nil, fmt.Errorf("authorization timed out")
}

// validateCredentials checks if the credentials are still valid by making a test request
func validateCredentials(appViewURL, handle, deviceSecret string) ValidationResult {
	client := &http.Client{
		Timeout: 5 * time.Second,
	}

	tokenURL := appViewURL + "/auth/token?service=" + appViewURL

	req, err := http.NewRequest("GET", tokenURL, nil)
	if err != nil {
		return ValidationResult{Valid: false}
	}

	req.SetBasicAuth(handle, deviceSecret)

	resp, err := client.Do(req)
	if err != nil {
		// Network error — assume credentials are valid but server unreachable
		return ValidationResult{Valid: true}
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusOK {
		return ValidationResult{Valid: true}
	}

	if resp.StatusCode == http.StatusUnauthorized {
		body, err := io.ReadAll(resp.Body)
		if err == nil {
			var authErr AuthErrorResponse
			if json.Unmarshal(body, &authErr) == nil && authErr.Error == "oauth_session_expired" {
				return ValidationResult{
					Valid:               false,
					OAuthSessionExpired: true,
					LoginURL:            authErr.LoginURL,
				}
			}
		}
		return ValidationResult{Valid: false}
	}

	// Any other error = assume valid (don't re-auth on server issues)
	return ValidationResult{Valid: true}
}

// hostname returns the machine hostname, or a fallback.
func hostname() string {
	name, err := os.Hostname()
	if err != nil {
		return "Unknown Device"
	}
	return name
}
195
cmd/credential-helper/helpers.go
Normal file
@@ -0,0 +1,195 @@
package main

import (
	"encoding/json"
	"fmt"
	"net"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strings"

	"github.com/charmbracelet/lipgloss"
)

// Status message styles (matching gh CLI conventions)
var (
	successStyle = lipgloss.NewStyle().Foreground(lipgloss.Color("2")) // green
	warningStyle = lipgloss.NewStyle().Foreground(lipgloss.Color("3")) // yellow
	infoStyle    = lipgloss.NewStyle().Foreground(lipgloss.Color("6")) // cyan
	boldStyle    = lipgloss.NewStyle().Bold(true)
)

// logSuccess prints a green ✓ prefixed message to stderr
func logSuccess(format string, a ...any) {
	fmt.Fprintf(os.Stderr, "%s %s\n", successStyle.Render("✓"), fmt.Sprintf(format, a...))
}

// logWarning prints a yellow ! prefixed message to stderr
func logWarning(format string, a ...any) {
	fmt.Fprintf(os.Stderr, "%s %s\n", warningStyle.Render("!"), fmt.Sprintf(format, a...))
}

// logInfo prints a cyan - prefixed message to stderr
func logInfo(format string, a ...any) {
	fmt.Fprintf(os.Stderr, "%s %s\n", infoStyle.Render("-"), fmt.Sprintf(format, a...))
}

// logInfof prints a cyan - prefixed message to stderr without a trailing newline
func logInfof(format string, a ...any) {
	fmt.Fprintf(os.Stderr, "%s %s", infoStyle.Render("-"), fmt.Sprintf(format, a...))
}

// bold renders text in bold
func bold(s string) string {
	return boldStyle.Render(s)
}

// DockerDaemonConfig represents Docker's daemon.json configuration
type DockerDaemonConfig struct {
	InsecureRegistries []string `json:"insecure-registries"`
}

// openBrowser opens the specified URL in the default browser
func openBrowser(url string) error {
	var cmd *exec.Cmd

	switch runtime.GOOS {
	case "linux":
		cmd = exec.Command("xdg-open", url)
	case "darwin":
		cmd = exec.Command("open", url)
	case "windows":
		cmd = exec.Command("rundll32", "url.dll,FileProtocolHandler", url)
	default:
		return fmt.Errorf("unsupported platform")
	}

	return cmd.Start()
}

// buildAppViewURL constructs the AppView URL with the appropriate protocol
func buildAppViewURL(serverURL string) string {
	// If serverURL already has a scheme, use it as-is
	if strings.HasPrefix(serverURL, "http://") || strings.HasPrefix(serverURL, "https://") {
		return serverURL
	}

	// Determine protocol based on Docker configuration and heuristics
	if isInsecureRegistry(serverURL) {
		return "http://" + serverURL
	}

	// Default to HTTPS (mirrors Docker's default behavior)
	return "https://" + serverURL
}

// isInsecureRegistry checks if a registry should use HTTP instead of HTTPS
func isInsecureRegistry(serverURL string) bool {
	// Check Docker's insecure-registries configuration
	insecureRegistries := getDockerInsecureRegistries()
	for _, reg := range insecureRegistries {
		if reg == serverURL || reg == stripPort(serverURL) {
			return true
		}
	}

	// Fallback heuristics: localhost and private IPs
	host := stripPort(serverURL)

	if host == "localhost" || host == "127.0.0.1" || host == "::1" {
		return true
	}

	if ip := net.ParseIP(host); ip != nil {
		if ip.IsLoopback() || ip.IsPrivate() {
			return true
		}
	}

	return false
}

// getDockerInsecureRegistries reads Docker's insecure-registries configuration
func getDockerInsecureRegistries() []string {
	var paths []string

	switch runtime.GOOS {
	case "windows":
		programData := os.Getenv("ProgramData")
		if programData != "" {
			paths = append(paths, filepath.Join(programData, "docker", "config", "daemon.json"))
		}
	default:
		paths = append(paths, "/etc/docker/daemon.json")
		if homeDir, err := os.UserHomeDir(); err == nil {
			paths = append(paths, filepath.Join(homeDir, ".docker", "daemon.json"))
		}
	}

	for _, path := range paths {
		if config := readDockerDaemonConfig(path); config != nil && len(config.InsecureRegistries) > 0 {
			return config.InsecureRegistries
		}
	}

	return nil
}

// readDockerDaemonConfig reads and parses a Docker daemon.json file
func readDockerDaemonConfig(path string) *DockerDaemonConfig {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil
	}

	var config DockerDaemonConfig
	if err := json.Unmarshal(data, &config); err != nil {
		return nil
	}

	return &config
}

// stripPort removes the port from a host:port string
func stripPort(hostPort string) string {
	if colonIdx := strings.LastIndex(hostPort, ":"); colonIdx != -1 {
		if strings.Count(hostPort, ":") > 1 {
			return hostPort
		}
		return hostPort[:colonIdx]
	}
	return hostPort
}
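stripPort's multi-colon guard is what lets bare IPv6 literals pass through untouched (the loopback `::1` case is then special-cased in isInsecureRegistry). A few calls, using the function verbatim as defined above:

```go
package main

import (
	"fmt"
	"strings"
)

// stripPort, as defined above: drop a single trailing ":port", but leave
// multi-colon strings (bare IPv6 literals) untouched.
func stripPort(hostPort string) string {
	if colonIdx := strings.LastIndex(hostPort, ":"); colonIdx != -1 {
		if strings.Count(hostPort, ":") > 1 {
			return hostPort
		}
		return hostPort[:colonIdx]
	}
	return hostPort
}

func main() {
	fmt.Println(stripPort("atcr.io:5000")) // atcr.io
	fmt.Println(stripPort("atcr.io"))      // atcr.io
	fmt.Println(stripPort("::1"))          // ::1
}
```

One caveat: a bracketed IPv6 address with a port, such as `[::1]:8080`, also hits the multi-colon branch and keeps its port; the stdlib `net.SplitHostPort` handles that form, at the cost of returning an error for plain hosts.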

// isTerminal checks if the file is a terminal
func isTerminal(f *os.File) bool {
	stat, err := f.Stat()
	if err != nil {
		return false
	}
	return (stat.Mode() & os.ModeCharDevice) != 0
}

// getConfigDir returns the path to the .atcr config directory, creating it if needed
func getConfigDir() string {
	homeDir, err := os.UserHomeDir()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error getting home directory: %v\n", err)
		os.Exit(1)
	}

	atcrDir := filepath.Join(homeDir, ".atcr")
	if err := os.MkdirAll(atcrDir, 0700); err != nil {
		fmt.Fprintf(os.Stderr, "Error creating .atcr directory: %v\n", err)
		os.Exit(1)
	}

	return atcrDir
}

// getConfigPath returns the path to the device configuration file
func getConfigPath() string {
	return filepath.Join(getConfigDir(), "device.json")
}

@@ -1,581 +1,54 @@

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strings"
	"time"

	"github.com/spf13/cobra"
)

// DeviceConfig represents the stored device configuration
type DeviceConfig struct {
	Handle       string `json:"handle"`
	DeviceSecret string `json:"device_secret"`
	AppViewURL   string `json:"appview_url"`
}

// DeviceCredentials stores multiple device configurations keyed by AppView URL
type DeviceCredentials struct {
	Credentials map[string]DeviceConfig `json:"credentials"`
}

// DockerDaemonConfig represents Docker's daemon.json configuration
type DockerDaemonConfig struct {
	InsecureRegistries []string `json:"insecure-registries"`
}

// Docker credential helper protocol
// https://github.com/docker/docker-credential-helpers

// Credentials represents docker credentials
type Credentials struct {
	ServerURL string `json:"ServerURL,omitempty"`
	Username  string `json:"Username,omitempty"`
	Secret    string `json:"Secret,omitempty"`
}


// Device authorization API types

type DeviceCodeRequest struct {
	DeviceName string `json:"device_name"`
}

type DeviceCodeResponse struct {
	DeviceCode      string `json:"device_code"`
	UserCode        string `json:"user_code"`
	VerificationURI string `json:"verification_uri"`
	ExpiresIn       int    `json:"expires_in"`
	Interval        int    `json:"interval"`
}

type DeviceTokenRequest struct {
	DeviceCode string `json:"device_code"`
}

type DeviceTokenResponse struct {
	DeviceSecret string `json:"device_secret,omitempty"`
	Handle       string `json:"handle,omitempty"`
	DID          string `json:"did,omitempty"`
	Error        string `json:"error,omitempty"`
}

var (
	version = "dev"
	commit  = "none"
	date    = "unknown"

	// Update check cache TTL (24 hours)
	updateCheckCacheTTL = 24 * time.Hour
)

// timeNow is a variable so tests can override it.
var timeNow = time.Now

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintf(os.Stderr, "Usage: docker-credential-atcr <get|store|erase|version>\n")
		os.Exit(1)
	rootCmd := &cobra.Command{
		Use:   "docker-credential-atcr",
		Short: "ATCR container registry credential helper",
		Long: `docker-credential-atcr manages authentication for ATCR-compatible container registries.

It implements the Docker credential helper protocol and provides commands
for managing multiple accounts across multiple registries.`,
		Version:       fmt.Sprintf("%s (commit: %s, built: %s)", version, commit, date),
		SilenceUsage:  true,
		SilenceErrors: true,
	}

	command := os.Args[1]
	// Docker protocol commands (hidden — called by Docker, not users)
	rootCmd.AddCommand(newGetCmd())
	rootCmd.AddCommand(newStoreCmd())
	rootCmd.AddCommand(newEraseCmd())
	rootCmd.AddCommand(newListCmd())

	switch command {
	case "get":
		handleGet()
	case "store":
		handleStore()
	case "erase":
		handleErase()
	case "version":
		fmt.Printf("docker-credential-atcr %s (commit: %s, built: %s)\n", version, commit, date)
	default:
		fmt.Fprintf(os.Stderr, "Unknown command: %s\n", command)
	// User-facing commands
	rootCmd.AddCommand(newLoginCmd())
	rootCmd.AddCommand(newLogoutCmd())
	rootCmd.AddCommand(newStatusCmd())
	rootCmd.AddCommand(newSwitchCmd())
	rootCmd.AddCommand(newConfigureDockerCmd())
	rootCmd.AddCommand(newUpdateCmd())

	if err := rootCmd.Execute(); err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}
}

// handleGet retrieves credentials for the given server
func handleGet() {
	// Docker sends the server URL as a plain string on stdin (not JSON)
	var serverURL string
	if _, err := fmt.Fscanln(os.Stdin, &serverURL); err != nil {
		fmt.Fprintf(os.Stderr, "Error reading server URL: %v\n", err)
		os.Exit(1)
	}

	// Build AppView URL to use as lookup key
	appViewURL := buildAppViewURL(serverURL)

	// Load all device credentials
	configPath := getConfigPath()
	allCreds, err := loadDeviceCredentials(configPath)
	if err != nil {
		// No credentials file exists yet
		allCreds = &DeviceCredentials{
			Credentials: make(map[string]DeviceConfig),
		}
	}

	// Look up device config for this specific AppView URL
	deviceConfig, found := getDeviceConfig(allCreds, appViewURL)

	// If credentials exist, validate them
	if found && deviceConfig.DeviceSecret != "" {
		if !validateCredentials(appViewURL, deviceConfig.Handle, deviceConfig.DeviceSecret) {
			fmt.Fprintf(os.Stderr, "Stored credentials for %s are invalid or expired\n", appViewURL)
			// Delete the invalid credentials
			delete(allCreds.Credentials, appViewURL)
			saveDeviceCredentials(configPath, allCreds)
			// Mark as not found so we re-authorize below
			found = false
		}
	}

	if !found || deviceConfig.DeviceSecret == "" {
		// No credentials for this AppView
		// Check if we should attempt interactive authorization
		// We only do this if:
		// 1. ATCR_AUTO_AUTH environment variable is set to "1", OR
		// 2. We're in an interactive terminal (stderr is a terminal)
		shouldAutoAuth := os.Getenv("ATCR_AUTO_AUTH") == "1" || isTerminal(os.Stderr)

		if !shouldAutoAuth {
			fmt.Fprintf(os.Stderr, "No valid credentials found for %s\n", appViewURL)
			fmt.Fprintf(os.Stderr, "\nTo authenticate, run:\n")
			fmt.Fprintf(os.Stderr, "  export ATCR_AUTO_AUTH=1\n")
			fmt.Fprintf(os.Stderr, "  docker push %s/<user>/<image>:<tag>\n", serverURL)
			fmt.Fprintf(os.Stderr, "\nThis will trigger device authorization in your browser.\n")
			os.Exit(1)
		}

		// Auto-auth enabled - trigger device authorization
		fmt.Fprintf(os.Stderr, "Starting device authorization for %s...\n", appViewURL)

		newConfig, err := authorizeDevice(serverURL)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Device authorization failed: %v\n", err)
			fmt.Fprintf(os.Stderr, "\nFallback: Use 'docker login %s' with your ATProto app-password\n", serverURL)
			os.Exit(1)
		}

		// Save device configuration
		if err := saveDeviceConfig(configPath, newConfig); err != nil {
			fmt.Fprintf(os.Stderr, "Failed to save device config: %v\n", err)
			os.Exit(1)
		}

		fmt.Fprintf(os.Stderr, "✓ Device authorized successfully for %s!\n", appViewURL)
		deviceConfig = newConfig
	}

	// Return credentials for Docker
	creds := Credentials{
		ServerURL: serverURL,
		Username:  deviceConfig.Handle,
		Secret:    deviceConfig.DeviceSecret,
	}

	if err := json.NewEncoder(os.Stdout).Encode(creds); err != nil {
		fmt.Fprintf(os.Stderr, "Error encoding response: %v\n", err)
		os.Exit(1)
	}
}

// handleStore stores credentials (Docker calls this after login)
func handleStore() {
	var creds Credentials
	if err := json.NewDecoder(os.Stdin).Decode(&creds); err != nil {
		fmt.Fprintf(os.Stderr, "Error decoding credentials: %v\n", err)
		os.Exit(1)
	}

	// This is a no-op for the device auth flow
	// Users should use the automatic device authorization, not docker login
|
||||
// If they use docker login with app-password, that goes through /auth/token directly
|
||||
}
|
||||
|
||||
// handleErase removes stored credentials for a specific AppView
|
||||
func handleErase() {
|
||||
// Docker sends the server URL as a plain string on stdin (not JSON)
|
||||
var serverURL string
|
||||
if _, err := fmt.Fscanln(os.Stdin, &serverURL); err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error reading server URL: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
// Build AppView URL to use as lookup key
|
||||
appViewURL := buildAppViewURL(serverURL)
|
||||
|
||||
// Load all device credentials
|
||||
configPath := getConfigPath()
|
||||
allCreds, err := loadDeviceCredentials(configPath)
|
||||
if err != nil {
|
||||
// No credentials file exists, nothing to erase
|
||||
return
|
||||
}
|
||||
|
||||
// Remove the specific AppView URL's credentials
|
||||
delete(allCreds.Credentials, appViewURL)
|
||||
|
||||
// If no credentials remain, remove the file entirely
|
||||
if len(allCreds.Credentials) == 0 {
|
||||
if err := os.Remove(configPath); err != nil && !os.IsNotExist(err) {
|
||||
fmt.Fprintf(os.Stderr, "Error removing device config: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Otherwise, save the updated credentials
|
||||
if err := saveDeviceCredentials(configPath, allCreds); err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Error saving device config: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
// authorizeDevice performs the device authorization flow
|
||||
func authorizeDevice(serverURL string) (*DeviceConfig, error) {
|
||||
appViewURL := buildAppViewURL(serverURL)
|
||||
|
||||
// Get device name (hostname)
|
||||
deviceName, err := os.Hostname()
|
||||
if err != nil {
|
||||
deviceName = "Unknown Device"
|
||||
}
|
||||
|
||||
// 1. Request device code
|
||||
fmt.Fprintf(os.Stderr, "Requesting device authorization...\n")
|
||||
|
||||
reqBody, _ := json.Marshal(DeviceCodeRequest{DeviceName: deviceName})
|
||||
resp, err := http.Post(appViewURL+"/auth/device/code", "application/json", bytes.NewReader(reqBody))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to request device code: %w", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("device code request failed: %s", string(body))
|
||||
}
|
||||
|
||||
var codeResp DeviceCodeResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&codeResp); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode device code response: %w", err)
|
||||
}
|
||||
|
||||
// 2. Display authorization URL and user code
|
||||
verificationURL := codeResp.VerificationURI + "?user_code=" + codeResp.UserCode
|
||||
|
||||
fmt.Fprintf(os.Stderr, "\n╔════════════════════════════════════════════════════════════════╗\n")
|
||||
fmt.Fprintf(os.Stderr, "║ Device Authorization Required ║\n")
|
||||
fmt.Fprintf(os.Stderr, "╚════════════════════════════════════════════════════════════════╝\n\n")
|
||||
fmt.Fprintf(os.Stderr, "Visit this URL in your browser:\n")
|
||||
fmt.Fprintf(os.Stderr, " %s\n\n", verificationURL)
|
||||
fmt.Fprintf(os.Stderr, "Your code: %s\n\n", codeResp.UserCode)
|
||||
|
||||
// Try to open browser (may fail on headless systems)
|
||||
if err := openBrowser(verificationURL); err == nil {
|
||||
fmt.Fprintf(os.Stderr, "Opening browser...\n\n")
|
||||
} else {
|
||||
fmt.Fprintf(os.Stderr, "Could not open browser automatically (%v)\n", err)
|
||||
fmt.Fprintf(os.Stderr, "Please open the URL above manually.\n\n")
|
||||
}
|
||||
|
||||
fmt.Fprintf(os.Stderr, "Waiting for authorization")
|
||||
|
||||
// 3. Poll for authorization completion
|
||||
pollInterval := time.Duration(codeResp.Interval) * time.Second
|
||||
timeout := time.Duration(codeResp.ExpiresIn) * time.Second
|
||||
deadline := time.Now().Add(timeout)
|
||||
|
||||
dots := 0
|
||||
for time.Now().Before(deadline) {
|
||||
time.Sleep(pollInterval)
|
||||
|
||||
// Show progress dots
|
||||
dots = (dots + 1) % 4
|
||||
fmt.Fprintf(os.Stderr, "\rWaiting for authorization%s ", strings.Repeat(".", dots))
|
||||
|
||||
// Poll token endpoint
|
||||
tokenReqBody, _ := json.Marshal(DeviceTokenRequest{DeviceCode: codeResp.DeviceCode})
|
||||
tokenResp, err := http.Post(appViewURL+"/auth/device/token", "application/json", bytes.NewReader(tokenReqBody))
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "\nPoll failed: %v\n", err)
|
||||
continue
|
||||
}
|
||||
|
||||
var tokenResult DeviceTokenResponse
|
||||
json.NewDecoder(tokenResp.Body).Decode(&tokenResult)
|
||||
tokenResp.Body.Close()
|
||||
|
||||
if tokenResult.Error == "authorization_pending" {
|
||||
// Still waiting
|
||||
continue
|
||||
}
|
||||
|
||||
if tokenResult.Error != "" {
|
||||
fmt.Fprintf(os.Stderr, "\n")
|
||||
return nil, fmt.Errorf("authorization failed: %s", tokenResult.Error)
|
||||
}
|
||||
|
||||
// Success!
|
||||
fmt.Fprintf(os.Stderr, "\n")
|
||||
return &DeviceConfig{
|
||||
Handle: tokenResult.Handle,
|
||||
DeviceSecret: tokenResult.DeviceSecret,
|
||||
AppViewURL: appViewURL,
|
||||
}, nil
|
||||
}
|
||||
|
||||
fmt.Fprintf(os.Stderr, "\n")
|
||||
return nil, fmt.Errorf("authorization timeout")
|
||||
}
// getConfigPath returns the path to the device configuration file
func getConfigPath() string {
	homeDir, err := os.UserHomeDir()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error getting home directory: %v\n", err)
		os.Exit(1)
	}

	atcrDir := filepath.Join(homeDir, ".atcr")
	if err := os.MkdirAll(atcrDir, 0700); err != nil {
		fmt.Fprintf(os.Stderr, "Error creating .atcr directory: %v\n", err)
		os.Exit(1)
	}

	return filepath.Join(atcrDir, "device.json")
}

// loadDeviceCredentials loads all device credentials from disk
func loadDeviceCredentials(path string) (*DeviceCredentials, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}

	// Try to unmarshal as the new format (map of credentials)
	var creds DeviceCredentials
	if err := json.Unmarshal(data, &creds); err == nil && creds.Credentials != nil {
		return &creds, nil
	}

	// Backward compatibility: try to unmarshal as the old format (single config)
	var oldConfig DeviceConfig
	if err := json.Unmarshal(data, &oldConfig); err == nil && oldConfig.DeviceSecret != "" {
		// Migrate old format to new format
		creds = DeviceCredentials{
			Credentials: map[string]DeviceConfig{
				oldConfig.AppViewURL: oldConfig,
			},
		}
		return &creds, nil
	}

	return nil, fmt.Errorf("invalid device credentials format")
}

// getDeviceConfig retrieves a specific device config for an AppView URL
func getDeviceConfig(creds *DeviceCredentials, appViewURL string) (*DeviceConfig, bool) {
	if creds == nil || creds.Credentials == nil {
		return nil, false
	}
	config, found := creds.Credentials[appViewURL]
	return &config, found
}

// saveDeviceCredentials saves all device credentials to disk
func saveDeviceCredentials(path string, creds *DeviceCredentials) error {
	data, err := json.MarshalIndent(creds, "", "  ")
	if err != nil {
		return err
	}

	return os.WriteFile(path, data, 0600)
}

// saveDeviceConfig saves a single device config by adding/updating it in the credentials map
func saveDeviceConfig(path string, config *DeviceConfig) error {
	// Load existing credentials (or create new)
	creds, err := loadDeviceCredentials(path)
	if err != nil {
		// Create a new credentials structure
		creds = &DeviceCredentials{
			Credentials: make(map[string]DeviceConfig),
		}
	}

	// Add or update the config for this AppView URL
	creds.Credentials[config.AppViewURL] = *config

	// Save back to disk
	return saveDeviceCredentials(path, creds)
}

// openBrowser opens the specified URL in the default browser
func openBrowser(url string) error {
	var cmd *exec.Cmd

	switch runtime.GOOS {
	case "linux":
		cmd = exec.Command("xdg-open", url)
	case "darwin":
		cmd = exec.Command("open", url)
	case "windows":
		cmd = exec.Command("rundll32", "url.dll,FileProtocolHandler", url)
	default:
		return fmt.Errorf("unsupported platform")
	}

	return cmd.Start()
}

// buildAppViewURL constructs the AppView URL with the appropriate protocol
func buildAppViewURL(serverURL string) string {
	// If serverURL already has a scheme, use it as-is
	if strings.HasPrefix(serverURL, "http://") || strings.HasPrefix(serverURL, "https://") {
		return serverURL
	}

	// Determine protocol based on Docker configuration and heuristics
	if isInsecureRegistry(serverURL) {
		return "http://" + serverURL
	}

	// Default to HTTPS (mirrors Docker's default behavior)
	return "https://" + serverURL
}

// isInsecureRegistry checks if a registry should use HTTP instead of HTTPS
func isInsecureRegistry(serverURL string) bool {
	// Check Docker's insecure-registries configuration
	insecureRegistries := getDockerInsecureRegistries()
	for _, reg := range insecureRegistries {
		// Match the exact serverURL or just the host part
		if reg == serverURL || reg == stripPort(serverURL) {
			return true
		}
	}

	// Fallback heuristics: localhost and private IPs
	host := stripPort(serverURL)

	// Check for localhost variants
	if host == "localhost" || host == "127.0.0.1" || host == "::1" {
		return true
	}

	// Check if it's a private IP address
	if ip := net.ParseIP(host); ip != nil {
		if ip.IsLoopback() || ip.IsPrivate() {
			return true
		}
	}

	return false
}

// getDockerInsecureRegistries reads Docker's insecure-registries configuration
func getDockerInsecureRegistries() []string {
	var paths []string

	// Common Docker daemon.json locations
	switch runtime.GOOS {
	case "windows":
		programData := os.Getenv("ProgramData")
		if programData != "" {
			paths = append(paths, filepath.Join(programData, "docker", "config", "daemon.json"))
		}
	default:
		// Linux and macOS
		paths = append(paths, "/etc/docker/daemon.json")
		if homeDir, err := os.UserHomeDir(); err == nil {
			// Rootless Docker location
			paths = append(paths, filepath.Join(homeDir, ".docker", "daemon.json"))
		}
	}

	// Try each path
	for _, path := range paths {
		if config := readDockerDaemonConfig(path); config != nil && len(config.InsecureRegistries) > 0 {
			return config.InsecureRegistries
		}
	}

	return nil
}

// readDockerDaemonConfig reads and parses a Docker daemon.json file
func readDockerDaemonConfig(path string) *DockerDaemonConfig {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil
	}

	var config DockerDaemonConfig
	if err := json.Unmarshal(data, &config); err != nil {
		return nil
	}

	return &config
}

// stripPort removes the port from a host:port string
func stripPort(hostPort string) string {
	if colonIdx := strings.LastIndex(hostPort, ":"); colonIdx != -1 {
		// More than one colon means a bare IPv6 address — don't strip
		if strings.Count(hostPort, ":") > 1 {
			return hostPort
		}
		return hostPort[:colonIdx]
	}
	return hostPort
}

// isTerminal checks if the file is a terminal
func isTerminal(f *os.File) bool {
	// Use file stat to check if it's a character device (terminal)
	stat, err := f.Stat()
	if err != nil {
		return false
	}
	// On Unix, terminals are character devices with ModeCharDevice set
	return (stat.Mode() & os.ModeCharDevice) != 0
}

// validateCredentials checks if the credentials are still valid by making a test request
func validateCredentials(appViewURL, handle, deviceSecret string) bool {
	// Call /auth/token to validate the device secret and get a JWT.
	// This is the proper way to validate credentials — /v2/ requires a JWT, not Basic Auth.
	client := &http.Client{
		Timeout: 5 * time.Second,
	}

	// Build the /auth/token URL with minimal scope (just access to /v2/)
	tokenURL := appViewURL + "/auth/token?service=" + appViewURL

	req, err := http.NewRequest("GET", tokenURL, nil)
	if err != nil {
		return false
	}

	// Set basic auth with device credentials
	req.SetBasicAuth(handle, deviceSecret)

	resp, err := client.Do(req)
	if err != nil {
		// Network error — assume credentials are valid but the server is unreachable.
		// Don't trigger re-auth on network issues.
		return true
	}
	defer resp.Body.Close()

	// 200 = valid credentials; 401 = invalid/expired credentials.
	// Any other status (e.g. 5xx) = assume valid — don't re-auth on server issues.
	return resp.StatusCode != http.StatusUnauthorized
}
cmd/credential-helper/process_darwin.go (new file, 107 lines)
@@ -0,0 +1,107 @@
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"unsafe"

	"golang.org/x/sys/unix"
)

// getProcessArgs uses the kern.procargs2 sysctl to get process arguments.
// This is the same mechanism ps(1) uses on macOS — no exec.Command needed.
//
// The kern.procargs2 buffer layout:
//
//	[4 bytes: argc as int32]
//	[executable path\0]
//	[padding \0 bytes]
//	[argv[0]\0][argv[1]\0]...[argv[argc-1]\0]
//	[env vars...]
func getProcessArgs(pid int) ([]string, error) {
	// kern.procargs2 MIB: CTL_KERN=1, KERN_PROCARGS2=49
	mib := []int32{1, 49, int32(pid)} //nolint:mnd

	// First call to get the buffer size
	n := uintptr(0)
	if err := sysctl(mib, nil, &n, nil, 0); err != nil {
		return nil, fmt.Errorf("sysctl size query for pid %d: %w", pid, err)
	}

	buf := make([]byte, n)
	if err := sysctl(mib, &buf[0], &n, nil, 0); err != nil {
		return nil, fmt.Errorf("sysctl read for pid %d: %w", pid, err)
	}
	buf = buf[:n]

	if len(buf) < 4 {
		return nil, fmt.Errorf("procargs2 buffer too short for pid %d", pid)
	}

	// First 4 bytes: argc
	argc := int(binary.LittleEndian.Uint32(buf[:4]))
	pos := 4

	// Skip the executable path (null-terminated)
	end := bytes.IndexByte(buf[pos:], 0)
	if end == -1 {
		return nil, fmt.Errorf("no null terminator in exec path for pid %d", pid)
	}
	pos += end + 1

	// Skip padding null bytes
	for pos < len(buf) && buf[pos] == 0 {
		pos++
	}

	// Read argc arguments
	args := make([]string, 0, argc)
	for i := 0; i < argc && pos < len(buf); i++ {
		end := bytes.IndexByte(buf[pos:], 0)
		if end == -1 {
			args = append(args, string(buf[pos:]))
			break
		}
		args = append(args, string(buf[pos:pos+end]))
		pos += end + 1
	}

	if len(args) == 0 {
		return nil, fmt.Errorf("no args found for pid %d", pid)
	}

	return args, nil
}

// getParentPID uses the kern.proc.pid sysctl to find the parent PID.
func getParentPID(pid int) (int, error) {
	// kern.proc.pid MIB: CTL_KERN=1, KERN_PROC=14, KERN_PROC_PID=1
	mib := []int32{1, 14, 1, int32(pid)} //nolint:mnd

	var kinfo unix.KinfoProc
	n := uintptr(unsafe.Sizeof(kinfo))

	if err := sysctl(mib, (*byte)(unsafe.Pointer(&kinfo)), &n, nil, 0); err != nil {
		return 0, fmt.Errorf("sysctl kern.proc.pid for pid %d: %w", pid, err)
	}

	return int(kinfo.Eproc.Ppid), nil
}

// sysctl is a thin wrapper around the raw __sysctl syscall.
func sysctl(mib []int32, old *byte, oldlen *uintptr, new *byte, newlen uintptr) error {
	_, _, errno := unix.Syscall6(
		unix.SYS___SYSCTL,
		uintptr(unsafe.Pointer(&mib[0])),
		uintptr(len(mib)),
		uintptr(unsafe.Pointer(old)),
		uintptr(unsafe.Pointer(oldlen)),
		uintptr(unsafe.Pointer(new)),
		newlen,
	)
	if errno != 0 {
		return errno
	}
	return nil
}
cmd/credential-helper/process_linux.go (new file, 42 lines)
@@ -0,0 +1,42 @@
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// getProcessArgs reads /proc/<pid>/cmdline to get process arguments.
func getProcessArgs(pid int) ([]string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cmdline", pid))
	if err != nil {
		return nil, fmt.Errorf("reading /proc/%d/cmdline: %w", pid, err)
	}

	s := strings.TrimRight(string(data), "\x00")
	if s == "" {
		return nil, fmt.Errorf("empty cmdline for pid %d", pid)
	}

	return strings.Split(s, "\x00"), nil
}

// getParentPID reads /proc/<pid>/status to find the parent PID.
func getParentPID(pid int) (int, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/status", pid))
	if err != nil {
		return 0, err
	}

	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "PPid:") {
			fields := strings.Fields(line)
			if len(fields) >= 2 {
				return strconv.Atoi(fields[1])
			}
		}
	}

	return 0, fmt.Errorf("PPid not found in /proc/%d/status", pid)
}
cmd/credential-helper/process_other.go (new file, 19 lines)
@@ -0,0 +1,19 @@
//go:build !linux && !darwin

package main

import (
	"fmt"
	"runtime"
)

// getProcessArgs is not supported on this platform.
// The credential helper falls back to the active account.
func getProcessArgs(pid int) ([]string, error) {
	return nil, fmt.Errorf("process introspection not supported on %s", runtime.GOOS)
}

// getParentPID is not supported on this platform.
func getParentPID(pid int) (int, error) {
	return 0, fmt.Errorf("process introspection not supported on %s", runtime.GOOS)
}
cmd/credential-helper/protocol.go (new file, 234 lines)
@@ -0,0 +1,234 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"

	"github.com/spf13/cobra"
)

// Credentials represents docker credentials (Docker credential helper protocol)
type Credentials struct {
	ServerURL string `json:"ServerURL,omitempty"`
	Username  string `json:"Username,omitempty"`
	Secret    string `json:"Secret,omitempty"`
}

func newGetCmd() *cobra.Command {
	return &cobra.Command{
		Use:    "get",
		Short:  "Get credentials for a registry (Docker protocol)",
		Hidden: true,
		RunE:   runGet,
	}
}

func newStoreCmd() *cobra.Command {
	return &cobra.Command{
		Use:    "store",
		Short:  "Store credentials (Docker protocol)",
		Hidden: true,
		RunE:   runStore,
	}
}

func newEraseCmd() *cobra.Command {
	return &cobra.Command{
		Use:    "erase",
		Short:  "Erase credentials (Docker protocol)",
		Hidden: true,
		RunE:   runErase,
	}
}

func newListCmd() *cobra.Command {
	return &cobra.Command{
		Use:    "list",
		Short:  "List all credentials (Docker protocol extension)",
		Hidden: true,
		RunE:   runList,
	}
}

func runGet(cmd *cobra.Command, args []string) error {
	// If stdin is a terminal, the user ran this directly (not Docker calling us)
	if isTerminal(os.Stdin) {
		fmt.Fprintf(os.Stderr, "The 'get' command is part of the Docker credential helper protocol.\n")
		fmt.Fprintf(os.Stderr, "It should not be run directly.\n\n")
		fmt.Fprintf(os.Stderr, "To authenticate with a registry, run:\n")
		fmt.Fprintf(os.Stderr, "  docker-credential-atcr login\n\n")
		fmt.Fprintf(os.Stderr, "To check your accounts:\n")
		fmt.Fprintf(os.Stderr, "  docker-credential-atcr status\n")
		return fmt.Errorf("not a pipe")
	}

	// Docker sends the server URL as a plain string on stdin (not JSON)
	var serverURL string
	if _, err := fmt.Fscanln(os.Stdin, &serverURL); err != nil {
		return fmt.Errorf("reading server URL: %w", err)
	}

	appViewURL := buildAppViewURL(serverURL)

	cfg, err := loadConfig()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Warning: config load error: %v\n", err)
	}

	acct, err := cfg.resolveAccount(appViewURL, serverURL)
	if err != nil {
		return err
	}

	// Validate credentials
	result := validateCredentials(appViewURL, acct.Handle, acct.DeviceSecret)
	if !result.Valid {
		if result.OAuthSessionExpired {
			loginURL := result.LoginURL
			if loginURL == "" {
				loginURL = appViewURL + "/auth/oauth/login"
			}
			fmt.Fprintf(os.Stderr, "OAuth session expired for %s.\n", acct.Handle)
			fmt.Fprintf(os.Stderr, "Please visit: %s\n", loginURL)
			fmt.Fprintf(os.Stderr, "Then retry your docker command.\n")
			return fmt.Errorf("oauth session expired")
		}

		// Generic auth failure — remove the bad account
		fmt.Fprintf(os.Stderr, "Credentials for %s are invalid.\n", acct.Handle)
		fmt.Fprintf(os.Stderr, "Run: docker-credential-atcr login\n")
		cfg.removeAccount(appViewURL, acct.Handle)
		cfg.save() //nolint:errcheck
		return fmt.Errorf("invalid credentials")
	}

	// Check for updates (cached, non-blocking)
	checkAndNotifyUpdate(appViewURL)

	// Return credentials for Docker
	creds := Credentials{
		ServerURL: serverURL,
		Username:  acct.Handle,
		Secret:    acct.DeviceSecret,
	}

	return json.NewEncoder(os.Stdout).Encode(creds)
}

func runStore(cmd *cobra.Command, args []string) error {
	var creds Credentials
	if err := json.NewDecoder(os.Stdin).Decode(&creds); err != nil {
		return fmt.Errorf("decoding credentials: %w", err)
	}

	// Only store if the secret looks like a device secret
	if !strings.HasPrefix(creds.Secret, "atcr_device_") {
		// Not our device secret — ignore (e.g., docker login with app-password)
		return nil
	}

	appViewURL := buildAppViewURL(creds.ServerURL)

	cfg, err := loadConfig()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Warning: config load error: %v\n", err)
	}

	cfg.addAccount(appViewURL, &Account{
		Handle:       creds.Username,
		DeviceSecret: creds.Secret,
	})

	return cfg.save()
}

func runErase(cmd *cobra.Command, args []string) error {
	var serverURL string
	if _, err := fmt.Fscanln(os.Stdin, &serverURL); err != nil {
		return fmt.Errorf("reading server URL: %w", err)
	}

	appViewURL := buildAppViewURL(serverURL)

	cfg, err := loadConfig()
	if err != nil {
		return nil // No config, nothing to erase
	}

	reg := cfg.findRegistry(appViewURL)
	if reg == nil {
		return nil
	}

	// Erase the active account (or the sole account)
	handle := reg.Active
	if handle == "" && len(reg.Accounts) == 1 {
		for h := range reg.Accounts {
			handle = h
		}
	}
	if handle == "" {
		return nil
	}

	cfg.removeAccount(appViewURL, handle)
	return cfg.save()
}

func runList(cmd *cobra.Command, args []string) error {
	cfg, err := loadConfig()
	if err != nil {
		// Return an empty object
		fmt.Println("{}")
		return nil
	}

	// Docker list protocol: {"ServerURL": "Username", ...}
	result := make(map[string]string)
	for url, reg := range cfg.Registries {
		// Strip the scheme for Docker compatibility
		host := strings.TrimPrefix(url, "https://")
		host = strings.TrimPrefix(host, "http://")
		for _, acct := range reg.Accounts {
			result[host] = acct.Handle
		}
	}

	return json.NewEncoder(os.Stdout).Encode(result)
}

// checkAndNotifyUpdate checks for updates (cached for 24h) and notifies the user
func checkAndNotifyUpdate(appViewURL string) {
	cache := loadUpdateCheckCache()
	if cache != nil && cache.Current == version && cache.CheckedAt.Add(updateCheckCacheTTL).After(timeNow()) {
		// Cache is fresh and for the current version — notify from cache, no network call
		if isNewerVersion(cache.Latest, version) {
			fmt.Fprintf(os.Stderr, "\nUpdate available: %s (current: %s)\n", cache.Latest, version)
			fmt.Fprintf(os.Stderr, "Run: docker-credential-atcr update\n\n")
		}
		return
	}

	// Cache is missing, stale, or for another version — fetch version info
	apiURL := appViewURL + "/api/credential-helper/version"
	versionInfo, err := fetchVersionInfo(apiURL)
	if err != nil {
		return // Silently fail
	}

	saveUpdateCheckCache(&UpdateCheckCache{
		CheckedAt: timeNow(),
		Latest:    versionInfo.Latest,
		Current:   version,
	})

	if isNewerVersion(versionInfo.Latest, version) {
		fmt.Fprintf(os.Stderr, "\nUpdate available: %s (current: %s)\n", versionInfo.Latest, version)
		fmt.Fprintf(os.Stderr, "Run: docker-credential-atcr update\n\n")
	}
}
cmd/db-migrate/main.go (new file, 374 lines)
@@ -0,0 +1,374 @@
// db-migrate copies all tables and data from a local SQLite database to a
// remote libsql database (e.g. Bunny Database, Turso). It reads the schema
// from sqlite_master, creates tables on the remote, and inserts all rows
// in batches. Generic — works with any SQLite DB (appview, hold, etc.).
//
// Usage:
//
//	go run ./cmd/db-migrate --local /path/to/local.db --remote "libsql://..." --token "..."
//	go run ./cmd/db-migrate --local /path/to/local.db --remote "libsql://..." --token "..." --skip-existing
package main

import (
	"database/sql"
	"flag"
	"fmt"
	"log"
	"os"
	"strings"
	"time"

	_ "github.com/tursodatabase/go-libsql"
)

func main() {
	localPath := flag.String("local", "", "Path to local SQLite database file")
	remoteURL := flag.String("remote", "", "Remote libsql URL (libsql://...)")
	authToken := flag.String("token", "", "Auth token for remote database")
	skipExisting := flag.Bool("skip-existing", false, "Skip tables that already have data on remote")
	batchSize := flag.Int("batch-size", 100, "Number of rows per INSERT batch")
	dryRun := flag.Bool("dry-run", false, "Show what would be migrated without writing")
	flag.Parse()

	if *localPath == "" || *remoteURL == "" || *authToken == "" {
		flag.Usage()
		os.Exit(1)
	}

	// Open the local database read-only
	localDSN := *localPath
	if !strings.HasPrefix(localDSN, "file:") {
		localDSN = "file:" + localDSN
	}
	localDSN += "?mode=ro"

	localDB, err := sql.Open("libsql", localDSN)
	if err != nil {
		log.Fatalf("Failed to open local database: %v", err)
	}
	defer localDB.Close()

	if err := localDB.Ping(); err != nil {
		log.Fatalf("Failed to ping local database: %v", err)
	}

	// Open the remote database
	remoteDSN := fmt.Sprintf("%s?authToken=%s", *remoteURL, *authToken)
	remoteDB, err := sql.Open("libsql", remoteDSN)
	if err != nil {
		log.Fatalf("Failed to open remote database: %v", err)
	}
	defer remoteDB.Close()

	if err := remoteDB.Ping(); err != nil {
		log.Fatalf("Failed to ping remote database: %v", err)
	}

	// Get all user tables from the local database
	tables, err := getTables(localDB)
	if err != nil {
		log.Fatalf("Failed to list tables: %v", err)
	}

	if len(tables) == 0 {
		log.Println("No tables found in local database")
		return
	}

	fmt.Printf("Found %d tables to migrate\n\n", len(tables))

	start := time.Now()

	if !*dryRun {
		// Phase 1: Create all tables first so FK references resolve
		fmt.Println("Creating tables...")
		for _, t := range tables {
			if err := createTable(remoteDB, t); err != nil {
				log.Fatalf("Failed to create table %s: %v", t.name, err)
			}
		}
		fmt.Println()
	}

	// Phase 2: Copy data
	fmt.Println("Migrating data...")
	totalRows := 0
	for _, t := range tables {
		count, err := migrateTable(localDB, remoteDB, t, *batchSize, *skipExisting, *dryRun)
		if err != nil {
			log.Fatalf("Failed to migrate table %s: %v", t.name, err)
		}
		totalRows += count
	}

	if !*dryRun {
		// Phase 3: Create indexes after data is loaded (faster than indexing during insert)
		fmt.Println("\nCreating indexes...")
		for _, t := range tables {
			if err := createIndexes(localDB, remoteDB, t.name); err != nil {
				log.Fatalf("Failed to create indexes for %s: %v", t.name, err)
			}
		}
	}

	fmt.Printf("\nDone. %d total rows across %d tables in %s\n", totalRows, len(tables), time.Since(start).Round(time.Millisecond))
	if *dryRun {
		fmt.Println("(dry run — nothing was written)")
	}
}

type tableInfo struct {
	name string
	ddl  string
}

func getTables(db *sql.DB) ([]tableInfo, error) {
	rows, err := db.Query(`
		SELECT name, sql FROM sqlite_master
		WHERE type = 'table'
		  AND name NOT LIKE 'sqlite_%'
		  AND name NOT LIKE '_litestream_%'
		  AND name NOT LIKE 'libsql_%'
		ORDER BY name
	`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var tables []tableInfo
	for rows.Next() {
		var t tableInfo
		var ddl sql.NullString
		if err := rows.Scan(&t.name, &ddl); err != nil {
			return nil, err
		}
		if ddl.Valid {
			t.ddl = ddl.String
		}
		tables = append(tables, t)
	}
	if err := rows.Err(); err != nil {
		return nil, err
|
||||
}
|
||||
|
||||
// Sort tables so those referenced by foreign keys come first.
|
||||
// Tables with FK references depend on other tables existing and
|
||||
// having data, so we insert referenced tables first.
|
||||
return topoSortTables(db, tables)
|
||||
}
|
||||
|
||||
// topoSortTables orders tables so that referenced (parent) tables come before
|
||||
// tables that reference them via foreign keys.
|
||||
func topoSortTables(db *sql.DB, tables []tableInfo) ([]tableInfo, error) {
|
||||
byName := make(map[string]tableInfo, len(tables))
|
||||
for _, t := range tables {
|
||||
byName[t.name] = t
|
||||
}
|
||||
|
||||
// Build dependency graph: table -> tables it references
|
||||
deps := make(map[string][]string)
|
||||
for _, t := range tables {
|
||||
fkRows, err := db.Query(fmt.Sprintf("PRAGMA foreign_key_list([%s])", t.name))
|
||||
if err != nil {
|
||||
// PRAGMA might not return rows for tables without FKs
|
||||
continue
|
||||
}
|
||||
seen := make(map[string]bool)
|
||||
for fkRows.Next() {
|
||||
var id, seq int
|
||||
var table, from, to, onUpdate, onDelete, match string
|
||||
if err := fkRows.Scan(&id, &seq, &table, &from, &to, &onUpdate, &onDelete, &match); err != nil {
|
||||
fkRows.Close()
|
||||
return nil, err
|
||||
}
|
||||
if !seen[table] {
|
||||
deps[t.name] = append(deps[t.name], table)
|
||||
seen[table] = true
|
||||
}
|
||||
}
|
||||
fkRows.Close()
|
||||
}
|
||||
|
||||
	// Topological sort via depth-first traversal: each table is appended
	// only after every table it references has been appended.
	visited := make(map[string]bool)
	var sorted []tableInfo
	var visit func(name string)
	visit = func(name string) {
		if visited[name] {
			return
		}
		visited[name] = true
		for _, dep := range deps[name] {
			visit(dep)
		}
		if t, ok := byName[name]; ok {
			sorted = append(sorted, t)
		}
	}
	for _, t := range tables {
		visit(t.name)
	}
	return sorted, nil
}

func getIndexes(db *sql.DB, tableName string) ([]string, error) {
	rows, err := db.Query(`
		SELECT sql FROM sqlite_master
		WHERE type = 'index'
		  AND tbl_name = ?
		  AND sql IS NOT NULL
	`, tableName)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var indexes []string
	for rows.Next() {
		var ddl string
		if err := rows.Scan(&ddl); err != nil {
			return nil, err
		}
		indexes = append(indexes, ddl)
	}
	return indexes, rows.Err()
}

func createTable(remoteDB *sql.DB, t tableInfo) error {
	if t.ddl == "" {
		return nil
	}
	ddl := t.ddl
	if !strings.Contains(strings.ToUpper(ddl), "IF NOT EXISTS") {
		ddl = strings.Replace(ddl, "CREATE TABLE", "CREATE TABLE IF NOT EXISTS", 1)
	}
	if _, err := remoteDB.Exec(ddl); err != nil {
		return fmt.Errorf("create table %s: %w", t.name, err)
	}
	fmt.Printf(" %s\n", t.name)
	return nil
}

func createIndexes(localDB, remoteDB *sql.DB, tableName string) error {
	indexes, err := getIndexes(localDB, tableName)
	if err != nil {
		return err
	}
	for _, idx := range indexes {
		ddl := idx
		if !strings.Contains(strings.ToUpper(ddl), "IF NOT EXISTS") {
			ddl = strings.Replace(ddl, "CREATE INDEX", "CREATE INDEX IF NOT EXISTS", 1)
			ddl = strings.Replace(ddl, "CREATE UNIQUE INDEX", "CREATE UNIQUE INDEX IF NOT EXISTS", 1)
		}
		if _, err := remoteDB.Exec(ddl); err != nil {
			return fmt.Errorf("create index on %s: %w", tableName, err)
		}
	}
	if len(indexes) > 0 {
		fmt.Printf(" %s: %d indexes\n", tableName, len(indexes))
	}
	return nil
}

func migrateTable(localDB, remoteDB *sql.DB, t tableInfo, batchSize int, skipExisting, dryRun bool) (int, error) {
	var localCount int
	if err := localDB.QueryRow(fmt.Sprintf("SELECT COUNT(*) FROM [%s]", t.name)).Scan(&localCount); err != nil {
		return 0, fmt.Errorf("count local rows: %w", err)
	}

	if localCount == 0 {
		fmt.Printf(" %-30s %6d rows (empty)\n", t.name, 0)
		return 0, nil
	}

	if dryRun {
		fmt.Printf(" %-30s %6d rows (would migrate)\n", t.name, localCount)
		return localCount, nil
	}

	if skipExisting {
		var remoteCount int
		if err := remoteDB.QueryRow(fmt.Sprintf("SELECT COUNT(*) FROM [%s]", t.name)).Scan(&remoteCount); err != nil {
			return 0, fmt.Errorf("count remote rows: %w", err)
		}
		if remoteCount > 0 {
			fmt.Printf(" %-30s %6d rows (skipped, %d on remote)\n", t.name, localCount, remoteCount)
			return 0, nil
		}
	}

	rows, err := localDB.Query(fmt.Sprintf("SELECT * FROM [%s]", t.name))
	if err != nil {
		return 0, fmt.Errorf("select: %w", err)
	}
	defer rows.Close()

	cols, err := rows.Columns()
	if err != nil {
		return 0, fmt.Errorf("columns: %w", err)
	}

	placeholders := make([]string, len(cols))
	quotedCols := make([]string, len(cols))
	for i, c := range cols {
		placeholders[i] = "?"
		quotedCols[i] = fmt.Sprintf("[%s]", c)
	}
	insertPrefix := fmt.Sprintf("INSERT INTO [%s] (%s) VALUES ", t.name, strings.Join(quotedCols, ", "))
	rowPlaceholder := "(" + strings.Join(placeholders, ", ") + ")"

	inserted := 0
	batch := make([][]any, 0, batchSize)

	for rows.Next() {
		vals := make([]any, len(cols))
		ptrs := make([]any, len(cols))
		for i := range vals {
			ptrs[i] = &vals[i]
		}
		if err := rows.Scan(ptrs...); err != nil {
			return 0, fmt.Errorf("scan: %w", err)
		}
		batch = append(batch, vals)

		if len(batch) >= batchSize {
			if err := insertBatch(remoteDB, insertPrefix, rowPlaceholder, batch); err != nil {
				return 0, fmt.Errorf("insert batch at row %d: %w", inserted, err)
			}
			inserted += len(batch)
			batch = batch[:0]
		}
	}

	if len(batch) > 0 {
		if err := insertBatch(remoteDB, insertPrefix, rowPlaceholder, batch); err != nil {
			return 0, fmt.Errorf("insert final batch: %w", err)
		}
		inserted += len(batch)
	}

	if err := rows.Err(); err != nil {
		return 0, fmt.Errorf("rows iteration: %w", err)
	}

	fmt.Printf(" %-30s %6d rows migrated\n", t.name, inserted)
	return inserted, nil
}

func insertBatch(db *sql.DB, prefix, rowPlaceholder string, batch [][]any) error {
	if len(batch) == 0 {
		return nil
	}

	placeholders := make([]string, len(batch))
	var args []any
	for i, row := range batch {
		placeholders[i] = rowPlaceholder
		args = append(args, row...)
	}

	query := prefix + strings.Join(placeholders, ", ")
	_, err := db.Exec(query, args...)
	return err
}
22
cmd/healthcheck/main.go
Normal file
@@ -0,0 +1,22 @@
// Minimal HTTP health check binary for scratch Docker images.
// Usage: healthcheck <url>
// Exits 0 if the URL returns HTTP 200, 1 otherwise.
package main

import (
	"net/http"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(os.Args[1])
	if err != nil || resp.StatusCode != http.StatusOK {
		os.Exit(1)
	}
	os.Exit(0)
}
198
cmd/hold/main.go
@@ -1,160 +1,88 @@
package main

import (
"encoding/json"
"fmt"
"log"
"net/http"
"strconv"
"strings"
"time"
"log/slog"
"os"

"github.com/spf13/cobra"

"atcr.io/pkg/atproto"
"atcr.io/pkg/hold"
indigooauth "github.com/bluesky-social/indigo/atproto/auth/oauth"

// Import storage drivers
_ "github.com/distribution/distribution/v3/registry/storage/driver/filesystem"
_ "github.com/distribution/distribution/v3/registry/storage/driver/s3-aws"
)

func main() {
// Load configuration from environment variables
cfg, err := hold.LoadConfigFromEnv()
if err != nil {
log.Fatalf("Failed to load config: %v", err)
}
var configFile string

// Create hold service
service, err := hold.NewHoldService(cfg)
if err != nil {
log.Fatalf("Failed to create hold service: %v", err)
}
var rootCmd = &cobra.Command{
Use: "atcr-hold",
Short: "ATCR Hold Service - BYOS blob storage",
}

// Setup HTTP routes
mux := http.NewServeMux()
mux.HandleFunc("/health", service.HealthHandler)
mux.HandleFunc("/register", service.HandleRegister)
mux.HandleFunc("/presigned-url", service.HandlePresignedURL)
mux.HandleFunc("/move", service.HandleMove)
var serveCmd = &cobra.Command{
Use: "serve",
Short: "Start the hold service",
Long: `Start the ATCR hold service with embedded PDS and S3 blob storage.

// Multipart upload endpoints
mux.HandleFunc("/start-multipart", service.HandleStartMultipart)
mux.HandleFunc("/part-presigned-url", service.HandleGetPartURL)
mux.HandleFunc("/complete-multipart", service.HandleCompleteMultipart)
mux.HandleFunc("/abort-multipart", service.HandleAbortMultipart)

// Buffered multipart part upload endpoint (for when presigned URLs are disabled/unavailable)
mux.HandleFunc("/multipart-parts/", func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPut {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}

// Parse URL: /multipart-parts/{uploadID}/{partNumber}
path := r.URL.Path[len("/multipart-parts/"):]
parts := strings.Split(path, "/")
if len(parts) != 2 {
http.Error(w, "invalid path format, expected /multipart-parts/{uploadID}/{partNumber}", http.StatusBadRequest)
return
}

uploadID := parts[0]
partNumber, err := strconv.Atoi(parts[1])
Configuration is loaded in layers: defaults -> YAML file -> environment variables.
Use --config to specify a YAML configuration file.
Environment variables always override file values.`,
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
cfg, err := hold.LoadConfig(configFile)
if err != nil {
http.Error(w, fmt.Sprintf("invalid part number: %v", err), http.StatusBadRequest)
return
return fmt.Errorf("failed to load config: %w", err)
}

// Get DID from query param
did := r.URL.Query().Get("did")

service.HandleMultipartPartUpload(w, r, uploadID, partNumber, did, service.MultipartMgr)
})

// Pre-register OAuth callback route (will be populated by auto-registration)
var oauthCallbackHandler http.HandlerFunc
mux.HandleFunc("/auth/oauth/callback", func(w http.ResponseWriter, r *http.Request) {
if oauthCallbackHandler != nil {
oauthCallbackHandler(w, r)
} else {
http.Error(w, "OAuth callback not initialized", http.StatusServiceUnavailable)
}
})

// OAuth client metadata endpoint for ATProto OAuth
// The hold service serves its metadata at /client-metadata.json
// This is referenced by its client ID URL
mux.HandleFunc("/client-metadata.json", func(w http.ResponseWriter, r *http.Request) {
// Create a temporary config to generate metadata (indigo provides this)
redirectURI := cfg.Server.PublicURL + "/auth/oauth/callback"
clientID := cfg.Server.PublicURL + "/client-metadata.json"

// Define scopes needed for hold registration and crew management
// Omit action parameter to allow all actions (create, update, delete)
scopes := []string{
"atproto",
fmt.Sprintf("repo:%s", atproto.HoldCollection),
fmt.Sprintf("repo:%s", atproto.HoldCrewCollection),
fmt.Sprintf("repo:%s", atproto.SailorProfileCollection),
server, err := hold.NewHoldServer(cfg)
if err != nil {
return fmt.Errorf("failed to initialize hold server: %w", err)
}

config := indigooauth.NewPublicConfig(clientID, redirectURI, scopes)
metadata := config.ClientMetadata()
return server.Serve()
},
}

// Serve as JSON
w.Header().Set("Content-Type", "application/json")
w.Header().Set("Access-Control-Allow-Origin", "*")
json.NewEncoder(w).Encode(metadata)
})
mux.HandleFunc("/blobs/", func(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet, http.MethodHead:
service.HandleProxyGet(w, r)
case http.MethodPut:
service.HandleProxyPut(w, r)
default:
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
var configCmd = &cobra.Command{
Use: "config",
Short: "Configuration management commands",
}

var configInitCmd = &cobra.Command{
Use: "init [path]",
Short: "Generate an example configuration file",
Long: `Generate an example YAML configuration file with all available options.
If path is provided, writes to that file. Otherwise writes to stdout.`,
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
yamlBytes, err := hold.ExampleYAML()
if err != nil {
return fmt.Errorf("failed to generate example config: %w", err)
}
})

// Create server
server := &http.Server{
Addr: cfg.Server.Addr,
Handler: mux,
ReadTimeout: cfg.Server.ReadTimeout,
WriteTimeout: cfg.Server.WriteTimeout,
}

// Start server in goroutine so we can do auto-registration after it's running
serverErr := make(chan error, 1)
go func() {
log.Printf("Starting hold service on %s", cfg.Server.Addr)
if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
serverErr <- err
if len(args) == 1 {
if err := os.WriteFile(args[0], yamlBytes, 0644); err != nil {
return fmt.Errorf("failed to write config file: %w", err)
}
fmt.Fprintf(os.Stderr, "Wrote example config to %s\n", args[0])
return nil
}
}()
fmt.Print(string(yamlBytes))
return nil
},
}

// Give server a moment to start
time.Sleep(100 * time.Millisecond)
func init() {
serveCmd.Flags().StringVarP(&configFile, "config", "c", "", "path to YAML configuration file")

// Auto-register if owner DID is set (now that server is running)
if cfg.Registration.OwnerDID != "" {
if err := service.AutoRegister(&oauthCallbackHandler); err != nil {
log.Printf("WARNING: Auto-registration failed: %v", err)
log.Printf("You can register manually later using the /register endpoint")
} else {
log.Printf("Successfully registered hold service in PDS")
}
configCmd.AddCommand(configInitCmd)

// Reconcile allow-all crew state
if err := service.ReconcileAllowAllCrew(&oauthCallbackHandler); err != nil {
log.Printf("WARNING: Failed to reconcile allow-all crew state: %v", err)
}
}
rootCmd.AddCommand(serveCmd)
rootCmd.AddCommand(configCmd)
rootCmd.AddCommand(repoCmd)
rootCmd.AddCommand(plcCmd)
}

// Wait for server error or shutdown
if err := <-serverErr; err != nil {
log.Fatalf("Server failed: %v", err)
func main() {
if err := rootCmd.Execute(); err != nil {
slog.Error("Command failed", "error", err)
os.Exit(1)
}
}
164
cmd/hold/plc.go
Normal file
@@ -0,0 +1,164 @@
package main

import (
	"context"
	"fmt"
	"log/slog"

	"atcr.io/pkg/auth/oauth"
	"atcr.io/pkg/hold"
	"atcr.io/pkg/hold/pds"

	"github.com/bluesky-social/indigo/atproto/atcrypto"
	didplc "github.com/did-method-plc/go-didplc"
	"github.com/spf13/cobra"
)

var plcCmd = &cobra.Command{
	Use:   "plc",
	Short: "PLC directory management commands",
}

var plcConfigFile string

var plcAddRotationKeyCmd = &cobra.Command{
	Use:   "add-rotation-key <multibase-key>",
	Short: "Add a rotation key to this hold's PLC identity",
	Long: `Add an additional rotation key to the hold's did:plc document.
The key must be a multibase-encoded private key (K-256 or P-256, starting with 'z').
The hold's configured rotation key is used to sign the PLC update.

    atcr-hold plc add-rotation-key --config config.yaml z...`,
	Args: cobra.ExactArgs(1),
	RunE: func(cmd *cobra.Command, args []string) error {
		cfg, err := hold.LoadConfig(plcConfigFile)
		if err != nil {
			return fmt.Errorf("failed to load config: %w", err)
		}

		if cfg.Database.DIDMethod != "plc" {
			return fmt.Errorf("this command only works with did:plc (database.did_method is %q)", cfg.Database.DIDMethod)
		}

		ctx := context.Background()

		// Resolve the hold's DID
		holdDID, err := pds.LoadOrCreateDID(ctx, pds.DIDConfig{
			DID:             cfg.Database.DID,
			DIDMethod:       cfg.Database.DIDMethod,
			PublicURL:       cfg.Server.PublicURL,
			DBPath:          cfg.Database.Path,
			SigningKeyPath:  cfg.Database.KeyPath,
			RotationKey:     cfg.Database.RotationKey,
			PLCDirectoryURL: cfg.Database.PLCDirectoryURL,
		})
		if err != nil {
			return fmt.Errorf("failed to resolve hold DID: %w", err)
		}

		// Parse the rotation key from config (required for signing PLC updates)
		if cfg.Database.RotationKey == "" {
			return fmt.Errorf("database.rotation_key must be set to sign PLC updates")
		}
		rotationKey, err := atcrypto.ParsePrivateMultibase(cfg.Database.RotationKey)
		if err != nil {
			return fmt.Errorf("failed to parse rotation_key from config: %w", err)
		}

		// Parse the new key to add (K-256 or P-256)
		newKey, err := atcrypto.ParsePrivateMultibase(args[0])
		if err != nil {
			return fmt.Errorf("failed to parse key argument: %w", err)
		}
		newKeyPub, err := newKey.PublicKey()
		if err != nil {
			return fmt.Errorf("failed to get public key from argument: %w", err)
		}
		newKeyDIDKey := newKeyPub.DIDKey()

		// Load signing key for verification methods
		keyPath := cfg.Database.KeyPath
		if keyPath == "" {
			keyPath = cfg.Database.Path + "/signing.key"
		}
		signingKey, err := oauth.GenerateOrLoadPDSKey(keyPath)
		if err != nil {
			return fmt.Errorf("failed to load signing key: %w", err)
		}

		// Fetch current PLC state
		plcDirectoryURL := cfg.Database.PLCDirectoryURL
		if plcDirectoryURL == "" {
			plcDirectoryURL = "https://plc.directory"
		}
		client := &didplc.Client{DirectoryURL: plcDirectoryURL}

		opLog, err := client.OpLog(ctx, holdDID)
		if err != nil {
			return fmt.Errorf("failed to fetch PLC op log: %w", err)
		}
		if len(opLog) == 0 {
			return fmt.Errorf("empty op log for %s", holdDID)
		}

		lastEntry := opLog[len(opLog)-1]
		lastOp := lastEntry.Regular
		if lastOp == nil {
			return fmt.Errorf("last PLC operation is not a regular op")
		}

		// Check if key already present
		for _, k := range lastOp.RotationKeys {
			if k == newKeyDIDKey {
				fmt.Printf("Key %s is already a rotation key for %s\n", newKeyDIDKey, holdDID)
				return nil
			}
		}

		// Build updated rotation keys: keep existing, append new
		rotationKeys := make([]string, len(lastOp.RotationKeys))
		copy(rotationKeys, lastOp.RotationKeys)
		rotationKeys = append(rotationKeys, newKeyDIDKey)

		// Build update: preserve everything else from current state
		sigPub, err := signingKey.PublicKey()
		if err != nil {
			return fmt.Errorf("failed to get signing public key: %w", err)
		}

		prevCID := lastEntry.AsOperation().CID().String()

		op := &didplc.RegularOp{
			Type:         "plc_operation",
			RotationKeys: rotationKeys,
			VerificationMethods: map[string]string{
				"atproto": sigPub.DIDKey(),
			},
			AlsoKnownAs: lastOp.AlsoKnownAs,
			Services:    lastOp.Services,
			Prev:        &prevCID,
		}

		if err := op.Sign(rotationKey); err != nil {
			return fmt.Errorf("failed to sign PLC update: %w", err)
		}

		if err := client.Submit(ctx, holdDID, op); err != nil {
			return fmt.Errorf("failed to submit PLC update: %w", err)
		}

		slog.Info("Added rotation key to PLC identity",
			"did", holdDID,
			"new_key", newKeyDIDKey,
			"total_rotation_keys", len(rotationKeys),
		)
		fmt.Printf("Added rotation key %s to %s\n", newKeyDIDKey, holdDID)
		return nil
	},
}

func init() {
	plcCmd.PersistentFlags().StringVarP(&plcConfigFile, "config", "c", "", "path to YAML configuration file")

	plcCmd.AddCommand(plcAddRotationKeyCmd)
}
146
cmd/hold/repo.go
Normal file
@@ -0,0 +1,146 @@
package main

import (
	"context"
	"fmt"
	"log/slog"
	"os"

	"atcr.io/pkg/hold"
	holddb "atcr.io/pkg/hold/db"
	"atcr.io/pkg/hold/pds"

	"github.com/spf13/cobra"
)

var repoCmd = &cobra.Command{
	Use:   "repo",
	Short: "Repository management commands",
}

var repoExportCmd = &cobra.Command{
	Use:   "export",
	Short: "Export the hold's repo as a CAR file to stdout",
	Long: `Export the hold's ATProto repository as a CAR (Content Addressable Archive) file.
The CAR is written to stdout, so redirect to a file:

    atcr-hold repo export --config config.yaml > backup.car`,
	Args: cobra.NoArgs,
	RunE: func(cmd *cobra.Command, args []string) error {
		cfg, err := hold.LoadConfig(repoConfigFile)
		if err != nil {
			return fmt.Errorf("failed to load config: %w", err)
		}

		ctx := context.Background()
		holdPDS, cleanup, err := openHoldPDS(ctx, cfg)
		if err != nil {
			return err
		}
		defer cleanup()

		if err := holdPDS.ExportToCAR(ctx, os.Stdout); err != nil {
			return fmt.Errorf("failed to export: %w", err)
		}

		fmt.Fprintf(os.Stderr, "Export complete\n")
		return nil
	},
}

var repoImportCmd = &cobra.Command{
	Use:   "import <file> [file...]",
	Short: "Import records from one or more CAR files",
	Long: `Import ATProto records from CAR files into the hold's repo.
Records are upserted (existing records are overwritten). Multiple files can be
imported additively.

    atcr-hold repo import --config config.yaml backup.car
    atcr-hold repo import --config config.yaml backup.car extra-records.car`,
	Args: cobra.MinimumNArgs(1),
	RunE: func(cmd *cobra.Command, args []string) error {
		cfg, err := hold.LoadConfig(repoConfigFile)
		if err != nil {
			return fmt.Errorf("failed to load config: %w", err)
		}

		ctx := context.Background()
		holdPDS, cleanup, err := openHoldPDS(ctx, cfg)
		if err != nil {
			return err
		}
		defer cleanup()

		for _, path := range args {
			f, err := os.Open(path)
			if err != nil {
				return fmt.Errorf("failed to open %s: %w", path, err)
			}

			result, err := holdPDS.ImportFromCAR(ctx, f)
			f.Close()
			if err != nil {
				return fmt.Errorf("failed to import %s: %w", path, err)
			}

			fmt.Fprintf(os.Stderr, "Imported %d records from %s\n", result.Total, path)
			for collection, count := range result.PerCollection {
				fmt.Fprintf(os.Stderr, " %s: %d\n", collection, count)
			}
		}

		return nil
	},
}

var repoConfigFile string

func init() {
	repoCmd.PersistentFlags().StringVarP(&repoConfigFile, "config", "c", "", "path to YAML configuration file")

	repoCmd.AddCommand(repoExportCmd)
	repoCmd.AddCommand(repoImportCmd)
}

// openHoldPDS creates a HoldPDS from config for offline CLI operations.
// Returns the PDS and a cleanup function that must be deferred.
func openHoldPDS(ctx context.Context, cfg *hold.Config) (*pds.HoldPDS, func(), error) {
	holdDID, err := pds.LoadOrCreateDID(ctx, pds.DIDConfig{
		DID:             cfg.Database.DID,
		DIDMethod:       cfg.Database.DIDMethod,
		PublicURL:       cfg.Server.PublicURL,
		DBPath:          cfg.Database.Path,
		SigningKeyPath:  cfg.Database.KeyPath,
		RotationKey:     cfg.Database.RotationKey,
		PLCDirectoryURL: cfg.Database.PLCDirectoryURL,
	})
	if err != nil {
		return nil, nil, fmt.Errorf("failed to resolve hold DID: %w", err)
	}
	slog.Info("Using hold DID", "did", holdDID)

	// Open shared database
	dbFilePath := cfg.Database.Path + "/db.sqlite3"
	libsqlCfg := holddb.LibsqlConfig{
		SyncURL:      cfg.Database.LibsqlSyncURL,
		AuthToken:    cfg.Database.LibsqlAuthToken,
		SyncInterval: cfg.Database.LibsqlSyncInterval,
	}
	holdDB, err := holddb.OpenHoldDB(dbFilePath, libsqlCfg)
	if err != nil {
		return nil, nil, fmt.Errorf("failed to open hold database: %w", err)
	}

	holdPDS, err := pds.NewHoldPDSWithDB(ctx, holdDID, cfg.Server.PublicURL, cfg.Server.AppviewURL(), cfg.Database.Path, cfg.Database.KeyPath, false, holdDB.DB)
	if err != nil {
		holdDB.Close()
		return nil, nil, fmt.Errorf("failed to initialize PDS: %w", err)
	}

	cleanup := func() {
		holdPDS.Close()
		holdDB.Close()
	}

	return holdPDS, cleanup, nil
}
82
cmd/labeler/main.go
Normal file
@@ -0,0 +1,82 @@
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"

	"atcr.io/pkg/labeler"
)

var configFile string

var rootCmd = &cobra.Command{
	Use:   "atcr-labeler",
	Short: "ATCR Labeler Service - ATProto content moderation",
}

var serveCmd = &cobra.Command{
	Use:   "serve",
	Short: "Start the labeler service",
	Long: `Start the ATCR labeler service with admin UI and subscribeLabels endpoint.

Configuration is loaded from the appview config YAML (labeler section).
Use --config to specify the config file path.`,
	Args: cobra.NoArgs,
	RunE: func(cmd *cobra.Command, args []string) error {
		cfg, err := labeler.LoadConfig(configFile)
		if err != nil {
			return fmt.Errorf("failed to load config: %w", err)
		}

		server, err := labeler.NewServer(cfg)
		if err != nil {
			return fmt.Errorf("failed to initialize labeler: %w", err)
		}

		return server.Serve()
	},
}

var configCmd = &cobra.Command{
	Use:   "config",
	Short: "Configuration management commands",
}

var configInitCmd = &cobra.Command{
	Use:   "init [path]",
	Short: "Generate an example configuration file",
	Long:  `Generate an example YAML configuration file with all available options.`,
	Args:  cobra.MaximumNArgs(1),
	RunE: func(cmd *cobra.Command, args []string) error {
		yamlBytes, err := labeler.ExampleYAML()
		if err != nil {
			return fmt.Errorf("failed to generate example config: %w", err)
		}
		if len(args) == 1 {
			if err := os.WriteFile(args[0], yamlBytes, 0644); err != nil {
				return fmt.Errorf("failed to write config file: %w", err)
			}
			fmt.Fprintf(os.Stderr, "Wrote example config to %s\n", args[0])
			return nil
		}
		fmt.Print(string(yamlBytes))
		return nil
	},
}

func init() {
	serveCmd.Flags().StringVarP(&configFile, "config", "c", "", "path to YAML configuration file")

	configCmd.AddCommand(configInitCmd)

	rootCmd.AddCommand(serveCmd)
	rootCmd.AddCommand(configCmd)
}

func main() {
	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
136
cmd/oauth-helper/main.go
Normal file
@@ -0,0 +1,136 @@
|
||||
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"

	"atcr.io/pkg/atproto"
	"atcr.io/pkg/auth/oauth"

	indigo_oauth "github.com/bluesky-social/indigo/atproto/auth/oauth"
)

func main() {
	handle := flag.String("handle", "", "Your Bluesky handle (e.g., yourname.bsky.social)")
	holdURL := flag.String("hold-url", "http://localhost:8080", "Hold service URL")
	repo := flag.String("repo", "", "Repository DID (e.g., did:web:172.28.0.3:8080)")
	collection := flag.String("collection", "io.atcr.hold.crew", "Collection to delete from")
	rkey := flag.String("rkey", "", "Record key to delete")

	flag.Parse()

	if *handle == "" {
		fmt.Println("Usage: oauth-helper --handle yourname.bsky.social [options]")
		fmt.Println("\nOptions:")
		flag.PrintDefaults()
		os.Exit(1)
	}

	ctx := context.Background()

	fmt.Printf("🔐 Starting OAuth flow for %s...\n\n", *handle)

	// Create a simple HTTP server for the callback
	mux := http.NewServeMux()
	server := &http.Server{
		Addr:    ":8765",
		Handler: mux,
	}

	// Channel to receive the result
	resultChan := make(chan *oauth.InteractiveResult, 1)
	errorChan := make(chan error, 1)

	// Register callback handler
	registerCallback := func(handler http.HandlerFunc) error {
		mux.HandleFunc("/auth/oauth/callback", handler)
		return nil
	}

	// Display auth URL (will open browser)
	displayAuthURL := func(authURL string) error {
		fmt.Printf("🌐 Opening browser for authorization...\n")
		fmt.Printf("   URL: %s\n\n", authURL)
		fmt.Printf("   If the browser doesn't open, visit the URL above.\n\n")
		return oauth.OpenBrowser(authURL)
	}

	// Start server in background
	go func() {
		if err := server.ListenAndServe(); err != http.ErrServerClosed {
			errorChan <- fmt.Errorf("server error: %w", err)
		}
	}()

	// Give server time to start
	time.Sleep(100 * time.Millisecond)

	// Run interactive OAuth flow
	go func() {
		result, err := oauth.InteractiveFlowWithCallback(
			ctx,
			"http://localhost:8765",
			*handle,
			nil,                     // Use default scopes
			"AT Container Registry", // Client name
			registerCallback,
			displayAuthURL,
		)
		if err != nil {
			errorChan <- err
			return
		}
		resultChan <- result
	}()

	// Wait for result
	var result *oauth.InteractiveResult
	select {
	case result = <-resultChan:
		fmt.Printf("✅ OAuth successful!\n\n")
	case err := <-errorChan:
		log.Fatalf("❌ OAuth failed: %v\n", err)
	case <-time.After(5 * time.Minute):
		log.Fatalf("❌ OAuth timed out\n")
	}

	// Shutdown server
	server.Shutdown(ctx)

	// Print session information
	fmt.Printf("DID: %s\n", result.SessionData.AccountDID)
	fmt.Printf("Access Token: %s\n", result.SessionData.AccessToken)
	fmt.Printf("DPoP Key: %s\n\n", result.SessionData.DPoPPrivateKeyMultibase)

	// Generate DPoP proof for deleteRecord endpoint if all params provided
	if *repo != "" && *rkey != "" {
		deleteURL := fmt.Sprintf("%s%s?repo=%s&collection=%s&rkey=%s",
			*holdURL, atproto.RepoDeleteRecord, *repo, *collection, *rkey)

		dpopProof, err := generateDPoPProof(result.Session, "POST", deleteURL)
		if err != nil {
			log.Fatalf("❌ Failed to generate DPoP proof: %v\n", err)
		}

		fmt.Printf("📋 Ready-to-use curl command:\n\n")
		fmt.Printf("curl -X POST \\\n")
		fmt.Printf("  -H \"Authorization: DPoP %s\" \\\n", result.SessionData.AccessToken)
		fmt.Printf("  -H \"DPoP: %s\" \\\n", dpopProof)
		fmt.Printf("  \"%s\"\n", deleteURL)
	} else {
		fmt.Printf("💡 To generate a curl command for deleteRecord, provide:\n")
		fmt.Printf("   --repo <did>\n")
		fmt.Printf("   --collection <collection>\n")
		fmt.Printf("   --rkey <rkey>\n")
	}
}

// generateDPoPProof generates a DPoP proof JWT for a specific request
func generateDPoPProof(session *indigo_oauth.ClientSession, method, reqURL string) (string, error) {
	// Use the session's NewHostDPoP method to generate the proof
	return session.NewHostDPoP(method, reqURL)
}
578	cmd/record-query/main.go	Normal file
@@ -0,0 +1,578 @@
// record-query queries the ATProto relay to find all users with records in a given
// collection, fetches the records from each user's PDS, and optionally filters them.
//
// Usage:
//
//	go run ./cmd/record-query --collection io.atcr.sailor.profile --filter "defaultHold!=prefix:did:web"
//	go run ./cmd/record-query --collection io.atcr.manifest
//	go run ./cmd/record-query --collection io.atcr.sailor.profile --limit 5
package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"sort"
	"strings"
	"time"
)

// ListReposByCollectionResponse is the response from com.atproto.sync.listReposByCollection
type ListReposByCollectionResponse struct {
	Repos  []RepoRef `json:"repos"`
	Cursor string    `json:"cursor,omitempty"`
}

// RepoRef is a single repo reference
type RepoRef struct {
	DID string `json:"did"`
}

// ListRecordsResponse is the response from com.atproto.repo.listRecords
type ListRecordsResponse struct {
	Records []Record `json:"records"`
	Cursor  string   `json:"cursor,omitempty"`
}

// Record is a single ATProto record
type Record struct {
	URI   string          `json:"uri"`
	CID   string          `json:"cid"`
	Value json.RawMessage `json:"value"`
}

// MatchResult is a record that passed the filter
type MatchResult struct {
	DID    string
	Handle string
	URI    string
	Fields map[string]any
}

// Filter defines a simple field filter
type Filter struct {
	Field    string
	Operator string // "=", "!="
	Mode     string // "exact", "prefix", "empty"
	Value    string
}

var client = &http.Client{Timeout: 30 * time.Second}

func main() {
	relay := flag.String("relay", "https://relay1.us-east.bsky.network", "Relay endpoint")
	collection := flag.String("collection", "io.atcr.sailor.profile", "ATProto collection to query")
	filterStr := flag.String("filter", "", "Filter expression: field=value, field!=value, field=prefix:xxx, field!=prefix:xxx, field=empty, field!=empty")
	resolve := flag.Bool("resolve", true, "Resolve DIDs to handles")
	limit := flag.Int("limit", 0, "Max repos to process (0 = unlimited)")
	flag.Parse()

	// Parse filter
	var filter *Filter
	if *filterStr != "" {
		var err error
		filter, err = parseFilter(*filterStr)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Invalid filter: %v\n", err)
			os.Exit(1)
		}
		fmt.Printf("Filter: %s %s %s:%s\n", filter.Field, filter.Operator, filter.Mode, filter.Value)
	}

	fmt.Printf("Relay: %s\n", *relay)
	fmt.Printf("Collection: %s\n", *collection)
	if *limit > 0 {
		fmt.Printf("Limit: %d repos\n", *limit)
	}
	fmt.Println()

	// Step 1: Enumerate all DIDs with records in this collection
	fmt.Println("Enumerating repos from relay...")
	dids, err := listAllRepos(*relay, *collection, *limit)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to list repos: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("Found %d repos with %s records\n\n", len(dids), *collection)

	// Step 2: For each DID, fetch records and apply filter
	fmt.Println("Fetching records from each user's PDS...")
	var results []MatchResult
	errorsByCategory := make(map[string][]string) // category -> list of DIDs
	for i, did := range dids {
		totalErrors := 0
		for _, v := range errorsByCategory {
			totalErrors += len(v)
		}
		if (i+1)%10 == 0 || i == len(dids)-1 {
			fmt.Printf("  Progress: %d/%d repos (matches: %d, errors: %d)\r", i+1, len(dids), len(results), totalErrors)
		}

		matches, err := fetchAndFilter(did, *collection, filter)
		if err != nil {
			cat := categorizeError(err)
			errorsByCategory[cat] = append(errorsByCategory[cat], did)
			continue
		}
		results = append(results, matches...)
	}
	totalErrors := 0
	for _, v := range errorsByCategory {
		totalErrors += len(v)
	}
	fmt.Printf("  Progress: %d/%d repos (matches: %d, errors: %d)\n", len(dids), len(dids), len(results), totalErrors)
	if len(errorsByCategory) > 0 {
		fmt.Println("  Error breakdown:")
		var cats []string
		for k := range errorsByCategory {
			cats = append(cats, k)
		}
		sort.Strings(cats)
		for _, cat := range cats {
			dids := errorsByCategory[cat]
			fmt.Printf("    %s (%d):\n", cat, len(dids))
			for _, did := range dids {
				fmt.Printf("      - %s\n", did)
			}
		}
	}
	fmt.Println()

	// Step 3: Resolve DIDs to handles
	if *resolve && len(results) > 0 {
		fmt.Println("Resolving DIDs to handles...")
		handleCache := make(map[string]string)
		for i := range results {
			did := results[i].DID
			if h, ok := handleCache[did]; ok {
				results[i].Handle = h
				continue
			}
			handle, err := resolveDIDToHandle(did)
			if err != nil {
				handle = did
			}
			handleCache[did] = handle
			results[i].Handle = handle
		}
		fmt.Println()
	}

	// Step 4: Print results
	if len(results) == 0 {
		fmt.Println("No matching records found.")
		return
	}

	// Sort by handle/DID for consistent output
	sort.Slice(results, func(i, j int) bool {
		return results[i].Handle < results[j].Handle
	})

	fmt.Println("========================================")
	fmt.Printf("RESULTS (%d matches)\n", len(results))
	fmt.Println("========================================")

	for i, r := range results {
		identity := r.Handle
		if identity == "" {
			identity = r.DID
		}
		fmt.Printf("\n%3d. %s\n", i+1, identity)
		if r.Handle != "" && r.Handle != r.DID {
			fmt.Printf("     DID: %s\n", r.DID)
		}
		fmt.Printf("     URI: %s\n", r.URI)

		// Print interesting fields (skip $type, createdAt, updatedAt)
		for k, v := range r.Fields {
			if k == "$type" || k == "createdAt" || k == "updatedAt" {
				continue
			}
			fmt.Printf("     %s: %v\n", k, v)
		}
	}

	// CSV output
	fmt.Println("\n========================================")
	fmt.Println("CSV FORMAT")
	fmt.Println("========================================")

	// Collect all field names for CSV header
	fieldSet := make(map[string]bool)
	for _, r := range results {
		for k := range r.Fields {
			if k == "$type" || k == "createdAt" || k == "updatedAt" {
				continue
			}
			fieldSet[k] = true
		}
	}
	var fieldNames []string
	for k := range fieldSet {
		fieldNames = append(fieldNames, k)
	}
	sort.Strings(fieldNames)

	// Header
	fmt.Printf("handle,did,uri")
	for _, f := range fieldNames {
		fmt.Printf(",%s", f)
	}
	fmt.Println()

	// Rows
	for _, r := range results {
		identity := r.Handle
		if identity == "" {
			identity = r.DID
		}
		fmt.Printf("%s,%s,%s", identity, r.DID, r.URI)
		for _, f := range fieldNames {
			val := ""
			if v, ok := r.Fields[f]; ok {
				val = fmt.Sprintf("%v", v)
			}
			// Escape commas in values
			if strings.Contains(val, ",") {
				val = "\"" + val + "\""
			}
			fmt.Printf(",%s", val)
		}
		fmt.Println()
	}
}

// parseFilter parses a filter string like "field!=prefix:did:web"
func parseFilter(s string) (*Filter, error) {
	f := &Filter{}

	// Check for != first (before =)
	if idx := strings.Index(s, "!="); idx > 0 {
		f.Field = s[:idx]
		f.Operator = "!="
		s = s[idx+2:]
	} else if idx := strings.Index(s, "="); idx > 0 {
		f.Field = s[:idx]
		f.Operator = "="
		s = s[idx+1:]
	} else {
		return nil, fmt.Errorf("expected field=value or field!=value, got %q", s)
	}

	// Check for mode prefix
	if s == "empty" {
		f.Mode = "empty"
		f.Value = ""
	} else if strings.HasPrefix(s, "prefix:") {
		f.Mode = "prefix"
		f.Value = strings.TrimPrefix(s, "prefix:")
	} else {
		f.Mode = "exact"
		f.Value = s
	}

	return f, nil
}

// matchFilter checks if a record's fields match the filter
func matchFilter(fields map[string]any, filter *Filter) bool {
	if filter == nil {
		return true
	}

	val := ""
	if v, ok := fields[filter.Field]; ok {
		val = fmt.Sprintf("%v", v)
	}

	switch filter.Mode {
	case "empty":
		isEmpty := val == "" || val == "<nil>"
		if filter.Operator == "=" {
			return isEmpty
		}
		return !isEmpty

	case "prefix":
		hasPrefix := strings.HasPrefix(val, filter.Value)
		if filter.Operator == "=" {
			return hasPrefix
		}
		return !hasPrefix && val != "" && val != "<nil>"

	case "exact":
		if filter.Operator == "=" {
			return val == filter.Value
		}
		return val != filter.Value
	}

	return true
}

// categorizeError classifies an error into a human-readable category
func categorizeError(err error) string {
	s := err.Error()

	// HTTP status codes
	for _, code := range []string{"400", "401", "403", "404", "410", "429", "500", "502", "503"} {
		if strings.Contains(s, "status "+code) {
			switch code {
			case "400":
				if strings.Contains(s, "RepoDeactivated") || strings.Contains(s, "deactivated") {
					return "deactivated (400)"
				}
				if strings.Contains(s, "RepoTakendown") || strings.Contains(s, "takendown") {
					return "takendown (400)"
				}
				if strings.Contains(s, "RepoNotFound") || strings.Contains(s, "Could not find repo") {
					return "repo not found (400)"
				}
				return "bad request (400)"
			case "401":
				return "unauthorized (401)"
			case "404":
				return "not found (404)"
			case "410":
				return "gone/deleted (410)"
			case "429":
				return "rate limited (429)"
			case "502":
				return "bad gateway (502)"
			case "503":
				return "unavailable (503)"
			default:
				return fmt.Sprintf("HTTP %s", code)
			}
		}
	}

	// Connection errors
	if strings.Contains(s, "connection refused") {
		return "connection refused"
	}
	if strings.Contains(s, "no such host") {
		return "DNS failure"
	}
	if strings.Contains(s, "timeout") || strings.Contains(s, "deadline exceeded") {
		return "timeout"
	}
	if strings.Contains(s, "TLS") || strings.Contains(s, "certificate") {
		return "TLS error"
	}
	if strings.Contains(s, "EOF") {
		return "connection reset"
	}

	// PLC/DID errors
	if strings.Contains(s, "no PDS found") {
		return "no PDS in DID doc"
	}
	if strings.Contains(s, "unsupported DID method") {
		return "unsupported DID method"
	}

	return "other: " + s
}

// listAllRepos paginates through the relay to get all DIDs with records in a collection
func listAllRepos(relayURL, collection string, limit int) ([]string, error) {
	var dids []string
	cursor := ""

	for {
		u := fmt.Sprintf("%s/xrpc/com.atproto.sync.listReposByCollection", relayURL)
		params := url.Values{}
		params.Set("collection", collection)
		params.Set("limit", "1000")
		if cursor != "" {
			params.Set("cursor", cursor)
		}

		resp, err := client.Get(u + "?" + params.Encode())
		if err != nil {
			return nil, fmt.Errorf("request failed: %w", err)
		}

		if resp.StatusCode != http.StatusOK {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			return nil, fmt.Errorf("status %d: %s", resp.StatusCode, string(body))
		}

		var result ListReposByCollectionResponse
		if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
			resp.Body.Close()
			return nil, fmt.Errorf("decode failed: %w", err)
		}
		resp.Body.Close()

		for _, repo := range result.Repos {
			dids = append(dids, repo.DID)
		}

		fmt.Printf("  Fetched %d repos so far...\r", len(dids))

		if limit > 0 && len(dids) >= limit {
			dids = dids[:limit]
			break
		}

		if result.Cursor == "" {
			break
		}
		cursor = result.Cursor
	}

	fmt.Println()
	return dids, nil
}

// fetchAndFilter fetches records for a DID and returns those matching the filter
func fetchAndFilter(did, collection string, filter *Filter) ([]MatchResult, error) {
	// Resolve DID to PDS
	pdsEndpoint, err := resolveDIDToPDS(did)
	if err != nil {
		return nil, fmt.Errorf("resolve PDS: %w", err)
	}

	var results []MatchResult
	cursor := ""

	for {
		u := fmt.Sprintf("%s/xrpc/com.atproto.repo.listRecords", pdsEndpoint)
		params := url.Values{}
		params.Set("repo", did)
		params.Set("collection", collection)
		params.Set("limit", "100")
		if cursor != "" {
			params.Set("cursor", cursor)
		}

		resp, err := client.Get(u + "?" + params.Encode())
		if err != nil {
			return nil, fmt.Errorf("request failed: %w", err)
		}

		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			return nil, fmt.Errorf("status %d", resp.StatusCode)
		}

		var listResp ListRecordsResponse
		if err := json.NewDecoder(resp.Body).Decode(&listResp); err != nil {
			resp.Body.Close()
			return nil, fmt.Errorf("decode failed: %w", err)
		}
		resp.Body.Close()

		for _, rec := range listResp.Records {
			var fields map[string]any
			if err := json.Unmarshal(rec.Value, &fields); err != nil {
				continue
			}

			if matchFilter(fields, filter) {
				results = append(results, MatchResult{
					DID:    did,
					URI:    rec.URI,
					Fields: fields,
				})
			}
		}

		if listResp.Cursor == "" || len(listResp.Records) < 100 {
			break
		}
		cursor = listResp.Cursor
	}

	return results, nil
}

// resolveDIDToHandle resolves a DID to a handle using the PLC directory or did:web
func resolveDIDToHandle(did string) (string, error) {
	if strings.HasPrefix(did, "did:web:") {
		return strings.TrimPrefix(did, "did:web:"), nil
	}

	if strings.HasPrefix(did, "did:plc:") {
		resp, err := client.Get("https://plc.directory/" + did)
		if err != nil {
			return "", fmt.Errorf("PLC query failed: %w", err)
		}
		defer resp.Body.Close()

		if resp.StatusCode != http.StatusOK {
			return "", fmt.Errorf("PLC returned status %d", resp.StatusCode)
		}

		var plcDoc struct {
			AlsoKnownAs []string `json:"alsoKnownAs"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&plcDoc); err != nil {
			return "", fmt.Errorf("failed to parse PLC response: %w", err)
		}

		for _, aka := range plcDoc.AlsoKnownAs {
			if strings.HasPrefix(aka, "at://") {
				return strings.TrimPrefix(aka, "at://"), nil
			}
		}

		return did, nil
	}

	return did, nil
}

// resolveDIDToPDS resolves a DID to its PDS endpoint
func resolveDIDToPDS(did string) (string, error) {
	if strings.HasPrefix(did, "did:web:") {
		domain := strings.TrimPrefix(did, "did:web:")
		domain = strings.ReplaceAll(domain, "%3A", ":")
		scheme := "https"
		if strings.Contains(domain, ":") {
			scheme = "http"
		}
		return scheme + "://" + domain, nil
	}

	if strings.HasPrefix(did, "did:plc:") {
		resp, err := client.Get("https://plc.directory/" + did)
		if err != nil {
			return "", fmt.Errorf("PLC query failed: %w", err)
		}
		defer resp.Body.Close()

		if resp.StatusCode != http.StatusOK {
			return "", fmt.Errorf("PLC returned status %d", resp.StatusCode)
		}

		var plcDoc struct {
			Service []struct {
				ID              string `json:"id"`
				Type            string `json:"type"`
				ServiceEndpoint string `json:"serviceEndpoint"`
			} `json:"service"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&plcDoc); err != nil {
			return "", fmt.Errorf("failed to parse PLC response: %w", err)
		}

		for _, svc := range plcDoc.Service {
			if svc.Type == "AtprotoPersonalDataServer" {
				return svc.ServiceEndpoint, nil
			}
		}

		return "", fmt.Errorf("no PDS found in DID document")
	}

	return "", fmt.Errorf("unsupported DID method: %s", did)
}
616	cmd/relay-compare/main.go	Normal file
@@ -0,0 +1,616 @@
// relay-compare compares ATProto relays by querying listReposByCollection
// for all io.atcr.* record types and showing what's missing from each relay.
//
// Usage:
//
//	go run ./cmd/relay-compare https://relay1.us-east.bsky.network https://relay1.us-west.bsky.network
package main

import (
	"context"
	"encoding/json"
	"flag"
	"fmt"
	"net/http"
	"net/url"
	"os"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/bluesky-social/indigo/atproto/identity"
	"github.com/bluesky-social/indigo/atproto/syntax"
	"github.com/bluesky-social/indigo/xrpc"
)

// ANSI color codes (disabled via --no-color or NO_COLOR env)
var (
	cRed    = "\033[31m"
	cGreen  = "\033[32m"
	cYellow = "\033[33m"
	cCyan   = "\033[36m"
	cBold   = "\033[1m"
	cDim    = "\033[2m"
	cReset  = "\033[0m"
)

func disableColors() {
	cRed, cGreen, cYellow, cCyan, cBold, cDim, cReset = "", "", "", "", "", "", ""
}

// All io.atcr.* collections to compare
var allCollections = []string{
	"io.atcr.manifest",
	"io.atcr.tag",
	"io.atcr.sailor.profile",
	"io.atcr.sailor.star",
	"io.atcr.repo.page",
	"io.atcr.hold.captain",
	"io.atcr.hold.crew",
	"io.atcr.hold.layer",
	"io.atcr.hold.stats",
	"io.atcr.hold.scan",
}

type summaryRow struct {
	collection  string
	counts      []int
	status      string // "sync", "diff", "error"
	diffCount   int
	realGaps    int // verified: record exists on PDS but relay is missing it
	ghosts      int // verified: record doesn't exist on PDS, relay has stale entry
	deactivated int // verified: account deactivated/deleted on PDS
}

// verifyResult holds the PDS verification result for a (DID, collection) pair.
type verifyResult struct {
	exists      bool
	deactivated bool // account deactivated/deleted on PDS
	err         error
}

// key identifies a (collection, relay-or-DID) pair for result lookups.
type key struct{ col, relay string }

// diffEntry represents a DID missing from a specific relay for a collection.
type diffEntry struct {
	did        string
	collection string
	relayIdx   int
}

// XRPC response types for listReposByCollection
type listReposByCollectionResult struct {
	Repos  []repoRef `json:"repos"`
	Cursor string    `json:"cursor,omitempty"`
}

type repoRef struct {
	DID string `json:"did"`
}

// XRPC response types for listRecords
type listRecordsResult struct {
	Records []json.RawMessage `json:"records"`
	Cursor  string            `json:"cursor,omitempty"`
}

// Shared identity directory for DID resolution
var dir identity.Directory

func main() {
	noColor := flag.Bool("no-color", false, "disable colored output")
	verify := flag.Bool("verify", false, "verify diffs against PDS to distinguish real gaps from ghost entries")
	hideGhosts := flag.Bool("hide-ghosts", false, "with --verify, hide ghost and deactivated entries from output")
	collection := flag.String("collection", "", "compare only this collection")
	timeout := flag.Duration("timeout", 2*time.Minute, "timeout for all relay queries")
	flag.Usage = func() {
		fmt.Fprintf(os.Stderr, "Compare ATProto relays by querying listReposByCollection for io.atcr.* records.\n\n")
		fmt.Fprintf(os.Stderr, "Usage:\n  relay-compare [flags] <relay-url> <relay-url> [relay-url...]\n\n")
		fmt.Fprintf(os.Stderr, "Example:\n")
		fmt.Fprintf(os.Stderr, "  go run ./cmd/relay-compare https://relay1.us-east.bsky.network https://relay1.us-west.bsky.network\n\n")
		fmt.Fprintf(os.Stderr, "Flags:\n")
		flag.PrintDefaults()
	}
	flag.Parse()

	if *noColor || os.Getenv("NO_COLOR") != "" {
		disableColors()
	}

	relays := flag.Args()
	if len(relays) < 2 {
		flag.Usage()
		os.Exit(1)
	}

	for i, r := range relays {
		relays[i] = strings.TrimRight(r, "/")
	}

	cols := allCollections
	if *collection != "" {
		cols = []string{*collection}
	}

	ctx, cancel := context.WithTimeout(context.Background(), *timeout)
	defer cancel()

	dir = identity.DefaultDirectory()

	// Short display names for each relay
	names := make([]string, len(relays))
	maxNameLen := 0
	for i, r := range relays {
		names[i] = shortName(r)
		if len(names[i]) > maxNameLen {
			maxNameLen = len(names[i])
		}
	}

	fmt.Printf("%sFetching %d collections from %d relays...%s\n", cDim, len(cols), len(relays), cReset)

	// Fetch all data in parallel: every (collection, relay) pair concurrently
	type fetchResult struct {
		dids map[string]struct{}
		err  error
	}
	allResults := make(map[key]fetchResult)
	var mu sync.Mutex
	var wg sync.WaitGroup

	for _, col := range cols {
		for _, relay := range relays {
			wg.Add(1)
			go func(col, relay string) {
				defer wg.Done()
				dids, err := fetchAllDIDs(ctx, relay, col)
				mu.Lock()
				allResults[key{col, relay}] = fetchResult{dids, err}
				mu.Unlock()
			}(col, relay)
		}
	}
	wg.Wait()

	// Collect all diffs across collections (for optional verification)
	var allDiffs []diffEntry

	// First pass: compute diffs per collection
	type colDiffs struct {
		hasError bool
		counts   []int
		// per-relay missing DIDs (sorted)
		missing [][]string
	}
	colResults := make(map[string]*colDiffs)

	for _, col := range cols {
		cd := &colDiffs{counts: make([]int, len(relays)), missing: make([][]string, len(relays))}
		colResults[col] = cd

		for ri, relay := range relays {
			r := allResults[key{col, relay}]
			if r.err != nil {
				cd.hasError = true
			} else {
				cd.counts[ri] = len(r.dids)
			}
		}

		if cd.hasError {
			continue
		}

		// Build union of all DIDs across relays
		union := make(map[string]struct{})
		for _, relay := range relays {
			for did := range allResults[key{col, relay}].dids {
				union[did] = struct{}{}
			}
		}

		for ri, relay := range relays {
			var missing []string
			for did := range union {
				if _, ok := allResults[key{col, relay}].dids[did]; !ok {
					missing = append(missing, did)
				}
			}
			sort.Strings(missing)
			cd.missing[ri] = missing
			for _, did := range missing {
				allDiffs = append(allDiffs, diffEntry{did: did, collection: col, relayIdx: ri})
			}
		}
	}

	// Optionally verify diffs against PDS
	verified := make(map[key]verifyResult)
	if *verify && len(allDiffs) > 0 {
		verified = verifyDiffs(ctx, allDiffs)
	}

	// Display per-collection diffs and collect summary
	var summary []summaryRow
	totalMissing := 0
	totalRealGaps := 0
	totalGhosts := 0
	totalDeactivated := 0

	for _, col := range cols {
		fmt.Printf("\n%s%s━━━ %s ━━━%s\n", cBold, cCyan, col, cReset)

		cd := colResults[col]
		row := summaryRow{collection: col, counts: cd.counts}

		if cd.hasError {
			for ri, relay := range relays {
				r := allResults[key{col, relay}]
				if r.err != nil {
					fmt.Printf("  %-*s  %s%serror%s: %v\n", maxNameLen, names[ri], cBold, cRed, cReset, r.err)
				} else {
					fmt.Printf("  %-*s  %s%d%s DIDs\n", maxNameLen, names[ri], cBold, len(r.dids), cReset)
				}
			}
			row.status = "error"
			summary = append(summary, row)
			continue
		}

		// Show counts per relay
		for ri := range relays {
			fmt.Printf("  %-*s  %s%d%s DIDs\n", maxNameLen, names[ri], cBold, cd.counts[ri], cReset)
		}

		// Show missing DIDs per relay
		inSync := true
		for ri := range relays {
			missing := cd.missing[ri]
			if len(missing) == 0 {
				continue
			}

			inSync = false
			totalMissing += len(missing)
			row.diffCount += len(missing)

			fmt.Printf("\n  %sMissing from %s (%d):%s\n", cRed, names[ri], len(missing), cReset)
			for _, did := range missing {
				suffix := ""
				skip := false
				if *verify {
					vr, ok := verified[key{col, did}]
					if !ok {
						suffix = fmt.Sprintf(" %s(verify: unknown)%s", cDim, cReset)
					} else if vr.err != nil {
						suffix = fmt.Sprintf(" %s(verify: %s)%s", cDim, vr.err, cReset)
					} else if vr.deactivated {
						suffix = fmt.Sprintf(" %s← deactivated%s", cDim, cReset)
						row.deactivated++
						totalDeactivated++
						skip = *hideGhosts
					} else if vr.exists {
						suffix = fmt.Sprintf(" %s← real gap%s", cRed, cReset)
						row.realGaps++
						totalRealGaps++
					} else {
						suffix = fmt.Sprintf(" %s← ghost (not on PDS)%s", cDim, cReset)
						row.ghosts++
						totalGhosts++
						skip = *hideGhosts
					}
				}
				if !skip {
					fmt.Printf("    %s- %s%s%s\n", cRed, did, cReset, suffix)
				}
			}
		}

		// When verifying, ghost/deactivated-only diffs are considered in sync
		if !inSync && *verify && row.realGaps == 0 {
			inSync = true
		}

		if inSync {
			notes := ""
			if !*hideGhosts {
				notes = formatSyncNotes(row.ghosts, row.deactivated)
			}
			if notes != "" {
				fmt.Printf("  %s✓ in sync%s %s(%s)%s\n", cGreen, cReset, cDim, notes, cReset)
			} else {
				fmt.Printf("  %s✓ in sync%s\n", cGreen, cReset)
			}
			row.status = "sync"
		} else {
			row.status = "diff"
		}
		summary = append(summary, row)
	}

	// Summary table
	printSummary(summary, names, maxNameLen, totalMissing, *verify, *hideGhosts, totalRealGaps, totalGhosts, totalDeactivated)
}

func printSummary(rows []summaryRow, names []string, maxNameLen, totalMissing int, showVerify, hideGhosts bool, totalRealGaps, totalGhosts, totalDeactivated int) {
	fmt.Printf("\n%s%s━━━ Summary ━━━%s\n\n", cBold, cCyan, cReset)

	// Build short labels (A, B, C, ...) for compact columns
	labels := make([]string, len(names))
	for i, name := range names {
		labels[i] = string(rune('A' + i))
		fmt.Printf("  %s%s%s: %s\n", cBold, labels[i], cReset, name)
	}
	fmt.Println()

	colW := len("Collection")
	for _, row := range rows {
		if len(row.collection) > colW {
			colW = len(row.collection)
		}
	}
	relayW := 6

	// Header
	fmt.Printf("  %-*s", colW, "Collection")
	for _, label := range labels {
		fmt.Printf("  %*s", relayW, label)
	}
	fmt.Printf("  Status\n")

	// Separator
	fmt.Printf("  %s", strings.Repeat("─", colW))
	for range labels {
		fmt.Printf("  %s", strings.Repeat("─", relayW))
	}
	fmt.Printf("  %s\n", strings.Repeat("─", 14))

	// Data rows
	for _, row := range rows {
		fmt.Printf("  %-*s", colW, row.collection)
		for _, c := range row.counts {
			switch row.status {
			case "error":
				fmt.Printf("  %*s", relayW, fmt.Sprintf("%s—%s", cDim, cReset))
			default:
				fmt.Printf("  %*d", relayW, c)
|
||||
}
|
||||
}
|
||||
switch row.status {
|
||||
case "sync":
|
||||
notes := ""
|
||||
if !hideGhosts {
|
||||
notes = formatSyncNotes(row.ghosts, row.deactivated)
|
||||
}
|
||||
if notes != "" {
|
||||
fmt.Printf(" %s✓ in sync%s %s(%s)%s", cGreen, cReset, cDim, notes, cReset)
|
||||
} else {
|
||||
fmt.Printf(" %s✓ in sync%s", cGreen, cReset)
|
||||
}
|
||||
case "diff":
|
||||
if showVerify {
|
||||
if hideGhosts {
|
||||
fmt.Printf(" %s≠ %d missing%s", cYellow, row.realGaps, cReset)
|
||||
} else {
|
||||
notes := formatSyncNotes(row.ghosts, row.deactivated)
|
||||
if notes != "" {
|
||||
notes = ", " + notes
|
||||
}
|
||||
fmt.Printf(" %s≠ %d missing%s %s(%d real%s)%s",
|
||||
cYellow, row.realGaps, cReset, cDim, row.realGaps, notes, cReset)
|
||||
}
|
||||
} else {
|
||||
fmt.Printf(" %s≠ %d missing%s", cYellow, row.diffCount, cReset)
|
||||
}
|
||||
case "error":
|
||||
fmt.Printf(" %s✗ error%s", cRed, cReset)
|
||||
}
|
||||
fmt.Println()
|
||||
}
|
||||
|
||||
// Footer
|
||||
fmt.Println()
|
||||
if totalMissing > 0 {
|
||||
if showVerify && totalRealGaps == 0 {
|
||||
if hideGhosts {
|
||||
fmt.Printf("%s✓ All relays in sync%s\n", cGreen, cReset)
|
||||
} else {
|
||||
notes := formatSyncNotes(totalGhosts, totalDeactivated)
|
||||
fmt.Printf("%s✓ All relays in sync%s %s(%s)%s\n", cGreen, cReset, cDim, notes, cReset)
|
||||
}
|
||||
} else {
|
||||
if showVerify {
|
||||
fmt.Printf("%s%d real gaps across relays%s", cYellow, totalRealGaps, cReset)
|
||||
if !hideGhosts {
|
||||
notes := formatSyncNotes(totalGhosts, totalDeactivated)
|
||||
if notes != "" {
|
||||
fmt.Printf(" %s(%s)%s", cDim, notes, cReset)
|
||||
}
|
||||
}
|
||||
fmt.Println()
|
||||
} else {
|
||||
fmt.Printf("%s%d total missing DID-collection pairs across relays%s\n", cYellow, totalMissing, cReset)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
fmt.Printf("%s✓ All relays fully in sync%s\n", cGreen, cReset)
|
||||
}
|
||||
}
|
||||
|
||||
// formatSyncNotes builds a parenthetical like "2 ghost, 1 deactivated" for sync status.
|
||||
// Returns empty string if both counts are zero.
|
||||
func formatSyncNotes(ghosts, deactivated int) string {
|
||||
var parts []string
|
||||
if ghosts > 0 {
|
||||
parts = append(parts, fmt.Sprintf("%d ghost", ghosts))
|
||||
}
|
||||
if deactivated > 0 {
|
||||
parts = append(parts, fmt.Sprintf("%d deactivated", deactivated))
|
||||
}
|
||||
return strings.Join(parts, ", ")
|
||||
}
|
||||
|
||||
// verifyDiffs resolves each diff DID to its PDS and checks if records actually exist.
|
||||
func verifyDiffs(ctx context.Context, diffs []diffEntry) map[key]verifyResult {
|
||||
// Collect unique (DID, collection) pairs to verify
|
||||
type didCol struct{ did, col string }
|
||||
unique := make(map[didCol]struct{})
|
||||
for _, d := range diffs {
|
||||
unique[didCol{d.did, d.collection}] = struct{}{}
|
||||
}
|
||||
|
||||
// Resolve unique DIDs to PDS endpoints (deduplicate across collections)
|
||||
uniqueDIDs := make(map[string]struct{})
|
||||
for dc := range unique {
|
||||
uniqueDIDs[dc.did] = struct{}{}
|
||||
}
|
||||
|
||||
fmt.Printf("\n%sVerifying %d DID-collection pairs (%d unique DIDs)...%s\n", cDim, len(unique), len(uniqueDIDs), cReset)
|
||||
|
||||
pdsEndpoints := make(map[string]string) // DID → PDS URL
|
||||
pdsErrors := make(map[string]error) // DID → resolution error
|
||||
var mu sync.Mutex
|
||||
var wg sync.WaitGroup
|
||||
sem := make(chan struct{}, 10) // concurrency limit
|
||||
|
||||
for did := range uniqueDIDs {
|
||||
wg.Add(1)
|
||||
go func(did string) {
|
||||
defer wg.Done()
|
||||
sem <- struct{}{}
|
||||
defer func() { <-sem }()
|
||||
|
||||
pds, err := resolveDIDToPDS(ctx, did)
|
||||
mu.Lock()
|
||||
if err != nil {
|
||||
pdsErrors[did] = err
|
||||
} else {
|
||||
pdsEndpoints[did] = pds
|
||||
}
|
||||
mu.Unlock()
|
||||
}(did)
|
||||
}
|
||||
wg.Wait()
|
||||
|
||||
// Check each (DID, collection) pair against the resolved PDS
|
||||
results := make(map[key]verifyResult)
|
||||
|
||||
for dc := range unique {
|
||||
wg.Add(1)
|
||||
go func(dc didCol) {
|
||||
defer wg.Done()
|
||||
sem <- struct{}{}
|
||||
defer func() { <-sem }()
|
||||
|
||||
k := key{dc.col, dc.did}
|
||||
|
||||
// Check if DID resolution failed — could mean account is deactivated/tombstoned
|
||||
if err, ok := pdsErrors[dc.did]; ok {
|
||||
errStr := err.Error()
|
||||
if strings.Contains(errStr, "no PDS endpoint") ||
|
||||
strings.Contains(errStr, "not found") {
|
||||
mu.Lock()
|
||||
results[k] = verifyResult{deactivated: true}
|
||||
mu.Unlock()
|
||||
} else {
|
||||
mu.Lock()
|
||||
results[k] = verifyResult{err: fmt.Errorf("DID resolution failed: %w", err)}
|
||||
mu.Unlock()
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
pds := pdsEndpoints[dc.did]
|
||||
client := &xrpc.Client{Host: pds, Client: http.DefaultClient}
|
||||
var listResult listRecordsResult
|
||||
err := client.LexDo(ctx, "GET", "", "com.atproto.repo.listRecords", map[string]any{
|
||||
"repo": dc.did,
|
||||
"collection": dc.col,
|
||||
"limit": 1,
|
||||
}, nil, &listResult)
|
||||
mu.Lock()
|
||||
if err != nil {
|
||||
errStr := err.Error()
|
||||
if strings.Contains(errStr, "Could not find repo") ||
|
||||
strings.Contains(errStr, "RepoDeactivated") ||
|
||||
strings.Contains(errStr, "RepoTakendown") ||
|
||||
strings.Contains(errStr, "RepoSuspended") {
|
||||
results[k] = verifyResult{deactivated: true}
|
||||
} else {
|
||||
results[k] = verifyResult{err: err}
|
||||
}
|
||||
} else {
|
||||
results[k] = verifyResult{exists: len(listResult.Records) > 0}
|
||||
}
|
||||
mu.Unlock()
|
||||
}(dc)
|
||||
}
|
||||
wg.Wait()
|
||||
|
||||
return results
|
||||
}
|
||||
|
||||
// resolveDIDToPDS resolves a DID to its PDS endpoint using the shared identity directory.
|
||||
func resolveDIDToPDS(ctx context.Context, did string) (string, error) {
|
||||
didParsed, err := syntax.ParseDID(did)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("invalid DID: %w", err)
|
||||
}
|
||||
|
||||
ident, err := dir.LookupDID(ctx, didParsed)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to resolve DID: %w", err)
|
||||
}
|
||||
|
||||
pdsEndpoint := ident.PDSEndpoint()
|
||||
if pdsEndpoint == "" {
|
||||
return "", fmt.Errorf("no PDS endpoint found for DID")
|
||||
}
|
||||
|
||||
return pdsEndpoint, nil
|
||||
}
|
||||
|
||||
// fetchAllDIDs paginates through listReposByCollection to collect all DIDs.
|
||||
func fetchAllDIDs(ctx context.Context, relay, collection string) (map[string]struct{}, error) {
|
||||
client := &xrpc.Client{Host: relay, Client: http.DefaultClient}
|
||||
dids := make(map[string]struct{})
|
||||
var cursor string
|
||||
|
||||
for {
|
||||
params := map[string]any{
|
||||
"collection": collection,
|
||||
"limit": 1000,
|
||||
}
|
||||
if cursor != "" {
|
||||
params["cursor"] = cursor
|
||||
}
|
||||
|
||||
var result listReposByCollectionResult
|
||||
err := client.LexDo(ctx, "GET", "", "com.atproto.sync.listReposByCollection", params, nil, &result)
|
||||
if err != nil {
|
||||
return dids, fmt.Errorf("listReposByCollection failed: %w", err)
|
||||
}
|
||||
|
||||
for _, repo := range result.Repos {
|
||||
dids[repo.DID] = struct{}{}
|
||||
}
|
||||
|
||||
if result.Cursor == "" {
|
||||
break
|
||||
}
|
||||
cursor = result.Cursor
|
||||
}
|
||||
|
||||
return dids, nil
|
||||
}
|
||||
|
||||
// shortName extracts the hostname from a relay URL for display.
|
||||
func shortName(relayURL string) string {
|
||||
u, err := url.Parse(relayURL)
|
||||
if err != nil {
|
||||
return relayURL
|
||||
}
|
||||
return u.Hostname()
|
||||
}
418
cmd/s3-test/main.go
Normal file
@@ -0,0 +1,418 @@
// Command s3-test is a diagnostic tool that tests S3 connectivity using both
// AWS SDK v1 (used by distribution's storage driver) and AWS SDK v2 (used by
// ATCR's presigned URL service). It helps diagnose signature compatibility
// issues with S3-compatible storage providers.
package main

import (
	"bufio"
	"context"
	"flag"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"strings"
	"time"

	awsv1 "github.com/aws/aws-sdk-go/aws"
	credentialsv1 "github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	s3v1 "github.com/aws/aws-sdk-go/service/s3"

	awsv2 "github.com/aws/aws-sdk-go-v2/aws"
	configv2 "github.com/aws/aws-sdk-go-v2/config"
	credentialsv2 "github.com/aws/aws-sdk-go-v2/credentials"
	s3v2 "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	var (
		envFile   = flag.String("env-file", "", "Load environment variables from file (KEY=VALUE format)")
		accessKey = flag.String("access-key", "", "S3 access key (env: AWS_ACCESS_KEY_ID)")
		secretKey = flag.String("secret-key", "", "S3 secret key (env: AWS_SECRET_ACCESS_KEY)")
		region    = flag.String("region", "", "S3 region (env: S3_REGION)")
		bucket    = flag.String("bucket", "", "S3 bucket name (env: S3_BUCKET)")
		endpoint  = flag.String("endpoint", "", "S3 endpoint URL (env: S3_ENDPOINT)")
		pullZone  = flag.String("pull-zone", "", "CDN pull zone URL for presigned reads (env: PULL_ZONE)")
		prefix    = flag.String("prefix", "docker/registry/v2/blobs", "Key prefix for list operations")
		verbose   = flag.Bool("verbose", false, "Enable SDK debug signing logs")
	)
	flag.Parse()

	// Load env file first, then let flags and real env vars override
	if *envFile != "" {
		if err := loadEnvFile(*envFile); err != nil {
			fmt.Fprintf(os.Stderr, "Error loading env file: %v\n", err)
			os.Exit(1)
		}
	}

	// Resolve: flag > env var > default
	if *accessKey == "" {
		*accessKey = os.Getenv("AWS_ACCESS_KEY_ID")
	}
	if *secretKey == "" {
		*secretKey = os.Getenv("AWS_SECRET_ACCESS_KEY")
	}
	if *region == "" {
		*region = envOr("S3_REGION", "us-east-1")
	}
	if *bucket == "" {
		*bucket = os.Getenv("S3_BUCKET")
	}
	if *endpoint == "" {
		*endpoint = os.Getenv("S3_ENDPOINT")
	}
	if *pullZone == "" {
		*pullZone = os.Getenv("PULL_ZONE")
	}

	if *accessKey == "" || *secretKey == "" || *bucket == "" {
		fmt.Fprintln(os.Stderr, "Usage: s3-test [--env-file FILE] [--access-key KEY] [--secret-key KEY] [--bucket BUCKET] [--endpoint URL] [--region REGION] [--pull-zone URL] [--prefix PREFIX] [--verbose]")
		fmt.Fprintln(os.Stderr, "Env vars: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, S3_BUCKET, S3_REGION, S3_ENDPOINT, PULL_ZONE")
		os.Exit(1)
	}

	fmt.Println("S3 Connectivity Diagnostic")
	fmt.Println("==========================")
	fmt.Printf("Endpoint: %s\n", valueOr(*endpoint, "(default AWS)"))
	fmt.Printf("Pull Zone: %s\n", valueOr(*pullZone, "(none)"))
	fmt.Printf("Region: %s\n", *region)
	fmt.Printf("AccessKey: %s...%s (%d chars)\n", (*accessKey)[:3], (*accessKey)[len(*accessKey)-3:], len(*accessKey))
	fmt.Printf("SecretKey: %s...%s (%d chars)\n", (*secretKey)[:3], (*secretKey)[len(*secretKey)-3:], len(*secretKey))
	fmt.Printf("Bucket: %s\n", *bucket)
	fmt.Printf("Prefix: %s\n", *prefix)
	fmt.Println()

	ctx := context.Background()
	results := make([]result, 0, 6)

	// Build SDK v1 client (SigV4) — matches distribution driver's New()
	v1Client := buildV1Client(*accessKey, *secretKey, *region, *endpoint, *verbose)

	// Test 1: SDK v1 SigV4 HeadBucket
	results = append(results, runTest("SDK v1 / SigV4 / HeadBucket", func() error {
		_, err := v1Client.HeadBucketWithContext(ctx, &s3v1.HeadBucketInput{
			Bucket: awsv1.String(*bucket),
		})
		return err
	}))

	// Test 2: SDK v1 SigV4 ListObjectsV2
	results = append(results, runTest("SDK v1 / SigV4 / ListObjectsV2", func() error {
		_, err := v1Client.ListObjectsV2WithContext(ctx, &s3v1.ListObjectsV2Input{
			Bucket:  awsv1.String(*bucket),
			Prefix:  awsv1.String(*prefix),
			MaxKeys: awsv1.Int64(5),
		})
		return err
	}))

	// Test 3: SDK v1 SigV4 ListObjectsV2Pages (paginated, matches doWalk)
	results = append(results, runTest("SDK v1 / SigV4 / ListObjectsV2Pages", func() error {
		return v1Client.ListObjectsV2PagesWithContext(ctx, &s3v1.ListObjectsV2Input{
			Bucket:  awsv1.String(*bucket),
			Prefix:  awsv1.String(*prefix),
			MaxKeys: awsv1.Int64(5),
		}, func(page *s3v1.ListObjectsV2Output, lastPage bool) bool {
			return false // stop after first page
		})
	}))

	// Build SDK v2 client — matches NewS3Service()
	v2Client := buildV2Client(ctx, *accessKey, *secretKey, *region, *endpoint)

	// Test 4: SDK v2 SigV4 HeadBucket
	results = append(results, runTest("SDK v2 / SigV4 / HeadBucket", func() error {
		_, err := v2Client.HeadBucket(ctx, &s3v2.HeadBucketInput{
			Bucket: awsv2.String(*bucket),
		})
		return err
	}))

	// Test 5: SDK v2 SigV4 ListObjectsV2
	results = append(results, runTest("SDK v2 / SigV4 / ListObjectsV2", func() error {
		_, err := v2Client.ListObjectsV2(ctx, &s3v2.ListObjectsV2Input{
			Bucket:  awsv2.String(*bucket),
			Prefix:  awsv2.String(*prefix),
			MaxKeys: awsv2.Int32(5),
		})
		return err
	}))

	// Find a real object key for GetObject / presigned URL tests
	var testKey string
	listOut, err := v2Client.ListObjectsV2(ctx, &s3v2.ListObjectsV2Input{
		Bucket:  awsv2.String(*bucket),
		Prefix:  awsv2.String(*prefix),
		MaxKeys: awsv2.Int32(1),
	})
	if err == nil && len(listOut.Contents) > 0 {
		testKey = *listOut.Contents[0].Key
	}

	if testKey == "" {
		fmt.Printf("\n (Skipping GetObject/Presigned tests — no objects found under prefix %q)\n", *prefix)
	} else {
		fmt.Printf("\n Test object: %s\n\n", testKey)

		// Test 6: SDK v1 GetObject (HEAD only)
		results = append(results, runTest("SDK v1 / SigV4 / HeadObject", func() error {
			_, err := v1Client.HeadObjectWithContext(ctx, &s3v1.HeadObjectInput{
				Bucket: awsv1.String(*bucket),
				Key:    awsv1.String(testKey),
			})
			return err
		}))

		// Test 7: SDK v2 GetObject (HEAD only)
		results = append(results, runTest("SDK v2 / SigV4 / HeadObject", func() error {
			_, err := v2Client.HeadObject(ctx, &s3v2.HeadObjectInput{
				Bucket: awsv2.String(*bucket),
				Key:    awsv2.String(testKey),
			})
			return err
		}))

		// Test 8: SDK v2 Presigned GET URL (generate + fetch)
		presignClient := s3v2.NewPresignClient(v2Client)
		results = append(results, runTest("SDK v2 / Presigned GET URL", func() error {
			presigned, err := presignClient.PresignGetObject(ctx, &s3v2.GetObjectInput{
				Bucket: awsv2.String(*bucket),
				Key:    awsv2.String(testKey),
			}, func(opts *s3v2.PresignOptions) {
				opts.Expires = 5 * time.Minute
			})
			if err != nil {
				return fmt.Errorf("presign: %w", err)
			}
			if *verbose {
				// Show host + query params (no path to avoid leaking key structure)
				u, _ := url.Parse(presigned.URL)
				fmt.Printf("\n Presigned host: %s\n", u.Host)
				fmt.Printf(" Signed headers: %s\n", presigned.SignedHeader)
			}
			resp, err := http.Get(presigned.URL)
			if err != nil {
				return fmt.Errorf("fetch: %w", err)
			}
			body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
			resp.Body.Close()
			if resp.StatusCode != 200 {
				return fmt.Errorf("presigned URL returned %d: %s", resp.StatusCode, string(body))
			}
			return nil
		}))

		// Pull zone presigned tests — sign against real endpoint, swap host to pull zone
		if *pullZone != "" {
			results = append(results, runTest("SDK v2 / Presigned GET via Pull Zone", func() error {
				presigned, err := presignClient.PresignGetObject(ctx, &s3v2.GetObjectInput{
					Bucket: awsv2.String(*bucket),
					Key:    awsv2.String(testKey),
				}, func(opts *s3v2.PresignOptions) {
					opts.Expires = 5 * time.Minute
				})
				if err != nil {
					return fmt.Errorf("presign: %w", err)
				}
				pzURL := swapHost(presigned.URL, *pullZone)
				if *verbose {
					fmt.Printf("\n Signed against: %s\n", presigned.URL[:40]+"...")
					fmt.Printf(" Fetching from: %s\n", pzURL[:40]+"...")
				}
				resp, err := http.Get(pzURL)
				if err != nil {
					return fmt.Errorf("fetch: %w", err)
				}
				body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
				resp.Body.Close()
				if resp.StatusCode != 200 {
					return fmt.Errorf("pull zone GET returned %d: %s", resp.StatusCode, string(body))
				}
				return nil
			}))
		}

		// Test 9: SDK v2 Presigned PUT URL (generate + upload empty)
		results = append(results, runTest("SDK v2 / Presigned PUT URL", func() error {
			putKey := *prefix + "/_s3-test-probe"
			presigned, err := presignClient.PresignPutObject(ctx, &s3v2.PutObjectInput{
				Bucket: awsv2.String(*bucket),
				Key:    awsv2.String(putKey),
			}, func(opts *s3v2.PresignOptions) {
				opts.Expires = 5 * time.Minute
			})
			if err != nil {
				return fmt.Errorf("presign: %w", err)
			}
			req, err := http.NewRequestWithContext(ctx, http.MethodPut, presigned.URL, strings.NewReader(""))
			if err != nil {
				return fmt.Errorf("build request: %w", err)
			}
			req.Header.Set("Content-Length", "0")
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				return fmt.Errorf("fetch: %w", err)
			}
			resp.Body.Close()
			if resp.StatusCode != 200 {
				return fmt.Errorf("presigned PUT returned %d", resp.StatusCode)
			}
			// Clean up
			_, _ = v2Client.DeleteObject(ctx, &s3v2.DeleteObjectInput{
				Bucket: awsv2.String(*bucket),
				Key:    awsv2.String(putKey),
			})
			return nil
		}))
	}

	// Print summary
	fmt.Println()
	fmt.Println("Summary")
	fmt.Println("=======")

	allPass := true
	for _, r := range results {
		status := "PASS"
		if !r.ok {
			status = "FAIL"
			allPass = false
		}
		fmt.Printf(" [%s] %s (%s)\n", status, r.name, r.duration.Round(time.Millisecond))
		if !r.ok {
			fmt.Printf(" Error: %s\n", r.err)
		}
	}

	fmt.Println()
	if allPass {
		fmt.Println("Diagnosis: All tests passed. S3 connectivity is working with both SDKs.")
	} else {
		fmt.Println("Diagnosis: Some tests failed. Review errors above.")
	}
}

type result struct {
	name     string
	ok       bool
	err      error
	duration time.Duration
}

func runTest(name string, fn func() error) result {
	fmt.Printf(" Testing: %s ... ", name)
	start := time.Now()
	err := fn()
	d := time.Since(start)
	if err != nil {
		fmt.Printf("FAIL (%s)\n", d.Round(time.Millisecond))
		return result{name: name, ok: false, err: err, duration: d}
	}
	fmt.Printf("PASS (%s)\n", d.Round(time.Millisecond))
	return result{name: name, ok: true, duration: d}
}

func loadEnvFile(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		line = strings.TrimPrefix(line, "export ")
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		v = strings.Trim(v, `"'`)
		os.Setenv(strings.TrimSpace(k), strings.TrimSpace(v))
	}
	return scanner.Err()
}

func envOr(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func swapHost(presignedURL, pullZone string) string {
	parsed, err := url.Parse(presignedURL)
	if err != nil {
		return presignedURL
	}
	pz, err := url.Parse(pullZone)
	if err != nil {
		return presignedURL
	}
	parsed.Scheme = pz.Scheme
	parsed.Host = pz.Host
	return parsed.String()
}

func valueOr(s, fallback string) string {
	if s == "" {
		return fallback
	}
	return s
}

// buildV1Client constructs an SDK v1 S3 client identically to
// distribution/distribution's s3-aws driver New() function.
func buildV1Client(accessKey, secretKey, region, endpoint string, verbose bool) *s3v1.S3 {
	awsConfig := awsv1.NewConfig()

	if verbose {
		awsConfig.WithLogLevel(awsv1.LogDebugWithSigning)
	}

	awsConfig.WithCredentials(credentialsv1.NewStaticCredentials(accessKey, secretKey, ""))
	awsConfig.WithRegion(region)

	if endpoint != "" {
		awsConfig.WithEndpoint(endpoint)
		awsConfig.WithS3ForcePathStyle(true)
	}

	sess, err := session.NewSession(awsConfig)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to create SDK v1 session: %v\n", err)
		os.Exit(1)
	}

	return s3v1.New(sess)
}

// buildV2Client constructs an SDK v2 S3 client identically to
// ATCR's NewS3Service() in pkg/s3/types.go.
func buildV2Client(ctx context.Context, accessKey, secretKey, region, endpoint string) *s3v2.Client {
	cfg, err := configv2.LoadDefaultConfig(ctx,
		configv2.WithRegion(region),
		configv2.WithCredentialsProvider(
			credentialsv2.NewStaticCredentialsProvider(accessKey, secretKey, ""),
		),
	)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to load SDK v2 config: %v\n", err)
		os.Exit(1)
	}

	return s3v2.NewFromConfig(cfg, func(o *s3v2.Options) {
		if endpoint != "" {
			o.BaseEndpoint = awsv2.String(endpoint)
			o.UsePathStyle = true
		}
	})
}
759
cmd/usage-report/main.go
Normal file
@@ -0,0 +1,759 @@
// usage-report queries a hold service and generates a storage usage report
// grouped by user, with unique layers and totals.
//
// Usage:
//
//	go run ./cmd/usage-report --hold https://hold01.atcr.io
//	go run ./cmd/usage-report --hold https://hold01.atcr.io --from-manifests
//	go run ./cmd/usage-report --hold https://hold01.atcr.io --list-blobs
package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"sort"
	"strings"
	"time"
)

// LayerRecord matches the io.atcr.hold.layer record structure
type LayerRecord struct {
	Type      string `json:"$type"`
	Digest    string `json:"digest"`
	Size      int64  `json:"size"`
	MediaType string `json:"mediaType"`
	Manifest  string `json:"manifest"`
	UserDID   string `json:"userDid"`
	CreatedAt string `json:"createdAt"`
}

// ManifestRecord matches the io.atcr.manifest record structure
type ManifestRecord struct {
	Type       string `json:"$type"`
	Repository string `json:"repository"`
	Digest     string `json:"digest"`
	HoldDID    string `json:"holdDid"`
	Config     *struct {
		Digest string `json:"digest"`
		Size   int64  `json:"size"`
	} `json:"config"`
	Layers []struct {
		Digest    string `json:"digest"`
		Size      int64  `json:"size"`
		MediaType string `json:"mediaType"`
	} `json:"layers"`
	Manifests []struct {
		Digest string `json:"digest"`
		Size   int64  `json:"size"`
	} `json:"manifests"`
	CreatedAt string `json:"createdAt"`
}

// CrewRecord matches the io.atcr.hold.crew record structure
type CrewRecord struct {
	Member      string   `json:"member"`
	Role        string   `json:"role"`
	Permissions []string `json:"permissions"`
	AddedAt     string   `json:"addedAt"`
}

// ListRecordsResponse is the response from com.atproto.repo.listRecords
type ListRecordsResponse struct {
	Records []struct {
		URI   string          `json:"uri"`
		CID   string          `json:"cid"`
		Value json.RawMessage `json:"value"`
	} `json:"records"`
	Cursor string `json:"cursor,omitempty"`
}

// UserUsage tracks storage for a single user
type UserUsage struct {
	DID          string
	Handle       string
	UniqueLayers map[string]int64 // digest -> size
	TotalSize    int64
	LayerCount   int
	Repositories map[string]bool // unique repos
}

var client = &http.Client{Timeout: 30 * time.Second}

// BlobInfo represents a single blob with its metadata
type BlobInfo struct {
	Digest    string
	Size      int64
	MediaType string
	UserDID   string
	Handle    string
}

func main() {
	holdURL := flag.String("hold", "https://hold01.atcr.io", "Hold service URL")
	fromManifests := flag.Bool("from-manifests", false, "Calculate usage from user manifests instead of hold layer records (more accurate but slower)")
	listBlobs := flag.Bool("list-blobs", false, "List all individual blobs sorted by size (largest first)")
	flag.Parse()

	// Normalize URL
	baseURL := strings.TrimSuffix(*holdURL, "/")

	fmt.Printf("Querying %s...\n\n", baseURL)

	// First, get the hold's DID
	holdDID, err := getHoldDID(baseURL)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to get hold DID: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("Hold DID: %s\n\n", holdDID)

	// If --list-blobs flag is set, run blob listing mode
	if *listBlobs {
		listAllBlobs(baseURL, holdDID)
		return
	}

	var userUsage map[string]*UserUsage

	if *fromManifests {
		fmt.Println("=== Calculating from user manifests (bypasses layer record bug) ===")
		userUsage, err = calculateFromManifests(baseURL, holdDID)
	} else {
		fmt.Println("=== Calculating from hold layer records ===")
		fmt.Println("NOTE: May undercount app-password users due to layer record bug")
		fmt.Println(" Use --from-manifests for more accurate results")

		userUsage, err = calculateFromLayerRecords(baseURL, holdDID)
	}

	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to calculate usage: %v\n", err)
		os.Exit(1)
	}

	// Resolve DIDs to handles
	fmt.Println("\n\nResolving DIDs to handles...")
	for _, usage := range userUsage {
		handle, err := resolveDIDToHandle(usage.DID)
		if err != nil {
			usage.Handle = usage.DID
		} else {
			usage.Handle = handle
		}
	}

	// Convert to slice and sort by total size (descending)
	var sorted []*UserUsage
	for _, u := range userUsage {
		sorted = append(sorted, u)
	}
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].TotalSize > sorted[j].TotalSize
	})

	// Print report
	fmt.Println("\n========================================")
	fmt.Println("STORAGE USAGE REPORT")
	fmt.Println("========================================")

	var grandTotal int64
	var grandLayers int
	for _, u := range sorted {
		grandTotal += u.TotalSize
		grandLayers += u.LayerCount
	}

	fmt.Printf("\nTotal Users: %d\n", len(sorted))
	fmt.Printf("Total Unique Layers: %d\n", grandLayers)
	fmt.Printf("Total Storage: %s\n\n", humanSize(grandTotal))

	fmt.Println("BY USER (sorted by storage):")
	fmt.Println("----------------------------------------")
	for i, u := range sorted {
		fmt.Printf("%3d. %s\n", i+1, u.Handle)
		fmt.Printf(" DID: %s\n", u.DID)
		fmt.Printf(" Unique Layers: %d\n", u.LayerCount)
		fmt.Printf(" Total Size: %s\n", humanSize(u.TotalSize))
		if len(u.Repositories) > 0 {
			var repos []string
			for r := range u.Repositories {
				repos = append(repos, r)
			}
			sort.Strings(repos)
			fmt.Printf(" Repositories: %s\n", strings.Join(repos, ", "))
		}
		pct := float64(0)
		if grandTotal > 0 {
			pct = float64(u.TotalSize) / float64(grandTotal) * 100
		}
		fmt.Printf(" Share: %.1f%%\n\n", pct)
	}

	// Output CSV format for easy analysis
	fmt.Println("\n========================================")
	fmt.Println("CSV FORMAT")
	fmt.Println("========================================")
	fmt.Println("handle,did,unique_layers,total_bytes,total_human,repositories")
	for _, u := range sorted {
		var repos []string
		for r := range u.Repositories {
			repos = append(repos, r)
		}
		sort.Strings(repos)
		fmt.Printf("%s,%s,%d,%d,%s,\"%s\"\n", u.Handle, u.DID, u.LayerCount, u.TotalSize, humanSize(u.TotalSize), strings.Join(repos, ";"))
	}
}

// listAllBlobs fetches all blobs and lists them sorted by size (largest first)
func listAllBlobs(baseURL, holdDID string) {
	fmt.Println("=== Fetching all blob records ===")

	layers, err := fetchAllLayerRecords(baseURL, holdDID)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to fetch layer records: %v\n", err)
		os.Exit(1)
	}

	fmt.Printf("Fetched %d layer records\n", len(layers))

	// Deduplicate by digest, keeping track of first seen user
	blobMap := make(map[string]*BlobInfo)
	for _, layer := range layers {
		if existing, exists := blobMap[layer.Digest]; exists {
			// If we have a record with a user DID and existing doesn't, prefer this one
			if existing.UserDID == "" && layer.UserDID != "" {
				existing.UserDID = layer.UserDID
			}
			continue
		}
		blobMap[layer.Digest] = &BlobInfo{
			Digest:    layer.Digest,
			Size:      layer.Size,
			MediaType: layer.MediaType,
			UserDID:   layer.UserDID,
		}
	}

	// Convert to slice
	var blobs []*BlobInfo
	for _, b := range blobMap {
		blobs = append(blobs, b)
	}

	// Sort by size (largest first)
	sort.Slice(blobs, func(i, j int) bool {
		return blobs[i].Size > blobs[j].Size
	})

	fmt.Printf("Found %d unique blobs\n\n", len(blobs))

	// Resolve DIDs to handles (batch for efficiency)
	fmt.Println("Resolving DIDs to handles...")
	didToHandle := make(map[string]string)
	for _, b := range blobs {
		if b.UserDID == "" {
			continue
		}
		if _, exists := didToHandle[b.UserDID]; !exists {
			handle, err := resolveDIDToHandle(b.UserDID)
			if err != nil {
				didToHandle[b.UserDID] = b.UserDID
			} else {
				didToHandle[b.UserDID] = handle
			}
		}
		b.Handle = didToHandle[b.UserDID]
	}

	// Calculate total
	var totalSize int64
	for _, b := range blobs {
		totalSize += b.Size
	}

	// Print report
	fmt.Println("\n========================================")
fmt.Println("BLOB SIZE REPORT (sorted largest to smallest)")
|
||||
fmt.Println("========================================")
|
||||
fmt.Printf("\nTotal Unique Blobs: %d\n", len(blobs))
|
||||
fmt.Printf("Total Storage: %s\n\n", humanSize(totalSize))
|
||||
|
||||
fmt.Println("BLOBS:")
|
||||
fmt.Println("----------------------------------------")
|
||||
for i, b := range blobs {
|
||||
pct := float64(0)
|
||||
if totalSize > 0 {
|
||||
pct = float64(b.Size) / float64(totalSize) * 100
|
||||
}
|
||||
owner := b.Handle
|
||||
if owner == "" {
|
||||
owner = "(unknown)"
|
||||
}
|
||||
fmt.Printf("%4d. %s\n", i+1, humanSize(b.Size))
|
||||
fmt.Printf(" Digest: %s\n", b.Digest)
|
||||
fmt.Printf(" Owner: %s\n", owner)
|
||||
if b.MediaType != "" {
|
||||
fmt.Printf(" Type: %s\n", b.MediaType)
|
||||
}
|
||||
fmt.Printf(" Share: %.2f%%\n\n", pct)
|
||||
}
|
||||
|
||||
// Output CSV format
|
||||
fmt.Println("\n========================================")
|
||||
fmt.Println("CSV FORMAT")
|
||||
fmt.Println("========================================")
|
||||
fmt.Println("rank,size_bytes,size_human,digest,owner,media_type,share_pct")
|
||||
for i, b := range blobs {
|
||||
pct := float64(0)
|
||||
if totalSize > 0 {
|
||||
pct = float64(b.Size) / float64(totalSize) * 100
|
||||
}
|
||||
		owner := b.Handle // left empty in the CSV when the DID was not resolved
		fmt.Printf("%d,%d,%s,%s,%s,%s,%.2f\n", i+1, b.Size, humanSize(b.Size), b.Digest, owner, b.MediaType, pct)
	}
}

// calculateFromLayerRecords uses the hold's layer records (original method)
func calculateFromLayerRecords(baseURL, holdDID string) (map[string]*UserUsage, error) {
	layers, err := fetchAllLayerRecords(baseURL, holdDID)
	if err != nil {
		return nil, err
	}

	fmt.Printf("Fetched %d layer records\n", len(layers))

	userUsage := make(map[string]*UserUsage)
	for _, layer := range layers {
		if layer.UserDID == "" {
			continue
		}

		usage, exists := userUsage[layer.UserDID]
		if !exists {
			usage = &UserUsage{
				DID:          layer.UserDID,
				UniqueLayers: make(map[string]int64),
				Repositories: make(map[string]bool),
			}
			userUsage[layer.UserDID] = usage
		}

		if _, seen := usage.UniqueLayers[layer.Digest]; !seen {
			usage.UniqueLayers[layer.Digest] = layer.Size
			usage.TotalSize += layer.Size
			usage.LayerCount++
		}
	}

	return userUsage, nil
}

// calculateFromManifests queries crew members and fetches their manifests from their PDSes
func calculateFromManifests(baseURL, holdDID string) (map[string]*UserUsage, error) {
	// Get all crew members
	crewDIDs, err := fetchCrewMembers(baseURL, holdDID)
	if err != nil {
		return nil, fmt.Errorf("failed to fetch crew: %w", err)
	}

	// Also get captain
	captainDID, err := fetchCaptain(baseURL, holdDID)
	if err == nil && captainDID != "" {
		// Add captain to list if not already there
		found := false
		for _, d := range crewDIDs {
			if d == captainDID {
				found = true
				break
			}
		}
		if !found {
			crewDIDs = append(crewDIDs, captainDID)
		}
	}

	fmt.Printf("Found %d users (crew + captain)\n", len(crewDIDs))

	userUsage := make(map[string]*UserUsage)

	for _, did := range crewDIDs {
		fmt.Printf("  Checking manifests for %s...", did)

		// Resolve DID to PDS
		pdsEndpoint, err := resolveDIDToPDS(did)
		if err != nil {
			fmt.Printf(" (failed to resolve PDS: %v)\n", err)
			continue
		}

		// Fetch manifests that use this hold
		manifests, err := fetchUserManifestsForHold(pdsEndpoint, did, holdDID)
		if err != nil {
			fmt.Printf(" (failed to fetch manifests: %v)\n", err)
			continue
		}

		if len(manifests) == 0 {
			fmt.Printf(" 0 manifests\n")
			continue
		}

		// Calculate unique layers across all manifests
		usage := &UserUsage{
			DID:          did,
			UniqueLayers: make(map[string]int64),
			Repositories: make(map[string]bool),
		}

		for _, m := range manifests {
			usage.Repositories[m.Repository] = true

			// Add config blob
			if m.Config != nil {
				if _, seen := usage.UniqueLayers[m.Config.Digest]; !seen {
					usage.UniqueLayers[m.Config.Digest] = m.Config.Size
					usage.TotalSize += m.Config.Size
					usage.LayerCount++
				}
			}

			// Add layers
			for _, layer := range m.Layers {
				if _, seen := usage.UniqueLayers[layer.Digest]; !seen {
					usage.UniqueLayers[layer.Digest] = layer.Size
					usage.TotalSize += layer.Size
					usage.LayerCount++
				}
			}
		}

		fmt.Printf(" %d manifests, %d unique layers, %s\n", len(manifests), usage.LayerCount, humanSize(usage.TotalSize))

		if usage.LayerCount > 0 {
			userUsage[did] = usage
		}
	}

	return userUsage, nil
}

// fetchCrewMembers gets all crew member DIDs from the hold
func fetchCrewMembers(baseURL, holdDID string) ([]string, error) {
	var dids []string
	seen := make(map[string]bool)

	cursor := ""
	for {
		u := fmt.Sprintf("%s/xrpc/com.atproto.repo.listRecords", baseURL)
		params := url.Values{}
		params.Set("repo", holdDID)
		params.Set("collection", "io.atcr.hold.crew")
		params.Set("limit", "100")
		if cursor != "" {
			params.Set("cursor", cursor)
		}

		resp, err := client.Get(u + "?" + params.Encode())
		if err != nil {
			return nil, err
		}

		// Match the other fetchers: fail fast on non-200 responses instead of
		// silently decoding an error body into an empty record list.
		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			return nil, fmt.Errorf("status %d", resp.StatusCode)
		}

		var listResp ListRecordsResponse
		if err := json.NewDecoder(resp.Body).Decode(&listResp); err != nil {
			resp.Body.Close()
			return nil, err
		}
		resp.Body.Close()

		for _, rec := range listResp.Records {
			var crew CrewRecord
			if err := json.Unmarshal(rec.Value, &crew); err != nil {
				continue
			}
			if crew.Member != "" && !seen[crew.Member] {
				seen[crew.Member] = true
				dids = append(dids, crew.Member)
			}
		}

		if listResp.Cursor == "" || len(listResp.Records) < 100 {
			break
		}
		cursor = listResp.Cursor
	}

	return dids, nil
}

// fetchCaptain gets the captain DID from the hold
func fetchCaptain(baseURL, holdDID string) (string, error) {
	u := fmt.Sprintf("%s/xrpc/com.atproto.repo.getRecord?repo=%s&collection=io.atcr.hold.captain&rkey=self",
		baseURL, url.QueryEscape(holdDID))

	resp, err := client.Get(u)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("status %d", resp.StatusCode)
	}

	var result struct {
		Value struct {
			Owner string `json:"owner"`
		} `json:"value"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return "", err
	}

	return result.Value.Owner, nil
}

// fetchUserManifestsForHold fetches all manifests from a user's PDS that use the specified hold
func fetchUserManifestsForHold(pdsEndpoint, userDID, holdDID string) ([]ManifestRecord, error) {
	var manifests []ManifestRecord
	cursor := ""

	for {
		u := fmt.Sprintf("%s/xrpc/com.atproto.repo.listRecords", pdsEndpoint)
		params := url.Values{}
		params.Set("repo", userDID)
		params.Set("collection", "io.atcr.manifest")
		params.Set("limit", "100")
		if cursor != "" {
			params.Set("cursor", cursor)
		}

		resp, err := client.Get(u + "?" + params.Encode())
		if err != nil {
			return nil, err
		}

		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			return nil, fmt.Errorf("status %d", resp.StatusCode)
		}

		var listResp ListRecordsResponse
		if err := json.NewDecoder(resp.Body).Decode(&listResp); err != nil {
			resp.Body.Close()
			return nil, err
		}
		resp.Body.Close()

		for _, rec := range listResp.Records {
			var m ManifestRecord
			if err := json.Unmarshal(rec.Value, &m); err != nil {
				continue
			}
			// Only include manifests for this hold
			if m.HoldDID == holdDID {
				manifests = append(manifests, m)
			}
		}

		if listResp.Cursor == "" || len(listResp.Records) < 100 {
			break
		}
		cursor = listResp.Cursor
	}

	return manifests, nil
}

// getHoldDID fetches the hold's DID from /.well-known/atproto-did
func getHoldDID(baseURL string) (string, error) {
	resp, err := http.Get(baseURL + "/.well-known/atproto-did")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status: %d", resp.StatusCode)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}

	return strings.TrimSpace(string(body)), nil
}

// fetchAllLayerRecords fetches all layer records with pagination
func fetchAllLayerRecords(baseURL, holdDID string) ([]LayerRecord, error) {
	var allLayers []LayerRecord
	cursor := ""
	limit := 100

	for {
		u := fmt.Sprintf("%s/xrpc/com.atproto.repo.listRecords", baseURL)
		params := url.Values{}
		params.Set("repo", holdDID)
		params.Set("collection", "io.atcr.hold.layer")
		params.Set("limit", fmt.Sprintf("%d", limit))
		if cursor != "" {
			params.Set("cursor", cursor)
		}

		fullURL := u + "?" + params.Encode()
		fmt.Printf("  Fetching: %s\n", fullURL)

		resp, err := client.Get(fullURL)
		if err != nil {
			return nil, fmt.Errorf("request failed: %w", err)
		}

		if resp.StatusCode != http.StatusOK {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			return nil, fmt.Errorf("unexpected status %d: %s", resp.StatusCode, string(body))
		}

		var listResp ListRecordsResponse
		if err := json.NewDecoder(resp.Body).Decode(&listResp); err != nil {
			resp.Body.Close()
			return nil, fmt.Errorf("decode failed: %w", err)
		}
		resp.Body.Close()

		for _, rec := range listResp.Records {
			var layer LayerRecord
			if err := json.Unmarshal(rec.Value, &layer); err != nil {
				fmt.Fprintf(os.Stderr, "Warning: failed to parse layer record: %v\n", err)
				continue
			}
			allLayers = append(allLayers, layer)
		}

		fmt.Printf("  Got %d records (total: %d)\n", len(listResp.Records), len(allLayers))

		if listResp.Cursor == "" || len(listResp.Records) < limit {
			break
		}
		cursor = listResp.Cursor
	}

	return allLayers, nil
}

// resolveDIDToHandle resolves a DID to a handle using the PLC directory or did:web
func resolveDIDToHandle(did string) (string, error) {
	if strings.HasPrefix(did, "did:web:") {
		return strings.TrimPrefix(did, "did:web:"), nil
	}

	if strings.HasPrefix(did, "did:plc:") {
		plcURL := "https://plc.directory/" + did
		resp, err := client.Get(plcURL)
		if err != nil {
			return "", fmt.Errorf("PLC query failed: %w", err)
		}
		defer resp.Body.Close()

		if resp.StatusCode != http.StatusOK {
			return "", fmt.Errorf("PLC returned status %d", resp.StatusCode)
		}

		var plcDoc struct {
			AlsoKnownAs []string `json:"alsoKnownAs"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&plcDoc); err != nil {
			return "", fmt.Errorf("failed to parse PLC response: %w", err)
		}

		for _, aka := range plcDoc.AlsoKnownAs {
			if strings.HasPrefix(aka, "at://") {
				return strings.TrimPrefix(aka, "at://"), nil
			}
		}

		return did, nil
	}

	return did, nil
}

// resolveDIDToPDS resolves a DID to its PDS endpoint
func resolveDIDToPDS(did string) (string, error) {
	if strings.HasPrefix(did, "did:web:") {
		// did:web:example.com -> https://example.com
		// did:web:host%3A8080 -> http://host:8080
		domain := strings.TrimPrefix(did, "did:web:")
		domain = strings.ReplaceAll(domain, "%3A", ":")
		scheme := "https"
		if strings.Contains(domain, ":") {
			scheme = "http"
		}
		return scheme + "://" + domain, nil
	}

	if strings.HasPrefix(did, "did:plc:") {
		plcURL := "https://plc.directory/" + did
		resp, err := client.Get(plcURL)
		if err != nil {
			return "", fmt.Errorf("PLC query failed: %w", err)
		}
		defer resp.Body.Close()

		if resp.StatusCode != http.StatusOK {
			return "", fmt.Errorf("PLC returned status %d", resp.StatusCode)
		}

		var plcDoc struct {
			Service []struct {
				ID              string `json:"id"`
				Type            string `json:"type"`
				ServiceEndpoint string `json:"serviceEndpoint"`
			} `json:"service"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&plcDoc); err != nil {
			return "", fmt.Errorf("failed to parse PLC response: %w", err)
		}

		for _, svc := range plcDoc.Service {
			if svc.Type == "AtprotoPersonalDataServer" {
				return svc.ServiceEndpoint, nil
			}
		}

		return "", fmt.Errorf("no PDS found in DID document")
	}

	return "", fmt.Errorf("unsupported DID method")
}

// humanSize converts bytes to human-readable format
func humanSize(bytes int64) string {
	const (
		KB = 1024
		MB = 1024 * KB
		GB = 1024 * MB
		TB = 1024 * GB
	)

	switch {
	case bytes >= TB:
		return fmt.Sprintf("%.2f TB", float64(bytes)/TB)
	case bytes >= GB:
		return fmt.Sprintf("%.2f GB", float64(bytes)/GB)
	case bytes >= MB:
		return fmt.Sprintf("%.2f MB", float64(bytes)/MB)
	case bytes >= KB:
		return fmt.Sprintf("%.2f KB", float64(bytes)/KB)
	default:
		return fmt.Sprintf("%d B", bytes)
	}
}
166 config-appview.example.yaml Normal file
@@ -0,0 +1,166 @@
# ATCR AppView Configuration
# Generated with defaults — edit as needed.

# Configuration format version.
version: "0.1"
# Log level: debug, info, warn, error.
log_level: info
# Remote log shipping settings.
log_shipper:
  # Log shipping backend: "victoria", "opensearch", or "loki". Empty disables shipping.
  backend: ""
  # Remote log service endpoint, e.g. "http://victorialogs:9428".
  url: ""
  # Number of log entries to buffer before flushing to the remote service.
  batch_size: 100
  # Maximum time between flushes, even if batch is not full.
  flush_interval: 5s
  # Basic auth username for the log service (optional).
  username: ""
  # Basic auth password for the log service (optional).
  password: ""
# HTTP server and identity settings.
server:
  # Listen address, e.g. ":5000" or "127.0.0.1:5000".
  addr: :5000
  # Public-facing URL for OAuth callbacks and JWT realm. Auto-detected if empty.
  base_url: ""
  # DID of the hold service for blob storage, e.g. "did:web:hold01.atcr.io" (REQUIRED).
  default_hold_did: ""
  # Allows HTTP (not HTTPS) for DID resolution and uses transition:generic OAuth scope.
  test_mode: false
  # Path to P-256 private key for OAuth client authentication. Auto-generated on first run.
  oauth_key_path: /var/lib/atcr/oauth/client.key
  # Display name shown on OAuth authorization screens.
  client_name: AT Container Registry
  # Short name used in page titles and browser tabs.
  client_short_name: ATCR
  # Separate domains for OCI registry API (e.g. ["buoy.cr"]). First is primary. Browser visits redirect to BaseURL.
  registry_domains: []
  # DIDs of holds this appview manages billing for. Tier updates are pushed to these holds.
  managed_holds:
    - did:web:172.28.0.3%3A8080
# Web UI settings.
ui:
  # SQLite/libSQL database for OAuth sessions, stars, pull counts, and device approvals.
  database_path: /var/lib/atcr/ui.db
  # Visual theme name (e.g. "seamark"). Empty uses default atcr.io branding.
  theme: "seamark"
  # libSQL sync URL (libsql://...). Works with Turso cloud or self-hosted libsql-server. Leave empty for local-only SQLite.
  libsql_sync_url: ""
  # Auth token for libSQL sync. Required if libsql_sync_url is set.
  libsql_auth_token: ""
  # How often to sync with remote libSQL server. Default: 60s.
  libsql_sync_interval: 1m0s
# Health check and cache settings.
health:
  # How long to cache hold health check results.
  cache_ttl: 15m0s
  # How often to refresh hold health checks.
  check_interval: 15m0s
# ATProto Jetstream event stream settings.
jetstream:
  # Jetstream WebSocket endpoints, tried in order on failure.
  urls:
    - wss://jetstream2.us-west.bsky.network/subscribe
    - wss://jetstream1.us-west.bsky.network/subscribe
    - wss://jetstream2.us-east.bsky.network/subscribe
    - wss://jetstream1.us-east.bsky.network/subscribe
  # Sync existing records from PDS on startup.
  backfill_enabled: true
  # How often to re-run backfill to catch missed events. Set to 0 to only backfill on startup.
  backfill_interval: 24h0m0s
  # Relay endpoints for backfill, tried in order on failure.
  relay_endpoints:
    - https://relay1.us-east.bsky.network
    - https://relay1.us-west.bsky.network
    - https://zlay.waow.tech
# JWT authentication settings.
auth:
  # RSA private key for signing registry JWTs issued to Docker clients.
  key_path: /var/lib/atcr/auth/private-key.pem
  # X.509 certificate matching the JWT signing key.
  cert_path: /var/lib/atcr/auth/private-key.crt
# Credential helper download settings.
credential_helper:
  # Tangled repository URL for credential helper downloads.
  tangled_repo: ""
# Legal page customization for self-hosted instances.
legal:
  # Organization name for Terms of Service and Privacy Policy. Defaults to server.client_name.
  company_name: ""
  # Governing law jurisdiction for legal terms.
  jurisdiction: ""
# Stripe billing integration (requires -tags billing build).
billing:
  # Stripe secret key. Can also be set via STRIPE_SECRET_KEY env var (takes precedence). Billing is enabled automatically when set.
  stripe_secret_key: ""
  # Stripe webhook signing secret. Can also be set via STRIPE_WEBHOOK_SECRET env var (takes precedence).
  webhook_secret: ""
  # ISO 4217 currency code (e.g. "usd").
  currency: usd
  # Redirect URL after successful checkout. Use {base_url} placeholder.
  success_url: '{base_url}/settings#storage'
  # Redirect URL after cancelled checkout. Use {base_url} placeholder.
  cancel_url: '{base_url}/settings#storage'
  # Subscription tiers ordered by rank (lowest to highest).
  tiers:
    - # Tier name. Position in list determines rank (0-based).
      name: free
      # Short description shown on the plan card.
      description: Get started with basic storage
      # List of features included in this tier.
      features: []
      # Stripe price ID for monthly billing. Empty = free tier.
      stripe_price_monthly: ""
      # Stripe price ID for yearly billing.
      stripe_price_yearly: ""
      # Maximum webhooks for this tier (-1 = unlimited).
      max_webhooks: 1
      # Allow all webhook trigger types (not just first-scan).
      webhook_all_triggers: false
      supporter_badge: false
    - # Tier name. Position in list determines rank (0-based).
      name: Supporter
      # Short description shown on the plan card.
      description: Get started with basic storage
      # List of features included in this tier.
      features: []
      # Stripe price ID for monthly billing. Empty = free tier.
      stripe_price_monthly: ""
      # Stripe price ID for yearly billing.
      stripe_price_yearly: "price_1SmK1mRROAC4bYmSwhTQ7RY9"
      # Maximum webhooks for this tier (-1 = unlimited).
      max_webhooks: 1
      # Allow all webhook trigger types (not just first-scan).
      webhook_all_triggers: false
      supporter_badge: true
    - # Tier name. Position in list determines rank (0-based).
      name: bosun
      # Short description shown on the plan card.
      description: More storage with scan-on-push
      # List of features included in this tier.
      features: []
      # Stripe price ID for monthly billing. Empty = free tier.
      stripe_price_monthly: "price_1SmK4QRROAC4bYmSxpr35HUl"
      # Stripe price ID for yearly billing.
      stripe_price_yearly: "price_1SmJuLRROAC4bYmSUgVCwZWo"
      # Maximum webhooks for this tier (-1 = unlimited).
      max_webhooks: 10
      # Allow all webhook trigger types (not just first-scan).
      webhook_all_triggers: true
      supporter_badge: true
    # - # Tier name. Position in list determines rank (0-based).
    #   name: quartermaster
    #   # Short description shown on the plan card.
    #   description: Maximum storage for power users
    #   # List of features included in this tier.
    #   features: []
    #   # Stripe price ID for monthly billing. Empty = free tier.
    #   stripe_price_monthly: price_xxx
    #   # Stripe price ID for yearly billing.
    #   stripe_price_yearly: price_yyy
    #   # Maximum webhooks for this tier (-1 = unlimited).
    #   max_webhooks: -1
    #   # Allow all webhook trigger types (not just first-scan).
    #   webhook_all_triggers: true
137 config-hold.example.yaml Normal file
@@ -0,0 +1,137 @@
# ATCR Hold Service Configuration
# Generated with defaults — edit as needed.

# Configuration format version.
version: "0.1"
# Log level: debug, info, warn, error.
log_level: info
# Remote log shipping settings.
log_shipper:
  # Log shipping backend: "victoria", "opensearch", or "loki". Empty disables shipping.
  backend: ""
  # Remote log service endpoint, e.g. "http://victorialogs:9428".
  url: ""
  # Number of log entries to buffer before flushing to the remote service.
  batch_size: 100
  # Maximum time between flushes, even if batch is not full.
  flush_interval: 5s
  # Basic auth username for the log service (optional).
  username: ""
  # Basic auth password for the log service (optional).
  password: ""
# S3-compatible blob storage settings.
storage:
  # S3-compatible access key (AWS, Storj, Minio, UpCloud).
  access_key: ""
  # S3-compatible secret key.
  secret_key: ""
  # S3 region, e.g. "us-east-1". Used for request signing.
  region: us-east-1
  # S3 bucket for blob storage (REQUIRED). Must already exist.
  bucket: ""
  # Custom S3 endpoint for non-AWS providers (e.g. "https://gateway.storjshare.io").
  endpoint: ""
  # CDN pull zone URL for downloads. When set, presigned GET/HEAD URLs use this host instead of the S3 endpoint. Uploads and API calls still use the S3 endpoint.
  pull_zone: ""
# HTTP server and identity settings.
server:
  # Listen address, e.g. ":8080" or "0.0.0.0:8080".
  addr: :8080
  # Externally reachable URL used for did:web identity (REQUIRED), e.g. "https://hold.example.com".
  public_url: ""
  # Allow unauthenticated blob reads. If false, readers need crew membership.
  public: false
  # DID of successor hold for migration. Appview redirects all requests to the successor.
  successor: ""
  # Use localhost for OAuth redirects during development.
  test_mode: false
  # Request crawl from this relay on startup to make the embedded PDS discoverable.
  relay_endpoint: ""
  # DID of the appview this hold is managed by (e.g. did:web:atcr.io). Resolved via did:web for URL and public key.
  appview_did: did:web:172.28.0.2%3A5000
  # Read timeout for HTTP requests.
  read_timeout: 5m0s
  # Write timeout for HTTP requests.
  write_timeout: 5m0s
# Auto-registration and bootstrap settings.
registration:
  # DID of the hold captain. If set, auto-creates captain and profile records on startup.
  owner_did: ""
  # Create a wildcard crew record allowing any authenticated user to join.
  allow_all_crew: false
  # URL to fetch avatar image from during bootstrap.
  profile_avatar_url: https://atcr.io/web-app-manifest-192x192.png
  # Bluesky profile display name. Synced on every startup.
  profile_display_name: Cargo Hold
  # Bluesky profile description. Synced on every startup.
  profile_description: ahoy from the cargo hold
  # Post to Bluesky when users push images. Synced to captain record on startup.
  enable_bluesky_posts: false
  # Deployment region, auto-detected from cloud metadata or S3 config.
  region: ""
# Embedded PDS database settings.
database:
  # Directory for the embedded PDS database (carstore + SQLite).
  path: /var/lib/atcr-hold
  # PDS signing key path. Defaults to {database.path}/signing.key.
  key_path: ""
  # DID method: 'web' (default, derived from public_url) or 'plc' (registered with PLC directory).
  did_method: web
  # Explicit DID for this hold. If set with did_method 'plc', adopts this identity instead of creating new. Use for recovery/migration.
  did: ""
  # PLC directory URL. Only used when did_method is 'plc'. Default: https://plc.directory
  plc_directory_url: https://plc.directory
  # Rotation key for did:plc in multibase format (starting with 'z'). Generate with: goat key generate. Supports K-256 and P-256 curves. Controls DID identity (separate from signing key).
  rotation_key: ""
  # libSQL sync URL (libsql://...). Works with Turso cloud, Bunny DB, or self-hosted libsql-server. Leave empty for local-only SQLite.
  libsql_sync_url: ""
  # Auth token for libSQL sync. Required if libsql_sync_url is set.
  libsql_auth_token: ""
  # How often to sync with remote libSQL server. Default: 60s.
  libsql_sync_interval: 1m0s
# Admin panel settings.
admin:
  # Enable the web-based admin panel for crew and storage management.
  enabled: true
# Garbage collection settings.
gc:
  # Enable nightly garbage collection of orphaned blobs and records.
  enabled: false
# Storage quota tiers. Empty disables quota enforcement.
quota:
  # Quota tiers ordered by rank (lowest to highest). Position determines rank.
  tiers:
    - # Tier name used as the key for crew assignments.
      name: free
      # Storage quota limit (e.g. "5GB", "50GB", "1TB").
      quota: 5GB
      # Trigger vulnerability scan immediately on push. When false, images are still scanned by background scheduling.
      scan_on_push: false
    - # Tier name used as the key for crew assignments.
      name: deckhand
      # Storage quota limit (e.g. "5GB", "50GB", "1TB").
      quota: 5GB
      # Trigger vulnerability scan immediately on push. When false, images are still scanned by background scheduling.
      scan_on_push: false
    - # Tier name used as the key for crew assignments.
      name: bosun
      # Storage quota limit (e.g. "5GB", "50GB", "1TB").
      quota: 50GB
      # Trigger vulnerability scan immediately on push. When false, images are still scanned by background scheduling.
      scan_on_push: true
    - # Tier name used as the key for crew assignments.
      name: quartermaster
      # Storage quota limit (e.g. "5GB", "50GB", "1TB").
      quota: 100GB
      # Trigger vulnerability scan immediately on push. When false, images are still scanned by background scheduling.
      scan_on_push: true
  # Default tier assignment for new crew members.
  defaults:
    # Tier assigned to new crew members who don't have an explicit tier.
    new_crew_tier: deckhand
# Vulnerability scanner settings. Empty disables scanning.
scanner:
  # Shared secret for scanner WebSocket auth. Empty disables scanning.
  secret: ""
  # Minimum interval between re-scans of the same manifest. When set, the hold proactively scans manifests when the scanner is idle. Default: 168h (7 days). Set to 0 to disable.
  rescan_interval: 168h0m0s
@@ -1,193 +0,0 @@
# ATCR Production Environment Configuration
# Copy this file to .env and fill in your values
#
# Usage:
# 1. cp deploy/.env.prod.template .env
# 2. Edit .env with your configuration
# 3. systemctl restart atcr
#
# NOTE: This file is loaded by docker-compose.prod.yml

# ==============================================================================
# Domain Configuration
# ==============================================================================

# Main AppView domain (registry API + web UI)
# REQUIRED: Update with your domain
APPVIEW_DOMAIN=atcr.io

# Hold service domain (presigned URL generator)
# REQUIRED: Update with your domain
HOLD_DOMAIN=hold01.atcr.io

# ==============================================================================
# Hold Service Configuration
# ==============================================================================

# Your ATProto DID (REQUIRED for hold registration)
# Get your DID from: https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=yourhandle.bsky.social
# Example: did:plc:abc123xyz789
HOLD_OWNER=did:plc:pddp4xt5lgnv2qsegbzzs4xg

# Allow public blob reads (pulls) without authentication
# - true: Anyone can pull images (read-only)
# - false: Only authenticated users can pull
# Default: false (private)
HOLD_PUBLIC=false

# Allow all authenticated users to write to this hold
# This setting controls write permissions for authenticated ATCR users
#
# - true: Any authenticated ATCR user can push images (treat all as crew)
#         Useful for shared/community holds where you want to allow
#         multiple users to push without explicit crew membership.
#         Users must still authenticate via ATProto OAuth.
#
# - false: Only the hold owner and explicit crew members can push (default)
#          Write access requires an io.atcr.hold.crew record in the owner's PDS.
#          Most secure option for production holds.
#
# Read permissions are controlled by HOLD_PUBLIC (above).
#
# Security model:
#   Read:  HOLD_PUBLIC=true  → anonymous + authenticated users
#          HOLD_PUBLIC=false → authenticated users only
#   Write: HOLD_ALLOW_ALL_CREW=true  → all authenticated users
#          HOLD_ALLOW_ALL_CREW=false → owner + crew only (verified via PDS)
#
# Use cases:
#   - Public registry: HOLD_PUBLIC=true, HOLD_ALLOW_ALL_CREW=true
#   - ATProto users only: HOLD_PUBLIC=false, HOLD_ALLOW_ALL_CREW=true
#   - Private hold (default): HOLD_PUBLIC=false, HOLD_ALLOW_ALL_CREW=false
#
# Default: false
HOLD_ALLOW_ALL_CREW=false

# ==============================================================================
# S3/UpCloud Object Storage Configuration
# ==============================================================================

# Storage driver type
# Options: s3, filesystem
# Default: s3
STORAGE_DRIVER=s3

# S3 Access Credentials
# Get these from the UpCloud Object Storage console
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=

# S3 Region (for the distribution S3 driver)
# UpCloud regions: us-chi1, us-nyc1, de-fra1, uk-lon1, sg-sin1, etc.
# Default: us-chi1
S3_REGION=us-chi1

# S3 Bucket Name
# Create this bucket in UpCloud Object Storage
# Example: atcr-blobs
S3_BUCKET=atcr

# S3 Endpoint
# Get this from UpCloud Console → Storage → Object Storage → Your bucket → "S3 endpoint"
# Format: https://[bucket-id].upcloudobjects.com
# Example: https://6vmss.upcloudobjects.com
#
# NOTE: Use the bucket-specific endpoint, NOT a custom domain
#       Custom domains break presigned URL generation
S3_ENDPOINT=https://6vmss.upcloudobjects.com

# S3 Region Endpoint (alternative to S3_ENDPOINT)
# Use this if your S3 driver requires the region-specific endpoint format
# Example: s3.us-chi1.upcloudobjects.com
# S3_REGION_ENDPOINT=

# ==============================================================================
# AppView Configuration
# ==============================================================================

# JWT token expiration in seconds
# Default: 300 (5 minutes)
ATCR_TOKEN_EXPIRATION=300

# Enable web UI
# Default: true
ATCR_UI_ENABLED=true

# ==============================================================================
# Logging Configuration
# ==============================================================================

# Log level: debug, info, warn, error
# Default: info
ATCR_LOG_LEVEL=info

# Log formatter: text, json
# Default: text
ATCR_LOG_FORMATTER=text

# ==============================================================================
# Jetstream Configuration (ATProto event streaming)
# ==============================================================================

# Jetstream WebSocket URL for real-time ATProto events
# Default: wss://jetstream2.us-west.bsky.network/subscribe
JETSTREAM_URL=wss://jetstream2.us-west.bsky.network/subscribe

# Enable backfill worker to sync historical records
# Default: true (recommended for production)
ATCR_BACKFILL_ENABLED=true

# ATProto relay endpoint for the backfill sync API
# Default: https://relay1.us-east.bsky.network
ATCR_RELAY_ENDPOINT=https://relay1.us-east.bsky.network

# Backfill interval
# Examples: 30m, 1h, 2h, 24h
# Default: 1h
ATCR_BACKFILL_INTERVAL=1h

# ==============================================================================
# Optional: Filesystem Storage (alternative to S3)
# ==============================================================================

# If using filesystem storage instead of S3:
# 1. Uncomment these lines
# 2. Comment out all S3 variables above
# 3. Set STORAGE_DRIVER=filesystem

# STORAGE_DRIVER=filesystem
# STORAGE_ROOT_DIR=/var/lib/atcr/hold

# ==============================================================================
# Advanced Configuration
# ==============================================================================

# Override service name (defaults to APPVIEW_DOMAIN)
# ATCR_SERVICE_NAME=atcr.io

# Debug listen address (optional - for pprof debugging)
# ATCR_DEBUG_ADDR=:5001

# ==============================================================================
# CHECKLIST
# ==============================================================================
#
# Before starting ATCR, ensure you have:
#
# ☐ Set APPVIEW_DOMAIN (e.g., atcr.io)
# ☐ Set HOLD_DOMAIN (e.g., hold01.atcr.io)
# ☐ Set HOLD_OWNER (your ATProto DID)
# ☐ Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# ☐ Set S3_BUCKET (created in UpCloud Object Storage)
# ☐ Set S3_ENDPOINT (bucket-specific UpCloud endpoint; not a custom domain)
# ☐ Configured DNS records:
#     - A record: atcr.io → server IP
#     - A record: hold01.atcr.io → server IP
#     - CNAME: blobs.atcr.io → [bucket].us-chi1.upcloudobjects.com
# ☐ Disabled Cloudflare proxy (gray cloud, not orange)
# ☐ Waited for DNS propagation (check with: dig atcr.io)
#
# After starting:
# ☐ Complete hold OAuth registration (run: /opt/atcr/get-hold-oauth.sh)
# ☐ Test registry: docker pull atcr.io/test/image
# ☐ Monitor logs: /opt/atcr/logs.sh
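The read/write matrix described by the `HOLD_PUBLIC` and `HOLD_ALLOW_ALL_CREW` comments above can be sketched as two small predicates. This only illustrates the documented rules; it is not the hold's actual authorization code, and `holdConfig`, `canPull`, and `canPush` are hypothetical names:

```go
package main

import "fmt"

// holdConfig mirrors the two env vars from the documented security model.
type holdConfig struct {
	Public       bool // HOLD_PUBLIC
	AllowAllCrew bool // HOLD_ALLOW_ALL_CREW
}

// canPull: public holds allow anonymous reads; otherwise auth is required.
func canPull(c holdConfig, authenticated bool) bool {
	return c.Public || authenticated
}

// canPush: writes always require authentication; beyond that, either the
// hold treats all authenticated users as crew, or the pusher must be the
// owner or an explicit crew member (verified via the owner's PDS).
func canPush(c holdConfig, authenticated, ownerOrCrew bool) bool {
	if !authenticated {
		return false
	}
	return c.AllowAllCrew || ownerOrCrew
}

func main() {
	private := holdConfig{} // HOLD_PUBLIC=false, HOLD_ALLOW_ALL_CREW=false
	fmt.Println(canPull(private, false))       // false: anonymous pull denied
	fmt.Println(canPush(private, true, false)) // false: authenticated but not crew
	fmt.Println(canPush(private, true, true))  // true: owner/crew may push
}
```

The three "use cases" in the template correspond to the four combinations of these two booleans, with `Public=true, AllowAllCrew=false` being the remaining (public-read, crew-write) variant.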
@@ -243,6 +243,26 @@ docker pull atcr.io/yourhandle/test:latest
docker logs -f atcr-appview
```

#### Enable debug logging

Toggle debug logging at runtime without restarting the container:

```bash
# Enable debug logging (auto-reverts after 30 minutes)
docker kill -s SIGUSR1 atcr-appview
docker kill -s SIGUSR1 atcr-hold

# Manually disable before timeout
docker kill -s SIGUSR1 atcr-appview
```

When toggled, you'll see:
```
level=INFO msg="Log level changed" from=INFO to=DEBUG trigger=SIGUSR1 auto_revert_in=30m0s
```

**Note:** Despite the command name, `docker kill -s SIGUSR1` does NOT stop the container. It sends a user-defined signal that the application handles to toggle debug mode.
#### Restart services

```bash
@@ -398,10 +418,10 @@ Presigned URLs should eliminate hold bandwidth. If seeing high usage:
docker logs atcr-hold | grep -i presigned
```

-**Check S3 driver:**
+**Check S3 configuration:**
```bash
docker exec atcr-hold env | grep STORAGE_DRIVER
# Should be: s3 (not filesystem)
+docker exec atcr-hold env | grep S3_BUCKET
+# Should show your S3 bucket name
```

**Verify direct S3 access:**
@@ -465,6 +485,6 @@ docker run --rm \

## Support

-- Documentation: https://tangled.org/@evan.jarrett.net/at-container-registry
-- Issues: https://tangled.org/@evan.jarrett.net/at-container-registry/issues
+- Documentation: https://tangled.org/evan.jarrett.net/at-container-registry
+- Issues: https://tangled.org/evan.jarrett.net/at-container-registry/issues
- Bluesky: @evan.jarrett.net
@@ -31,7 +31,7 @@ services:
    networks:
      - atcr-network
    healthcheck:
-     test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:2019/metrics"]
+     test: ["CMD", "caddy", "validate", "--config", "/etc/caddy/Caddyfile"]
      interval: 30s
      timeout: 10s
      retries: 3
@@ -44,40 +44,22 @@ services:
    image: atcr-appview:latest
    container_name: atcr-appview
    restart: unless-stopped
    command: ["serve", "--config", "/config.yaml"]
    # Base config: config-appview.example.yaml
    # Env vars below override config file values for this deployment
    environment:
      # Server configuration
      ATCR_HTTP_ADDR: :5000
      ATCR_BASE_URL: https://${APPVIEW_DOMAIN:-atcr.io}
      ATCR_SERVICE_NAME: ${APPVIEW_DOMAIN:-atcr.io}

      # Storage configuration
      ATCR_DEFAULT_HOLD: https://${HOLD_DOMAIN:-hold01.atcr.io}

      # Authentication
      ATCR_AUTH_KEY_PATH: /var/lib/atcr/auth/private-key.pem
      ATCR_AUTH_CERT_PATH: /var/lib/atcr/auth/private-key.crt
      ATCR_TOKEN_EXPIRATION: ${ATCR_TOKEN_EXPIRATION:-300}

      # UI configuration
      ATCR_UI_ENABLED: ${ATCR_UI_ENABLED:-true}
      ATCR_UI_DATABASE_PATH: /var/lib/atcr/ui.db

      # Logging
      ATCR_DEFAULT_HOLD_DID: ${ATCR_DEFAULT_HOLD_DID:-did:web:${HOLD_DOMAIN:-hold01.atcr.io}}
      ATCR_LOG_LEVEL: ${ATCR_LOG_LEVEL:-info}
      ATCR_LOG_FORMATTER: ${ATCR_LOG_FORMATTER:-text}

      # Jetstream configuration
      JETSTREAM_URL: ${JETSTREAM_URL:-wss://jetstream2.us-west.bsky.network/subscribe}
      ATCR_BACKFILL_ENABLED: ${ATCR_BACKFILL_ENABLED:-true}
      ATCR_RELAY_ENDPOINT: ${ATCR_RELAY_ENDPOINT:-https://relay1.us-east.bsky.network}
      ATCR_BACKFILL_INTERVAL: ${ATCR_BACKFILL_INTERVAL:-1h}
    volumes:
      - ./config-appview.yaml:/config.yaml:ro
      # Persistent data: auth keys, UI database, OAuth tokens, Jetstream cache
      - atcr-appview-data:/var/lib/atcr
    networks:
      - atcr-network
    healthcheck:
-     test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/v2/"]
+     test: ["CMD", "/healthcheck", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
@@ -90,37 +72,29 @@ services:
    image: atcr-hold:latest
    container_name: atcr-hold
    restart: unless-stopped
    command: ["serve", "--config", "/config.yaml"]
    # Base config: config-hold.example.yaml
    # Env vars below override config file values for this deployment
    environment:
      # Hold service configuration
-     HOLD_PUBLIC_URL: https://${HOLD_DOMAIN:-hold01.atcr.io}
      HOLD_SERVER_ADDR: :8080
      HOLD_ALLOW_ALL_CREW: ${HOLD_ALLOW_ALL_CREW:-false}
      HOLD_PUBLIC: ${HOLD_PUBLIC:-false}
-     HOLD_OWNER: ${HOLD_OWNER}

      # Storage driver
      STORAGE_DRIVER: ${STORAGE_DRIVER:-s3}

-     # S3/UpCloud Object Storage configuration
-     AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
-     AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
-     AWS_REGION: ${AWS_REGION:-us-chi1}
+     HOLD_PUBLIC_URL: ${HOLD_PUBLIC_URL:-https://${HOLD_DOMAIN:-hold01.atcr.io}}
+     HOLD_OWNER: ${HOLD_OWNER:-}
+     HOLD_BLUESKY_POSTS_ENABLED: ${HOLD_BLUESKY_POSTS_ENABLED:-true}
+     # S3/UpCloud Object Storage (REQUIRED)
+     AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID:-}
+     AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY:-}
+     AWS_REGION: ${AWS_REGION:-us-east-1}
      S3_BUCKET: ${S3_BUCKET:-atcr-blobs}
-     S3_ENDPOINT: ${S3_ENDPOINT}
-     S3_REGION_ENDPOINT: ${S3_REGION_ENDPOINT}

-     # Optional: Filesystem storage (comment out S3 vars above)
-     # STORAGE_DRIVER: filesystem
-     # STORAGE_ROOT_DIR: /var/lib/atcr/hold
+     S3_ENDPOINT: ${S3_ENDPOINT:-}
+     HOLD_LOG_LEVEL: ${ATCR_LOG_LEVEL:-info}
    volumes:
      # Only needed for filesystem driver
      # - atcr-hold-data:/var/lib/atcr/hold
      # OAuth token storage for hold registration
      - atcr-hold-tokens:/root/.atcr
      - ./config-hold.yaml:/config.yaml:ro
      # PDS data (carstore SQLite + signing keys)
      - atcr-hold-data:/var/lib/atcr-hold
      - ./quotas.yaml:/quotas.yaml:ro
    networks:
      - atcr-network
    healthcheck:
-     test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
+     test: ["CMD", "/healthcheck", "http://localhost:8080/xrpc/_health"]
      interval: 30s
      timeout: 10s
      retries: 3
@@ -131,7 +105,7 @@ networks:
    driver: bridge
    ipam:
      config:
-       - subnet: 172.28.0.0/24
+       - subnet: 172.29.0.0/24

volumes:
  caddy_data:
@@ -142,8 +116,6 @@ volumes:
    driver: local
  atcr-hold-data:
    driver: local
  atcr-hold-tokens:
    driver: local

configs:
  caddyfile:
@@ -155,8 +127,6 @@ configs:
        # Preserve original host header
        header_up Host {host}
-       header_up X-Real-IP {remote_host}
-       header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
      }

      # Enable compression
@@ -178,8 +148,6 @@ configs:
        # Preserve original host header
        header_up Host {host}
-       header_up X-Real-IP {remote_host}
-       header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
      }

      # Enable compression

@@ -1,280 +0,0 @@
#!/bin/bash
#
# ATCR UpCloud Initialization Script for Rocky Linux
#
# This script sets up ATCR on a fresh Rocky Linux instance.
# Paste this into UpCloud's "User data" field when creating a server.
#
# What it does:
#   - Updates system packages
#   - Creates 2GB swap file (for 1GB RAM instances)
#   - Installs Docker and Docker Compose
#   - Creates directory structure
#   - Clones ATCR repository
#   - Creates systemd service for auto-start
#   - Builds and starts containers
#
# Post-deployment:
#   1. Edit /opt/atcr/.env with your configuration
#   2. Run: systemctl restart atcr
#   3. Check logs: docker logs atcr-hold (for OAuth URL)
#   4. Complete hold registration via OAuth

set -euo pipefail

# Configuration
ATCR_DIR="/opt/atcr"
ATCR_REPO="https://tangled.org/@evan.jarrett.net/at-container-registry" # UPDATE THIS
ATCR_BRANCH="main"

# Simple logging without colors (for cloud-init log compatibility)
log_info() {
    echo "[INFO] $1"
}

log_warn() {
    echo "[WARN] $1"
}

log_error() {
    echo "[ERROR] $1"
}

# Function to check if command exists
command_exists() {
    command -v "$1" >/dev/null 2>&1
}

log_info "Starting ATCR deployment on Rocky Linux..."

# Update system packages
log_info "Updating system packages..."
dnf update -y

# Install required packages
log_info "Installing prerequisites..."
dnf install -y \
    git \
    wget \
    curl \
    nano \
    vim

log_info "Required ports: HTTP (80), HTTPS (443), SSH (22)"

# Create swap file for instances with limited RAM
if [ ! -f /swapfile ]; then
    log_info "Creating 2GB swap file (allows builds on 1GB RAM instances)..."
    dd if=/dev/zero of=/swapfile bs=1M count=2048 status=progress
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile

    # Make swap permanent
    echo '/swapfile none swap sw 0 0' >> /etc/fstab

    log_info "Swap file created and enabled"
    free -h
else
    log_info "Swap file already exists"
fi

# Install Docker
if ! command_exists docker; then
    log_info "Installing Docker..."

    # Add Docker repository
    dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    # Install Docker
    dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

    # Start and enable Docker
    systemctl enable --now docker

    log_info "Docker installed successfully"
else
    log_info "Docker already installed"
fi

# Verify Docker Compose
if ! docker compose version >/dev/null 2>&1; then
    log_error "Docker Compose plugin not found. Please install manually."
    exit 1
fi

log_info "Docker Compose version: $(docker compose version)"

# Create ATCR directory
log_info "Creating ATCR directory: $ATCR_DIR"
mkdir -p "$ATCR_DIR"
cd "$ATCR_DIR"

# Clone repository or create minimal structure
if [ -n "$ATCR_REPO" ] && [ "$ATCR_REPO" != "https://tangled.org/@evan.jarrett.net/at-container-registry" ]; then
    log_info "Cloning ATCR repository..."
    git clone -b "$ATCR_BRANCH" "$ATCR_REPO" .
else
    log_warn "ATCR_REPO not configured. You'll need to manually copy files to $ATCR_DIR"
    log_warn "Required files:"
    log_warn "  - deploy/docker-compose.prod.yml"
    log_warn "  - deploy/.env.prod.template"
    log_warn "  - Dockerfile.appview"
    log_warn "  - Dockerfile.hold"
fi

# Create .env file from template if it doesn't exist
if [ -f "deploy/.env.prod.template" ] && [ ! -f "$ATCR_DIR/.env" ]; then
    log_info "Creating .env file from template..."
    cp deploy/.env.prod.template "$ATCR_DIR/.env"
    log_warn "IMPORTANT: Edit $ATCR_DIR/.env with your configuration!"
fi

# Create systemd service
log_info "Creating systemd service..."
cat > /etc/systemd/system/atcr.service <<'EOF'
[Unit]
Description=ATCR Container Registry
Requires=docker.service
After=docker.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/atcr
EnvironmentFile=/opt/atcr/.env

# Start containers
ExecStart=/usr/bin/docker compose -f /opt/atcr/deploy/docker-compose.prod.yml up -d

# Stop containers
ExecStop=/usr/bin/docker compose -f /opt/atcr/deploy/docker-compose.prod.yml down

# Restart containers
ExecReload=/usr/bin/docker compose -f /opt/atcr/deploy/docker-compose.prod.yml restart

# Always restart on failure
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd
log_info "Reloading systemd daemon..."
systemctl daemon-reload

# Enable service (but don't start yet - user needs to configure .env)
systemctl enable atcr.service

log_info "Systemd service created and enabled"

# Create helper scripts
log_info "Creating helper scripts..."

# Script to rebuild and restart
cat > "$ATCR_DIR/rebuild.sh" <<'EOF'
#!/bin/bash
set -e
cd /opt/atcr
docker compose -f deploy/docker-compose.prod.yml build
docker compose -f deploy/docker-compose.prod.yml up -d
docker compose -f deploy/docker-compose.prod.yml logs -f
EOF
chmod +x "$ATCR_DIR/rebuild.sh"

# Script to view logs
cat > "$ATCR_DIR/logs.sh" <<'EOF'
#!/bin/bash
cd /opt/atcr
docker compose -f deploy/docker-compose.prod.yml logs -f "$@"
EOF
chmod +x "$ATCR_DIR/logs.sh"

# Script to get hold OAuth URL
cat > "$ATCR_DIR/get-hold-oauth.sh" <<'EOF'
#!/bin/bash
echo "Checking atcr-hold logs for OAuth registration URL..."
docker logs atcr-hold 2>&1 | grep -i "oauth\|authorization\|visit\|http" | tail -20
EOF
chmod +x "$ATCR_DIR/get-hold-oauth.sh"

log_info "Helper scripts created in $ATCR_DIR"

# Print completion message
cat <<'EOF'

================================================================================
ATCR Installation Complete!
================================================================================

NEXT STEPS:

1. Configure environment variables:
     nano /opt/atcr/.env

   Required settings:
     - AWS_ACCESS_KEY_ID (UpCloud S3 credentials)
     - AWS_SECRET_ACCESS_KEY

   Pre-configured (verify these are correct):
     - APPVIEW_DOMAIN=atcr.io
     - HOLD_DOMAIN=hold01.atcr.io
     - HOLD_OWNER=did:plc:pddp4xt5lgnv2qsegbzzs4xg
     - S3_BUCKET=atcr
     - S3_ENDPOINT=https://blobs.atcr.io

2. Configure UpCloud Cloud Firewall (in control panel):
     Allow: TCP 22 (SSH)
     Allow: TCP 80 (HTTP)
     Allow: TCP 443 (HTTPS)
     Drop: Everything else

3. Configure DNS (Cloudflare - DNS-only mode):
EOF

echo "     A     atcr.io        → $(curl -s ifconfig.me || echo '[server-ip]') (gray cloud)"
echo "     A     hold01.atcr.io → $(curl -s ifconfig.me || echo '[server-ip]') (gray cloud)"
echo "     CNAME blobs.atcr.io  → atcr.us-chi1.upcloudobjects.com (gray cloud)"

cat <<'EOF'

4. Start ATCR:
     systemctl start atcr

5. Complete Hold OAuth registration:
     /opt/atcr/get-hold-oauth.sh

   Visit the OAuth URL in your browser to authorize the hold service.

6. Check status:
     systemctl status atcr
     docker ps
     /opt/atcr/logs.sh

Helper Scripts:
  /opt/atcr/rebuild.sh         - Rebuild and restart containers
  /opt/atcr/logs.sh [service]  - View logs (e.g., logs.sh atcr-hold)
  /opt/atcr/get-hold-oauth.sh  - Get hold OAuth URL

Service Management:
  systemctl start atcr   - Start ATCR
  systemctl stop atcr    - Stop ATCR
  systemctl restart atcr - Restart ATCR
  systemctl status atcr  - Check status

Documentation:
  https://tangled.org/@evan.jarrett.net/at-container-registry

IMPORTANT:
  - Edit /opt/atcr/.env with S3 credentials before starting!
  - Configure UpCloud cloud firewall (see step 2)
  - DNS must be configured and propagated
  - Cloudflare proxy must be DISABLED (gray cloud)
  - Complete hold OAuth registration before first push

EOF

log_info "Installation complete. Follow the next steps above."
509 deploy/upcloud/cloudinit.go (new file)
@@ -0,0 +1,509 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
_ "embed"
|
||||
"fmt"
|
||||
"strings"
|
||||
"text/template"
|
||||
|
||||
"go.yaml.in/yaml/v3"
|
||||
)
|
||||
|
||||
//go:embed systemd/appview.service.tmpl
|
||||
var appviewServiceTmpl string
|
||||
|
||||
//go:embed systemd/hold.service.tmpl
|
||||
var holdServiceTmpl string
|
||||
|
||||
//go:embed systemd/scanner.service.tmpl
|
||||
var scannerServiceTmpl string
|
||||
|
||||
//go:embed configs/appview.yaml.tmpl
|
||||
var appviewConfigTmpl string
|
||||
|
||||
//go:embed configs/hold.yaml.tmpl
|
||||
var holdConfigTmpl string
|
||||
|
||||
//go:embed configs/scanner.yaml.tmpl
|
||||
var scannerConfigTmpl string
|
||||
|
||||
//go:embed systemd/labeler.service.tmpl
|
||||
var labelerServiceTmpl string
|
||||
|
||||
//go:embed configs/labeler.yaml.tmpl
|
||||
var labelerConfigTmpl string
|
||||
|
||||
//go:embed configs/cloudinit.sh.tmpl
|
||||
var cloudInitTmpl string
|
||||
|
||||
// ConfigValues holds values injected into config YAML templates.
|
||||
// Only truly dynamic/computed values belong here — deployment-specific
|
||||
// values like client_name, owner_did, etc. are literal in the templates.
|
||||
type ConfigValues struct {
|
||||
// S3 / Object Storage
|
||||
S3Endpoint string
|
||||
S3Region string
|
||||
S3Bucket string
|
||||
S3AccessKey string
|
||||
S3SecretKey string
|
||||
|
||||
// Infrastructure (computed from zone + config)
|
||||
Zone string // e.g. "us-chi1"
|
||||
HoldDomain string // e.g. "us-chi1.cove.seamark.dev"
|
||||
HoldDid string // e.g. "did:web:us-chi1.cove.seamark.dev"
|
||||
BasePath string // e.g. "/var/lib/seamark"
|
||||
|
||||
// Scanner (auto-generated shared secret)
|
||||
ScannerSecret string // hex-encoded 32-byte secret; empty disables scanning
|
||||
}
|
||||
|
||||
// renderConfig executes a Go template with the given values.
|
||||
func renderConfig(tmplStr string, vals *ConfigValues) (string, error) {
|
||||
t, err := template.New("config").Parse(tmplStr)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("parse config template: %w", err)
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
if err := t.Execute(&buf, vals); err != nil {
|
||||
return "", fmt.Errorf("render config template: %w", err)
|
||||
}
|
||||
return buf.String(), nil
|
||||
}
|
||||
|
||||
// serviceUnitParams holds values for rendering systemd service unit templates.
|
||||
type serviceUnitParams struct {
|
||||
DisplayName string // e.g. "Seamark"
|
||||
User string // e.g. "seamark"
|
||||
BinaryPath string // e.g. "/opt/seamark/bin/seamark-appview"
|
||||
ConfigPath string // e.g. "/etc/seamark/appview.yaml"
|
||||
DataDir string // e.g. "/var/lib/seamark"
|
||||
ServiceName string // e.g. "seamark-appview"
|
||||
}
|
||||
|
||||
func renderServiceUnit(tmplStr string, p serviceUnitParams) (string, error) {
|
||||
t, err := template.New("service").Parse(tmplStr)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("parse service template: %w", err)
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
if err := t.Execute(&buf, p); err != nil {
|
||||
return "", fmt.Errorf("render service template: %w", err)
|
||||
}
|
||||
return buf.String(), nil
|
||||
}
|
||||
|
||||
// scannerServiceUnitParams holds values for rendering the scanner systemd unit.
|
||||
// Extends the standard fields with HoldServiceName for the After= dependency.
|
||||
type scannerServiceUnitParams struct {
|
||||
DisplayName string // e.g. "Seamark"
|
||||
User string // e.g. "seamark"
|
||||
BinaryPath string // e.g. "/opt/seamark/bin/seamark-scanner"
|
||||
ConfigPath string // e.g. "/etc/seamark/scanner.yaml"
|
||||
DataDir string // e.g. "/var/lib/seamark"
|
||||
ServiceName string // e.g. "seamark-scanner"
|
||||
HoldServiceName string // e.g. "seamark-hold" (After= dependency)
|
||||
}
|
||||
|
||||
func renderScannerServiceUnit(p scannerServiceUnitParams) (string, error) {
|
||||
t, err := template.New("scanner-service").Parse(scannerServiceTmpl)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("parse scanner service template: %w", err)
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
if err := t.Execute(&buf, p); err != nil {
|
||||
return "", fmt.Errorf("render scanner service template: %w", err)
|
||||
}
|
||||
return buf.String(), nil
|
||||
}
|
||||
|
||||
// labelerServiceUnitParams holds values for rendering the labeler systemd unit.
|
||||
type labelerServiceUnitParams struct {
|
||||
DisplayName string // e.g. "Seamark"
|
||||
User string // e.g. "seamark"
|
||||
BinaryPath string // e.g. "/opt/seamark/bin/seamark-labeler"
|
||||
ConfigPath string // e.g. "/etc/seamark/labeler.yaml"
|
||||
DataDir string // e.g. "/var/lib/seamark"
|
||||
ServiceName string // e.g. "seamark-labeler"
|
||||
AppviewServiceName string // e.g. "seamark-appview" (After= dependency)
|
||||
}
|
||||
|
||||
func renderLabelerServiceUnit(p labelerServiceUnitParams) (string, error) {
|
||||
t, err := template.New("labeler-service").Parse(labelerServiceTmpl)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("parse labeler service template: %w", err)
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
if err := t.Execute(&buf, p); err != nil {
|
||||
return "", fmt.Errorf("render labeler service template: %w", err)
|
||||
}
|
||||
return buf.String(), nil
|
||||
}
|
||||
|
||||
// generateAppviewCloudInit generates the cloud-init user-data script for the appview server.
|
||||
// When withLabeler is true, a second phase is appended that creates labeler data
|
||||
// directories and installs a labeler systemd service. Binaries are deployed separately via SCP.
func generateAppviewCloudInit(cfg *InfraConfig, vals *ConfigValues, withLabeler bool) (string, error) {
	naming := cfg.Naming()

	configYAML, err := renderConfig(appviewConfigTmpl, vals)
	if err != nil {
		return "", fmt.Errorf("appview config: %w", err)
	}

	serviceUnit, err := renderServiceUnit(appviewServiceTmpl, serviceUnitParams{
		DisplayName: naming.DisplayName(),
		User:        naming.SystemUser(),
		BinaryPath:  naming.InstallDir() + "/bin/" + naming.Appview(),
		ConfigPath:  naming.AppviewConfigPath(),
		DataDir:     naming.BasePath(),
		ServiceName: naming.Appview(),
	})
	if err != nil {
		return "", fmt.Errorf("appview service unit: %w", err)
	}

	script, err := generateCloudInit(cloudInitParams{
		BinaryName:  naming.Appview(),
		ServiceUnit: serviceUnit,
		ConfigYAML:  configYAML,
		ConfigPath:  naming.AppviewConfigPath(),
		ServiceName: naming.Appview(),
		DataDir:     naming.BasePath(),
		InstallDir:  naming.InstallDir(),
		SystemUser:  naming.SystemUser(),
		ConfigDir:   naming.ConfigDir(),
		LogFile:     naming.LogFile(),
		DisplayName: naming.DisplayName(),
	})
	if err != nil {
		return "", err
	}

	if !withLabeler {
		return script, nil
	}

	// Render labeler config YAML
	labelerConfigYAML, err := renderConfig(labelerConfigTmpl, vals)
	if err != nil {
		return "", fmt.Errorf("labeler config: %w", err)
	}

	// Append labeler setup phase
	labelerUnit, err := renderLabelerServiceUnit(labelerServiceUnitParams{
		DisplayName:        naming.DisplayName(),
		User:               naming.SystemUser(),
		BinaryPath:         naming.InstallDir() + "/bin/" + naming.Labeler(),
		ConfigPath:         naming.LabelerConfigPath(),
		DataDir:            naming.BasePath(),
		ServiceName:        naming.Labeler(),
		AppviewServiceName: naming.Appview(),
	})
	if err != nil {
		return "", fmt.Errorf("labeler service unit: %w", err)
	}

	// Escape single quotes for heredoc embedding
	labelerUnit = strings.ReplaceAll(labelerUnit, "'", "'\\''")
	labelerConfigYAML = strings.ReplaceAll(labelerConfigYAML, "'", "'\\''")

	labelerPhase := fmt.Sprintf(`
# === Labeler Setup ===

# Labeler data dirs
mkdir -p %s
chown -R %s:%s %s

# Labeler config
cat > %s << 'CFGEOF'
%s
CFGEOF

# Labeler systemd service
cat > /etc/systemd/system/%s.service << 'SVCEOF'
%s
SVCEOF
systemctl daemon-reload
systemctl enable %s

echo "=== Labeler setup complete ==="
`,
		naming.LabelerDataDir(),
		naming.SystemUser(), naming.SystemUser(), naming.LabelerDataDir(),
		naming.LabelerConfigPath(),
		labelerConfigYAML,
		naming.Labeler(),
		labelerUnit,
		naming.Labeler(),
	)

	return script + labelerPhase, nil
}

// generateHoldCloudInit generates the cloud-init user-data script for the hold server.
// When withScanner is true, a second phase is appended that creates scanner data
// directories and installs a scanner systemd service. Binaries are deployed separately via SCP.
func generateHoldCloudInit(cfg *InfraConfig, vals *ConfigValues, withScanner bool) (string, error) {
	naming := cfg.Naming()

	configYAML, err := renderConfig(holdConfigTmpl, vals)
	if err != nil {
		return "", fmt.Errorf("hold config: %w", err)
	}

	serviceUnit, err := renderServiceUnit(holdServiceTmpl, serviceUnitParams{
		DisplayName: naming.DisplayName(),
		User:        naming.SystemUser(),
		BinaryPath:  naming.InstallDir() + "/bin/" + naming.Hold(),
		ConfigPath:  naming.HoldConfigPath(),
		DataDir:     naming.BasePath(),
		ServiceName: naming.Hold(),
	})
	if err != nil {
		return "", fmt.Errorf("hold service unit: %w", err)
	}

	script, err := generateCloudInit(cloudInitParams{
		BinaryName:  naming.Hold(),
		ServiceUnit: serviceUnit,
		ConfigYAML:  configYAML,
		ConfigPath:  naming.HoldConfigPath(),
		ServiceName: naming.Hold(),
		DataDir:     naming.BasePath(),
		InstallDir:  naming.InstallDir(),
		SystemUser:  naming.SystemUser(),
		ConfigDir:   naming.ConfigDir(),
		LogFile:     naming.LogFile(),
		DisplayName: naming.DisplayName(),
	})
	if err != nil {
		return "", err
	}

	if !withScanner {
		return script, nil
	}

	// Render scanner config YAML
	scannerConfigYAML, err := renderConfig(scannerConfigTmpl, vals)
	if err != nil {
		return "", fmt.Errorf("scanner config: %w", err)
	}

	// Append scanner setup phase (no build — binary deployed via SCP)
	scannerUnit, err := renderScannerServiceUnit(scannerServiceUnitParams{
		DisplayName:     naming.DisplayName(),
		User:            naming.SystemUser(),
		BinaryPath:      naming.InstallDir() + "/bin/" + naming.Scanner(),
		ConfigPath:      naming.ScannerConfigPath(),
		DataDir:         naming.BasePath(),
		ServiceName:     naming.Scanner(),
		HoldServiceName: naming.Hold(),
	})
	if err != nil {
		return "", fmt.Errorf("scanner service unit: %w", err)
	}

	// Escape single quotes for heredoc embedding
	scannerUnit = strings.ReplaceAll(scannerUnit, "'", "'\\''")
	scannerConfigYAML = strings.ReplaceAll(scannerConfigYAML, "'", "'\\''")

	scannerPhase := fmt.Sprintf(`
# === Scanner Setup ===

# Scanner data dirs
mkdir -p %s/vulndb %s/tmp
chown -R %s:%s %s

# Scanner config
cat > %s << 'CFGEOF'
%s
CFGEOF

# Scanner systemd service
cat > /etc/systemd/system/%s.service << 'SVCEOF'
%s
SVCEOF
systemctl daemon-reload
systemctl enable %s

echo "=== Scanner setup complete ==="
`,
		naming.ScannerDataDir(), naming.ScannerDataDir(),
		naming.SystemUser(), naming.SystemUser(), naming.ScannerDataDir(),
		naming.ScannerConfigPath(),
		scannerConfigYAML,
		naming.Scanner(),
		scannerUnit,
		naming.Scanner(),
	)

	return script + scannerPhase, nil
}

type cloudInitParams struct {
	BinaryName  string
	ServiceUnit string
	ConfigYAML  string
	ConfigPath  string
	ServiceName string
	DataDir     string
	InstallDir  string
	SystemUser  string
	ConfigDir   string
	LogFile     string
	DisplayName string
}

func generateCloudInit(p cloudInitParams) (string, error) {
	// Escape single quotes in embedded content for heredoc safety
	p.ServiceUnit = strings.ReplaceAll(p.ServiceUnit, "'", "'\\''")
	p.ConfigYAML = strings.ReplaceAll(p.ConfigYAML, "'", "'\\''")

	t, err := template.New("cloudinit").Parse(cloudInitTmpl)
	if err != nil {
		return "", fmt.Errorf("parse cloudinit template: %w", err)
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, p); err != nil {
		return "", fmt.Errorf("render cloudinit template: %w", err)
	}
	return buf.String(), nil
}
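The `'` to `'\''` replacement used throughout this file is the standard shell idiom for embedding arbitrary text in a single-quoted context: close the quote, emit an escaped quote, reopen. A minimal stdlib sketch (the `shellSingleQuote` helper is illustrative, not part of this codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// shellSingleQuote wraps s in single quotes for safe embedding in a shell
// command, turning each embedded ' into the close-escape-reopen sequence '\''.
func shellSingleQuote(s string) string {
	return "'" + strings.ReplaceAll(s, "'", `'\''`) + "'"
}

func main() {
	// Description=it's a service  ->  'Description=it'\''s a service'
	fmt.Println(shellSingleQuote("Description=it's a service"))
}
```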

// syncServiceUnit compares a rendered systemd service unit against what's on
// the server. If they differ, it writes the new unit file. Returns true if the
// unit was updated (caller should daemon-reload before restart).
func syncServiceUnit(name, ip, serviceName, renderedUnit string) (bool, error) {
	unitPath := "/etc/systemd/system/" + serviceName + ".service"

	remote, err := runSSH(ip, fmt.Sprintf("cat %s 2>/dev/null || echo '__MISSING__'", unitPath), false)
	if err != nil {
		fmt.Printf("  service unit sync: could not reach %s (%v)\n", name, err)
		return false, nil
	}
	remote = strings.TrimSpace(remote)
	rendered := strings.TrimSpace(renderedUnit)

	if remote == "__MISSING__" {
		fmt.Printf("  service unit: %s not found (cloud-init will handle it)\n", name)
		return false, nil
	}

	if remote == rendered {
		fmt.Printf("  service unit: %s up to date\n", name)
		return false, nil
	}

	// Write the updated unit file
	script := fmt.Sprintf("cat > %s << 'SVCEOF'\n%s\nSVCEOF", unitPath, rendered)
	if _, err := runSSH(ip, script, false); err != nil {
		return false, fmt.Errorf("write service unit: %w", err)
	}
	fmt.Printf("  service unit: %s updated\n", name)
	return true, nil
}
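The function's control flow reduces to a small compare-before-write decision. A stdlib-only sketch with the SSH round-trip stubbed out as a plain function (`syncUnit` and `fetch` are illustrative names, not part of this codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// syncUnit sketches the compare-before-write flow: fetch the remote unit,
// trim both sides, and only report an update when the content actually
// differs. The fetch function stands in for the SSH round-trip.
func syncUnit(fetch func() string, rendered string) bool {
	remote := strings.TrimSpace(fetch())
	rendered = strings.TrimSpace(rendered)
	if remote == "__MISSING__" || remote == rendered {
		return false // nothing to do; cloud-init or the current file wins
	}
	return true // caller would write the unit and daemon-reload
}

func main() {
	unit := "[Service]\nExecStart=/opt/app/bin/app\n"
	fmt.Println(syncUnit(func() string { return unit }, unit))          // identical content
	fmt.Println(syncUnit(func() string { return "__MISSING__" }, unit)) // not installed yet
	fmt.Println(syncUnit(func() string { return "stale" }, unit))       // drifted, needs rewrite
}
```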

// syncConfigKeys fetches the existing config from a server and merges in any
// missing keys from the rendered template. Existing values are never overwritten.
func syncConfigKeys(name, ip, configPath, templateYAML string) error {
	remote, err := runSSH(ip, fmt.Sprintf("cat %s 2>/dev/null || echo '__MISSING__'", configPath), false)
	if err != nil {
		fmt.Printf("  config sync: could not reach %s (%v)\n", name, err)
		return nil
	}
	remote = strings.TrimSpace(remote)

	if remote == "__MISSING__" {
		fmt.Printf("  config sync: %s not yet created (cloud-init will handle it)\n", name)
		return nil
	}

	// Parse both into yaml.Node trees
	var templateDoc yaml.Node
	if err := yaml.Unmarshal([]byte(templateYAML), &templateDoc); err != nil {
		return fmt.Errorf("parse template yaml: %w", err)
	}
	var existingDoc yaml.Node
	if err := yaml.Unmarshal([]byte(remote), &existingDoc); err != nil {
		return fmt.Errorf("parse remote yaml: %w", err)
	}

	// Unwrap document nodes to get the root mapping
	templateRoot := unwrapDocNode(&templateDoc)
	existingRoot := unwrapDocNode(&existingDoc)
	if templateRoot == nil || existingRoot == nil {
		fmt.Printf("  config sync: %s skipped (unexpected YAML structure)\n", name)
		return nil
	}

	added := mergeYAMLNodes(templateRoot, existingRoot)
	if !added {
		fmt.Printf("  config sync: %s up to date\n", name)
		return nil
	}

	// Marshal the modified tree back
	merged, err := yaml.Marshal(&existingDoc)
	if err != nil {
		return fmt.Errorf("marshal merged yaml: %w", err)
	}

	// Write back to server
	script := fmt.Sprintf("cat > %s << 'CFGEOF'\n%sCFGEOF", configPath, string(merged))
	if _, err := runSSH(ip, script, false); err != nil {
		return fmt.Errorf("write merged config: %w", err)
	}
	fmt.Printf("  config sync: %s updated with new keys\n", name)
	return nil
}

// unwrapDocNode returns the root mapping node, unwrapping a DocumentNode wrapper if present.
func unwrapDocNode(n *yaml.Node) *yaml.Node {
	if n.Kind == yaml.DocumentNode && len(n.Content) > 0 {
		return n.Content[0]
	}
	if n.Kind == yaml.MappingNode {
		return n
	}
	return nil
}

// mergeYAMLNodes recursively adds keys from base into existing that are not
// already present. Existing values are never overwritten. Returns true if any
// new keys were added.
func mergeYAMLNodes(base, existing *yaml.Node) bool {
	if base.Kind != yaml.MappingNode || existing.Kind != yaml.MappingNode {
		return false
	}

	added := false
	for i := 0; i+1 < len(base.Content); i += 2 {
		baseKey := base.Content[i]
		baseVal := base.Content[i+1]

		// Look for this key in existing
		found := false
		for j := 0; j+1 < len(existing.Content); j += 2 {
			if existing.Content[j].Value == baseKey.Value {
				found = true
				// If both are mappings, recurse to merge sub-keys
				if baseVal.Kind == yaml.MappingNode && existing.Content[j+1].Kind == yaml.MappingNode {
					if mergeYAMLNodes(baseVal, existing.Content[j+1]) {
						added = true
					}
				}
				break
			}
		}

		if !found {
			// Append the missing key+value pair
			existing.Content = append(existing.Content, baseKey, baseVal)
			added = true
		}
	}

	return added
}
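The same never-overwrite merge rule can be sketched on plain maps; the real code walks yaml.Node trees instead, which lets it preserve the existing file's key order and comments when marshalling back. `mergeMissing` below is an illustrative stdlib-only analogue, not part of this codebase:

```go
package main

import "fmt"

// mergeMissing copies keys present only in base into existing; keys already
// in existing keep their values; nested maps are merged recursively.
// Returns true if anything was added.
func mergeMissing(base, existing map[string]any) bool {
	added := false
	for k, bv := range base {
		ev, ok := existing[k]
		if !ok {
			existing[k] = bv
			added = true
			continue
		}
		bm, bIsMap := bv.(map[string]any)
		em, eIsMap := ev.(map[string]any)
		if bIsMap && eIsMap && mergeMissing(bm, em) {
			added = true
		}
	}
	return added
}

func main() {
	tmpl := map[string]any{"log_level": "info", "server": map[string]any{"addr": ":5000", "test_mode": false}}
	live := map[string]any{"server": map[string]any{"addr": ":9999"}}
	// Adds log_level and server.test_mode without touching server.addr.
	fmt.Println(mergeMissing(tmpl, live))
	fmt.Println(live["log_level"], live["server"].(map[string]any)["addr"])
}
```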
143
deploy/upcloud/config.go
Normal file
@@ -0,0 +1,143 @@
package main

import (
	"context"
	"fmt"
	"os"
	"strings"
	"time"

	"github.com/UpCloudLtd/upcloud-go-api/v8/upcloud/client"
	"github.com/UpCloudLtd/upcloud-go-api/v8/upcloud/service"
	"go.yaml.in/yaml/v3"
)

const (
	repoURL            = "https://tangled.org/evan.jarrett.net/at-container-registry"
	repoBranch         = "main"
	privateNetworkCIDR = "10.0.1.0/24"
)

// InfraConfig holds infrastructure configuration.
type InfraConfig struct {
	Zone         string
	Plan         string
	SSHPublicKey string
	S3SecretKey  string

	// Infrastructure naming — derived from configs/appview.yaml.tmpl.
	// Edit that template to rebrand.
	ClientName      string
	BaseDomain      string
	RegistryDomains []string
	RepoURL         string
	RepoBranch      string
}

// Naming returns a Naming helper derived from ClientName.
func (c *InfraConfig) Naming() Naming {
	return Naming{ClientName: c.ClientName}
}

func loadConfig(zone, plan, sshKeyPath, s3Secret string) (*InfraConfig, error) {
	sshKey, err := readSSHPublicKey(sshKeyPath)
	if err != nil {
		return nil, err
	}

	clientName, baseDomain, registryDomains, err := extractFromAppviewTemplate()
	if err != nil {
		return nil, fmt.Errorf("extract config from template: %w", err)
	}

	return &InfraConfig{
		Zone:            zone,
		Plan:            plan,
		SSHPublicKey:    sshKey,
		S3SecretKey:     s3Secret,
		ClientName:      clientName,
		BaseDomain:      baseDomain,
		RegistryDomains: registryDomains,
		RepoURL:         repoURL,
		RepoBranch:      repoBranch,
	}, nil
}

// extractFromAppviewTemplate renders the appview config template with
// zero-value ConfigValues and parses the resulting YAML to extract
// deployment-specific values. The template is the single source of truth.
func extractFromAppviewTemplate() (clientName, baseDomain string, registryDomains []string, err error) {
	rendered, err := renderConfig(appviewConfigTmpl, &ConfigValues{})
	if err != nil {
		return "", "", nil, fmt.Errorf("render appview template: %w", err)
	}

	var cfg struct {
		Server struct {
			BaseURL         string   `yaml:"base_url"`
			ClientName      string   `yaml:"client_name"`
			RegistryDomains []string `yaml:"registry_domains"`
		} `yaml:"server"`
	}
	if err := yaml.Unmarshal([]byte(rendered), &cfg); err != nil {
		return "", "", nil, fmt.Errorf("parse appview template YAML: %w", err)
	}

	clientName = strings.ToLower(cfg.Server.ClientName)
	baseDomain = strings.TrimPrefix(cfg.Server.BaseURL, "https://")
	registryDomains = cfg.Server.RegistryDomains

	return clientName, baseDomain, registryDomains, nil
}

// readSSHPublicKey reads an SSH public key from a file path.
func readSSHPublicKey(path string) (string, error) {
	if path == "" {
		return "", fmt.Errorf("--ssh-key is required (path to SSH public key file)")
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("read SSH public key %s: %w", path, err)
	}
	key := strings.TrimSpace(string(data))
	if key == "" {
		return "", fmt.Errorf("SSH public key file %s is empty", path)
	}
	return key, nil
}

// resolveInteractive fills in any empty Zone/Plan fields by launching
// interactive TUI pickers that query the UpCloud API.
func resolveInteractive(ctx context.Context, svc *service.Service, cfg *InfraConfig) error {
	if cfg.Zone == "" {
		z, err := pickZone(ctx, svc)
		if err != nil {
			return fmt.Errorf("zone picker: %w", err)
		}
		cfg.Zone = z
	}
	if cfg.Plan == "" {
		p, err := pickPlan(ctx, svc)
		if err != nil {
			return fmt.Errorf("plan picker: %w", err)
		}
		cfg.Plan = p
	}
	return nil
}

// newService creates an UpCloud API client. If token is non-empty it's used
// directly; otherwise credentials are read from UPCLOUD_TOKEN env var.
func newService(token string) (*service.Service, error) {
	var c *client.Client
	var err error
	if token != "" {
		c = client.New("", "", client.WithBearerAuth(token), client.WithTimeout(120*time.Second))
	} else {
		c, err = client.NewFromEnv(client.WithTimeout(120 * time.Second))
		if err != nil {
			return nil, fmt.Errorf("create UpCloud client: %w\n\nPass --token or set UPCLOUD_TOKEN", err)
		}
	}
	return service.New(c), nil
}
50
deploy/upcloud/configs/appview.yaml.tmpl
Normal file
@@ -0,0 +1,50 @@
version: "0.1"
log_level: info
log_shipper:
  backend: ""
  url: ""
  batch_size: 100
  flush_interval: 5s
  username: ""
  password: ""
server:
  addr: :5000
  base_url: "https://seamark.dev"
  default_hold_did: "{{.HoldDid}}"
  oauth_key_path: "{{.BasePath}}/oauth/client.key"
  client_name: Seamark
  test_mode: false
  client_short_name: Seamark
  registry_domains:
    - "buoy.cr"
    - "bouy.cr"
ui:
  database_path: "{{.BasePath}}/ui.db"
  theme: seamark
  libsql_sync_url: ""
  libsql_auth_token: ""
  libsql_sync_interval: 1m0s
health:
  cache_ttl: 15m0s
  check_interval: 15m0s
jetstream:
  urls:
    - wss://jetstream2.us-west.bsky.network/subscribe
    - wss://jetstream1.us-west.bsky.network/subscribe
    - wss://jetstream2.us-east.bsky.network/subscribe
    - wss://jetstream1.us-east.bsky.network/subscribe
  backfill_enabled: true
  backfill_interval: 24h
  relay_endpoints:
    - https://relay1.us-east.bsky.network
    - https://relay1.us-west.bsky.network
auth:
  key_path: "{{.BasePath}}/auth/private-key.pem"
  cert_path: "{{.BasePath}}/auth/private-key.crt"
credential_helper:
  tangled_repo: ""
legal:
  company_name: Seamark
  jurisdiction: State of Texas, United States
labeler:
  did: ""
55
deploy/upcloud/configs/cloudinit.sh.tmpl
Normal file
@@ -0,0 +1,55 @@
#!/bin/bash
set -euo pipefail
exec > >(tee {{.LogFile}}) 2>&1

echo "=== {{.DisplayName}} Setup: {{.BinaryName}} ==="
echo "Started at $(date -u)"

# Wait for network/DNS
for i in $(seq 1 30); do
  if getent hosts go.dev >/dev/null 2>&1; then
    echo "Network ready after ${i}s"
    break
  fi
  sleep 1
done

# System packages
export DEBIAN_FRONTEND=noninteractive
apt-get update && apt-get upgrade -y
apt-get install -y git gcc make curl libsqlite3-dev nodejs npm htop systemd-timesyncd
sed -i 's/^#NTP=.*/NTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org 2.debian.pool.ntp.org 3.debian.pool.ntp.org/' /etc/systemd/timesyncd.conf
timedatectl set-ntp true

# Swap (for small instances)
if [ ! -f /swapfile ]; then
  dd if=/dev/zero of=/swapfile bs=1M count=2048
  chmod 600 /swapfile && mkswap /swapfile && swapon /swapfile
  echo '/swapfile none swap sw 0 0' >> /etc/fstab
fi

# Install directory (binaries deployed via SCP)
mkdir -p {{.InstallDir}}/bin

# Service user & data dirs
useradd --system --no-create-home --shell /usr/sbin/nologin {{.SystemUser}} || true
mkdir -p {{.DataDir}} && chown {{.SystemUser}}:{{.SystemUser}} {{.DataDir}}

# Config file
mkdir -p {{.ConfigDir}}
if [ ! -f {{.ConfigPath}} ]; then
  cat > {{.ConfigPath}} << 'CFGEOF'
{{.ConfigYAML}}
CFGEOF
else
  echo "Config {{.ConfigPath}} already exists, skipping overwrite (missing keys merged separately)"
fi

# Systemd service
cat > /etc/systemd/system/{{.ServiceName}}.service << 'SVCEOF'
{{.ServiceUnit}}
SVCEOF
systemctl daemon-reload
systemctl enable {{.ServiceName}}

echo "=== Setup complete at $(date -u) ==="
64
deploy/upcloud/configs/hold.yaml.tmpl
Normal file
@@ -0,0 +1,64 @@
version: "0.1"
log_level: info
log_shipper:
  backend: ""
  url: ""
  batch_size: 100
  flush_interval: 5s
  username: ""
  password: ""
storage:
  access_key: "{{.S3AccessKey}}"
  secret_key: "{{.S3SecretKey}}"
  region: "{{.S3Region}}"
  bucket: "{{.S3Bucket}}"
  endpoint: "{{.S3Endpoint}}"
  pull_zone: ""
server:
  addr: :8080
  public_url: "https://{{.HoldDomain}}"
  public: false
  successor: ""
  test_mode: false
  relay_endpoint: ""
  appview_did: did:web:seamark.dev
  read_timeout: 5m0s
  write_timeout: 5m0s
registration:
  owner_did: "did:plc:pddp4xt5lgnv2qsegbzzs4xg"
  allow_all_crew: true
  profile_avatar_url: https://{{.HoldDomain}}/web-app-manifest-192x192.png
  profile_display_name: Cargo Hold
  profile_description: ahoy from the cargo hold
  enable_bluesky_posts: false
  region: ""
database:
  path: "{{.BasePath}}"
  key_path: ""
  did_method: web
  did: ""
  plc_directory_url: https://plc.directory
  rotation_key: ""
  libsql_sync_url: ""
  libsql_auth_token: ""
  libsql_sync_interval: 1m0s
admin:
  enabled: true
gc:
  enabled: false
quota:
  tiers:
    - name: deckhand
      quota: 5GB
    - name: bosun
      quota: 50GB
      scan_on_push: true
    - name: quartermaster
      quota: 100GB
      scan_on_push: true
  defaults:
    new_crew_tier: deckhand
scanner:
  secret: "{{.ScannerSecret}}"
  rescan_interval: 168h0m0s
19
deploy/upcloud/configs/labeler.yaml.tmpl
Normal file
@@ -0,0 +1,19 @@
version: "0.1"
log_level: info
log_shipper:
  backend: ""
  url: ""
  batch_size: 100
  flush_interval: 5s
  username: ""
  password: ""
labeler:
  enabled: true
  addr: :5002
  owner_did: ""
  db_path: "{{.BasePath}}/labeler/labeler.db"
server:
  base_url: "https://seamark.dev"
  client_name: Seamark
  client_short_name: Seamark
  test_mode: false
21
deploy/upcloud/configs/scanner.yaml.tmpl
Normal file
@@ -0,0 +1,21 @@
version: "0.1"
log_level: info
log_shipper:
  backend: ""
  url: ""
  batch_size: 100
  flush_interval: 5s
  username: ""
  password: ""
server:
  addr: :9090
hold:
  url: "ws://localhost:8080"
  secret: "{{.ScannerSecret}}"
scanner:
  workers: 2
  queue_size: 100
  vuln:
    enabled: true
    db_path: "{{.BasePath}}/scanner/vulndb"
    tmp_dir: "{{.BasePath}}/scanner/tmp"
BIN
deploy/upcloud/deploy
Executable file
Binary file not shown.
47
deploy/upcloud/go.mod
Normal file
@@ -0,0 +1,47 @@
module atcr.io/deploy

go 1.25.7

require (
	github.com/UpCloudLtd/upcloud-go-api/v8 v8.34.3
	github.com/charmbracelet/huh v0.8.0
	github.com/spf13/cobra v1.10.2
	go.yaml.in/yaml/v3 v3.0.4
)

require (
	github.com/atotto/clipboard v0.1.4 // indirect
	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
	github.com/catppuccin/go v0.3.0 // indirect
	github.com/charmbracelet/bubbles v1.0.0 // indirect
	github.com/charmbracelet/bubbletea v1.3.10 // indirect
	github.com/charmbracelet/colorprofile v0.4.2 // indirect
	github.com/charmbracelet/lipgloss v1.1.0 // indirect
	github.com/charmbracelet/x/ansi v0.11.6 // indirect
	github.com/charmbracelet/x/cellbuf v0.0.15 // indirect
	github.com/charmbracelet/x/exp/strings v0.1.0 // indirect
	github.com/charmbracelet/x/term v0.2.2 // indirect
	github.com/clipperhouse/displaywidth v0.10.0 // indirect
	github.com/clipperhouse/uax29/v2 v2.6.0 // indirect
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/dustin/go-humanize v1.0.1 // indirect
	github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
	github.com/inconshreveable/mousetrap v1.1.0 // indirect
	github.com/kr/text v0.2.0 // indirect
	github.com/lucasb-eyer/go-colorful v1.3.0 // indirect
	github.com/mattn/go-isatty v0.0.20 // indirect
	github.com/mattn/go-localereader v0.0.2-0.20220822084749-2491eb6c1c75 // indirect
	github.com/mattn/go-runewidth v0.0.19 // indirect
	github.com/mitchellh/hashstructure/v2 v2.0.2 // indirect
	github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
	github.com/muesli/cancelreader v0.2.2 // indirect
	github.com/muesli/termenv v0.16.0 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
	github.com/rivo/uniseg v0.4.7 // indirect
	github.com/rogpeppe/go-internal v1.14.1 // indirect
	github.com/spf13/pflag v1.0.10 // indirect
	github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
	golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a // indirect
	golang.org/x/sys v0.41.0 // indirect
	golang.org/x/text v0.34.0 // indirect
)
109
deploy/upcloud/go.sum
Normal file
@@ -0,0 +1,109 @@
github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE=
github.com/UpCloudLtd/upcloud-go-api/v8 v8.34.3 h1:7ba03u4L5LafZPVO2k6B0/f114k5dFF3GtAN7FEKfno=
github.com/UpCloudLtd/upcloud-go-api/v8 v8.34.3/go.mod h1:NBh1d/ip1bhdAIhuPWbyPme7tbLzDTV7dhutUmU1vg8=
github.com/atotto/clipboard v0.1.4 h1:EH0zSVneZPSuFR11BlR9YppQTVDbh5+16AmcJi4g1z4=
github.com/atotto/clipboard v0.1.4/go.mod h1:ZY9tmq7sm5xIbd9bOK4onWV4S6X0u6GY7Vn0Yu86PYI=
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
github.com/aymanbagabas/go-udiff v0.3.1 h1:LV+qyBQ2pqe0u42ZsUEtPiCaUoqgA9gYRDs3vj1nolY=
github.com/aymanbagabas/go-udiff v0.3.1/go.mod h1:G0fsKmG+P6ylD0r6N/KgQD/nWzgfnl8ZBcNLgcbrw8E=
github.com/catppuccin/go v0.3.0 h1:d+0/YicIq+hSTo5oPuRi5kOpqkVA5tAsU6dNhvRu+aY=
github.com/catppuccin/go v0.3.0/go.mod h1:8IHJuMGaUUjQM82qBrGNBv7LFq6JI3NnQCF6MOlZjpc=
github.com/charmbracelet/bubbles v1.0.0 h1:12J8/ak/uCZEMQ6KU7pcfwceyjLlWsDLAxB5fXonfvc=
github.com/charmbracelet/bubbles v1.0.0/go.mod h1:9d/Zd5GdnauMI5ivUIVisuEm3ave1XwXtD1ckyV6r3E=
github.com/charmbracelet/bubbletea v1.3.10 h1:otUDHWMMzQSB0Pkc87rm691KZ3SWa4KUlvF9nRvCICw=
github.com/charmbracelet/bubbletea v1.3.10/go.mod h1:ORQfo0fk8U+po9VaNvnV95UPWA1BitP1E0N6xJPlHr4=
github.com/charmbracelet/colorprofile v0.4.2 h1:BdSNuMjRbotnxHSfxy+PCSa4xAmz7szw70ktAtWRYrY=
github.com/charmbracelet/colorprofile v0.4.2/go.mod h1:0rTi81QpwDElInthtrQ6Ni7cG0sDtwAd4C4le060fT8=
github.com/charmbracelet/huh v0.8.0 h1:Xz/Pm2h64cXQZn/Jvele4J3r7DDiqFCNIVteYukxDvY=
github.com/charmbracelet/huh v0.8.0/go.mod h1:5YVc+SlZ1IhQALxRPpkGwwEKftN/+OlJlnJYlDRFqN4=
github.com/charmbracelet/lipgloss v1.1.0 h1:vYXsiLHVkK7fp74RkV7b2kq9+zDLoEU4MZoFqR/noCY=
github.com/charmbracelet/lipgloss v1.1.0/go.mod h1:/6Q8FR2o+kj8rz4Dq0zQc3vYf7X+B0binUUBwA0aL30=
github.com/charmbracelet/x/ansi v0.11.6 h1:GhV21SiDz/45W9AnV2R61xZMRri5NlLnl6CVF7ihZW8=
github.com/charmbracelet/x/ansi v0.11.6/go.mod h1:2JNYLgQUsyqaiLovhU2Rv/pb8r6ydXKS3NIttu3VGZQ=
github.com/charmbracelet/x/cellbuf v0.0.15 h1:ur3pZy0o6z/R7EylET877CBxaiE1Sp1GMxoFPAIztPI=
github.com/charmbracelet/x/cellbuf v0.0.15/go.mod h1:J1YVbR7MUuEGIFPCaaZ96KDl5NoS0DAWkskup+mOY+Q=
github.com/charmbracelet/x/conpty v0.1.0 h1:4zc8KaIcbiL4mghEON8D72agYtSeIgq8FSThSPQIb+U=
github.com/charmbracelet/x/conpty v0.1.0/go.mod h1:rMFsDJoDwVmiYM10aD4bH2XiRgwI7NYJtQgl5yskjEQ=
github.com/charmbracelet/x/errors v0.0.0-20240508181413-e8d8b6e2de86 h1:JSt3B+U9iqk37QUU2Rvb6DSBYRLtWqFqfxf8l5hOZUA=
github.com/charmbracelet/x/errors v0.0.0-20240508181413-e8d8b6e2de86/go.mod h1:2P0UgXMEa6TsToMSuFqKFQR+fZTO9CNGUNokkPatT/0=
github.com/charmbracelet/x/exp/golden v0.0.0-20241011142426-46044092ad91 h1:payRxjMjKgx2PaCWLZ4p3ro9y97+TVLZNaRZgJwSVDQ=
github.com/charmbracelet/x/exp/golden v0.0.0-20241011142426-46044092ad91/go.mod h1:wDlXFlCrmJ8J+swcL/MnGUuYnqgQdW9rhSD61oNMb6U=
github.com/charmbracelet/x/exp/strings v0.1.0 h1:i69S2XI7uG1u4NLGeJPSYU++Nmjvpo9nwd6aoEm7gkA=
github.com/charmbracelet/x/exp/strings v0.1.0/go.mod h1:/ehtMPNh9K4odGFkqYJKpIYyePhdp1hLBRvyY4bWkH8=
github.com/charmbracelet/x/term v0.2.2 h1:xVRT/S2ZcKdhhOuSP4t5cLi5o+JxklsoEObBSgfgZRk=
github.com/charmbracelet/x/term v0.2.2/go.mod h1:kF8CY5RddLWrsgVwpw4kAa6TESp6EB5y3uxGLeCqzAI=
github.com/charmbracelet/x/termios v0.1.1 h1:o3Q2bT8eqzGnGPOYheoYS8eEleT5ZVNYNy8JawjaNZY=
github.com/charmbracelet/x/termios v0.1.1/go.mod h1:rB7fnv1TgOPOyyKRJ9o+AsTU/vK5WHJ2ivHeut/Pcwo=
github.com/charmbracelet/x/xpty v0.1.2 h1:Pqmu4TEJ8KeA9uSkISKMU3f+C1F6OGBn8ABuGlqCbtI=
github.com/charmbracelet/x/xpty v0.1.2/go.mod h1:XK2Z0id5rtLWcpeNiMYBccNNBrP2IJnzHI0Lq13Xzq4=
github.com/clipperhouse/displaywidth v0.10.0 h1:GhBG8WuerxjFQQYeuZAeVTuyxuX+UraiZGD4HJQ3Y8g=
github.com/clipperhouse/displaywidth v0.10.0/go.mod h1:XqJajYsaiEwkxOj4bowCTMcT1SgvHo9flfF3jQasdbs=
github.com/clipperhouse/uax29/v2 v2.6.0 h1:z0cDbUV+aPASdFb2/ndFnS9ts/WNXgTNNGFoKXuhpos=
github.com/clipperhouse/uax29/v2 v2.6.0/go.mod h1:Wn1g7MK6OoeDT0vL+Q0SQLDz/KpfsVRgg6W7ihQeh4g=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lucasb-eyer/go-colorful v1.3.0 h1:2/yBRLdWBZKrf7gB40FoiKfAWYQ0lqNcbuQwVHXptag=
github.com/lucasb-eyer/go-colorful v1.3.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-localereader v0.0.2-0.20220822084749-2491eb6c1c75 h1:P8UmIzZMYDR+NGImiFvErt6VWfIRPuGM+vyjiEdkmIw=
github.com/mattn/go-runewidth v0.0.19 h1:v++JhqYnZuu5jSKrk9RbgF5v4CGUjqRfBm05byFGLdw=
github.com/mattn/go-runewidth v0.0.19/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
github.com/mitchellh/hashstructure/v2 v2.0.2 h1:vGKWl0YJqUNxE8d+h8f6NJLcCJrgbhC4NcD46KavDd4=
github.com/mitchellh/hashstructure/v2 v2.0.2/go.mod h1:MG3aRVU/N29oo/V/IhBX8GR/zz4kQkprJgF2EVszyDE=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6/go.mod h1:CJlz5H+gyd6CUWT45Oy4q24RdLyn7Md9Vj2/ldJBSIo=
github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELUXHmA=
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
|
||||
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
|
||||
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
|
||||
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
|
||||
golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a h1:ovFr6Z0MNmU7nH8VaX5xqw+05ST2uO1exVfZPVqRC5o=
|
||||
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
|
||||
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||
golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
|
||||
golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
|
||||
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
13 deploy/upcloud/goversion.go Normal file
@@ -0,0 +1,13 @@
package main

import (
	"path/filepath"
	"runtime"
)

// projectRoot returns the absolute path to the repository root,
// derived from the compile-time source file location.
func projectRoot() string {
	_, thisFile, _, _ := runtime.Caller(0)
	return filepath.Join(filepath.Dir(thisFile), "..", "..")
}
23 deploy/upcloud/main.go Normal file
@@ -0,0 +1,23 @@
package main

import (
	"os"

	"github.com/spf13/cobra"
)

var rootCmd = &cobra.Command{
	Use:          "upcloud",
	Short:        "ATCR infrastructure provisioning tool for UpCloud",
	SilenceUsage: true,
}

func init() {
	rootCmd.PersistentFlags().StringP("token", "t", "", "UpCloud API token (env: UPCLOUD_TOKEN)")
}

func main() {
	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
70 deploy/upcloud/naming.go Normal file
@@ -0,0 +1,70 @@
package main

import "strings"

// Naming derives all infrastructure names and paths from a single ClientName.
type Naming struct {
	ClientName string // e.g. "seamark"
}

// DisplayName returns the title-cased client name (e.g. "Seamark").
func (n Naming) DisplayName() string {
	if n.ClientName == "" {
		return ""
	}
	return strings.ToUpper(n.ClientName[:1]) + n.ClientName[1:]
}

// SystemUser returns the unix user name.
func (n Naming) SystemUser() string { return n.ClientName }

// InstallDir returns the source/build directory (e.g. "/opt/seamark").
func (n Naming) InstallDir() string { return "/opt/" + n.ClientName }

// ConfigDir returns the config directory (e.g. "/etc/seamark").
func (n Naming) ConfigDir() string { return "/etc/" + n.ClientName }

// BasePath returns the data directory (e.g. "/var/lib/seamark").
func (n Naming) BasePath() string { return "/var/lib/" + n.ClientName }

// LogFile returns the setup log path (e.g. "/var/log/seamark-setup.log").
func (n Naming) LogFile() string { return "/var/log/" + n.ClientName + "-setup.log" }

// Appview returns the appview binary/service/server name (e.g. "seamark-appview").
func (n Naming) Appview() string { return n.ClientName + "-appview" }

// Hold returns the hold binary/service/server name (e.g. "seamark-hold").
func (n Naming) Hold() string { return n.ClientName + "-hold" }

// AppviewConfigPath returns the appview config file path.
func (n Naming) AppviewConfigPath() string { return n.ConfigDir() + "/appview.yaml" }

// HoldConfigPath returns the hold config file path.
func (n Naming) HoldConfigPath() string { return n.ConfigDir() + "/hold.yaml" }

// NetworkName returns the private network name (e.g. "seamark-private").
func (n Naming) NetworkName() string { return n.ClientName + "-private" }

// LBName returns the load balancer name (e.g. "seamark-lb").
func (n Naming) LBName() string { return n.ClientName + "-lb" }

// Scanner returns the scanner binary/service name (e.g. "seamark-scanner").
func (n Naming) Scanner() string { return n.ClientName + "-scanner" }

// ScannerConfigPath returns the scanner config file path.
func (n Naming) ScannerConfigPath() string { return n.ConfigDir() + "/scanner.yaml" }

// ScannerDataDir returns the scanner data directory (e.g. "/var/lib/seamark/scanner").
func (n Naming) ScannerDataDir() string { return n.BasePath() + "/scanner" }

// Labeler returns the labeler binary/service name (e.g. "seamark-labeler").
func (n Naming) Labeler() string { return n.ClientName + "-labeler" }

// LabelerConfigPath returns the labeler config file path.
func (n Naming) LabelerConfigPath() string { return n.ConfigDir() + "/labeler.yaml" }

// LabelerDataDir returns the labeler data directory (e.g. "/var/lib/seamark/labeler").
func (n Naming) LabelerDataDir() string { return n.BasePath() + "/labeler" }

// S3Name returns the name used for S3 storage, user, and bucket.
func (n Naming) S3Name() string { return n.ClientName }
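Every name above is a pure function of ClientName, so the whole naming scheme can be checked in isolation. The following is an illustrative sketch only, re-deriving a few of the methods from the diff (the local type `naming` is a trimmed copy, not the real `Naming` type):

```go
package main

import (
	"fmt"
	"strings"
)

// naming is a trimmed re-derivation of the Naming helper for illustration.
type naming struct{ ClientName string }

// DisplayName title-cases the client name, as in the diff above.
func (n naming) DisplayName() string {
	if n.ClientName == "" {
		return ""
	}
	return strings.ToUpper(n.ClientName[:1]) + n.ClientName[1:]
}

func (n naming) Appview() string           { return n.ClientName + "-appview" }
func (n naming) ConfigDir() string         { return "/etc/" + n.ClientName }
func (n naming) AppviewConfigPath() string { return n.ConfigDir() + "/appview.yaml" }

func main() {
	n := naming{ClientName: "seamark"}
	fmt.Println(n.DisplayName())       // Seamark
	fmt.Println(n.Appview())           // seamark-appview
	fmt.Println(n.AppviewConfigPath()) // /etc/seamark/appview.yaml
}
```

The single-source-of-truth design means renaming a deployment is one field change; every path, unit name, and binary name follows.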
88 deploy/upcloud/picker.go Normal file
@@ -0,0 +1,88 @@
package main

import (
	"context"
	"fmt"
	"sort"

	"github.com/UpCloudLtd/upcloud-go-api/v8/upcloud"
	"github.com/UpCloudLtd/upcloud-go-api/v8/upcloud/service"
	"github.com/charmbracelet/huh"
)

// pickZone fetches available zones from the UpCloud API and presents an
// interactive selector. Only public zones are shown.
func pickZone(ctx context.Context, svc *service.Service) (string, error) {
	resp, err := svc.GetZones(ctx)
	if err != nil {
		return "", fmt.Errorf("fetch zones: %w", err)
	}

	var opts []huh.Option[string]
	for _, z := range resp.Zones {
		if z.Public != upcloud.True {
			continue
		}
		label := fmt.Sprintf("%s — %s", z.ID, z.Description)
		opts = append(opts, huh.NewOption(label, z.ID))
	}

	if len(opts) == 0 {
		return "", fmt.Errorf("no public zones available")
	}

	sort.Slice(opts, func(i, j int) bool {
		return opts[i].Value < opts[j].Value
	})

	var zone string
	err = huh.NewSelect[string]().
		Title("Select a zone").
		Options(opts...).
		Value(&zone).
		Run()
	if err != nil {
		return "", err
	}

	return zone, nil
}

// pickPlan fetches available plans from the UpCloud API and presents an
// interactive selector. GPU plans are filtered out.
func pickPlan(ctx context.Context, svc *service.Service) (string, error) {
	resp, err := svc.GetPlans(ctx)
	if err != nil {
		return "", fmt.Errorf("fetch plans: %w", err)
	}

	var opts []huh.Option[string]
	for _, p := range resp.Plans {
		if p.GPUAmount > 0 {
			continue
		}
		memGB := p.MemoryAmount / 1024
		label := fmt.Sprintf("%s — %d CPU, %d GB RAM, %d GB disk", p.Name, p.CoreNumber, memGB, p.StorageSize)
		opts = append(opts, huh.NewOption(label, p.Name))
	}

	if len(opts) == 0 {
		return "", fmt.Errorf("no plans available")
	}

	sort.Slice(opts, func(i, j int) bool {
		return opts[i].Value < opts[j].Value
	})

	var plan string
	err = huh.NewSelect[string]().
		Title("Select a plan").
		Options(opts...).
		Value(&plan).
		Run()
	if err != nil {
		return "", err
	}

	return plan, nil
}
1120 deploy/upcloud/provision.go Normal file
File diff suppressed because it is too large.
94 deploy/upcloud/state.go Normal file
@@ -0,0 +1,94 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"runtime"
)

// InfraState persists infrastructure resource UUIDs between commands.
type InfraState struct {
	Zone           string             `json:"zone"`
	ClientName     string             `json:"client_name,omitempty"`
	RepoBranch     string             `json:"repo_branch,omitempty"`
	Network        StateRef           `json:"network"`
	Appview        ServerState        `json:"appview"`
	Hold           ServerState        `json:"hold"`
	LB             StateRef           `json:"loadbalancer"`
	ObjectStorage  ObjectStorageState `json:"object_storage"`
	ScannerEnabled bool               `json:"scanner_enabled,omitempty"`
	ScannerSecret  string             `json:"scanner_secret,omitempty"`
	LabelerEnabled bool               `json:"labeler_enabled,omitempty"`
}

// Naming returns a Naming helper, defaulting to "seamark" if ClientName is empty.
func (s *InfraState) Naming() Naming {
	name := s.ClientName
	if name == "" {
		name = "seamark"
	}
	return Naming{ClientName: name}
}

// Branch returns the repo branch, defaulting to "main" if empty.
func (s *InfraState) Branch() string {
	if s.RepoBranch == "" {
		return "main"
	}
	return s.RepoBranch
}

type StateRef struct {
	UUID string `json:"uuid"`
}

type ServerState struct {
	UUID      string `json:"server_uuid"`
	PublicIP  string `json:"public_ip"`
	PrivateIP string `json:"private_ip"`
}

type ObjectStorageState struct {
	UUID        string `json:"uuid"`
	Endpoint    string `json:"endpoint"`
	Region      string `json:"region"`
	Bucket      string `json:"bucket"`
	AccessKeyID string `json:"access_key_id"`
}

func statePath() string {
	_, thisFile, _, _ := runtime.Caller(0)
	return filepath.Join(filepath.Dir(thisFile), "state.json")
}

func loadState() (*InfraState, error) {
	data, err := os.ReadFile(statePath())
	if err != nil {
		return nil, fmt.Errorf("read state.json: %w (run 'provision' first)", err)
	}
	var st InfraState
	if err := json.Unmarshal(data, &st); err != nil {
		return nil, fmt.Errorf("parse state.json: %w", err)
	}
	return &st, nil
}

func saveState(st *InfraState) error {
	data, err := json.MarshalIndent(st, "", "  ")
	if err != nil {
		return fmt.Errorf("marshal state: %w", err)
	}
	if err := os.WriteFile(statePath(), data, 0644); err != nil {
		return fmt.Errorf("write state.json: %w", err)
	}
	return nil
}

func deleteState() error {
	if err := os.Remove(statePath()); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("remove state.json: %w", err)
	}
	return nil
}
145 deploy/upcloud/status.go Normal file
@@ -0,0 +1,145 @@
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/UpCloudLtd/upcloud-go-api/v8/upcloud/request"
	"github.com/spf13/cobra"
)

var statusCmd = &cobra.Command{
	Use:   "status",
	Short: "Show infrastructure state and health",
	Args:  cobra.NoArgs,
	RunE: func(cmd *cobra.Command, args []string) error {
		token, _ := cmd.Root().PersistentFlags().GetString("token")
		return cmdStatus(token)
	},
}

func init() {
	rootCmd.AddCommand(statusCmd)
}

func cmdStatus(token string) error {
	state, err := loadState()
	if err != nil {
		return err
	}

	naming := state.Naming()

	svc, err := newService(token)
	if err != nil {
		return err
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	fmt.Printf("Zone: %s\n\n", state.Zone)

	// Server status
	for _, s := range []struct {
		name        string
		ss          ServerState
		serviceName string
		healthURL   string
	}{
		{"Appview", state.Appview, naming.Appview(), "http://localhost:5000/health"},
		{"Hold", state.Hold, naming.Hold(), "http://localhost:8080/xrpc/_health"},
	} {
		fmt.Printf("%-8s UUID: %s\n", s.name, s.ss.UUID)
		fmt.Printf("  Public: %s\n", s.ss.PublicIP)
		fmt.Printf("  Private: %s\n", s.ss.PrivateIP)

		if s.ss.UUID != "" {
			details, err := svc.GetServerDetails(ctx, &request.GetServerDetailsRequest{
				UUID: s.ss.UUID,
			})
			if err != nil {
				fmt.Printf("  State: error (%v)\n", err)
			} else {
				fmt.Printf("  State: %s\n", details.State)
			}
		}

		// SSH health check
		if s.ss.PublicIP != "" {
			output, err := runSSH(s.ss.PublicIP, fmt.Sprintf(
				"systemctl is-active %s 2>/dev/null || echo 'inactive'; curl -sf %s > /dev/null 2>&1 && echo 'health:ok' || echo 'health:fail'",
				s.serviceName, s.healthURL,
			), false)
			if err != nil {
				fmt.Printf("  Service: unreachable\n")
			} else {
				lines := strings.Split(strings.TrimSpace(output), "\n")
				for _, line := range lines {
					line = strings.TrimSpace(line)
					if line == "active" || line == "inactive" {
						fmt.Printf("  Service: %s\n", line)
					} else if strings.HasPrefix(line, "health:") {
						fmt.Printf("  Health: %s\n", strings.TrimPrefix(line, "health:"))
					}
				}
			}
		}
		fmt.Println()
	}

	// Scanner status (runs on hold server)
	if state.ScannerEnabled {
		fmt.Printf("Scanner (on hold server)\n")
		if state.Hold.PublicIP != "" {
			output, err := runSSH(state.Hold.PublicIP, fmt.Sprintf(
				"systemctl is-active %s 2>/dev/null || echo 'inactive'; curl -sf http://localhost:9090/healthz > /dev/null 2>&1 && echo 'health:ok' || echo 'health:fail'",
				naming.Scanner(),
			), false)
			if err != nil {
				fmt.Printf("  Service: unreachable\n")
			} else {
				lines := strings.Split(strings.TrimSpace(output), "\n")
				for _, line := range lines {
					line = strings.TrimSpace(line)
					if line == "active" || line == "inactive" {
						fmt.Printf("  Service: %s\n", line)
					} else if strings.HasPrefix(line, "health:") {
						fmt.Printf("  Health: %s\n", strings.TrimPrefix(line, "health:"))
					}
				}
			}
		}
		fmt.Println()
	}

	// LB status
	if state.LB.UUID != "" {
		fmt.Printf("Load Balancer: %s\n", state.LB.UUID)
		lb, err := svc.GetLoadBalancer(ctx, &request.GetLoadBalancerRequest{
			UUID: state.LB.UUID,
		})
		if err != nil {
			fmt.Printf("  State: error (%v)\n", err)
		} else {
			fmt.Printf("  State: %s\n", lb.OperationalState)
			for _, n := range lb.Networks {
				fmt.Printf("  Network (%s): %s\n", n.Type, n.DNSName)
			}
		}
	}

	fmt.Printf("\nNetwork: %s\n", state.Network.UUID)

	if state.ObjectStorage.UUID != "" {
		fmt.Printf("\nObject Storage: %s\n", state.ObjectStorage.UUID)
		fmt.Printf("  Endpoint: %s\n", state.ObjectStorage.Endpoint)
		fmt.Printf("  Region: %s\n", state.ObjectStorage.Region)
		fmt.Printf("  Bucket: %s\n", state.ObjectStorage.Bucket)
		fmt.Printf("  Access Key: %s\n", state.ObjectStorage.AccessKeyID)
	}

	return nil
}
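The SSH probe in status.go emits up to two lines per service — `active`/`inactive` from `systemctl is-active`, then `health:ok` or `health:fail` from the `curl` check — and the loop classifies each line by prefix. A minimal standalone sketch of that parsing step (function name `parseProbe` is made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// parseProbe extracts the service state and health result from the probe
// output, mirroring the line-classification loop in cmdStatus above.
func parseProbe(output string) (service, health string) {
	for _, line := range strings.Split(strings.TrimSpace(output), "\n") {
		line = strings.TrimSpace(line)
		if line == "active" || line == "inactive" {
			service = line
		} else if strings.HasPrefix(line, "health:") {
			health = strings.TrimPrefix(line, "health:")
		}
	}
	return
}

func main() {
	svc, health := parseProbe("active\nhealth:ok\n")
	fmt.Println(svc, health) // active ok
}
```

Combining both checks in one remote command keeps the status loop to a single SSH round trip per server.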
25 deploy/upcloud/systemd/appview.service.tmpl Normal file
@@ -0,0 +1,25 @@
[Unit]
Description={{.DisplayName}} AppView (Registry + Web UI)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User={{.User}}
Group={{.User}}
ExecStart={{.BinaryPath}} serve --config {{.ConfigPath}}
Restart=on-failure
RestartSec=5

ReadWritePaths={{.DataDir}}
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
PrivateTmp=yes

StandardOutput=journal
StandardError=journal
SyslogIdentifier={{.ServiceName}}

[Install]
WantedBy=multi-user.target
25 deploy/upcloud/systemd/hold.service.tmpl Normal file
@@ -0,0 +1,25 @@
[Unit]
Description={{.DisplayName}} Hold (Storage Service)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User={{.User}}
Group={{.User}}
ExecStart={{.BinaryPath}} serve --config {{.ConfigPath}}
Restart=on-failure
RestartSec=5

ReadWritePaths={{.DataDir}}
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
PrivateTmp=yes

StandardOutput=journal
StandardError=journal
SyslogIdentifier={{.ServiceName}}

[Install]
WantedBy=multi-user.target
25 deploy/upcloud/systemd/labeler.service.tmpl Normal file
@@ -0,0 +1,25 @@
[Unit]
Description={{.DisplayName}} Labeler (Content Moderation)
After=network-online.target {{.AppviewServiceName}}.service
Wants=network-online.target

[Service]
Type=simple
User={{.User}}
Group={{.User}}
ExecStart={{.BinaryPath}} serve --config {{.ConfigPath}}
Restart=on-failure
RestartSec=10

ReadWritePaths={{.DataDir}}
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
PrivateTmp=yes

StandardOutput=journal
StandardError=journal
SyslogIdentifier={{.ServiceName}}

[Install]
WantedBy=multi-user.target
25 deploy/upcloud/systemd/scanner.service.tmpl Normal file
@@ -0,0 +1,25 @@
[Unit]
Description={{.DisplayName}} Scanner (Vulnerability Scanning)
After=network-online.target {{.HoldServiceName}}.service
Wants=network-online.target

[Service]
Type=simple
User={{.User}}
Group={{.User}}
ExecStart={{.BinaryPath}} serve --config {{.ConfigPath}}
Restart=on-failure
RestartSec=10

ReadWritePaths={{.DataDir}}
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
PrivateTmp=yes

StandardOutput=journal
StandardError=journal
SyslogIdentifier={{.ServiceName}}

[Install]
WantedBy=multi-user.target
121 deploy/upcloud/teardown.go Normal file
@@ -0,0 +1,121 @@
package main

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"strings"
	"time"

	"github.com/UpCloudLtd/upcloud-go-api/v8/upcloud/request"
	"github.com/spf13/cobra"
)

var teardownCmd = &cobra.Command{
	Use:   "teardown",
	Short: "Destroy all infrastructure",
	Args:  cobra.NoArgs,
	RunE: func(cmd *cobra.Command, args []string) error {
		token, _ := cmd.Root().PersistentFlags().GetString("token")
		return cmdTeardown(token)
	},
}

func init() {
	rootCmd.AddCommand(teardownCmd)
}

func cmdTeardown(token string) error {
	state, err := loadState()
	if err != nil {
		return err
	}

	naming := state.Naming()

	// Confirmation prompt
	fmt.Printf("This will DESTROY all %s infrastructure:\n", naming.DisplayName())
	fmt.Printf("  Zone: %s\n", state.Zone)
	fmt.Printf("  Appview: %s (%s)\n", state.Appview.UUID, state.Appview.PublicIP)
	fmt.Printf("  Hold: %s (%s)\n", state.Hold.UUID, state.Hold.PublicIP)
	fmt.Printf("  Network: %s\n", state.Network.UUID)
	fmt.Printf("  LB: %s\n", state.LB.UUID)
	fmt.Println()
	fmt.Print("Type 'yes' to confirm: ")

	scanner := bufio.NewScanner(os.Stdin)
	scanner.Scan()
	if strings.TrimSpace(scanner.Text()) != "yes" {
		fmt.Println("Aborted.")
		return nil
	}

	svc, err := newService(token)
	if err != nil {
		return err
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// Delete LB first (depends on network)
	if state.LB.UUID != "" {
		fmt.Printf("Deleting load balancer %s...\n", state.LB.UUID)
		if err := svc.DeleteLoadBalancer(ctx, &request.DeleteLoadBalancerRequest{
			UUID: state.LB.UUID,
		}); err != nil {
			fmt.Printf("  Warning: %v\n", err)
		}
	}

	// Stop and delete servers (must stop before delete, and delete storage)
	for _, s := range []struct {
		name string
		uuid string
	}{
		{"appview", state.Appview.UUID},
		{"hold", state.Hold.UUID},
	} {
		if s.uuid == "" {
			continue
		}
		fmt.Printf("Stopping server %s (%s)...\n", s.name, s.uuid)
		_, err := svc.StopServer(ctx, &request.StopServerRequest{
			UUID: s.uuid,
		})
		if err != nil {
			fmt.Printf("  Warning (stop): %v\n", err)
		} else {
			_, _ = svc.WaitForServerState(ctx, &request.WaitForServerStateRequest{
				UUID:         s.uuid,
				DesiredState: "stopped",
			})
		}

		fmt.Printf("Deleting server %s...\n", s.name)
		if err := svc.DeleteServerAndStorages(ctx, &request.DeleteServerAndStoragesRequest{
			UUID: s.uuid,
		}); err != nil {
			fmt.Printf("  Warning (delete): %v\n", err)
		}
	}

	// Delete network (after servers are gone)
	if state.Network.UUID != "" {
		fmt.Printf("Deleting network %s...\n", state.Network.UUID)
		if err := svc.DeleteNetwork(ctx, &request.DeleteNetworkRequest{
			UUID: state.Network.UUID,
		}); err != nil {
			fmt.Printf("  Warning: %v\n", err)
		}
	}

	// Remove state file
	if err := deleteState(); err != nil {
		return err
	}

	fmt.Println("\nTeardown complete. All infrastructure destroyed.")
	return nil
}
485
deploy/upcloud/update.go
Normal file
485
deploy/upcloud/update.go
Normal file
@@ -0,0 +1,485 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
var updateCmd = &cobra.Command{
|
||||
Use: "update [target]",
|
||||
Short: "Deploy updates to servers",
|
||||
Args: cobra.MaximumNArgs(1),
|
||||
ValidArgs: []string{"all", "appview", "hold"},
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
target := "all"
|
||||
if len(args) > 0 {
|
||||
target = args[0]
|
||||
}
|
||||
withScanner, _ := cmd.Flags().GetBool("with-scanner")
|
||||
withLabeler, _ := cmd.Flags().GetBool("with-labeler")
|
||||
return cmdUpdate(target, withScanner, withLabeler)
|
||||
},
|
||||
}
|
||||
|
||||
var sshCmd = &cobra.Command{
|
||||
Use: "ssh <target>",
|
||||
Short: "SSH into a server",
|
||||
Args: cobra.ExactArgs(1),
|
||||
ValidArgs: []string{"appview", "hold"},
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return cmdSSH(args[0])
|
||||
},
|
||||
}
|
||||
|
||||
func init() {
|
||||
updateCmd.Flags().Bool("with-scanner", false, "Enable and deploy vulnerability scanner alongside hold")
|
||||
updateCmd.Flags().Bool("with-labeler", false, "Enable and deploy content moderation labeler alongside appview")
|
||||
rootCmd.AddCommand(updateCmd)
|
||||
rootCmd.AddCommand(sshCmd)
|
||||
}
|
||||
|
||||
func cmdUpdate(target string, withScanner, withLabeler bool) error {
|
||||
state, err := loadState()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
naming := state.Naming()
|
||||
rootDir := projectRoot()
|
||||
|
||||
// Enable scanner retroactively via --with-scanner on update
|
||||
if withScanner && !state.ScannerEnabled {
|
||||
state.ScannerEnabled = true
|
||||
if state.ScannerSecret == "" {
|
||||
secret, err := generateScannerSecret()
|
||||
if err != nil {
|
||||
return fmt.Errorf("generate scanner secret: %w", err)
|
||||
}
|
||||
state.ScannerSecret = secret
|
||||
fmt.Printf("Generated scanner shared secret\n")
|
||||
}
|
||||
_ = saveState(state)
|
||||
}
|
||||
|
||||
// Enable labeler retroactively via --with-labeler on update
|
||||
if withLabeler && !state.LabelerEnabled {
|
||||
state.LabelerEnabled = true
|
||||
_ = saveState(state)
|
||||
}
|
||||
|
||||
vals := configValsFromState(state)
|
||||
|
||||
targets := map[string]struct {
|
||||
ip string
|
||||
binaryName string
|
||||
buildCmd string
|
||||
localBinary string
|
||||
serviceName string
|
||||
healthURL string
|
||||
configTmpl string
|
||||
configPath string
|
||||
unitTmpl string
|
||||
}{
|
||||
"appview": {
|
||||
ip: state.Appview.PublicIP,
|
||||
binaryName: naming.Appview(),
|
||||
buildCmd: "appview",
|
||||
localBinary: "atcr-appview",
|
||||
serviceName: naming.Appview(),
|
||||
healthURL: "http://localhost:5000/health",
|
||||
configTmpl: appviewConfigTmpl,
|
||||
configPath: naming.AppviewConfigPath(),
|
||||
unitTmpl: appviewServiceTmpl,
|
||||
},
|
||||
"hold": {
|
||||
ip: state.Hold.PublicIP,
|
||||
binaryName: naming.Hold(),
|
||||
buildCmd: "hold",
|
||||
localBinary: "atcr-hold",
|
||||
serviceName: naming.Hold(),
|
||||
healthURL: "http://localhost:8080/xrpc/_health",
|
||||
configTmpl: holdConfigTmpl,
|
||||
configPath: naming.HoldConfigPath(),
|
||||
unitTmpl: holdServiceTmpl,
|
||||
},
|
||||
}
|
||||
|
||||
var toUpdate []string
|
||||
switch target {
|
||||
case "all":
|
||||
toUpdate = []string{"appview", "hold"}
|
||||
case "appview", "hold":
|
||||
toUpdate = []string{target}
|
||||
default:
|
||||
return fmt.Errorf("unknown target: %s (use: all, appview, hold)", target)
|
||||
}
|
||||
|
||||
// Run go generate before building
|
||||
if err := runGenerate(rootDir); err != nil {
|
||||
return fmt.Errorf("go generate: %w", err)
|
||||
}
|
||||
|
||||
// Build all binaries locally before touching servers
|
||||
fmt.Println("Building locally (GOOS=linux GOARCH=amd64)...")
|
||||
for _, name := range toUpdate {
|
||||
t := targets[name]
|
||||
outputPath := filepath.Join(rootDir, "bin", t.localBinary)
|
||||
if err := buildLocal(rootDir, outputPath, "./cmd/"+t.buildCmd); err != nil {
|
||||
return fmt.Errorf("build %s: %w", name, err)
|
||||
}
|
||||
}
|
||||
|
||||
// Build scanner locally if needed
|
||||
needScanner := false
|
||||
for _, name := range toUpdate {
|
||||
if name == "hold" && state.ScannerEnabled {
|
||||
needScanner = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if needScanner {
|
||||
outputPath := filepath.Join(rootDir, "bin", "atcr-scanner")
|
||||
if err := buildLocal(filepath.Join(rootDir, "scanner"), outputPath, "./cmd/scanner"); err != nil {
|
||||
return fmt.Errorf("build scanner: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Build labeler locally if needed
|
||||
needLabeler := false
|
||||
for _, name := range toUpdate {
|
||||
if name == "appview" && state.LabelerEnabled {
|
||||
needLabeler = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if needLabeler {
|
||||
outputPath := filepath.Join(rootDir, "bin", "atcr-labeler")
|
||||
if err := buildLocal(rootDir, outputPath, "./cmd/labeler"); err != nil {
|
||||
return fmt.Errorf("build labeler: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
	// Deploy each target
	for _, name := range toUpdate {
		t := targets[name]
		fmt.Printf("\nDeploying %s (%s)...\n", name, t.ip)

		// Sync config keys (adds missing keys from template, never overwrites)
		configYAML, err := renderConfig(t.configTmpl, vals)
		if err != nil {
			return fmt.Errorf("render %s config: %w", name, err)
		}
		if err := syncConfigKeys(name, t.ip, t.configPath, configYAML); err != nil {
			return fmt.Errorf("%s config sync: %w", name, err)
		}

		// Sync systemd service unit
		renderedUnit, err := renderServiceUnit(t.unitTmpl, serviceUnitParams{
			DisplayName: naming.DisplayName(),
			User:        naming.SystemUser(),
			BinaryPath:  naming.InstallDir() + "/bin/" + t.binaryName,
			ConfigPath:  t.configPath,
			DataDir:     naming.BasePath(),
			ServiceName: t.serviceName,
		})
		if err != nil {
			return fmt.Errorf("render %s service unit: %w", name, err)
		}
		unitChanged, err := syncServiceUnit(name, t.ip, t.serviceName, renderedUnit)
		if err != nil {
			return fmt.Errorf("%s service unit sync: %w", name, err)
		}

		// Upload binary
		localPath := filepath.Join(rootDir, "bin", t.localBinary)
		remotePath := naming.InstallDir() + "/bin/" + t.binaryName
		if err := scpFile(localPath, t.ip, remotePath); err != nil {
			return fmt.Errorf("upload %s: %w", name, err)
		}

		daemonReload := ""
		if unitChanged {
			daemonReload = "systemctl daemon-reload"
		}

		// Scanner additions for hold server
		scannerRestart := ""
		scannerHealthCheck := ""
		if name == "hold" && state.ScannerEnabled {
			// Sync scanner config keys
			scannerConfigYAML, err := renderConfig(scannerConfigTmpl, vals)
			if err != nil {
				return fmt.Errorf("render scanner config: %w", err)
			}
			if err := syncConfigKeys("scanner", t.ip, naming.ScannerConfigPath(), scannerConfigYAML); err != nil {
				return fmt.Errorf("scanner config sync: %w", err)
			}

			// Sync scanner service unit
			scannerUnit, err := renderScannerServiceUnit(scannerServiceUnitParams{
				DisplayName:     naming.DisplayName(),
				User:            naming.SystemUser(),
				BinaryPath:      naming.InstallDir() + "/bin/" + naming.Scanner(),
				ConfigPath:      naming.ScannerConfigPath(),
				DataDir:         naming.BasePath(),
				ServiceName:     naming.Scanner(),
				HoldServiceName: naming.Hold(),
			})
			if err != nil {
				return fmt.Errorf("render scanner service unit: %w", err)
			}
			scannerUnitChanged, err := syncServiceUnit("scanner", t.ip, naming.Scanner(), scannerUnit)
			if err != nil {
				return fmt.Errorf("scanner service unit sync: %w", err)
			}
			if scannerUnitChanged {
				daemonReload = "systemctl daemon-reload"
			}

			// Upload scanner binary
			scannerLocal := filepath.Join(rootDir, "bin", "atcr-scanner")
			scannerRemote := naming.InstallDir() + "/bin/" + naming.Scanner()
			if err := scpFile(scannerLocal, t.ip, scannerRemote); err != nil {
				return fmt.Errorf("upload scanner: %w", err)
			}

			// Ensure scanner data dirs exist on server
			scannerSetup := fmt.Sprintf(`mkdir -p %s/vulndb %s/tmp
chown -R %s:%s %s`,
				naming.ScannerDataDir(), naming.ScannerDataDir(),
				naming.SystemUser(), naming.SystemUser(), naming.ScannerDataDir())
			if _, err := runSSH(t.ip, scannerSetup, false); err != nil {
				return fmt.Errorf("scanner dir setup: %w", err)
			}

			scannerRestart = fmt.Sprintf("\nsystemctl restart %s", naming.Scanner())
			scannerHealthCheck = `
sleep 2
curl -sf http://localhost:9090/healthz > /dev/null && echo "SCANNER_HEALTH_OK" || echo "SCANNER_HEALTH_FAIL"
`
		}

		// Labeler additions for appview server
		labelerRestart := ""
		if name == "appview" && state.LabelerEnabled {
			// Sync labeler config keys
			labelerConfigYAML, err := renderConfig(labelerConfigTmpl, vals)
			if err != nil {
				return fmt.Errorf("render labeler config: %w", err)
			}
			if err := syncConfigKeys("labeler", t.ip, naming.LabelerConfigPath(), labelerConfigYAML); err != nil {
				return fmt.Errorf("labeler config sync: %w", err)
			}

			// Sync labeler service unit
			labelerUnit, err := renderLabelerServiceUnit(labelerServiceUnitParams{
				DisplayName:        naming.DisplayName(),
				User:               naming.SystemUser(),
				BinaryPath:         naming.InstallDir() + "/bin/" + naming.Labeler(),
				ConfigPath:         naming.LabelerConfigPath(),
				DataDir:            naming.BasePath(),
				ServiceName:        naming.Labeler(),
				AppviewServiceName: naming.Appview(),
			})
			if err != nil {
				return fmt.Errorf("render labeler service unit: %w", err)
			}
			labelerUnitChanged, err := syncServiceUnit("labeler", t.ip, naming.Labeler(), labelerUnit)
			if err != nil {
				return fmt.Errorf("labeler service unit sync: %w", err)
			}
			if labelerUnitChanged {
				daemonReload = "systemctl daemon-reload"
			}

			// Upload labeler binary
			labelerLocal := filepath.Join(rootDir, "bin", "atcr-labeler")
			labelerRemote := naming.InstallDir() + "/bin/" + naming.Labeler()
			if err := scpFile(labelerLocal, t.ip, labelerRemote); err != nil {
				return fmt.Errorf("upload labeler: %w", err)
			}

			// Ensure labeler data dirs exist
			labelerSetup := fmt.Sprintf(`mkdir -p %s
chown -R %s:%s %s`,
				naming.LabelerDataDir(),
				naming.SystemUser(), naming.SystemUser(), naming.LabelerDataDir())
			if _, err := runSSH(t.ip, labelerSetup, false); err != nil {
				return fmt.Errorf("labeler dir setup: %w", err)
			}

			labelerRestart = fmt.Sprintf("\nsystemctl restart %s", naming.Labeler())
		}

		// Restart services and health check.
		// The app tokens are prefixed with APP_ so a substring match on them
		// can never be satisfied by the scanner's tokens: a bare "HEALTH_OK"
		// would also match inside "SCANNER_HEALTH_OK" and misreport the app
		// as healthy when only the scanner is.
		restartScript := fmt.Sprintf(`set -euo pipefail
%s
systemctl restart %s%s%s
sleep 2
curl -sf %s > /dev/null && echo "APP_HEALTH_OK" || echo "APP_HEALTH_FAIL"
%s`, daemonReload, t.serviceName, scannerRestart, labelerRestart, t.healthURL, scannerHealthCheck)

		output, err := runSSH(t.ip, restartScript, true)
		if err != nil {
			fmt.Printf(" ERROR: %v\n", err)
			fmt.Printf(" Output: %s\n", output)
			return fmt.Errorf("restart %s failed", name)
		}

		if strings.Contains(output, "APP_HEALTH_OK") {
			fmt.Printf(" %s: updated and healthy\n", name)
		} else if strings.Contains(output, "APP_HEALTH_FAIL") {
			fmt.Printf(" %s: updated but health check failed!\n", name)
			fmt.Printf(" Check: ssh root@%s journalctl -u %s -n 50\n", t.ip, t.serviceName)
		} else {
			fmt.Printf(" %s: updated (health check inconclusive)\n", name)
		}

		// Scanner health reporting
		if name == "hold" && state.ScannerEnabled {
			if strings.Contains(output, "SCANNER_HEALTH_OK") {
				fmt.Printf(" scanner: updated and healthy\n")
			} else if strings.Contains(output, "SCANNER_HEALTH_FAIL") {
				fmt.Printf(" scanner: updated but health check failed!\n")
				fmt.Printf(" Check: ssh root@%s journalctl -u %s -n 50\n", t.ip, naming.Scanner())
			}
		}
	}

	return nil
}
// configValsFromState builds ConfigValues from persisted state.
// S3SecretKey is intentionally left empty — syncConfigKeys only adds missing
// keys and never overwrites, so the server's existing secret is preserved.
func configValsFromState(state *InfraState) *ConfigValues {
	naming := state.Naming()
	_, baseDomain, _, _ := extractFromAppviewTemplate()
	holdDomain := state.Zone + ".cove." + baseDomain

	return &ConfigValues{
		S3Endpoint:    state.ObjectStorage.Endpoint,
		S3Region:      state.ObjectStorage.Region,
		S3Bucket:      state.ObjectStorage.Bucket,
		S3AccessKey:   state.ObjectStorage.AccessKeyID,
		S3SecretKey:   "", // not persisted in state; existing value on server is preserved
		Zone:          state.Zone,
		HoldDomain:    holdDomain,
		HoldDid:       "did:web:" + holdDomain,
		BasePath:      naming.BasePath(),
		ScannerSecret: state.ScannerSecret,
	}
}

// runGenerate runs go generate ./... in the given directory using host OS/arch
// (no cross-compilation env vars — generate tools must run on the build machine).
func runGenerate(dir string) error {
	fmt.Println("Running go generate ./...")
	cmd := exec.Command("go", "generate", "./...")
	cmd.Dir = dir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
// buildLocal compiles a Go binary locally with cross-compilation flags for linux/amd64.
func buildLocal(dir, outputPath, buildPkg string) error {
	fmt.Printf(" building %s...\n", filepath.Base(outputPath))
	cmd := exec.Command("go", "build",
		"-ldflags=-s -w",
		"-trimpath",
		"-o", outputPath,
		buildPkg,
	)
	cmd.Dir = dir
	cmd.Env = append(os.Environ(),
		"GOOS=linux",
		"GOARCH=amd64",
		"CGO_ENABLED=1",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
// scpFile uploads a local file to a remote server via SCP.
// Removes the remote file first to avoid ETXTBSY when overwriting a running binary.
func scpFile(localPath, ip, remotePath string) error {
	fmt.Printf(" uploading %s → %s:%s\n", filepath.Base(localPath), ip, remotePath)
	_, _ = runSSH(ip, fmt.Sprintf("rm -f %s", remotePath), false)
	cmd := exec.Command("scp",
		"-o", "StrictHostKeyChecking=accept-new",
		"-o", "ConnectTimeout=10",
		localPath,
		"root@"+ip+":"+remotePath,
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
func cmdSSH(target string) error {
	state, err := loadState()
	if err != nil {
		return err
	}

	var ip string
	switch target {
	case "appview":
		ip = state.Appview.PublicIP
	case "hold":
		ip = state.Hold.PublicIP
	default:
		return fmt.Errorf("unknown target: %s (use: appview, hold)", target)
	}

	fmt.Printf("Connecting to %s (%s)...\n", target, ip)
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=accept-new",
		"root@"+ip,
	)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
func runSSH(ip, script string, stream bool) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=accept-new",
		"-o", "ConnectTimeout=10",
		"root@"+ip,
		"bash -s",
	)
	cmd.Stdin = strings.NewReader(script)

	var buf bytes.Buffer
	if stream {
		cmd.Stdout = io.MultiWriter(os.Stdout, &buf)
		cmd.Stderr = io.MultiWriter(os.Stderr, &buf)
	} else {
		cmd.Stdout = &buf
		cmd.Stderr = &buf
	}

	// Give deploys up to 5 minutes (SCP + restart, much faster than remote builds)
	done := make(chan error, 1)
	go func() { done <- cmd.Run() }()

	select {
	case err := <-done:
		return buf.String(), err
	case <-time.After(5 * time.Minute):
		_ = cmd.Process.Kill()
		return buf.String(), fmt.Errorf("SSH command timed out after 5 minutes")
	}
}
@@ -2,26 +2,39 @@ services:
  atcr-appview:
    build:
      context: .
      dockerfile: Dockerfile.appview
    image: atcr-appview:latest
      dockerfile: Dockerfile.dev
    image: atcr-appview-dev:latest
    container_name: atcr-appview
    ports:
      - "5000:5000"
    env_file:
      - ../atcr-secrets.env
    # Optional: Load from .env.appview file (create from .env.appview.example)
    # env_file:
    #   - .env.appview
    # Base config: config-appview.example.yaml (passed via Air entrypoint)
    # Env vars below override config file values for local dev
    environment:
      # Server configuration
      ATCR_HTTP_ADDR: :5000
      ATCR_DEFAULT_HOLD: http://atcr-hold:8080
      # UI configuration
      ATCR_UI_ENABLED: true
      ATCR_BACKFILL_ENABLED: true
      # Logging
      ATCR_LOG_LEVEL: info
      # ATCR_SERVER_CLIENT_NAME: "Seamark"
      # ATCR_SERVER_CLIENT_SHORT_NAME: "Seamark"
      ATCR_SERVER_MANAGED_HOLDS: did:web:172.28.0.3%3A8080
      ATCR_SERVER_DEFAULT_HOLD_DID: did:web:172.28.0.3%3A8080
      ATCR_SERVER_TEST_MODE: true
      ATCR_LOG_LEVEL: debug
      LOG_SHIPPER_BACKEND: victoria
      LOG_SHIPPER_URL: http://172.28.0.10:9428
    # Limit local Docker logs - real logs go to Victoria Logs
    # Local logs just for live tailing (docker logs -f)
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "1"
    volumes:
      # Auth keys (JWT signing keys)
      # - atcr-auth:/var/lib/atcr/auth
      # Mount source code for Air hot reload
      - .:/app
      # Cache go modules between rebuilds
      - go-mod-cache:/go/pkg/mod
      # UI database (includes OAuth sessions, devices, and Jetstream cache)
      - atcr-ui:/var/lib/atcr
    restart: unless-stopped
@@ -35,29 +48,47 @@ services:
    # - Manifests/Tags -> ATProto PDS (via middleware)
    # - Blobs/Layers -> Hold service (via ProxyBlobStore)
    # - OAuth tokens -> SQLite database (atcr-ui volume)
    # - No config.yml needed - all config via environment variables

  atcr-hold:
    env_file:
      - ../atcr-secrets.env  # Load S3/Storj credentials from external file
      - ../atcr-secrets.env  # Load S3/Storj credentials from external file
    # Base config: config-hold.example.yaml (passed via Air entrypoint)
    # Env vars below override config file values for local dev
    environment:
      HOLD_PUBLIC_URL: http://172.28.0.3:8080
      HOLD_OWNER: did:plc:pddp4xt5lgnv2qsegbzzs4xg
      HOLD_PUBLIC: false
      # STORAGE_DRIVER: filesystem
      # STORAGE_ROOT_DIR: /var/lib/atcr/hold
      TEST_MODE: true
      # DISABLE_PRESIGNED_URLS: true
      # Storage config comes from env_file (STORAGE_DRIVER, AWS_*, S3_*)
      HOLD_SERVER_APPVIEW_DID: did:web:172.28.0.2%3A5000
      HOLD_SCANNER_SECRET: dev-secret
      HOLD_SERVER_PUBLIC_URL: http://172.28.0.3:8080
      HOLD_REGISTRATION_OWNER_DID: did:plc:pddp4xt5lgnv2qsegbzzs4xg
      HOLD_REGISTRATION_ALLOW_ALL_CREW: true
      HOLD_SERVER_TEST_MODE: true
      HOLD_LOG_LEVEL: debug
      LOG_SHIPPER_BACKEND: victoria
      LOG_SHIPPER_URL: http://172.28.0.10:9428
      # S3 storage config comes from env_file (AWS_*, S3_*)
    # Limit local Docker logs - real logs go to Victoria Logs
    # Local logs just for live tailing (docker logs -f)
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "1"
    build:
      context: .
      dockerfile: Dockerfile.hold
    image: atcr-hold:latest
      dockerfile: Dockerfile.dev
      args:
        AIR_CONFIG: .air.hold.toml
        BILLING_ENABLED: "true"
    image: atcr-hold-dev:latest
    container_name: atcr-hold
    ports:
      - "8080:8080"
    # volumes:
    #   - atcr-hold:/var/lib/atcr/hold
    volumes:
      # Mount source code for Air hot reload
      - .:/app
      # Cache go modules between rebuilds
      - go-mod-cache:/go/pkg/mod
      # PDS data (carstore SQLite + signing keys)
      - atcr-hold:/var/lib/atcr-hold
    restart: unless-stopped
    dns:
      - 8.8.8.8
@@ -66,6 +97,23 @@ services:
      atcr-network:
        ipv4_address: 172.28.0.3

  # Victoria Logs for centralized log storage
  # Uncomment to enable, then set LOG_SHIPPER_* env vars above
  victorialogs:
    image: victoriametrics/victoria-logs:latest
    container_name: victorialogs
    ports:
      - "9428:9428"
    volumes:
      - victorialogs-data:/victoria-logs-data
    command:
      - "-storageDataPath=/victoria-logs-data"
      - "-retentionPeriod=7d"
    restart: unless-stopped
    networks:
      atcr-network:
        ipv4_address: 172.28.0.10

networks:
  atcr-network:
    driver: bridge
@@ -77,3 +125,5 @@ volumes:
  atcr-hold:
  atcr-auth:
  atcr-ui:
  go-mod-cache:
  victorialogs-data:
1403 docs/ADMIN_PANEL.md (Normal file): file diff suppressed because it is too large
@@ -1,23 +1,51 @@
# ATCR AppView UI - Future Features
# ATCR UI - Feature Roadmap

This document outlines potential features for future versions of the ATCR AppView UI, beyond the V1 MVP. These are ideas to consider as the project matures and user needs evolve.
This document tracks the status of ATCR features beyond the V1 MVP. Features are marked with their current status:

- **DONE** — Fully implemented and shipping
- **PARTIAL** — Some parts implemented
- **BACKEND ONLY** — Backend exists, no UI yet
- **NOT STARTED** — Future work
- **BLOCKED** — Waiting on external dependency

---

## What's Already Built (not in original roadmap)

These features were implemented but weren't in the original future features list:

| Feature | Location | Notes |
|---------|----------|-------|
| **Billing (Stripe)** | `pkg/hold/billing/` | Checkout sessions, customer portal, subscription webhooks, tier upgrades. Build with `-tags billing`. |
| **Garbage collection** | `pkg/hold/gc/` | Mark-and-sweep for orphaned blobs. Preview (dry-run) and execute modes. Triggered from hold admin UI. |
| **libSQL embedded replicas** | AppView + Hold | Sync to Turso, Bunny DB, or self-hosted libsql-server. Configurable sync interval. |
| **Hold successor/migration** | `pkg/hold/` | Promote a hold as successor to migrate users to new storage. |
| **Relay management** | Hold admin | Manage firehose relay connections from admin panel. |
| **Data export** | `pkg/appview/handlers/export.go` | GDPR-compliant export of all user data from AppView + all holds where user is member/captain. |
| **Dark/light mode** | AppView UI | System preference detection, toggle, localStorage persistence. |
| **Credential helper install page** | `/install` | Install scripts for macOS/Linux/Windows, version API. |
| **Stars** | AppView UI | Star/unstar repos stored as `io.atcr.star` ATProto records, counts displayed. |

---

## Advanced Image Management

### Multi-Architecture Image Support
### Multi-Architecture Image Support — DONE (display) / NOT STARTED (creation)

**Display image indexes:**
- Show when a tag points to an image index (multi-arch manifest)
- Display all architectures/platforms in the index (linux/amd64, linux/arm64, darwin/arm64, etc.)
**Display image indexes — DONE:**
- Show when a tag points to an image index (multi-arch manifest) — `IsMultiArch` flag, "Multi-arch" badge
- Display all architectures/platforms in the index — platform badges (e.g., linux/amd64, linux/arm64)
- Allow viewing individual manifests within the index
- Show platform-specific layer details
- Show platform-specific details

**Image index creation:**
**Image index creation — NOT STARTED:**
- UI for combining multiple single-arch manifests into an image index
- Automatic platform detection from manifest metadata
- Validate that all manifests are for the same image (different platforms)

### Layer Inspection & Visualization
### Layer Inspection & Visualization — NOT STARTED

DB stores layer metadata (digest, size, media type, layer index) but there's no UI for any of this.

**Layer details page:**
- Show Dockerfile command that created each layer (if available in history)
@@ -30,594 +58,409 @@ This document outlines potential features for future versions of the ATCR AppVie
- Calculate storage savings from layer sharing
- Identify duplicate layers with different digests (potential optimization)

### Image Operations
### Image Operations — PARTIAL (delete only)

**Tag Management:**
- **Tag promotion workflow:** dev → staging → prod with one click
- **Tag aliases:** Create multiple tags pointing to same digest
- **Tag patterns:** Auto-tag based on git commit, semantic version, date
- **Tag protection:** Mark tags as immutable (prevent deletion/re-pointing)
**Tag/manifest deletion — DONE:**
- Delete tags with `DeleteTagHandler` (cascade + confirmation modal)
- Delete manifests with `DeleteManifestHandler` (handles tagged manifests gracefully)

**Image Copying:**
**Tag Management — NOT STARTED:**
- Tag promotion workflow (dev → staging → prod)
- Tag aliases (multiple tags → same digest)
- Tag patterns (auto-tag based on git commit, semantic version, date)
- Tag protection (mark tags as immutable)

**Image Copying — NOT STARTED:**
- Copy image from one repository to another
- Copy image from another user's repository (fork)
- Bulk copy operations (copy all tags, copy all manifests)
- Bulk copy operations

**Image History:**
- Timeline view of tag changes (what digest did "latest" point to over time)
- Rollback functionality (revert tag to previous digest)
- Audit log of all image operations (push, delete, tag changes)
**Image History — NOT STARTED:**
- Timeline view of tag changes
- Rollback functionality
- Audit log of image operations

### Vulnerability Scanning
### Vulnerability Scanning — DONE (backend) / NOT STARTED (UI)

**Integration with security scanners:**
- **Trivy** - Comprehensive vulnerability scanner
- **Grype** - Anchore's vulnerability scanner
- **Clair** - CoreOS vulnerability scanner
**Backend — DONE:**
- Separate scanner service (`scanner/` module) with Syft (SBOM) + Grype (vulnerabilities)
- WebSocket-based job queue connecting scanner to hold service
- Priority queue with tier-based scheduling (quartermaster > bosun > deckhand)
- Scan results stored as ORAS artifacts in S3, referenced in hold PDS
- Automatic scanning dispatched by hold on manifest push
- See `docs/SBOM_SCANNING.md`

**Features:**
- Automatic scanning on image push
**AppView UI — NOT STARTED:**
- Display CVE count by severity (critical, high, medium, low)
- Show detailed CVE information (description, CVSS score, affected packages)
- Filter images by vulnerability status
- Subscribe to CVE notifications for your images
- Compare vulnerability status across tags/versions

### Image Signing & Verification
### Image Signing & Verification — NOT STARTED

**Cosign/Sigstore integration:**
- Sign images with Cosign
Concept doc exists at `docs/SIGNATURE_INTEGRATION.md` but no implementation.

- Sign images
- Display signature verification status
- Show keyless signing certificate chains
- Integrate with transparency log (Rekor)

**Features:**
- UI for signing images (generate key, sign manifest)
- Verify signatures before pull (browser-based verification)
- Display signature metadata (signer, timestamp, transparency log entry)
- Display signature metadata
- Require signatures for protected repositories

### SBOM (Software Bill of Materials)
### SBOM (Software Bill of Materials) — DONE (backend) / NOT STARTED (UI)

**SBOM generation and display:**
- Generate SBOM on push (SPDX or CycloneDX format)
**Backend — DONE:**
- Syft generates SPDX JSON format SBOMs
- Stored as ORAS artifacts (referenced via `artifactType: "application/spdx+json"`)
- Blobs in S3, metadata in hold's PDS
- Accessible via ORAS CLI and hold XRPC endpoints
**UI — NOT STARTED:**
- Display package list from SBOM
- Show license information
- Link to upstream package sources
- Compare SBOMs across versions (what packages changed)
- Compare SBOMs across versions

**SBOM attestation:**
- Store SBOM as attestation (in-toto format)
- Link SBOM to image signature
- Verify SBOM integrity

---
## Hold Management Dashboard
## Hold Management Dashboard — DONE (on hold admin panel)

### Hold Discovery & Registration
Hold management is implemented as a separate admin panel on the hold service itself (`pkg/hold/admin/`), not in the AppView UI. This makes sense architecturally — hold owners manage their own holds.

**Create hold:**
### Hold Discovery & Registration — PARTIAL

**Hold registration — DONE:**
- Automatic registration on hold startup (captain + crew records created in embedded PDS)
- Auto-detection of region from cloud metadata

**NOT STARTED:**
- UI wizard for deploying hold service
- One-click deployment to Fly.io, Railway, Render
- Configuration generator (environment variables, docker-compose)
- Test connectivity after deployment
- One-click deployment to cloud platforms
- Configuration generator
- Test connectivity UI

**Hold registration:**
- Automatic registration via OAuth (already implemented)
- Manual registration form (for existing holds)
- Bulk import holds from JSON/YAML
### Hold Configuration — DONE (admin panel)

### Hold Configuration

**Hold settings page:**
- Edit hold metadata (name, description, icon)
**Hold settings — DONE (hold admin):**
- Toggle public/private flag
- Configure storage backend (S3, Storj, Minio, filesystem)
- Set storage quotas and limits
- Configure retention policies (auto-delete old blobs)
- Toggle allow-all-crew
- Toggle Bluesky post announcements
- Set successor hold DID for migration
- Writes changes back to YAML config file

**Hold credentials:**
- Rotate S3 access keys
- Test hold connectivity
- View hold service logs (if accessible)
**Storage config — YAML-only:**
- S3 credentials, region, bucket, endpoint, CDN pull zone all configured via YAML
- No UI for editing S3 credentials or rotating keys

### Crew Management
**Quotas — DONE (read-only UI):**
- Tier-based limits (deckhand 5GB, bosun 50GB, quartermaster 100GB)
- Per-user quota tracking and display in admin
- Not editable via UI (requires YAML change)

**Invite crew members:**
- Send invitation links (OAuth-based)
- Invite by handle or DID
- Set crew permissions (read-only, read-write, admin)
- Bulk invite (upload CSV)
**NOT STARTED:**
- Retention policies (auto-delete old blobs)
- Hold service log viewer

**Crew list:**
- Display all crew members
- Show last activity (last push, last pull)
### Crew Management — DONE (hold admin panel)

**Implemented in `pkg/hold/admin/handlers_crew.go`:**
- Add crew by DID with role, permissions (`blob:read`, `blob:write`, `crew:admin`), and tier
- Crew list showing handle, role, permissions, tier, usage, quota
- Edit crew permissions and tier
- Remove crew members
- Change crew permissions
- Bulk JSON import/export with deduplication (`handlers_crew_io.go`)

**Crew request workflow:**
- Allow users to request access to a hold
- Hold owner approves/rejects requests
- Notification system for requests
**NOT STARTED:**
- Invitation links (OAuth-based, currently must know DID)
- Invite by handle (currently DID-only)
- Crew request workflow (users can't self-request access)
- Approval/rejection flow

### Hold Analytics
### Hold Analytics — PARTIAL

**Storage metrics:**
- Total storage used (bytes)
- Blob count
- Largest blobs
- Growth over time (chart)
- Deduplication savings
**Storage metrics — DONE (hold admin):**
- Total blobs, total size, unique digests
- Per-user quota stats (total size, blob count)
- Top users by storage (lazy-loaded HTMX partial)
- Crew count and tier distribution

**Access metrics:**
- Total downloads (pulls)
- Bandwidth used
- Popular images (most pulled)
- Geographic distribution (if available)
- Access logs (who pulled what, when)
**NOT STARTED:**
- Access metrics (downloads, pulls, bandwidth)
- Growth over time charts
- Cost estimation
- Geographic distribution
- Access logs

**Cost estimation:**
- Calculate S3 storage costs
- Calculate bandwidth costs
- Compare costs across storage backends
- Budget alerts (notify when approaching limit)

---
## Discovery & Social Features
|
||||
|
||||
### Federated Browse & Search
|
||||
### Federated Browse & Search — PARTIAL
|
||||
|
||||
**Enhanced discovery:**
|
||||
- Full-text search across all ATCR images (repository name, tag, description)
|
||||
**Basic search — DONE:**
|
||||
- Full-text search across handles, DIDs, repo names, and annotations
|
||||
- Search UI with HTMX lazy loading and pagination
|
||||
- Navigation bar search component
|
||||
|
||||
**NOT STARTED:**
|
||||
- Filter by user, hold, architecture, date range
|
||||
- Sort by popularity, recency, size
|
||||
- Advanced query syntax (e.g., "user:alice tag:latest arch:arm64")
|
||||
- Advanced query syntax
|
||||
- Popular/trending images
|
||||
- Categories and user-defined tags
|
||||
|
||||
**Popular/Trending:**
|
||||
- Most pulled images (past day, week, month)
|
||||
- Fastest growing images (new pulls)
|
||||
- Recently updated images (new tags)
|
||||
- Community favorites (curated list)
|
||||
### Sailor Profiles — PARTIAL
|
||||
|
||||
**Categories & Tags:**
|
||||
- User-defined categories (web, database, ml, etc.)
|
||||
- Tag images with keywords (nginx, proxy, reverse-proxy)
|
||||
- Browse by category
|
||||
- Tag cloud visualization
|
||||
**Public profile page — DONE:**
|
||||
- `/u/{handle}` shows user's avatar, handle, DID, and all public repositories
|
||||
- OpenGraph meta tags and JSON-LD structured data
|
||||
|
||||
### Sailor Profiles (Public)
|
||||
|
||||
**Public profile page:**
|
||||
- `/ui/@alice` shows alice's public repositories
|
||||
- Bio, avatar, website links
|
||||
**NOT STARTED:**
|
||||
- Bio/description field
|
||||
- Website links
|
||||
- Statistics (total images, total pulls, joined date)
|
||||
- Pinned repositories (showcase best images)
|
||||
- Pinned/featured repositories
|
||||
|
||||
**Social features:**
|
||||
- Follow other sailors (get notified of their pushes)
|
||||
- Star repositories (bookmark favorites)
|
||||
- Comment on images (feedback, questions)
|
||||
### Social Features — PARTIAL (stars only)

**Stars — DONE:**

- Star/unstar repositories, stored as `io.atcr.star` ATProto records
- Star counts displayed on repository pages

**NOT STARTED:**

- Follow other sailors (get notified of their pushes)
- Comment on images (feedback, questions)
- Like/upvote images
- Activity feed (timeline of followed sailors' activity, recent pushes from the community)
- Federated timeline / custom feeds (real-time ATProto-native feed of container pushes; algorithmic and curated feeds, e.g. "show me all ML images")
- Sharing images to Bluesky/ATProto social apps
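To make the star mechanism concrete, a plausible shape for such a record is sketched below. This is purely illustrative — the actual `io.atcr.star` lexicon may use different field names:

```json
{
  "$type": "io.atcr.star",
  "subject": {
    "did": "did:plc:alice123",
    "repository": "nginx"
  },
  "createdAt": "2025-10-05T12:34:56Z"
}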
---

## Access Control & Permissions
### Hold-Level Access Control — DONE

- Public/private hold toggle (admin UI + OCI enforcement)
- Crew permissions: `blob:read`, `blob:write`, `crew:admin`
- `blob:write` implicitly grants `blob:read`
- Captain has all permissions implicitly
- See `docs/BYOS.md`
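The implication rules above (the captain holds everything; `blob:write` implies `blob:read`) can be sketched as a small check. The type and function names here are illustrative, not the actual ATCR code:

```go
package main

import "fmt"

// Permission names mirror the crew permissions listed above.
const (
	BlobRead  = "blob:read"
	BlobWrite = "blob:write"
	CrewAdmin = "crew:admin"
)

// Member is a hypothetical crew-member record for illustration.
type Member struct {
	IsCaptain bool
	Perms     map[string]bool
}

// Has reports whether the member holds the given permission,
// applying both implication rules: the captain has every
// permission, and blob:write implicitly grants blob:read.
func (m Member) Has(perm string) bool {
	if m.IsCaptain {
		return true
	}
	if m.Perms[perm] {
		return true
	}
	if perm == BlobRead && m.Perms[BlobWrite] {
		return true
	}
	return false
}

func main() {
	writer := Member{Perms: map[string]bool{BlobWrite: true}}
	fmt.Println(writer.Has(BlobRead))  // true: write implies read
	fmt.Println(writer.Has(CrewAdmin)) // false: not granted
}
```

Centralizing the implications in one predicate keeps the OCI handlers from re-encoding the write-implies-read rule at every call site.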
### Repository-Level Permissions — BLOCKED

- **Private repositories are blocked by ATProto** — no private-records support yet
- Repository-level permissions, collaborator invites, and read-only tokens (e.g., for CI/CD) all depend on this
- May require a proxy layer or encrypted blobs with shared keys until ATProto adds private record support
### Team/Organization Accounts — NOT STARTED

- Organization accounts (e.g., `@acme-corp`) with member roles, RBAC, SSO, and audit logs
- Organization-owned repositories, with billing and quotas at the org level
- Likely a later-stage feature
---

## Analytics & Monitoring

### Dashboard — PARTIAL
**Hold dashboard — DONE (hold admin):**

- Storage usage, crew count, tier distribution

**Personal dashboard — NOT STARTED:**

- Overview of your images, holds, and activity
- Quick stats (total size, pull count, last push)
- Recent activity, alerts, and notifications
### Pull Analytics — NOT STARTED

- Pull count per image/tag
- Pull count by client (Docker, containerd, podman), by geography, and over time
- Failed pulls (errors, retries)
- User analytics (authenticated vs anonymous, repeat vs new users)
### Alerts & Notifications — NOT STARTED

- Alert types: storage quota exceeded, high bandwidth usage, new vulnerability detected, invalid image signature, hold service down, crew member joined/left
- Notification channels: email, webhook (POST to a custom URL), ATProto in-app notifications (future), Slack/Discord/Telegram integrations
---

## Developer Tools & Integrations

### Credential Helper — DONE
- Install page at `/install` with shell scripts
- Version API endpoint for automatic updates

### API Documentation — NOT STARTED
- Swagger/OpenAPI specs for the OCI and UI APIs
- Interactive API explorer (try API calls in the browser)
- Code examples in multiple languages (curl, Go, Python, JavaScript)
- SDK/client libraries (Go, JavaScript/TypeScript, Python, Rust)
### Webhooks — NOT STARTED

- Repository-level webhook registration with per-event selection
- Events: `manifest.pushed`, `manifest.deleted`, `tag.created`, `tag.updated`, `tag.deleted`, `scan.completed`
- Test payloads, retries, and delivery history

### CI/CD Integration — NOT STARTED

- GitHub Actions, GitLab CI, CircleCI, Jenkins, and Drone example workflows
- Pre-built actions/plugins for ATCR
- Layer cache optimization for faster builds
- Build status badges (show build status in the README)
### Infrastructure as Code — PARTIAL

**DONE:**

- Custom UpCloud deployment tool (`deploy/upcloud/`) with Go-based provisioning, cloud-init, systemd, and config templates
- Docker Compose for dev and production

**NOT STARTED:**

- Terraform modules
- Helm charts
- Kubernetes manifests (only an example verification webhook exists)
- GitOps integrations (ArgoCD, FluxCD; automated deployments on tag push)
---

## Documentation & Onboarding — PARTIAL

**DONE:**

- Install page with credential helper setup
- Learn more page
- Internal developer docs (`docs/`)

**NOT STARTED:**

- Interactive onboarding wizard (step-by-step first push, setup verification, completion checklist)
- Product tour / tooltips for new users
- Help center with FAQs
- Video tutorials
- Comprehensive user-facing documentation site
**Planned documentation sections:**

- Quickstart guide and detailed user manual
- API reference and ATProto record schemas
- Deployment guides (hold service, AppView)
- Troubleshooting guide and security best practices
- Video tutorials (how-to videos, screen recordings, conference talks)

**Community & support (planned):**

- Discussion forum (Discourse or GitHub Discussions), Discord/Slack community, monthly community calls
- Support channels: email support, live chat (paid tiers), priority support (enterprise)
---

## Advanced ATProto Integration

### Data Export — DONE
- GDPR-compliant data export (`ExportUserDataHandler`)
- Fetches data from the AppView DB plus all holds where the user is a member or captain

### Record Viewer — NOT STARTED

- Browse your `io.atcr.*` records with a raw JSON view (CID, commit info, timestamp)
- Record history and diff viewer
- Links to ATP URIs (`at://did/collection/rkey`)
- Export/import of records as JSON; CAR file export (ATProto-native format)
### PDS Integration — NOT STARTED

- Multi-PDS support (manage images across accounts, unified view of all your images)
- PDS health monitoring (connection status, unreachable alerts, fallback to an alternate PDS)
- PDS migration tools (migrate images between PDSs, bulk-update hold endpoints)
- "Verify on PDS" button (proves the manifest is in your PDS)
### Federation — NOT STARTED

- Cross-AppView image pulls (pull from other ATCR AppViews)
- AppView discovery (find other ATCR instances)
- Federated search (search across multiple AppViews)
- Data sovereignty tooling ("clone my registry" backup guide, full registry export)
---

## UI/UX Enhancements

### Theming — PARTIAL
**DONE:**

- Light/dark mode with system preference detection and toggle
- Responsive design (Tailwind/DaisyUI, mobile-friendly)
- PWA manifest with icons (no service worker yet)

**NOT STARTED:**

- Custom themes (nautical, cyberpunk, minimalist)
- WCAG 2.1 AA accessibility audit; high contrast mode
- Internationalization (i18n) and RTL support
- Native mobile apps (iOS, Android)
### Performance — PARTIAL

**DONE:**

- HTMX lazy loading for data-heavy partials
- Efficient server-side rendering

**NOT STARTED:**

- Service worker for offline caching
- Virtual scrolling for large lists
- Code splitting (load only what's needed)
- GraphQL API (fetch only required fields)
- Real-time updates in the UI (WebSocket or server-sent events for the firehose)
- Edge caching (Cloudflare, Fastly)
---

## Enterprise Features — NOT STARTED (except billing)

### Billing — DONE

- Stripe integration (`pkg/hold/billing/`, requires the `-tags billing` build tag)
- Checkout sessions, customer portal, subscription webhooks
- Tier upgrades/downgrades
### Everything Else — NOT STARTED

- Organization accounts with SSO (SAML, OIDC)
- RBAC and audit logs for compliance
- SOC 2, HIPAA, GDPR compliance tooling (data export exists, see above); retention policies
- Image scanning policy enforcement (block vulnerable images); malware and secrets scanning; content trust (require signed images)
- Paid tier SLAs (tiered storage quotas, support levels, uptime guarantees)
---

## Miscellaneous Ideas — NOT STARTED

These remain future ideas with no implementation:

- **Image build service** — cloud-based Dockerfile builds, auto-build on git push, build matrices
- **Registry mirroring** — pull-through cache for Docker Hub, ghcr.io, quay.io (configurable retention, allowlist/blocklist, cache-hit statistics)
- **Deployment tools** — one-click deploy to K8s, ECS, Fly.io; deployment tracking
- **Image recommendations** — ML-based "similar images" and "people also pulled"
- **Gamification** — achievement badges, leaderboards
- **Advanced search** — semantic/AI-powered search, saved searches

---

## Updated Priority List
**Already done (was "High Priority"):**

1. ~~Multi-architecture image support~~ — display working
2. ~~Vulnerability scanning integration~~ — backend complete
3. ~~Hold management dashboard~~ — implemented in the hold admin panel
4. ~~Basic search~~ — working

**Remaining high priority:**

1. Scan results UI in AppView (backend exists, just needs frontend)
2. SBOM display UI in AppView (backend exists, just needs frontend)
3. Webhooks for CI/CD integration
4. Enhanced search (filters, sorting, advanced queries)
5. Richer sailor profiles (bio, stats, pinned repos)

**Medium priority:**

1. Layer inspection UI
2. Pull analytics and monitoring
3. API documentation (Swagger/OpenAPI)
4. Tag management (promotion, protection, aliases)
5. Onboarding wizard / getting started guide

**Low priority / long-term:**

1. Enterprise features (SSO, compliance, SLA)
2. Image build service
3. Registry mirroring
4. Federation features
5. Internationalization

**Blocked on external dependencies:**

1. Private repositories (requires ATProto private records)
2. Federated timeline (requires ATProto feed infrastructure)

---

**Note:** This is a living document. Features may be added, removed, or reprioritized based on user feedback, technical feasibility, and ATProto ecosystem evolution.

*Last audited: 2026-02-12*

# ATCR AppView UI - Version 1 Specification

## Overview

The ATCR AppView UI provides a web interface for discovering, managing, and configuring container images in the ATCR registry. Version 1 focuses on three core pages that leverage existing functionality:

1. **Front Page** - Federated image discovery via firehose
2. **Settings Page** - Profile and hold configuration
3. **Personal Page** - Manage your images and tags

## Architecture

### Tech Stack

- **Backend:** Go (existing AppView codebase)
- **Frontend:** TBD (Go templates/Templ or separate SPA)
- **Database:** SQLite (firehose data cache)
- **Styling:** TBD (plain CSS, Tailwind, etc.)
- **Authentication:** OAuth with DPoP (reuse existing implementation)
### Components

```
┌─────────────────────────────────────────────────────────────┐
│                      Web UI (Browser)                       │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                     AppView HTTP Server                     │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│  │ UI Endpoints │  │   OCI API    │  │ OAuth Server │       │
│  │    /ui/*     │  │    /v2/*     │  │   /auth/*    │       │
│  └──────────────┘  └──────────────┘  └──────────────┘       │
└─────────────────────────────────────────────────────────────┘
                              │
                    ┌─────────┴─────────┐
                    ▼                   ▼
          ┌──────────────────┐  ┌──────────────────┐
          │ SQLite Database  │  │  ATProto Client  │
          │ (Firehose cache) │  │ (PDS operations) │
          └──────────────────┘  └──────────────────┘
                    ▲
          ┌──────────────────┐
          │ Firehose Worker  │
          │   (Background)   │
          └──────────────────┘
                    ▲
                    │
          ┌──────────────────┐
          │ ATProto Firehose │
          │ (Jetstream/Relay)│
          └──────────────────┘
```
## Database Schema

SQLite database for caching firehose data and enabling fast queries.

### Tables

**users**

```sql
CREATE TABLE users (
    did TEXT PRIMARY KEY,
    handle TEXT NOT NULL,
    pds_endpoint TEXT NOT NULL,
    last_seen TIMESTAMP NOT NULL,
    UNIQUE(handle)
);
CREATE INDEX idx_users_handle ON users(handle);
```
**manifests**

```sql
CREATE TABLE manifests (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    did TEXT NOT NULL,
    repository TEXT NOT NULL,
    digest TEXT NOT NULL,
    hold_endpoint TEXT NOT NULL,
    schema_version INTEGER NOT NULL,
    media_type TEXT NOT NULL,
    config_digest TEXT,
    config_size INTEGER,
    raw_manifest TEXT NOT NULL, -- JSON blob
    created_at TIMESTAMP NOT NULL,
    UNIQUE(did, repository, digest),
    FOREIGN KEY(did) REFERENCES users(did) ON DELETE CASCADE
);
CREATE INDEX idx_manifests_did_repo ON manifests(did, repository);
CREATE INDEX idx_manifests_created_at ON manifests(created_at DESC);
CREATE INDEX idx_manifests_digest ON manifests(digest);
```
**layers**

```sql
CREATE TABLE layers (
    manifest_id INTEGER NOT NULL,
    digest TEXT NOT NULL,
    size INTEGER NOT NULL,
    media_type TEXT NOT NULL,
    layer_index INTEGER NOT NULL,
    PRIMARY KEY(manifest_id, layer_index),
    FOREIGN KEY(manifest_id) REFERENCES manifests(id) ON DELETE CASCADE
);
CREATE INDEX idx_layers_digest ON layers(digest);
```
**tags**

```sql
CREATE TABLE tags (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    did TEXT NOT NULL,
    repository TEXT NOT NULL,
    tag TEXT NOT NULL,
    digest TEXT NOT NULL,
    created_at TIMESTAMP NOT NULL,
    UNIQUE(did, repository, tag),
    FOREIGN KEY(did) REFERENCES users(did) ON DELETE CASCADE
);
CREATE INDEX idx_tags_did_repo ON tags(did, repository);
```
**firehose_cursor**

```sql
CREATE TABLE firehose_cursor (
    id INTEGER PRIMARY KEY CHECK (id = 1),
    cursor INTEGER NOT NULL,
    updated_at TIMESTAMP NOT NULL
);
```
## Firehose Worker

Background goroutine that subscribes to the ATProto firehose and populates the database.

### Implementation

```go
// pkg/ui/firehose/worker.go

type Worker struct {
    db        *sql.DB
    jetstream *JetstreamClient
    resolver  *atproto.Resolver
    stopCh    chan struct{}
}

func (w *Worker) Start() error {
    // Resume from the last persisted cursor so no events are
    // missed across restarts.
    cursor := w.loadCursor()

    // Subscribe to the ATCR collections only.
    events := w.jetstream.Subscribe(cursor, []string{
        "io.atcr.manifest",
        "io.atcr.tag",
    })

    for {
        select {
        case event := <-events:
            if err := w.handleEvent(event); err != nil {
                log.Printf("firehose: failed to handle %s event: %v", event.Collection, err)
            }
        case <-w.stopCh:
            return nil
        }
    }
}

func (w *Worker) handleEvent(event FirehoseEvent) error {
    switch event.Collection {
    case "io.atcr.manifest":
        return w.handleManifest(event)
    case "io.atcr.tag":
        return w.handleTag(event)
    }
    return nil
}
```
### Event Handling

**Manifest create:**

- Resolve DID → handle, PDS endpoint
- Insert/update user record
- Parse manifest JSON
- Insert manifest record
- Insert layer records

**Tag create/update:**

- Insert/update tag record
- Link to existing manifest

**Record deletion:**

- Delete from database (cascade handles related records)
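The "parse manifest JSON" and "insert layer records" steps above can be sketched as a pure parsing helper that maps a raw OCI manifest onto the `manifests`/`layers` rows of the schema. The types and function name are illustrative, not the actual ATCR code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// layerRow mirrors one row of the layers table.
type layerRow struct {
	Digest    string
	Size      int64
	MediaType string
	Index     int
}

// manifestRow holds the manifest-level columns plus its layer rows.
type manifestRow struct {
	MediaType    string
	ConfigDigest string
	ConfigSize   int64
	Layers       []layerRow
}

// parseManifest extracts the columns the firehose worker would
// insert for a manifest-create event.
func parseManifest(raw []byte) (*manifestRow, error) {
	var m struct {
		MediaType string `json:"mediaType"`
		Config    struct {
			Digest string `json:"digest"`
			Size   int64  `json:"size"`
		} `json:"config"`
		Layers []struct {
			Digest    string `json:"digest"`
			Size      int64  `json:"size"`
			MediaType string `json:"mediaType"`
		} `json:"layers"`
	}
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	row := &manifestRow{
		MediaType:    m.MediaType,
		ConfigDigest: m.Config.Digest,
		ConfigSize:   m.Config.Size,
	}
	// layer_index preserves the on-disk ordering of the layers.
	for i, l := range m.Layers {
		row.Layers = append(row.Layers, layerRow{l.Digest, l.Size, l.MediaType, i})
	}
	return row, nil
}

func main() {
	raw := []byte(`{"mediaType":"application/vnd.oci.image.manifest.v1+json",
		"config":{"digest":"sha256:cfg","size":100},
		"layers":[{"digest":"sha256:l0","size":500,"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip"}]}`)
	row, err := parseManifest(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(row.Layers), row.Layers[0].Digest) // 1 sha256:l0
}
```

Keeping the parse step free of database calls makes it easy to unit-test against sample manifests before wiring it into the worker's insert transaction.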
### Firehose Connection

Use Jetstream (bluesky-social/jetstream) or connect directly to a relay:

- **Jetstream:** WebSocket to `wss://jetstream.atproto.tools/subscribe`
- **Relay:** WebSocket to a relay (e.g., `wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos`)

Jetstream is simpler and filters events server-side.
## Page Specifications

### 1. Front Page - Federated Discovery

**URL:** `/ui/` or `/ui/explore`

**Purpose:** Discover recently pushed images across all ATCR users.

**Layout:**
```
┌─────────────────────────────────────────────────────────────┐
│ ATCR                             [Search] [@handle] [Login] │
├─────────────────────────────────────────────────────────────┤
│ Recent Pushes                                    [Filter ▼] │
│                                                             │
│ ┌───────────────────────────────────────────────────────┐   │
│ │ alice.bsky.social/nginx:latest                        │   │
│ │ sha256:abc123... • hold1.alice.com • 2 hours ago      │   │
│ │ [docker pull atcr.io/alice.bsky.social/nginx:latest]  │   │
│ └───────────────────────────────────────────────────────┘   │
│                                                             │
│ ┌───────────────────────────────────────────────────────┐   │
│ │ bob.dev/myapp:v1.2.3                                  │   │
│ │ sha256:def456... • atcr-storage.fly.dev • 5 hours ago │   │
│ │ [docker pull atcr.io/bob.dev/myapp:v1.2.3]            │   │
│ └───────────────────────────────────────────────────────┘   │
│                                                             │
│ [Load more...]                                              │
└─────────────────────────────────────────────────────────────┘
```

**Features:**

- List of recent pushes (manifests + tags)
- Show: handle, repository, tag, digest (truncated), timestamp, hold endpoint
- Copy-paste pull command with click-to-copy
- Filter by user (click a handle to filter)
- Search by repository name or tag
- Click a manifest to view details (modal or dedicated page)
- Pagination (50 items per page)
**API Endpoint:**

```
GET /ui/api/recent-pushes
Query params:
- limit (default: 50)
- offset (default: 0)
- user (optional: filter by DID or handle)
- repository (optional: filter by repo name)

Response:
{
  "pushes": [
    {
      "did": "did:plc:alice123",
      "handle": "alice.bsky.social",
      "repository": "nginx",
      "tag": "latest",
      "digest": "sha256:abc123...",
      "hold_endpoint": "https://hold1.alice.com",
      "created_at": "2025-10-05T12:34:56Z",
      "pull_command": "docker pull atcr.io/alice.bsky.social/nginx:latest"
    }
  ],
  "total": 1234,
  "offset": 0,
  "limit": 50
}
```

**Manifest Details Modal:**

- Full manifest JSON (syntax highlighted)
- Layer list with digests and sizes
- Link to the ATProto record (`at://did/io.atcr.manifest/rkey`)
- Architecture, OS, labels
- Creation timestamp
### 2. Settings Page

**URL:** `/ui/settings`

**Auth:** Requires login (OAuth)

**Purpose:** Configure profile and hold preferences.

**Layout:**
```
┌─────────────────────────────────────────────────────────────┐
│ ATCR                                          [@alice] [⚙️] │
├─────────────────────────────────────────────────────────────┤
│ Settings                                                    │
│                                                             │
│ ┌─ Identity ────────────────────────────────────────────┐   │
│ │ Handle: alice.bsky.social                             │   │
│ │ DID:    did:plc:alice123abc (read-only)               │   │
│ │ PDS:    https://bsky.social (read-only)               │   │
│ └───────────────────────────────────────────────────────┘   │
│                                                             │
│ ┌─ Default Hold ────────────────────────────────────────┐   │
│ │ Current: https://hold1.alice.com                      │   │
│ │                                                       │   │
│ │ [Dropdown: Select from your holds ▼]                  │   │
│ │   • https://hold1.alice.com (Your BYOS)               │   │
│ │   • https://storage.atcr.io (AppView default)         │   │
│ │   • [Custom URL...]                                   │   │
│ │                                                       │   │
│ │ Custom hold URL: [_____________________]              │   │
│ │                                                       │   │
│ │ [Save]                                                │   │
│ └───────────────────────────────────────────────────────┘   │
│                                                             │
│ ┌─ OAuth Session ───────────────────────────────────────┐   │
│ │ Logged in as: alice.bsky.social                       │   │
│ │ Session expires: 2025-10-06 14:23:00 UTC              │   │
│ │ [Re-authenticate]                                     │   │
│ └───────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
```

**Features:**

- Display current identity (handle, DID, PDS)
- Default hold configuration:
  - Dropdown showing the user's `io.atcr.hold` records (queried from the PDS)
  - Option to select the AppView's default storage endpoint
  - Manual entry for a custom hold URL
  - "Save" button updates `io.atcr.sailor.profile.defaultHold`
- OAuth session status
- Re-authenticate button (redirects to the OAuth flow)

**API Endpoints:**
```
GET /ui/api/profile
Auth: Required (session cookie)
Response:
{
  "did": "did:plc:alice123",
  "handle": "alice.bsky.social",
  "pds_endpoint": "https://bsky.social",
  "default_hold": "https://hold1.alice.com",
  "holds": [
    {
      "endpoint": "https://hold1.alice.com",
      "name": "My BYOS Storage",
      "public": false
    }
  ],
  "session_expires_at": "2025-10-06T14:23:00Z"
}

POST /ui/api/profile/default-hold
Auth: Required
Body:
{
  "hold_endpoint": "https://hold1.alice.com"
}
Response:
{
  "success": true
}
```
|
||||
### 3. Personal Page - Your Images

**URL:** `/ui/images` or `/ui/@{handle}`

**Auth:** Requires login (OAuth)

**Purpose:** Manage your container images and tags.

**Layout:**
```
┌─────────────────────────────────────────────────────────────┐
│ ATCR                                        [@alice] [⚙️]   │
├─────────────────────────────────────────────────────────────┤
│ Your Images                                                 │
│                                                             │
│  ┌─ nginx ──────────────────────────────────────────────┐  │
│  │ 3 tags • 5 manifests • Last push: 2 hours ago        │  │
│  │                                                      │  │
│  │ Tags:                                                │  │
│  │  ┌────────────────────────────────────────────────┐  │  │
│  │  │ latest → sha256:abc123... (2 hours ago) [✏️][🗑️]│  │  │
│  │  │ v1.25  → sha256:def456... (1 day ago)   [✏️][🗑️]│  │  │
│  │  │ alpine → sha256:ghi789... (3 days ago)  [✏️][🗑️]│  │  │
│  │  └────────────────────────────────────────────────┘  │  │
│  │                                                      │  │
│  │ Manifests:                                           │  │
│  │  ┌────────────────────────────────────────────────┐  │  │
│  │  │ sha256:abc123... • 45MB • hold1.alice.com      │  │  │
│  │  │   linux/amd64 • 5 layers • [View] [Delete]     │  │  │
│  │  │ sha256:def456... • 42MB • hold1.alice.com      │  │  │
│  │  │   linux/amd64 • 5 layers • [View] [Delete]     │  │  │
│  │  └────────────────────────────────────────────────┘  │  │
│  └──────────────────────────────────────────────────────┘  │
│                                                             │
│  ┌─ myapp ──────────────────────────────────────────────┐  │
│  │ 2 tags • 2 manifests • Last push: 1 day ago          │  │
│  │ [Expand ▼]                                           │  │
│  └──────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
```

**Features:**

**Repository List:**
- Group manifests by repository name
- Show: tag count, manifest count, last push time
- Collapsible/expandable repository cards

**Repository Details (Expanded):**
- **Tags:** Table showing tag → manifest digest → timestamp
  - Edit tag: Modal to re-point tag to different manifest digest
  - Delete tag: Confirm dialog, removes `io.atcr.tag` record from PDS
- **Manifests:** List of all manifests in repository
  - Show: digest (truncated), size, hold endpoint, architecture, layer count
  - View: Open manifest details modal (same as front page)
  - Delete: Confirm dialog with warning if manifest is tagged

**Actions:**
- Copy pull command for each tag
- Edit tag (re-point to different digest)
- Delete tag
- Delete manifest (with validation)

**API Endpoints:**

```
GET /ui/api/images
Auth: Required
Response:
{
  "repositories": [
    {
      "name": "nginx",
      "tag_count": 3,
      "manifest_count": 5,
      "last_push": "2025-10-05T10:23:45Z",
      "tags": [
        {
          "tag": "latest",
          "digest": "sha256:abc123...",
          "created_at": "2025-10-05T10:23:45Z"
        }
      ],
      "manifests": [
        {
          "digest": "sha256:abc123...",
          "size": 47185920,
          "hold_endpoint": "https://hold1.alice.com",
          "architecture": "amd64",
          "os": "linux",
          "layer_count": 5,
          "created_at": "2025-10-05T10:23:45Z",
          "tagged": true
        }
      ]
    }
  ]
}

PUT /ui/api/images/{repository}/tags/{tag}
Auth: Required
Body:
{
  "digest": "sha256:new-digest..."
}
Response:
{
  "success": true
}

DELETE /ui/api/images/{repository}/tags/{tag}
Auth: Required
Response:
{
  "success": true
}

DELETE /ui/api/images/{repository}/manifests/{digest}
Auth: Required
Response:
{
  "success": true
}
```

## Authentication

### OAuth Login Flow

Reuse the existing OAuth implementation from the credential helper and AppView.

**Login Endpoint:** `/auth/oauth/login`

**Flow:**
1. User clicks "Login" on UI
2. Redirects to `/auth/oauth/login?return_to=/ui/images`
3. User enters handle (e.g., "alice.bsky.social")
4. Server resolves handle → DID → PDS → OAuth server
5. Server initiates OAuth flow with PAR + DPoP
6. User redirected to PDS for authorization
7. OAuth callback to `/auth/oauth/callback`
8. Server exchanges code for token, validates with PDS
9. Server creates session cookie (secure, httpOnly, SameSite)
10. Redirects to `return_to` URL or default `/ui/images`

**Session Management:**
- Session cookie: `atcr_session` (JWT or opaque token)
- Session storage: In-memory map or SQLite table
- Session duration: 24 hours (or match OAuth token expiry)
- Refresh: Auto-refresh OAuth token when needed

**Middleware:**
```go
// pkg/ui/middleware/auth.go

// sessionKey is an unexported context key type so values set by this
// package cannot collide with string keys from other packages.
type sessionKey struct{}

func RequireAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		session := getSession(r)
		if session == nil {
			// Escape the path so query-unsafe characters survive the redirect.
			http.Redirect(w, r, "/auth/oauth/login?return_to="+url.QueryEscape(r.URL.Path), http.StatusFound)
			return
		}

		// Add session info to the request context
		ctx := context.WithValue(r.Context(), sessionKey{}, session)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}
```

## Implementation Roadmap

### Phase 1: Database & Firehose
1. Define SQLite schema
2. Implement database layer (pkg/ui/db/)
3. Implement firehose worker (pkg/ui/firehose/)
4. Test worker with real firehose

### Phase 2: API Endpoints
1. Implement `/ui/api/recent-pushes` (front page data)
2. Implement `/ui/api/profile` (settings page data)
3. Implement `/ui/api/images` (personal page data)
4. Implement tag/manifest mutation endpoints

### Phase 3: Authentication
1. Implement OAuth login endpoint
2. Implement session management
3. Add auth middleware
4. Test login flow

### Phase 4: Frontend
1. Choose framework (templates vs SPA)
2. Implement front page
3. Implement settings page
4. Implement personal page
5. Add styling

### Phase 5: Polish
1. Error handling
2. Loading states
3. Responsive design
4. Testing

## Open Questions

1. **Framework choice:** Go templates (Templ?), HTMX, or SPA (React/Vue)?
2. **Styling:** Tailwind, plain CSS, or component library?
3. **Manifest details:** Modal vs dedicated page?
4. **Search:** Full-text search on repository/tag names? Requires FTS in SQLite.
5. **Real-time updates:** WebSocket for firehose events, or polling?
6. **Image size calculation:** Sum of layer sizes, or read from manifest?
7. **Public profiles:** Should `/ui/@alice` show public view of alice's images?
8. **Firehose resilience:** Reconnect logic, backfill on downtime?

## Dependencies

New Go packages needed:
- `github.com/mattn/go-sqlite3` - SQLite driver
- `github.com/bluesky-social/jetstream` - Firehose client (or direct websocket)
- Session management library (or custom implementation)
- Frontend framework (TBD)

## Configuration

Add to `config/config.yml`:

```yaml
ui:
  enabled: true
  database_path: /var/lib/atcr/ui.db
  firehose:
    enabled: true
    endpoint: wss://jetstream.atproto.tools/subscribe
    collections:
      - io.atcr.manifest
      - io.atcr.tag
  session:
    duration: 24h
    cookie_name: atcr_session
    cookie_secure: true
```

## Security Considerations

1. **Session cookies:** Secure, HttpOnly, SameSite=Lax
2. **CSRF protection:** For mutation endpoints (tag/manifest delete)
3. **Rate limiting:** On API endpoints
4. **Input validation:** Sanitize user input for search/filters
5. **Authorization:** Verify authenticated user owns resources before mutation
6. **SQL injection:** Use parameterized queries

## Performance Considerations

1. **Database indexes:** On DID, repository, created_at, digest
2. **Pagination:** Limit query results to avoid large payloads
3. **Caching:** Cache profile data, hold list, manifest details
4. **Firehose buffering:** Batch database inserts
5. **Connection pooling:** For SQLite and HTTP clients
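
Point 4 (firehose buffering) can be sketched as a small batcher that hands full batches to a flush callback, so the database sees one insert per batch instead of one per event. The `Event` type and callback are illustrative assumptions; a real worker would also flush on a timer:

```go
package main

import "fmt"

// Event stands in for a decoded firehose record (illustrative).
type Event struct {
	Collection string
	RKey       string
}

// Batcher groups events and flushes them in bulk.
type Batcher struct {
	max     int
	pending []Event
	flush   func([]Event)
}

func NewBatcher(max int, flush func([]Event)) *Batcher {
	return &Batcher{max: max, flush: flush}
}

// Add buffers an event, flushing automatically when the batch is full.
func (b *Batcher) Add(e Event) {
	b.pending = append(b.pending, e)
	if len(b.pending) >= b.max {
		b.Flush()
	}
}

// Flush writes out any buffered events and resets the buffer.
func (b *Batcher) Flush() {
	if len(b.pending) == 0 {
		return
	}
	b.flush(b.pending)
	b.pending = nil
}

func main() {
	b := NewBatcher(2, func(evs []Event) { fmt.Println("flushing", len(evs), "events") })
	b.Add(Event{Collection: "io.atcr.manifest", RKey: "abc"})
	b.Add(Event{Collection: "io.atcr.tag", RKey: "def"})
	b.Flush() // no-op: the buffer already flushed at max
}
```

In production the flush callback would wrap the inserts in a single SQLite transaction.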

## Testing Strategy

1. **Unit tests:** Database layer, API handlers
2. **Integration tests:** Firehose worker with mock events
3. **E2E tests:** Full login → browse → manage flow
4. **Load testing:** Firehose worker with high event volume
5. **Manual testing:** Real PDS, real images, real firehose

---

**docs/ATCR_VERIFY_CLI.md** (new file, 728 lines)

# atcr-verify CLI Tool

## Overview

`atcr-verify` is a command-line tool for verifying ATProto signatures on container images stored in ATCR. It provides cryptographic verification of image manifests using ATProto's DID-based trust model.

## Features

- ✅ Verify ATProto signatures via OCI Referrers API
- ✅ DID resolution and public key extraction
- ✅ PDS query and commit signature verification
- ✅ Trust policy enforcement
- ✅ Offline verification mode (with cached data)
- ✅ Multiple output formats (human-readable, JSON, quiet)
- ✅ Exit codes for CI/CD integration
- ✅ Kubernetes admission controller integration

## Installation

### Binary Release

```bash
# Linux (x86_64)
curl -L https://github.com/atcr-io/atcr/releases/latest/download/atcr-verify-linux-amd64 -o atcr-verify
chmod +x atcr-verify
sudo mv atcr-verify /usr/local/bin/

# macOS (Apple Silicon)
curl -L https://github.com/atcr-io/atcr/releases/latest/download/atcr-verify-darwin-arm64 -o atcr-verify
chmod +x atcr-verify
sudo mv atcr-verify /usr/local/bin/

# Windows
curl -L https://github.com/atcr-io/atcr/releases/latest/download/atcr-verify-windows-amd64.exe -o atcr-verify.exe
```

### From Source

```bash
git clone https://github.com/atcr-io/atcr.git
cd atcr
go install ./cmd/atcr-verify
```

### Container Image

```bash
docker pull atcr.io/atcr/verify:latest

# Run
docker run --rm atcr.io/atcr/verify:latest verify IMAGE
```

## Usage

### Basic Verification

```bash
# Verify an image
atcr-verify atcr.io/alice/myapp:latest

# Output:
# ✓ Image verified successfully
#   Signed by: alice.bsky.social (did:plc:alice123)
#   Signed at: 2025-10-31T12:34:56.789Z
```

### With Trust Policy

```bash
# Verify against trust policy
atcr-verify atcr.io/alice/myapp:latest --policy trust-policy.yaml

# Output:
# ✓ Image verified successfully
# ✓ Trust policy satisfied
#   Policy: production-images
#   Trusted DID: did:plc:alice123
```

### JSON Output

```bash
atcr-verify atcr.io/alice/myapp:latest --output json

# Output:
{
  "verified": true,
  "image": "atcr.io/alice/myapp:latest",
  "digest": "sha256:abc123...",
  "signature": {
    "did": "did:plc:alice123",
    "handle": "alice.bsky.social",
    "pds": "https://bsky.social",
    "recordUri": "at://did:plc:alice123/io.atcr.manifest/abc123",
    "commitCid": "bafyreih8...",
    "signedAt": "2025-10-31T12:34:56.789Z",
    "algorithm": "ECDSA-K256-SHA256"
  },
  "trustPolicy": {
    "satisfied": true,
    "policy": "production-images",
    "trustedDID": true
  }
}
```
### Quiet Mode

```bash
# Exit code only (for scripts)
atcr-verify atcr.io/alice/myapp:latest --quiet
echo $?  # 0 = verified, 1 = failed
```

### Offline Mode

```bash
# Export verification bundle
atcr-verify export atcr.io/alice/myapp:latest -o bundle.json

# Verify offline (in air-gapped environment)
atcr-verify atcr.io/alice/myapp:latest --offline --bundle bundle.json
```

## Command Reference

### verify

Verify ATProto signature for an image.

```bash
atcr-verify verify IMAGE [flags]
atcr-verify IMAGE [flags]   # 'verify' subcommand is optional
```

**Arguments:**
- `IMAGE` - Image reference (registry/owner/repo:tag or @digest)

**Flags:**
- `--policy FILE` - Trust policy file (default: none)
- `--output FORMAT` - Output format: text, json, quiet (default: text)
- `--offline` - Offline mode (requires --bundle)
- `--bundle FILE` - Verification bundle for offline mode
- `--cache-dir DIR` - Cache directory for DID documents (default: ~/.atcr/cache)
- `--no-cache` - Disable caching
- `--timeout DURATION` - Verification timeout (default: 30s)
- `--verbose` - Verbose output

**Exit Codes:**
- `0` - Verification succeeded
- `1` - Verification failed
- `2` - Invalid arguments
- `3` - Network error
- `4` - Trust policy violation

**Examples:**

```bash
# Basic verification
atcr-verify atcr.io/alice/myapp:latest

# With specific digest
atcr-verify atcr.io/alice/myapp@sha256:abc123...

# With trust policy
atcr-verify atcr.io/alice/myapp:latest --policy production-policy.yaml

# JSON output for scripting
atcr-verify atcr.io/alice/myapp:latest --output json | jq .verified

# Quiet mode for CI/CD
if atcr-verify atcr.io/alice/myapp:latest --quiet; then
  echo "Deploy approved"
fi
```

### export

Export verification bundle for offline verification.

```bash
atcr-verify export IMAGE [flags]
```

**Arguments:**
- `IMAGE` - Image reference to export bundle for

**Flags:**
- `-o, --output FILE` - Output file (default: stdout)
- `--include-did-docs` - Include DID documents in bundle
- `--include-commit` - Include ATProto commit data

**Examples:**

```bash
# Export to file
atcr-verify export atcr.io/alice/myapp:latest -o myapp-bundle.json

# Export with all verification data
atcr-verify export atcr.io/alice/myapp:latest \
  --include-did-docs \
  --include-commit \
  -o complete-bundle.json

# Export for multiple images (read line by line, quote expansions)
while read -r img; do
  atcr-verify export "$img" -o "bundles/$(echo "$img" | tr '/:' '_').json"
done < images.txt
```

### trust

Manage trust policies and trusted DIDs.

```bash
atcr-verify trust COMMAND [flags]
```

**Subcommands:**

**`trust list`** - List trusted DIDs
```bash
atcr-verify trust list

# Output:
# Trusted DIDs:
#   - did:plc:alice123 (alice.bsky.social)
#   - did:plc:bob456 (bob.example.com)
```

**`trust add DID`** - Add trusted DID
```bash
atcr-verify trust add did:plc:alice123
atcr-verify trust add did:plc:alice123 --name "Alice (DevOps)"
```

**`trust remove DID`** - Remove trusted DID
```bash
atcr-verify trust remove did:plc:alice123
```

**`trust policy validate`** - Validate trust policy file
```bash
atcr-verify trust policy validate policy.yaml
```

### version

Show version information.

```bash
atcr-verify version

# Output:
# atcr-verify version 1.0.0
# Go version: go1.21.5
# Commit: 3b5b89b
# Built: 2025-10-31T12:00:00Z
```

## Trust Policy

Trust policies define which signatures to trust and what to do when verification fails.

### Policy File Format

```yaml
version: 1.0

# Global settings
defaultAction: enforce  # enforce, audit, allow
requireSignature: true

# Policies matched by image pattern (first match wins)
policies:
  - name: production-images
    description: "Production images must be signed by DevOps or Security"
    scope: "atcr.io/*/prod-*"
    require:
      signature: true
      trustedDIDs:
        - did:plc:devops-team
        - did:plc:security-team
      minSignatures: 1
      maxAge: 2592000  # 30 days in seconds
    action: enforce

  - name: staging-images
    scope: "atcr.io/*/staging-*"
    require:
      signature: true
      trustedDIDs:
        - did:plc:devops-team
        - did:plc:developers
      minSignatures: 1
    action: enforce

  - name: dev-images
    scope: "atcr.io/*/dev-*"
    require:
      signature: false
    action: audit  # Log but don't fail

# Trusted DID registry
trustedDIDs:
  did:plc:devops-team:
    name: "DevOps Team"
    validFrom: "2024-01-01T00:00:00Z"
    expiresAt: null
    contact: "devops@example.com"

  did:plc:security-team:
    name: "Security Team"
    validFrom: "2024-01-01T00:00:00Z"
    expiresAt: null

  did:plc:developers:
    name: "Developer Team"
    validFrom: "2024-06-01T00:00:00Z"
    expiresAt: "2025-12-31T23:59:59Z"
```

### Policy Matching

Policies are evaluated in order. First match wins.

**Scope patterns:**
- `atcr.io/*/*` - All ATCR images
- `atcr.io/myorg/*` - All images from myorg
- `atcr.io/*/prod-*` - All images with "prod-" prefix
- `atcr.io/myorg/myapp` - Specific repository
- `atcr.io/myorg/myapp:v*` - Tag pattern matching
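
Globs of this shape can be evaluated with Go's `path.Match`, where `*` matches within a single path segment. The `firstMatch` helper below is an illustrative sketch of first-match-wins evaluation, not the tool's actual implementation:

```go
package main

import (
	"fmt"
	"path"
)

// Policy is a minimal stand-in for one entry in the policies list.
type Policy struct {
	Name  string
	Scope string
}

// firstMatch returns the first policy whose scope glob matches the
// image reference; policies are evaluated in order, first match wins.
func firstMatch(policies []Policy, image string) (Policy, bool) {
	for _, p := range policies {
		if ok, err := path.Match(p.Scope, image); err == nil && ok {
			return p, true
		}
	}
	return Policy{}, false
}

func main() {
	policies := []Policy{
		{Name: "production-images", Scope: "atcr.io/*/prod-*"},
		{Name: "default", Scope: "atcr.io/*/*"},
	}
	p, _ := firstMatch(policies, "atcr.io/alice/prod-api")
	fmt.Println(p.Name)
}
```

Because `*` in `path.Match` never crosses a `/`, `atcr.io/*/prod-*` matches `atcr.io/alice/prod-api` but not `atcr.io/alice/sub/prod-api`.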

### Policy Actions

**`enforce`** - Reject if policy fails
- Exit code 4
- Blocks deployment

**`audit`** - Log but allow
- Exit code 0 (success)
- Warning message printed

**`allow`** - Always allow
- No verification performed
- Exit code 0

### Policy Requirements

**`signature: true`** - Require signature present

**`trustedDIDs`** - List of trusted DIDs
```yaml
trustedDIDs:
  - did:plc:alice123
  - did:web:example.com
```

**`minSignatures`** - Minimum number of signatures required
```yaml
minSignatures: 2  # Require 2 signatures
```

**`maxAge`** - Maximum signature age in seconds
```yaml
maxAge: 2592000  # 30 days
```

**`algorithms`** - Allowed signature algorithms
```yaml
algorithms:
  - ECDSA-K256-SHA256
```

## Verification Flow

### 1. Image Resolution

```
Input: atcr.io/alice/myapp:latest
    ↓
Resolve tag to digest
    ↓
Output: sha256:abc123...
```

### 2. Signature Discovery

```
Query OCI Referrers API:
  GET /v2/alice/myapp/referrers/sha256:abc123
      ?artifactType=application/vnd.atproto.signature.v1+json
    ↓
Returns: List of signature artifacts
    ↓
Download signature metadata blobs
```

### 3. DID Resolution

```
Extract DID from signature: did:plc:alice123
    ↓
Query PLC directory:
  GET https://plc.directory/did:plc:alice123
    ↓
Extract public key from DID document
```

### 4. PDS Query

```
Get PDS endpoint from DID document
    ↓
Query for manifest record:
  GET {pds}/xrpc/com.atproto.repo.getRecord
      ?repo=did:plc:alice123
      &collection=io.atcr.manifest
      &rkey=abc123
    ↓
Get commit CID from record
    ↓
Fetch commit data (includes signature)
```

### 5. Signature Verification

```
Extract signature bytes from commit
    ↓
Compute commit hash (SHA-256)
    ↓
Verify: ECDSA_K256(hash, signature, publicKey)
    ↓
Result: Valid or Invalid
```

### 6. Trust Policy Evaluation

```
Check if DID is in trustedDIDs list
    ↓
Check signature age < maxAge
    ↓
Check minSignatures satisfied
    ↓
Apply policy action (enforce/audit/allow)
```

## Integration Examples

### CI/CD Pipeline

**GitHub Actions:**
```yaml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  verify-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Install atcr-verify
        run: |
          curl -L https://github.com/atcr-io/atcr/releases/latest/download/atcr-verify-linux-amd64 -o atcr-verify
          chmod +x atcr-verify
          sudo mv atcr-verify /usr/local/bin/

      - name: Verify image signature
        run: |
          atcr-verify ${{ env.IMAGE }} --policy .github/trust-policy.yaml

      - name: Deploy to production
        if: success()
        run: kubectl set image deployment/app app=${{ env.IMAGE }}
```

**GitLab CI:**
```yaml
verify:
  stage: verify
  image: atcr.io/atcr/verify:latest
  script:
    - atcr-verify ${IMAGE} --policy trust-policy.yaml

deploy:
  stage: deploy
  dependencies:
    - verify
  script:
    - kubectl set image deployment/app app=${IMAGE}
```

**Jenkins:**
```groovy
pipeline {
    agent any

    stages {
        stage('Verify') {
            steps {
                sh 'atcr-verify ${IMAGE} --policy trust-policy.yaml'
            }
        }

        stage('Deploy') {
            // Later stages only run when earlier ones succeed, so no
            // explicit `when` guard is needed. (Guarding on
            // currentBuild.result == 'SUCCESS' would misfire: the
            // result is null while the build is still passing.)
            steps {
                sh 'kubectl set image deployment/app app=${IMAGE}'
            }
        }
    }
}
```

### Kubernetes Admission Controller

**Using as webhook backend:**

```go
// webhook server
func (h *Handler) ValidatePod(w http.ResponseWriter, r *http.Request) {
	var admReq admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&admReq); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	pod := &corev1.Pod{}
	if err := json.Unmarshal(admReq.Request.Object.Raw, pod); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Verify each container image
	for _, container := range pod.Spec.Containers {
		cmd := exec.Command("atcr-verify", container.Image,
			"--policy", "/etc/atcr/trust-policy.yaml",
			"--quiet")

		if err := cmd.Run(); err != nil {
			// Verification failed
			admResp := admissionv1.AdmissionReview{
				Response: &admissionv1.AdmissionResponse{
					UID:     admReq.Request.UID,
					Allowed: false,
					Result: &metav1.Status{
						Message: fmt.Sprintf("Image %s failed signature verification", container.Image),
					},
				},
			}
			json.NewEncoder(w).Encode(admResp)
			return
		}
	}

	// All images verified
	admResp := admissionv1.AdmissionReview{
		Response: &admissionv1.AdmissionResponse{
			UID:     admReq.Request.UID,
			Allowed: true,
		},
	}
	json.NewEncoder(w).Encode(admResp)
}
```

### Pre-Pull Verification

**Systemd service:**
```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My Application
After=docker.service

[Service]
# `docker run` stays in the foreground, so this is a long-running
# simple service (Type=oneshot would reject Restart=).
Type=simple
ExecStartPre=/usr/local/bin/atcr-verify atcr.io/myorg/myapp:latest --policy /etc/atcr/policy.yaml
ExecStartPre=/usr/bin/docker pull atcr.io/myorg/myapp:latest
ExecStart=/usr/bin/docker run atcr.io/myorg/myapp:latest
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

**Docker wrapper script:**
```bash
#!/bin/bash
# docker-secure-pull.sh

IMAGE="$1"

# Verify before pulling
if ! atcr-verify "$IMAGE" --policy ~/.atcr/trust-policy.yaml; then
  echo "ERROR: Image signature verification failed"
  exit 1
fi

# Pull if verified
docker pull "$IMAGE"
```

## Configuration

### Config File

Location: `~/.atcr/config.yaml`

```yaml
# Default trust policy
defaultPolicy: ~/.atcr/trust-policy.yaml

# Cache settings
cache:
  enabled: true
  directory: ~/.atcr/cache
  ttl:
    didDocuments: 3600  # 1 hour
    commits: 600        # 10 minutes

# Network settings
timeout: 30s
retries: 3

# Output settings
output:
  format: text  # text, json, quiet
  color: auto   # auto, always, never

# Registry settings
registries:
  atcr.io:
    insecure: false
    credentialsFile: ~/.docker/config.json
```

### Environment Variables

- `ATCR_CONFIG` - Config file path
- `ATCR_POLICY` - Default trust policy file
- `ATCR_CACHE_DIR` - Cache directory
- `ATCR_OUTPUT` - Output format (text, json, quiet)
- `ATCR_TIMEOUT` - Verification timeout
- `HTTP_PROXY` / `HTTPS_PROXY` - Proxy settings
- `NO_CACHE` - Disable caching

## Library Usage

`atcr-verify` can also be used as a Go library:

```go
import "github.com/atcr-io/atcr/pkg/verify"

func main() {
	verifier := verify.NewVerifier(verify.Config{
		Policy:  policy,
		Timeout: 30 * time.Second,
	})

	result, err := verifier.Verify(ctx, "atcr.io/alice/myapp:latest")
	if err != nil {
		log.Fatal(err)
	}

	if !result.Verified {
		log.Fatal("Verification failed")
	}

	fmt.Printf("Verified by %s\n", result.Signature.DID)
}
```

## Performance

### Typical Verification Times

- **First verification:** 500-1000ms
  - OCI Referrers API: 50-100ms
  - DID resolution: 50-150ms
  - PDS query: 100-300ms
  - Signature verification: 1-5ms

- **Cached verification:** 50-150ms
  - DID document cached
  - Signature metadata cached

### Optimization Tips

1. **Enable caching** - DID documents change rarely
2. **Use offline bundles** - For air-gapped environments
3. **Parallel verification** - Verify multiple images concurrently
4. **Local trust policy** - Avoid remote policy fetches

## Troubleshooting

### Verification Fails

```bash
atcr-verify atcr.io/alice/myapp:latest --verbose
```

Common issues:
- **No signature found** - Image not signed, check Referrers API
- **DID resolution failed** - Network issue, check PLC directory
- **PDS unreachable** - Network issue, check PDS endpoint
- **Signature invalid** - Tampering detected or key mismatch
- **Trust policy violation** - DID not in trusted list

### Enable Debug Logging

```bash
ATCR_LOG_LEVEL=debug atcr-verify IMAGE
```

### Clear Cache

```bash
rm -rf ~/.atcr/cache
```

## See Also

- [ATProto Signatures](./ATPROTO_SIGNATURES.md) - How ATProto signing works
- [Integration Strategy](./INTEGRATION_STRATEGY.md) - Overview of integration approaches
- [Signature Integration](./SIGNATURE_INTEGRATION.md) - Tool-specific guides
- [Trust Policy Examples](../examples/verification/trust-policy.yaml)

---

**docs/ATPROTO_SIGNATURES.md** (new file, 501 lines)

# ATProto Signatures for Container Images

## Overview

ATCR container images are **already cryptographically signed** through ATProto's repository commit system. Every manifest stored in a user's PDS is signed with the user's ATProto signing key, providing cryptographic proof of authorship and integrity.

This document explains:
- How ATProto signing works
- Why additional signing tools aren't needed
- How to bridge ATProto signatures to the OCI/ORAS ecosystem
- Trust model and security considerations

## Key Insight: Manifests Are Already Signed

When you push an image to ATCR:

```bash
docker push atcr.io/alice/myapp:latest
```

The following happens:

1. **AppView stores manifest** as an `io.atcr.manifest` record in alice's PDS
2. **PDS creates repository commit** containing the manifest record
3. **PDS signs the commit** with alice's ATProto signing key (ECDSA K-256)
4. **Signature is stored** in the repository commit object

**Result:** The manifest is cryptographically signed with alice's private key, and anyone can verify it using alice's public key from her DID document.
## ATProto Signing Mechanism

### Repository Commit Signing

ATProto uses a Merkle Search Tree (MST) to store records, and every modification creates a signed commit:

```
┌─────────────────────────────────────────────┐
│ Repository Commit                           │
├─────────────────────────────────────────────┤
│ DID: did:plc:alice123                       │
│ Version: 3jzfkjqwdwa2a                      │
│ Previous: bafyreig7... (parent commit)      │
│ Data CID: bafyreih8... (MST root)           │
│  ┌───────────────────────────────────────┐  │
│  │ Signature (ECDSA K-256 + SHA-256)     │  │
│  │ Signed with: alice's private key      │  │
│  │ Value: 0x3045022100... (DER format)   │  │
│  └───────────────────────────────────────┘  │
└─────────────────────────────────────────────┘
                      │
                      ↓
          ┌─────────────────────┐
          │ Merkle Search Tree  │
          │ (contains records)  │
          └─────────────────────┘
                      │
                      ↓
       ┌────────────────────────────┐
       │ io.atcr.manifest record    │
       │ Repository: myapp          │
       │ Digest: sha256:abc123...   │
       │ Layers: [...]              │
       └────────────────────────────┘
```
|
||||
|
||||
### Signature Algorithm

**Algorithm:** ECDSA with K-256 (secp256k1) curve + SHA-256 hash

- **Curve:** secp256k1 (same as Bitcoin, Ethereum)
- **Hash:** SHA-256
- **Format:** DER-encoded signature bytes
- **Variant:** "low-S" signatures (per BIP-0062)

**Signing process:**

1. Serialize commit data as DAG-CBOR
2. Hash with SHA-256
3. Sign hash with ECDSA K-256 private key
4. Store signature in commit object
### Public Key Distribution

Public keys are distributed via DID documents, accessible through DID resolution:

**DID Resolution Flow:**

```
did:plc:alice123
        ↓
Query PLC directory: https://plc.directory/did:plc:alice123
        ↓
DID Document:
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:plc:alice123",
  "verificationMethod": [{
    "id": "did:plc:alice123#atproto",
    "type": "Multikey",
    "controller": "did:plc:alice123",
    "publicKeyMultibase": "zQ3shokFTS3brHcDQrn82RUDfCZESWL1ZdCEJwekUDdo1Ko4Z"
  }],
  "service": [{
    "id": "#atproto_pds",
    "type": "AtprotoPersonalDataServer",
    "serviceEndpoint": "https://bsky.social"
  }]
}
```

**Public key format:**

- **Encoding:** Multibase (base58btc with `z` prefix)
- **Codec:** Multicodec `0xE701` for K-256 keys
- **Example:** `zQ3sh...` decodes to a 33-byte compressed public key
## Verification Process

To verify a manifest's signature:

### Step 1: Resolve Image to Manifest Digest

```bash
# Get manifest digest
DIGEST=$(crane digest atcr.io/alice/myapp:latest)
# Result: sha256:abc123...
```

### Step 2: Fetch Manifest Record from PDS

```bash
# Extract repository name from image reference
REPO="myapp"

# Query PDS for manifest record
curl "https://bsky.social/xrpc/com.atproto.repo.listRecords?\
repo=did:plc:alice123&\
collection=io.atcr.manifest&\
limit=100" | jq -r '.records[] | select(.value.digest == "sha256:abc123...")'
```

The response includes:

```json
{
  "uri": "at://did:plc:alice123/io.atcr.manifest/abc123",
  "cid": "bafyreig7...",
  "value": {
    "$type": "io.atcr.manifest",
    "repository": "myapp",
    "digest": "sha256:abc123...",
    ...
  }
}
```
### Step 3: Fetch Repository Commit

```bash
# Get current repository state
curl "https://bsky.social/xrpc/com.atproto.sync.getRepo?\
did=did:plc:alice123" --output repo.car

# Extract commit from CAR file (requires ATProto tools)
# Commit includes signature over repository state
```

### Step 4: Resolve DID to Public Key

```bash
# Resolve DID document
curl "https://plc.directory/did:plc:alice123" | jq -r '.verificationMethod[0].publicKeyMultibase'
# Result: zQ3shokFTS3brHcDQrn82RUDfCZESWL1ZdCEJwekUDdo1Ko4Z
```

### Step 5: Verify Signature

```go
// Pseudocode for verification
import "github.com/bluesky-social/indigo/atproto/crypto"

// 1. Parse commit from the repository CAR file
commit := parseCommitFromCAR(repoCAR)

// 2. Extract signature bytes
signature := commit.Sig

// 3. Recompute the bytes that were signed (unsigned commit, DAG-CBOR encoded)
bytesToVerify := commit.Unsigned().BytesForSigning()

// 4. Decode public key from multibase
pubKey := decodeMultibasePublicKey(publicKeyMultibase)

// 5. Verify ECDSA signature
valid := crypto.VerifySignature(pubKey, bytesToVerify, signature)
```
### Step 6: Verify Manifest Integrity

```bash
# Verify the manifest record's CID matches the content
# CID is content-addressed, so tampering changes the CID
```
## Bridging to OCI/ORAS Ecosystem

While ATProto signatures are cryptographically sound, the OCI ecosystem doesn't understand ATProto records. To make signatures discoverable, we create **ORAS signature artifacts** that reference the ATProto signature.

### ORAS Signature Artifact Format

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "artifactType": "application/vnd.atproto.signature.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.empty.v1+json",
    "digest": "sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a",
    "size": 2
  },
  "subject": {
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "digest": "sha256:abc123...",
    "size": 1234
  },
  "layers": [
    {
      "mediaType": "application/vnd.atproto.signature.v1+json",
      "digest": "sha256:sig789...",
      "size": 512,
      "annotations": {
        "org.opencontainers.image.title": "atproto-signature.json"
      }
    }
  ],
  "annotations": {
    "io.atcr.atproto.did": "did:plc:alice123",
    "io.atcr.atproto.pds": "https://bsky.social",
    "io.atcr.atproto.recordUri": "at://did:plc:alice123/io.atcr.manifest/abc123",
    "io.atcr.atproto.commitCid": "bafyreih8...",
    "io.atcr.atproto.signedAt": "2025-10-31T12:34:56.789Z",
    "io.atcr.atproto.keyId": "did:plc:alice123#atproto"
  }
}
```
**Key elements:**

1. **artifactType**: `application/vnd.atproto.signature.v1+json` - identifies this as an ATProto signature
2. **subject**: Links to the image manifest being signed
3. **layers**: Contains signature metadata blob
4. **annotations**: Quick-access metadata for verification

### Signature Metadata Blob

The layer blob contains detailed verification information:

```json
{
  "$type": "io.atcr.atproto.signature",
  "version": "1.0",
  "subject": {
    "digest": "sha256:abc123...",
    "mediaType": "application/vnd.oci.image.manifest.v1+json"
  },
  "atproto": {
    "did": "did:plc:alice123",
    "handle": "alice.bsky.social",
    "pdsEndpoint": "https://bsky.social",
    "recordUri": "at://did:plc:alice123/io.atcr.manifest/abc123",
    "recordCid": "bafyreig7...",
    "commitCid": "bafyreih8...",
    "commitRev": "3jzfkjqwdwa2a",
    "signedAt": "2025-10-31T12:34:56.789Z"
  },
  "signature": {
    "algorithm": "ECDSA-K256-SHA256",
    "keyId": "did:plc:alice123#atproto",
    "publicKeyMultibase": "zQ3shokFTS3brHcDQrn82RUDfCZESWL1ZdCEJwekUDdo1Ko4Z"
  },
  "verification": {
    "method": "atproto-repo-commit",
    "instructions": "Fetch repository commit from PDS and verify signature using public key from DID document"
  }
}
```

### Discovery via Referrers API

ORAS artifacts are discoverable via the OCI Referrers API:

```bash
# Query for signature artifacts
curl "https://atcr.io/v2/alice/myapp/referrers/sha256:abc123?\
artifactType=application/vnd.atproto.signature.v1+json"
```

Response:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:sig789...",
      "size": 1234,
      "artifactType": "application/vnd.atproto.signature.v1+json",
      "annotations": {
        "io.atcr.atproto.did": "did:plc:alice123",
        "io.atcr.atproto.signedAt": "2025-10-31T12:34:56.789Z"
      }
    }
  ]
}
```
## Trust Model

### What ATProto Signatures Prove

- ✅ **Authenticity**: Image was published by the DID owner
- ✅ **Integrity**: Image manifest hasn't been tampered with since signing
- ✅ **Non-repudiation**: Only the DID owner could have created this signature
- ✅ **Timestamp**: When the image was signed (commit timestamp)

### What ATProto Signatures Don't Prove

- ❌ **Safety**: Image doesn't contain vulnerabilities (use vulnerability scanning)
- ❌ **DID trustworthiness**: Whether the DID owner is trustworthy (trust policy decision)
- ❌ **Key security**: Private key wasn't compromised (same limitation as all PKI)
- ❌ **PDS honesty**: PDS operator serves correct data (verify across multiple sources)

### Trust Dependencies

1. **DID Resolution**: Must correctly resolve DID to public key
   - **Mitigation**: Use multiple resolvers, cache DID documents
2. **PDS Availability**: Must query PDS to verify signatures
   - **Mitigation**: Embed signature bytes in ORAS blob for offline verification
3. **PDS Honesty**: PDS could serve fake/unsigned records
   - **Mitigation**: Signature verification prevents this (can't forge signature)
4. **Key Security**: User's private key could be compromised
   - **Mitigation**: Key rotation via DID document updates, short-lived credentials
5. **Algorithm Security**: ECDSA K-256 must remain secure
   - **Status**: Well-studied, same as Bitcoin/Ethereum (widely trusted)

### Comparison with Other Signing Systems

| Aspect | ATProto Signatures | Cosign (Keyless) | Notary v2 |
|--------|-------------------|------------------|-----------|
| **Identity** | DID (decentralized) | OIDC (federated) | X.509 (PKI) |
| **Key Management** | PDS signing keys | Ephemeral (Fulcio) | User-managed |
| **Trust Anchor** | DID resolution | Fulcio CA + Rekor | Certificate chain |
| **Transparency Log** | ATProto firehose | Rekor | Optional |
| **Offline Verification** | Limited* | No | Yes |
| **Decentralization** | High | Medium | Low |
| **Complexity** | Low | High | Medium |

\*Can be improved by embedding signature bytes in the ORAS blob
### Security Considerations

**Threat: Man-in-the-Middle Attack**

- **Attack**: Intercept PDS queries, serve fake records
- **Defense**: TLS for PDS communication, verify signature with public key from DID document
- **Result**: Attacker can't forge signature without private key

**Threat: Compromised PDS**

- **Attack**: PDS operator serves unsigned/fake manifests
- **Defense**: Signature verification fails (PDS can't sign without user's private key)
- **Result**: Protected

**Threat: Key Compromise**

- **Attack**: Attacker steals user's ATProto signing key
- **Defense**: Key rotation via DID document, revoke old keys
- **Result**: Same as any PKI system (rotate keys quickly)

**Threat: Replay Attack**

- **Attack**: Replay old signed manifest to roll back to a vulnerable version
- **Defense**: Check commit timestamp, verify commit is in current repository DAG
- **Result**: Protected (commits form an immutable chain)

**Threat: DID Takeover**

- **Attack**: Attacker gains control of user's DID (rotation keys)
- **Defense**: Monitor DID document changes, verify key history
- **Result**: Serious, but requires compromising rotation keys (harder than signing keys)
## Implementation Strategy

### Automatic Signature Artifact Creation

When AppView stores a manifest in a user's PDS:

1. **Store manifest record** (existing behavior)
2. **Get commit response** with commit CID and revision
3. **Create ORAS signature artifact**:
   - Build metadata blob (JSON)
   - Upload blob to hold storage
   - Create ORAS manifest with subject = image manifest
   - Store ORAS manifest (creates referrer link)
### Storage Location

Signature artifacts follow the same pattern as SBOMs:

- **Metadata blobs**: Stored in hold's blob storage
- **ORAS manifests**: Stored in hold's embedded PDS
- **Discovery**: Via OCI Referrers API

### Verification Tools

**Option 1: Custom CLI tool (`atcr-verify`)**

```bash
atcr-verify atcr.io/alice/myapp:latest
# → Queries referrers API
# → Fetches signature metadata
# → Resolves DID → public key
# → Queries PDS for commit
# → Verifies signature
```

**Option 2: Shell script (curl + jq)**

- See `docs/SIGNATURE_INTEGRATION.md` for examples

**Option 3: Kubernetes admission controller**

- Custom webhook that runs verification
- Rejects pods with unsigned/invalid signatures

## Benefits of ATProto Signatures

### Compared to No Signing

- ✅ **Cryptographic proof** of image authorship
- ✅ **Tamper detection** for manifests
- ✅ **Identity binding** via DIDs
- ✅ **Audit trail** via ATProto repository history

### Compared to Cosign/Notary

- ✅ **No additional signing required** (already signed by PDS)
- ✅ **Decentralized identity** (DIDs, not CAs)
- ✅ **Simpler infrastructure** (no Fulcio, no Rekor, no TUF)
- ✅ **Consistent with ATCR's architecture** (ATProto-native)
- ✅ **Lower operational overhead** (reuse existing PDS infrastructure)

### Trade-offs

- ⚠️ **Custom verification tools required** (standard tools won't work)
- ⚠️ **Online verification preferred** (need to query PDS)
- ⚠️ **Different trust model** (trust DIDs, not CAs)
- ⚠️ **Ecosystem maturity** (newer approach, less tooling)
## Future Enhancements

### Short-term

1. **Offline verification**: Embed signature bytes in ORAS blob
2. **Multi-PDS verification**: Check signature across multiple PDSs
3. **Key rotation support**: Handle historical key validity

### Medium-term

4. **Timestamp service**: RFC 3161 timestamps for long-term validity
5. **Multi-signature**: Require N signatures from M DIDs
6. **Transparency log integration**: Record verifications in public log

### Long-term

7. **IANA registration**: Register `application/vnd.atproto.signature.v1+json`
8. **Standards proposal**: ATProto signature spec to ORAS/OCI
9. **Cross-ecosystem bridges**: Convert to Cosign/Notary formats

## Conclusion

ATCR images are already cryptographically signed through ATProto's repository commit system. By creating ORAS signature artifacts that reference these existing signatures, we can:

- ✅ Make signatures discoverable to OCI tooling
- ✅ Maintain ATProto as the source of truth
- ✅ Provide verification tools for users and clusters
- ✅ Avoid duplicating signing infrastructure

This approach leverages ATProto's strengths (decentralized identity, built-in signing) while bridging to the OCI ecosystem through standard ORAS artifacts.
## References

### ATProto Specifications

- [ATProto Repository Specification](https://atproto.com/specs/repository)
- [ATProto Data Model](https://atproto.com/specs/data-model)
- [ATProto DID Methods](https://atproto.com/specs/did)

### OCI/ORAS Specifications

- [OCI Distribution Specification](https://github.com/opencontainers/distribution-spec)
- [OCI Referrers API](https://github.com/opencontainers/distribution-spec/blob/main/spec.md#listing-referrers)
- [ORAS Artifacts](https://oras.land/docs/)

### Cryptography

- [ECDSA (secp256k1)](https://en.bitcoin.it/wiki/Secp256k1)
- [Multibase Encoding](https://github.com/multiformats/multibase)
- [Multicodec](https://github.com/multiformats/multicodec)

### Related Documentation

- [SBOM Scanning](./SBOM_SCANNING.md) - Similar ORAS artifact pattern
- [Signature Integration](./SIGNATURE_INTEGRATION.md) - Practical integration examples
---

*New file: `docs/BILLING.md` (238 lines)*
# Hold Service Billing Integration

Optional Stripe billing integration for hold services. Allows hold operators to charge for storage tiers via subscriptions.

## Overview

- **Compile-time optional**: Build with `-tags billing` to enable Stripe support
- **Hold owns billing**: Each hold operator has their own Stripe account
- **AppView aggregates UI**: Fetches subscription info from holds, displays in settings
- **Customer-DID mapping**: DIDs stored in Stripe customer metadata (no extra database)

## Architecture

```
User → AppView Settings UI → Hold XRPC endpoints → Stripe
                                    ↓
                 Stripe webhook → Hold → Update crew tier
```

## Building with Billing Support

```bash
# Without billing (default)
go build ./cmd/hold

# With billing
go build -tags billing ./cmd/hold

# Docker with billing
docker build --build-arg BILLING_ENABLED=true -f Dockerfile.hold .
```

## Configuration

### Environment Variables

```bash
# Required for billing
STRIPE_SECRET_KEY=sk_live_xxx      # or sk_test_xxx for testing
STRIPE_WEBHOOK_SECRET=whsec_xxx    # from Stripe Dashboard or CLI

# Optional
STRIPE_PUBLISHABLE_KEY=pk_live_xxx # for client-side (not currently used)
```
### quotas.yaml

```yaml
tiers:
  swabbie:
    quota: 2GB
    description: "Starter storage"
    # No stripe_price = free tier

  deckhand:
    quota: 5GB
    description: "Standard storage"
    stripe_price_yearly: price_xxx  # Price ID from Stripe

  bosun:
    quota: 10GB
    description: "Mid-level storage"
    stripe_price_monthly: price_xxx
    stripe_price_yearly: price_xxx

defaults:
  new_crew_tier: swabbie
  plankowner_crew_tier: deckhand  # Early adopters get this free

billing:
  enabled: true
  currency: usd
  success_url: "{hold_url}/billing/success"
  cancel_url: "{hold_url}/billing/cancel"
```
### Stripe Price IDs

Use **Price IDs** (`price_xxx`), not Product IDs (`prod_xxx`).

To find Price IDs:

1. Stripe Dashboard → Products → Select product
2. Look at the Pricing section
3. Copy the Price ID

Or via API:

```bash
curl https://api.stripe.com/v1/prices?product=prod_xxx \
  -u sk_test_xxx:
```

## XRPC Endpoints

| Endpoint | Auth | Description |
|----------|------|-------------|
| `GET /xrpc/io.atcr.hold.getSubscriptionInfo` | Optional | Get tiers and user's current subscription |
| `POST /xrpc/io.atcr.hold.createCheckoutSession` | Required | Create Stripe checkout URL |
| `GET /xrpc/io.atcr.hold.getBillingPortalUrl` | Required | Get Stripe billing portal URL |
| `POST /xrpc/io.atcr.hold.stripeWebhook` | Stripe sig | Handle subscription events |

## Local Development

### Stripe CLI Setup

The Stripe CLI forwards webhooks to localhost:

```bash
# Install
brew install stripe/stripe-cli/stripe
# Or: https://stripe.com/docs/stripe-cli

# Login
stripe login

# Forward webhooks to local hold
stripe listen --forward-to localhost:8080/xrpc/io.atcr.hold.stripeWebhook
```

The CLI outputs a webhook signing secret:

```
Ready! Your webhook signing secret is whsec_xxxxxxxxxxxxx
```

Use that as `STRIPE_WEBHOOK_SECRET` for local dev.
### Running Locally

```bash
# Terminal 1: Run hold with billing
export STRIPE_SECRET_KEY=sk_test_xxx
export STRIPE_WEBHOOK_SECRET=whsec_xxx  # from 'stripe listen'
export HOLD_PUBLIC_URL=http://localhost:8080
export STORAGE_DRIVER=filesystem
export HOLD_DATABASE_DIR=/tmp/hold-test
go run -tags billing ./cmd/hold

# Terminal 2: Forward webhooks
stripe listen --forward-to localhost:8080/xrpc/io.atcr.hold.stripeWebhook

# Terminal 3: Trigger test events
stripe trigger checkout.session.completed
stripe trigger customer.subscription.created
stripe trigger customer.subscription.updated
stripe trigger customer.subscription.paused
stripe trigger customer.subscription.resumed
stripe trigger customer.subscription.deleted
```

### Testing the Flow

1. Start hold with billing enabled
2. Start Stripe CLI webhook forwarding
3. Navigate to AppView settings page
4. Click "Upgrade" on a tier
5. Complete Stripe checkout (use test card `4242 4242 4242 4242`)
6. Webhook fires → hold updates crew tier
7. Refresh settings to see new tier

## Webhook Events

The hold handles these Stripe events:

| Event | Action |
|-------|--------|
| `checkout.session.completed` | Create/update subscription, set tier |
| `customer.subscription.created` | Set crew tier from price ID |
| `customer.subscription.updated` | Update crew tier if price changed |
| `customer.subscription.paused` | Downgrade to free tier |
| `customer.subscription.resumed` | Restore tier from subscription price |
| `customer.subscription.deleted` | Downgrade to free tier |
| `invoice.payment_failed` | Log warning (tier unchanged until canceled) |
## Plankowners (Grandfathering)

Early adopters can be marked as "plankowners" to get a paid tier for free:

```json
{
  "$type": "io.atcr.hold.crew",
  "member": "did:plc:xxx",
  "tier": "deckhand",
  "plankowner": true,
  "permissions": ["blob:read", "blob:write"],
  "addedAt": "2025-01-01T00:00:00Z"
}
```

Plankowners:

- Get `plankowner_crew_tier` (e.g. deckhand) without paying
- Still see upgrade options in the UI if they want to support
- Can upgrade to higher tiers normally

## Customer-DID Mapping

DIDs are stored in Stripe customer metadata:

```json
{
  "metadata": {
    "user_did": "did:plc:xxx",
    "hold_did": "did:web:hold.example.com"
  }
}
```

The hold uses an in-memory cache (10 min TTL) to reduce Stripe API calls. On webhook events, the cache is invalidated for the affected customer.
## Production Checklist

- [ ] Create Stripe products and prices in live mode
- [ ] Set `STRIPE_SECRET_KEY` to live key (`sk_live_xxx`)
- [ ] Configure webhook endpoint in Stripe Dashboard:
  - URL: `https://your-hold.com/xrpc/io.atcr.hold.stripeWebhook`
  - Events: `checkout.session.completed`, `customer.subscription.created`, `customer.subscription.updated`, `customer.subscription.paused`, `customer.subscription.resumed`, `customer.subscription.deleted`, `invoice.payment_failed`
- [ ] Set `STRIPE_WEBHOOK_SECRET` from Dashboard webhook settings
- [ ] Update `quotas.yaml` with live price IDs
- [ ] Build hold with `-tags billing`
- [ ] Test with a real payment (can refund immediately)

## Troubleshooting

### Webhook signature verification failed

- Ensure `STRIPE_WEBHOOK_SECRET` matches the webhook endpoint in Stripe Dashboard
- For local dev, use the secret from `stripe listen` output

### Customer not found

- Customer is created on first checkout
- Check Stripe Dashboard → Customers for the DID in metadata

### Tier not updating after payment

- Check hold logs for webhook processing errors
- Verify price ID in `quotas.yaml` matches Stripe
- Ensure `billing.enabled: true` in config

### "Billing not enabled" error

- Build with `-tags billing`
- Set `billing.enabled: true` in `quotas.yaml`
- Ensure `STRIPE_SECRET_KEY` is set
---

*New file: `docs/BILLING_REFACTOR.md` (348 lines)*
# Billing & Webhooks Refactor: Move to AppView

## Motivation

The current billing model is **per-hold**: each hold operator runs their own Stripe integration, manages their own tiers, and users pay each hold separately. This creates problems:

1. **Multi-hold confusion**: A user on 3 holds could have 3 separate Stripe subscriptions with no unified view
2. **Orphaned subscriptions**: Users can end up paying for holds they no longer use after switching their active hold
3. **Complex UI**: The settings page needs to surface billing per-hold, with separate "Manage Billing" links for each
4. **Captain-only billing**: Only hold captains can set up Stripe. Self-hosted hold operators who want to charge users would need their own Stripe account per hold

The proposed model is **per-appview**: a single Stripe integration on the appview, one subscription per user, covering all holds that appview manages.

## Current Architecture

```
User ──Settings UI──→ AppView ──XRPC──→ Hold ──Stripe API──→ Stripe
                                          ↑
                                  Stripe Webhooks
```

### What lives where today

| Component | Location | Notes |
|-----------|----------|-------|
| Stripe customer management | Hold (`pkg/hold/billing/`) | Build tag: `-tags billing` |
| Stripe checkout/portal | Hold XRPC endpoints | Authenticated via service token |
| Stripe webhook receiver | Hold (`stripeWebhook` endpoint) | Updates crew tier on subscription change |
| Tier definitions + pricing | Hold config (`quotas.yaml`, `billing` section) | Captain configures |
| Quota enforcement | Hold (`pkg/hold/quota/`) | Checks tier limit on push |
| Storage quota calculation | Hold PDS layer records | Deduped per-user |
| Subscription UI | AppView handlers | Proxies all calls to hold |
| Webhook management (scan) | Hold PDS + SQLite | URL/secret in SQLite, metadata in PDS record |
| Webhook dispatch | Hold (`scan_broadcaster.go`) | Sends on scan completion |
| Sailor webhook record | User's PDS | Links to hold's private webhook record |
## Proposed Architecture

```
User ──Settings UI──→ AppView ──Stripe API──→ Stripe
                         │           ↑
                         │    Stripe Webhooks
                         │
                         ├──XRPC──→ Hold A (quota enforcement, scan results)
                         ├──XRPC──→ Hold B
                         └──XRPC──→ Hold C

AppView signs attestation
         │
         └──→ Hold stores in PDS (trust anchor)
```

### What moves to AppView

| Component | From | To | Notes |
|-----------|------|----|-------|
| Stripe customer management | Hold | AppView | One customer per user, not per hold |
| Stripe checkout/portal | Hold | AppView | Single subscription covers all holds |
| Stripe webhook receiver | Hold | AppView | AppView updates tier across all holds |
| Tier definitions + pricing | Hold config | AppView config | AppView defines billing tiers |
| Scan webhooks (storage + dispatch) | Hold | AppView | AppView has user context, scan data comes via Jetstream/XRPC |

### What stays on the hold

| Component | Notes |
|-----------|-------|
| Quota enforcement | Hold still checks tier limit on push |
| Storage quota calculation | Layer records stay in hold PDS |
| Tier definitions (quota only) | Hold defines storage limits per tier, no pricing |
| Scan execution + results | Scanner still talks to hold, results stored in hold PDS |
| Crew tier field | Source of truth for enforcement, updated by appview |

## Billing Model

### One subscription, all holds

A user pays the appview once. Their subscription tier applies across every hold the appview manages.

```
AppView billing tiers:         [Free]    [Tier 1]     [Tier 2]
                                  │          │            │
                                  ▼          ▼            ▼
Hold A tiers (3GB/10GB/50GB):  deckhand   bosun    quartermaster
Hold B tiers (5GB/20GB/∞):     deckhand   bosun    quartermaster
```

### Tier pairing

The appview defines N billing slots. Each hold defines its own tier list with storage quotas. The appview maps its billing slots to each hold's lowest N tiers by rank order.

- AppView doesn't need to know tier names — just "slot 1, slot 2, slot 3"
- Each hold independently decides what storage limit each tier gets
- The settings UI shows the range: "5-10 GB depending on region" or "minimum 5 GB"
### Hold captains who want to charge

If a hold captain wants to charge their own users (not through the shared appview), they spin up their own appview instance with their own Stripe account. The billing code stays the same — it just runs on their appview instead of the shared one.

## AppView-Hold Trust Model

### Problem

The appview needs to tell holds "user X is tier Y." The hold needs to trust that instruction. If domains change, the hold needs to verify the appview's identity.

### Attestation handshake

1. **Hold config** already has `server.appview_url` (preferred appview)
2. **AppView config** gains a `managed_holds` list (DIDs of holds it manages)
3. On first connection, the appview signs an attestation with its private key:
   ```json
   {
     "$type": "io.atcr.appview.attestation",
     "appviewDid": "did:web:atcr.io",
     "holdDid": "did:web:hold01.atcr.io",
     "issuedAt": "2026-02-23T...",
     "signature": "<signed with appview's P-256 key>"
   }
   ```
4. The hold stores this attestation in its embedded PDS
5. On subsequent requests, the hold can challenge the appview: present the attestation, appview proves it holds the matching private key
6. If the appview's domain changes, the attestation (tied to DID, not URL) remains valid
### Trust verification flow
|
||||
|
||||
```
|
||||
AppView boots → checks managed_holds list
|
||||
→ for each hold:
|
||||
→ calls hold's describeServer endpoint to verify DID
|
||||
→ signs attestation { appviewDid, holdDid, issuedAt }
|
||||
→ sends to hold via XRPC
|
||||
→ hold stores in PDS as io.atcr.hold.appview record
|
||||
|
||||
Hold receives tier update from appview:
|
||||
→ checks: does this request come from my preferred appview?
|
||||
→ verifies: signature on stored attestation matches appview's current key
|
||||
→ if valid: updates crew tier
|
||||
→ if invalid: rejects, logs warning
|
||||
```
|
||||
|
||||
### Key material
|
||||
|
||||
- **AppView**: P-256 key (already exists at `/var/lib/atcr/oauth/client.key`, used for OAuth)
|
||||
- **Hold**: K-256 key (PDS signing key)
|
||||
- Attestation is signed by appview's P-256 key, verifiable by anyone with the appview's public key (available via DID document)
|
||||
|
||||
## Webhooks: Move to AppView

### Why move

Scan webhooks currently live on the hold, but:
- The webhook payload needs user handles, repository names, tags — all resolved by the appview
- The hold only has DIDs and digests
- The appview already processes scan records via Jetstream (backfill + live)
- Webhook secrets shouldn't need to live on every hold the user pushes to
### New flow

```
Scanner completes scan
  → Hold stores scan record in PDS
  → Jetstream delivers scan record to AppView
  → AppView resolves user handle, repo name, tags
  → AppView dispatches webhooks with full context
```
### What changes

| Aspect | Current (hold) | Proposed (appview) |
|--------|----------------|--------------------|
| Webhook storage | Hold SQLite + PDS record | AppView DB + user's PDS record |
| Webhook secrets | Hold SQLite (`webhook_secrets` table) | AppView DB |
| Dispatch trigger | `scan_broadcaster.go` on scan completion | Jetstream processor on `io.atcr.hold.scan` record |
| Payload enrichment | Hold fetches handle from appview metadata | AppView has full context natively |
| Discord/Slack formatting | Hold (`webhooks.go`) | AppView (same code, moved) |
| Tier-based limits | Hold quota manager | AppView billing tier |
| XRPC endpoints | Hold (`listWebhooks`, `addWebhook`, etc.) | AppView API endpoints (already exist as proxies) |
### Webhook record changes

The `io.atcr.sailor.webhook` record in the user's PDS stays. It already stores `holdDid` and `triggers`. The `privateCid` field (linking to the hold's internal record) becomes unnecessary, since the appview now owns the full webhook.

The `io.atcr.hold.webhook` record in the hold's PDS is no longer needed. Webhooks are appview-scoped, not hold-scoped.
### Migration path

1. AppView gains webhook storage in its own DB (new table)
2. AppView gains webhook dispatch in its Jetstream processor
3. Hold's webhook endpoints are deprecated (return 410 Gone after a transition period)
4. Existing hold webhook records are migrated via a one-time script reading from hold XRPC + user PDS
## Config Changes

### AppView config additions

```yaml
server:
  # Existing
  default_hold_did: "did:web:hold01.atcr.io"

  # New
  managed_holds:
    - "did:web:hold01.atcr.io"
    - "did:plc:abc123..."

# New section
billing:
  enabled: true
  currency: usd
  success_url: "{base_url}/settings#storage"
  cancel_url: "{base_url}/settings#storage"
  tiers:
    - name: "Free"
      # No stripe_price = free tier
    - name: "Standard"
      stripe_price_monthly: price_xxx
      stripe_price_yearly: price_yyy
    - name: "Pro"
      stripe_price_monthly: price_xxx
      stripe_price_yearly: price_yyy
```
### AppView environment additions

```bash
STRIPE_SECRET_KEY=sk_live_xxx
STRIPE_WEBHOOK_SECRET=whsec_xxx
```
### Hold config changes

```yaml
# Removed
billing:
  # entire section removed from hold config

# Stays (quota enforcement only)
quota:
  tiers:
    - name: deckhand
      quota: 5GB
    - name: bosun
      quota: 50GB
    - name: quartermaster
      quota: 100GB
  defaults:
    new_crew_tier: deckhand
```

The hold no longer has Stripe config. It just defines storage limits per tier and enforces them.
## AppView DB Schema Additions

```sql
-- Webhook configurations (moved from hold SQLite)
CREATE TABLE webhooks (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_did TEXT NOT NULL,
    url TEXT NOT NULL,
    secret_hash TEXT,                      -- bcrypt hash of HMAC secret
    triggers INTEGER NOT NULL DEFAULT 1,   -- bitmask: first=1, all=2, changed=4
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    UNIQUE(user_did, url)
);

-- Billing: track which holds have been attested
CREATE TABLE hold_attestations (
    hold_did TEXT PRIMARY KEY,
    attestation_cid TEXT NOT NULL,   -- CID of attestation record in hold's PDS
    issued_at DATETIME NOT NULL,
    verified_at DATETIME
);
```
Stripe customer/subscription data continues to live in Stripe (queried via the API, cached in memory). No local subscription table is needed — the same pattern as the current hold billing, just on the appview.
## Implementation Phases

### Phase 1: Trust foundation
- Add `managed_holds` to appview config
- Implement attestation signing (appview) and storage (hold)
- Add attestation verification to the hold's tier-update endpoint
- New XRPC endpoint on hold: `io.atcr.hold.updateCrewTier` (appview-authenticated)

### Phase 2: Billing migration
- Move Stripe integration from hold to appview (reuse `pkg/hold/billing/` code)
- AppView billing uses the `-tags billing` build tag (same pattern)
- Implement tier pairing: appview billing slots mapped to hold tier lists
- New appview endpoints: checkout, portal, Stripe webhook receiver
- Settings UI: single subscription section (not per-hold)
### Phase 3: Webhook migration ✅
- Add webhook + scans tables to appview DB
- Implement webhook dispatch in appview's Jetstream processor
- Move Discord/Slack formatting code to `pkg/appview/webhooks/`
- Deprecate hold webhook XRPC endpoints (X-Deprecated header)
- Webhooks now user-scoped (global across all holds) in appview DB
- Scan records cached from Jetstream for change detection

### Phase 4: Cleanup ✅
- Removed hold webhook XRPC endpoints, dispatch code, and `webhooks.go`
- Removed `io.atcr.hold.webhook` and `io.atcr.sailor.webhook` record types + lexicons
- Removed `webhook_secrets` SQLite schema from scan_broadcaster
- Removed `MaxWebhooks`/`WebhookAllTriggers` from hold quota config
- Removed sailor webhook from OAuth scopes
## Settings UI Impact

The storage tab simplifies significantly:

```
┌──────────────────────────────────────────────────────┐
│ Active Hold: [▼ hold01.atcr.io (Crew)              ] │
└──────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────┐
│ Subscription: Standard ($5/mo)      [Manage Billing] │
│ Storage: 3-5 GB depending on region                  │
└──────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────┐
│ ★ hold01.atcr.io   [Active] [Crew] [Online]          │
│   Tier: bosun · 281.5 MB / 5.0 GB (5%)               │
│   ▸ Webhooks (2 configured)                          │
└──────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────┐
│ Other Holds        Role    Status   Storage          │
│ hold02.atcr.io     Crew    ●        230 MB / 3 GB    │
│ hold03.atcr.io     Owner   ●        No data          │
└──────────────────────────────────────────────────────┘
```
Key changes:
- **One subscription section** at the top (not per-hold)
- **Webhooks section** under the active hold card (managed by the appview now)
- **No "Paid" badge per hold** — subscription is global
- **Storage range** shown on the subscription card ("3-5 GB depending on region")
- **Per-hold quota** still shown (each hold enforces its own limit for the user's tier)
## Open Questions

1. **Tier list endpoint**: Holds need a new XRPC endpoint that returns their tier list with quotas (without pricing). The appview calls this to build the "3-5 GB depending on region" display. Something like `io.atcr.hold.listTiers`.

2. **Existing Stripe customers**: Holds with existing Stripe subscriptions need a migration plan. Options: honor existing subscriptions until they expire, or bulk-migrate customers to the appview's Stripe account.

3. **Webhook delivery guarantees**: Moving dispatch to the appview adds latency (scan record → Jetstream → appview → webhook). For time-sensitive notifications, consider having the hold send a lightweight "scan completed" signal directly to the appview via XRPC rather than waiting for Jetstream propagation.

4. **Self-hosted appviews**: The attestation model assumes one appview per set of holds. If multiple appviews try to manage the same hold, the hold should only trust the most recent attestation (or maintain a list).
docs/BYOS.md
## Overview

ATCR supports "Bring Your Own Storage" (BYOS) for blob storage. Users can:
- Deploy their own hold service with embedded PDS
- Control access via crew membership in the hold's PDS
- Keep blob data in their own S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.) while manifests stay in their user PDS
## Architecture

```
┌──────────────────────────────────────────┐
│ ATCR AppView (API)                       │
│ - Manifests → User's PDS                 │
│ - Auth & service token management        │
│ - Blob routing via XRPC                  │
│ - Profile management                     │
└────────────┬─────────────────────────────┘
             │
             │ Hold discovery priority:
             │ 1. io.atcr.sailor.profile.defaultHold (DID)
             │ 2. io.atcr.hold records (legacy)
             │ 3. AppView default_hold_did
             ▼
┌──────────────────────────────────────────┐
│ User's PDS                               │
│ - io.atcr.sailor.profile (hold DID)      │
│ - io.atcr.manifest (with holdDid)        │
└────────────┬─────────────────────────────┘
             │
             │ Service token from user's PDS
             ▼
┌──────────────────────────────────────────┐
│ Hold Service (did:web:hold.example.com)  │
│ ├── Embedded PDS                         │
│ │   ├── Captain record (ownership)       │
│ │   └── Crew records (access control)    │
│ ├── XRPC multipart upload endpoints      │
│ └── Storage driver (S3/Storj/etc.)       │
└──────────────────────────────────────────┘
```
## Hold Service Components

Each hold is a full ATProto actor with:
- **DID**: `did:web:hold.example.com` (hold's identity)
- **Embedded PDS**: Stores captain + crew records (shared data)
- **Storage backend**: S3-compatible (AWS S3, Storj, Minio, UpCloud, etc.)
- **XRPC endpoints**: Standard ATProto + custom OCI multipart upload

### Records in Hold's PDS
**Captain record** (`io.atcr.hold.captain/self`):
```json
{
  "$type": "io.atcr.hold.captain",
  "owner": "did:plc:alice123",
  "public": false,
  "deployedAt": "2025-10-14T...",
  "region": "iad",
  "provider": "fly.io"
}
```

**Crew records** (`io.atcr.hold.crew/{rkey}`):
```json
{
  "$type": "io.atcr.hold.crew",
  "member": "did:plc:bob456",
  "role": "admin",
  "permissions": ["blob:read", "blob:write"],
  "addedAt": "2025-10-14T..."
}
```
### Sailor Profile (User's PDS)

Users set their preferred hold in their sailor profile:

```json
{
  "$type": "io.atcr.sailor.profile",
  "defaultHold": "did:web:hold.example.com",
  "createdAt": "2025-10-02T...",
  "updatedAt": "2025-10-02T..."
}
```

**Record key:** Always `"self"` (only one profile per user)
**Behavior:**
- Created automatically when the user first authenticates (OAuth or Basic Auth)
- If the AppView has `default_storage_endpoint`, the profile gets that as its initial `defaultHold`
- The user can update it to join shared holds or use their own hold
- Set `defaultHold` to `null` to opt out of defaults (use own hold or AppView default)

**This solves the multi-hold problem:** Users who are crew members of multiple holds can explicitly choose which one to use via their profile.
## Deployment
### Configuration

Hold service is configured entirely via environment variables:

```bash
# Hold identity (REQUIRED)
HOLD_PUBLIC_URL=https://hold.example.com
HOLD_OWNER=did:plc:your-did-here

# S3 storage backend (REQUIRED)
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
S3_BUCKET=my-blobs

# Access control
HOLD_PUBLIC=false           # Require authentication for reads
HOLD_ALLOW_ALL_CREW=false   # Only explicit crew members can write

# Embedded PDS
HOLD_DATABASE_PATH=/var/lib/atcr-hold/hold.db
HOLD_DATABASE_KEY_PATH=/var/lib/atcr-hold/keys
```
### Running Locally

For local development, use Minio as an S3-compatible storage:
```bash
# Start Minio (in separate terminal)
docker run -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"

# Build
go build -o bin/atcr-hold ./cmd/hold

# Run (with env vars or .env file)
export HOLD_PUBLIC_URL=http://localhost:8080
export HOLD_OWNER=did:plc:your-did-here
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
export S3_BUCKET=test
export S3_ENDPOINT=http://localhost:9000
export HOLD_DATABASE_PATH=/tmp/atcr-hold/hold.db

./bin/atcr-hold
```
On first run, the hold service creates:
- Captain record in embedded PDS (making you the owner)
- Crew record for owner with all permissions
- DID document at `/.well-known/did.json`
### Deploy to Fly.io

```toml
primary_region = "ord"

[env]
  HOLD_PUBLIC_URL = "https://my-atcr-hold.fly.dev"
  HOLD_SERVER_ADDR = ":8080"
  AWS_REGION = "us-east-1"
  S3_BUCKET = "my-blobs"
  HOLD_PUBLIC = "false"
  HOLD_ALLOW_ALL_CREW = "false"

[http_service]
  internal_port = 8080
```

```bash
fly deploy

fly secrets set AWS_ACCESS_KEY_ID=...
fly secrets set AWS_SECRET_ACCESS_KEY=...
fly secrets set HOLD_OWNER=did:plc:your-did-here

# Check logs for OAuth URL on first run
fly logs

# Visit the OAuth URL shown in logs to authorize
# The hold service will register itself in your PDS
```
## Request Flow
### Push with BYOS

```
1. Client: docker push atcr.io/alice/myapp:latest

2. AppView resolves alice → did:plc:alice123

3. AppView discovers hold DID:
   - Check alice's sailor profile for defaultHold
   - Returns: "did:web:alice-storage.fly.dev"

4. AppView gets service token from alice's PDS:
   GET /xrpc/com.atproto.server.getServiceAuth?aud=did:web:alice-storage.fly.dev
   Response: { "token": "eyJ..." }

5. AppView initiates multipart upload to hold:
   POST https://alice-storage.fly.dev/xrpc/io.atcr.hold.initiateUpload
   Authorization: Bearer {serviceToken}
   Body: { "digest": "sha256:abc..." }
   Response: { "uploadId": "xyz" }

6. For each part:
   - AppView: POST /xrpc/io.atcr.hold.getPartUploadUrl
   - Hold validates service token, checks crew membership
   - Hold returns: { "url": "https://s3.../presigned" }
   - Client uploads directly to S3 presigned URL

7. AppView completes upload:
   POST /xrpc/io.atcr.hold.completeUpload
   Body: { "uploadId": "xyz", "digest": "sha256:abc...", "parts": [...] }

8. Manifest stored in alice's PDS:
   - holdDid: "did:web:alice-storage.fly.dev"
   - holdEndpoint: "https://alice-storage.fly.dev" (backward compat)
```
### Pull with BYOS

```
1. Client: docker pull atcr.io/alice/myapp:latest

2. AppView fetches manifest from alice's PDS

3. Manifest contains:
   - holdDid: "did:web:alice-storage.fly.dev"

4. AppView caches hold DID for 10 minutes (covers pull operation)

5. Client requests blob: GET /v2/alice/myapp/blobs/sha256:abc123

6. AppView uses cached hold DID from manifest

7. AppView gets service token from alice's PDS

8. AppView calls hold XRPC:
   GET /xrpc/com.atproto.sync.getBlob?did={userDID}&cid=sha256:abc123
   Authorization: Bearer {serviceToken}
   Response: { "url": "https://s3.../presigned-download" }

9. AppView redirects client to presigned S3 URL

10. Client downloads directly from S3
```

**Key insight:** Pull uses the `holdDid` stored in the manifest, ensuring blobs are fetched from where they were originally pushed.
## Access Control

### Read Access

- **Public hold** (`HOLD_PUBLIC=true`): Anonymous + authenticated users
- **Private hold** (`HOLD_PUBLIC=false`): Authenticated users with crew membership

### Write Access

- Hold owner (captain) OR crew members only
- Verified via `io.atcr.hold.crew` records in the hold's embedded PDS
- Service token proves user identity (from the user's PDS)

### Authorization Flow

```
1. AppView gets service token from user's PDS
2. AppView sends request to hold with service token
3. Hold validates service token (checks it's from user's PDS)
4. Hold extracts user's DID from token
5. Hold checks crew records in its embedded PDS
6. If crew member found → allow, else → deny
```
## Managing Crew Members

### Add Crew Member

Use an ATProto client to create a crew record in the hold's PDS:

```bash
# Via XRPC (if hold supports it)
POST https://hold.example.com/xrpc/io.atcr.hold.requestCrew
Authorization: Bearer {userOAuthToken}

# Or manually via captain's OAuth to hold's PDS
atproto put-record \
  --pds https://hold.example.com \
  --collection io.atcr.hold.crew \
  --rkey "{memberDID}" \
  --value '{
    "$type": "io.atcr.hold.crew",
    "member": "did:plc:bob456",
    "role": "admin",
    "permissions": ["blob:read", "blob:write"]
  }'
```
### Remove Crew Member

```bash
atproto delete-record \
  --pds https://hold.example.com \
  --collection io.atcr.hold.crew \
  --rkey "{memberDID}"
```
## Storage Backends

Hold service requires S3-compatible storage. Supported providers:
- **AWS S3** - Amazon Simple Storage Service
- **Storj** - Decentralized cloud storage (via S3 gateway)
- **Minio** - High-performance object storage (great for local development)
- **UpCloud** - European cloud provider
- **Azure** - Azure Blob Storage (via S3-compatible API)
- **GCS** - Google Cloud Storage (via S3-compatible API)
## Quotas

Quotas are NOT implemented in the storage service. Instead, use:

- **S3**: Bucket policies, lifecycle rules
- **Storj**: Project limits in the Storj dashboard
- **Minio**: Quota enforcement features
- **Filesystem**: Disk quotas at OS level
## Security
### Presigned URLs

- 15 minute expiry
- Client uploads/downloads directly to storage
- No data flows through the AppView or hold service
## Example: Team Hold

A company wants shared storage for their team:

```bash
# 1. Deploy hold service
export HOLD_PUBLIC_URL=https://team-hold.fly.dev
export HOLD_OWNER=did:plc:admin
export HOLD_PUBLIC=false   # Private
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export S3_BUCKET=team-blobs

fly deploy

# 2. Hold auto-creates captain + crew records on first run

# 3. Admin adds team members via hold's PDS (requires OAuth)
# (TODO: Implement crew management UI/CLI)

# 4. Team members set their sailor profile:
atproto put-record \
  --collection io.atcr.sailor.profile \
  --rkey "self" \
  --value '{
    "$type": "io.atcr.sailor.profile",
    "defaultHold": "did:web:team-hold.fly.dev"
  }'

# 5. Team members can now push/pull using team hold
```
## Limitations

1. **No resume/partial uploads** - the storage service doesn't track upload state
2. **No advanced features** - just basic put/get, no deduplication logic
3. **In-memory cache** - the hold endpoint cache is in-memory (for production, use Redis)
4. **Manual profile updates** - no UI for updating the sailor profile (must use an ATProto client)

### Current IAM Challenges

See [EMBEDDED_PDS.md](./EMBEDDED_PDS.md#iam-challenges) for detailed discussion.

**Known issues:**

1. **RPC permission format**: Service tokens don't work with IP-based DIDs in local dev
2. **Dynamic hold discovery**: The AppView can't dynamically OAuth arbitrary holds from sailor profiles
3. **Manual profile management**: No UI for updating the sailor profile (must use an ATProto client)

**Workaround:** Use hostname-based DIDs (`did:web:hold.example.com`) and public holds for now.

## Performance Optimization: S3 Presigned URLs

**Status:** Planned implementation (see [PRESIGNED_URLS.md](./PRESIGNED_URLS.md))

Currently, hold services act as proxies for blob data. With presigned URLs:

- **Downloads:** Docker → S3 direct (via 307 redirect)
- **Uploads:** Docker → AppView → S3 (via presigned URL)
- **Hold service bandwidth:** Reduced by 99.98% (only orchestration)

**Benefits:**

- Hold services can run on minimal infrastructure ($5/month instances)
- Direct S3 transfers at maximum speed
- Scales to arbitrarily large images
- Works with Storj, MinIO, Backblaze B2, Cloudflare R2

See [PRESIGNED_URLS.md](./PRESIGNED_URLS.md) for complete technical details and an implementation guide.
## Future Improvements

1. **S3 Presigned URLs** - Implement direct S3 URLs (see [PRESIGNED_URLS.md](./PRESIGNED_URLS.md))
2. **Automatic failover** - Multiple storage endpoints, fallback to default
3. **Storage analytics** - Track usage per DID/repository
4. **Quota integration** - Optional quota tracking in the storage service
5. **Profile management UI** - Web interface for users to manage their sailor profile
6. **Crew management UI** - Web interface for adding/removing crew members
7. **Dynamic OAuth** - Support for arbitrary BYOS holds without pre-configuration
8. **Hold migration** - Tools for moving blobs between holds
9. **Distributed cache** - Redis/Memcached for the hold endpoint cache in multi-instance deployments

## Comparison to Default Storage

| Feature | Default (Shared S3) | BYOS |
|---------|---------------------|------|
| Setup | None required | Deploy storage service |
| Cost | Free (with quota) | User pays for S3/Storj |
| Control | Limited | Full control |
| Performance | Shared | Dedicated |
| Quotas | Enforced by AppView | User managed |
| Privacy | Blobs in shared bucket | Blobs in user's bucket |

## References

- [EMBEDDED_PDS.md](./EMBEDDED_PDS.md) - Embedded PDS architecture and IAM details
- [ATProto Lexicon Spec](https://atproto.com/specs/lexicon)
- [Distribution Storage Drivers](https://distribution.github.io/distribution/storage-drivers/)
- [S3 Presigned URLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html)
- [Storj Documentation](https://docs.storj.io/)
49
docs/CONFIG_BLOB_STORAGE.md
Normal file
@@ -0,0 +1,49 @@
|
||||
# Config Blob Storage Decision
|
||||
|
||||
## Background
|
||||
|
||||
OCI image manifests reference two types of blobs:
|
||||
|
||||
1. **Layers** — filesystem diffs (tar+gzip), typically large, content-addressed and shared across users
|
||||
2. **Config blob** — small JSON (~2-15KB) containing image metadata: architecture, OS, environment variables, entrypoint, Dockerfile build history, and labels
|
||||
|
||||
In ATCR, manifests are stored in the user's PDS while all blobs (layers and config) are stored in S3 via the hold service. The hold tracks layers with `io.atcr.hold.layer` records but has no equivalent tracking for config blobs.
|
||||
|
||||
## Considered: Storing Config Blobs in PDS
|
||||
|
||||
Config blobs are unique per image build — unlike layers which are deduplicated across users, a config blob contains the specific Dockerfile history, env vars, and labels for that build. This makes them conceptually "user data" that could belong in the user's PDS alongside the manifest.
|
||||
|
||||
The proposal was to add a `ConfigBlob` field to `ManifestRecord`, uploading the config blob to PDS during push (the data is already fetched from S3 for label extraction). The config would remain in S3 as well since the distribution library puts it there during the blob push phase.
|
||||
|
||||
Potential benefits:
|
||||
- Manifests become more self-contained in PDS
|
||||
- Config metadata (entrypoint, env, history) available without S3 access (e.g., for web UI)
|
||||
- Aligns with the principle that user-specific data belongs in the user's PDS
|
||||
|
||||
## Decision: Keep Config Blobs in S3 Only
|
||||
|
||||
Config blobs can contain sensitive data:
|
||||
|
||||
- **Environment variables** — `ENV DATABASE_URL=...`, `ENV API_KEY=...` set in Dockerfiles
|
||||
- **Build history** — `history[].created_by` reveals exact Dockerfile commands, internal registry URLs, build arguments
|
||||
- **Labels** — may contain internal metadata not intended for public consumption
|
||||
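For illustration, a trimmed config blob might look like this (all values invented) — note how the env vars and `created_by` entries leak build internals:

```json
{
  "architecture": "amd64",
  "os": "linux",
  "config": {
    "Env": ["PATH=/usr/local/bin", "DATABASE_URL=postgres://internal-db:5432/app"],
    "Entrypoint": ["/app/server"],
    "Labels": {"com.example.team": "platform"}
  },
  "history": [
    {"created_by": "ENV DATABASE_URL=postgres://internal-db:5432/app"},
    {"created_by": "COPY --from=build /app/server /app/server"}
  ]
}
```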
ATProto has no private data. The current storage split creates a useful privacy boundary:

| Storage | Visibility | Contains |
|---------|-----------|----------|
| PDS | Public (anyone) | Manifest structure, tags, repo names, annotations |
| Hold/S3 | Auth-gated | Layers + config — actual image content |

This boundary enables **semi-private repos**: the public PDS metadata tells you what images exist (names, tags, sizes), but you cannot reconstruct or run the image without hold access. Storing config in the PDS would break this — build secrets and Dockerfile history would be publicly readable even when the hold restricts blob access.

We considered making PDS storage optional (only for fully public holds or allow-all-crew holds), but an optional field that can't be relied upon adds complexity without clear benefit — the config must live in S3 regardless for the pull path.

## Current Status

Config blobs remain in S3 behind hold authorization. GC includes config digests in the referenced set alongside layer digests, so config blobs are never deleted as orphans.
## Revisit If

- ATProto adds private data support
- A concrete use case emerges that requires PDS-native config access
165
docs/CREDENTIAL_HELPER_V2.md
Normal file
@@ -0,0 +1,165 @@
|
||||
# Credential Helper Rewrite
|
||||
|
||||
## Context
|
||||
|
||||
The current credential helper (`cmd/credential-helper/main.go`, ~1070 lines) is a monolithic single-file binary with a manual `switch` dispatch. It has no help text, hangs silently when run without stdin, embeds interactive device auth inside the Docker protocol `get` command (blocking pushes for up to 2 minutes while polling), and only supports one account per registry. Users want multi-account support (e.g., `evan.jarrett.net` and `michelle.jarrett.net` on the same `atcr.io`) and multi-registry support (e.g., `atcr.io` + `buoy.cr`).
|
||||
|
||||
## Approach
|
||||
|
||||
Rewrite using **Cobra** (already a project dependency) for the CLI framework and **charmbracelet/huh** for interactive prompts (select menus, confirmations, spinners). Separate Docker protocol commands (machine-readable, hidden) from user-facing commands (interactive, discoverable). Model after `gh auth` UX patterns.
|
||||
|
||||
**Smart account auto-detection**: The `get` command inspects the parent process command line (`/proc/<ppid>/cmdline` on Linux, `ps` on macOS) to determine which image Docker is pushing/pulling. Since ATCR URLs are `host/<identity>/repo:tag`, we can extract the identity and auto-select the matching account — no prompts, no manual switching needed in the common case.
|
||||
|
||||
## Command Tree
|
||||
|
||||
```
|
||||
docker-credential-atcr
|
||||
├── get (Docker protocol — stdin/stdout, hidden, smart account detection)
|
||||
├── store (Docker protocol — stdin, hidden)
|
||||
├── erase (Docker protocol — stdin, hidden)
|
||||
├── list (Docker protocol extension, hidden)
|
||||
├── login (Interactive device flow with huh prompts)
|
||||
├── logout (Remove account credentials)
|
||||
├── status (Show all accounts with active indicators)
|
||||
├── switch (Switch active account — auto-toggle for 2, select for 3+)
|
||||
├── configure-docker (Auto-edit ~/.docker/config.json credHelpers)
|
||||
├── update (Self-update, existing logic preserved)
|
||||
└── version (Built-in via cobra)
|
||||
```
|
||||
|
||||
## Smart Account Resolution (`get` command)

The `get` command resolves which account to use with this priority chain — fully non-interactive:

```
1. Parse parent process cmdline → extract identity from image ref
   docker push atcr.io/evan.jarrett.net/test:latest
   → parent cmdline contains "evan.jarrett.net" → use that account

2. Fall back to active account (set by `switch` command)

3. Fall back to sole account (if only one exists for this registry)

4. Error with helpful message:
   "Multiple accounts for atcr.io. Run: docker-credential-atcr switch"
```

**Parent process detection** (in `helpers.go`):
- Linux: read `/proc/<ppid>/cmdline` (null-separated args)
- macOS: `ps -o args= -p <ppid>`
- Windows: best-effort via `wmic` or skip (fall back to active account)
- Parse image ref: find the arg matching `<registry-host>/<identity>/...`, extract `<identity>`
- Graceful failure: if the parent isn't Docker, the cmdline is unreadable, or the image ref isn't parseable → fall through to the active account
## File Structure

```
cmd/credential-helper/
  main.go          — Cobra root command, version vars, subcommand registration
  config.go        — Config types, load/save/migrate, getConfigPath
  device_auth.go   — authorizeDevice(), validateCredentials() HTTP logic
  protocol.go      — Docker protocol: get, store, erase, list (all hidden)
  cmd_login.go     — login command (huh prompts + device flow)
  cmd_logout.go    — logout command (huh confirm)
  cmd_status.go    — status display
  cmd_switch.go    — switch command (huh select)
  cmd_configure.go — configure-docker (edit ~/.docker/config.json)
  cmd_update.go    — update command (moved from existing code)
  helpers.go       — openBrowser, buildAppViewURL, isInsecureRegistry, parentCmdline, etc.
```
## Config Format (`~/.atcr/device.json`)

```json
{
  "version": 2,
  "registries": {
    "https://atcr.io": {
      "active": "evan.jarrett.net",
      "accounts": {
        "evan.jarrett.net": {
          "handle": "evan.jarrett.net",
          "did": "did:plc:abc123",
          "device_secret": "atcr_device_..."
        },
        "michelle.jarrett.net": {
          "handle": "michelle.jarrett.net",
          "did": "did:plc:def456",
          "device_secret": "atcr_device_..."
        }
      }
    },
    "https://buoy.cr": {
      "active": "evan.jarrett.net",
      "accounts": { ... }
    }
  }
}
```

**Migration**: `loadConfig()` auto-detects and migrates from old formats:
- Legacy single-device `{handle, device_secret, appview_url}` → v2
- Current multi-registry `{credentials: {url: {...}}}` → v2
- Writes back the migrated config on first load
## Key Behavioral Changes

| Command | Current | New |
|---------|---------|-----|
| `get` | Opens browser, polls 2min if no creds | Smart detection → active account → error |
| `get` (multi-account) | N/A (single account only) | Auto-detects identity from parent cmdline |
| `get` (no stdin) | Hangs forever | Detects terminal, prints help, exits 1 |
| `get` (OAuth expired) | Auto-opens browser, polls | Prints login URL, exits 1 |
| `store` | No-op | Stores if secret is a device secret (`atcr_device_*`) |
| `erase` | Removes all creds for host | Removes active account only |
| No args | Prints bare usage | Prints full cobra help with all commands |

## Dependencies

- `github.com/spf13/cobra` — already in go.mod
- `github.com/charmbracelet/huh` — new (pure Go, CGO_ENABLED=0 safe)

No changes to `.goreleaser.yaml` needed.

## Implementation Order

### Phase 1: Foundation
1. `helpers.go` — move utility functions verbatim + add `getParentCmdline()` and `detectIdentityFromParent(registryHost)`
2. `config.go` — new config types + migration from old formats
3. `main.go` — Cobra root command, register all subcommands

### Phase 2: Docker Protocol (must work for existing users)
4. `device_auth.go` — extract `authorizeDevice()` + `validateCredentials()`
5. `protocol.go` — `get`/`store`/`erase`/`list` using the new config with smart account resolution

### Phase 3: User Commands
6. `cmd_login.go` — interactive device flow with huh spinner
7. `cmd_status.go` — display all registries/accounts
8. `cmd_switch.go` — huh select for account switching
9. `cmd_logout.go` — huh confirm for removal
10. `cmd_configure.go` — Docker config.json manipulation
11. `cmd_update.go` — move existing update logic

### Phase 4: Polish
12. Add `huh` to go.mod
13. Delete old `main.go` contents (replaced by new files)

## What to Keep vs Rewrite

**Keep** (move to new files): `openBrowser()`, `buildAppViewURL()`, `isInsecureRegistry()`, `getDockerInsecureRegistries()`, `readDockerDaemonConfig()`, `stripPort()`, `isTerminal()`, `authorizeDevice()` HTTP logic, `validateCredentials()`, all update/version check functions.

**Rewrite**: `main()`, `handleGet()` (split into non-interactive `get` with smart detection + interactive `login`), `handleStore()` (implement actual storage), `handleErase()` (multi-account aware), config types and loading.

**New**: `list`, `login`, `logout`, `status`, `switch`, `configure-docker` commands. Config migration. Parent process identity detection. huh integration.

## Verification

1. Build: `go build -o bin/docker-credential-atcr ./cmd/credential-helper`
2. Help works: `bin/docker-credential-atcr --help` shows all user commands
3. Protocol works: `echo "atcr.io" | bin/docker-credential-atcr get` returns credentials or a helpful error
4. No hang: `bin/docker-credential-atcr get` (no stdin pipe) detects the terminal, prints help, exits
5. Smart detection: `docker push atcr.io/evan.jarrett.net/test:latest` auto-selects `evan.jarrett.net`
6. Login flow: `bin/docker-credential-atcr login` triggers device auth with huh prompts
7. Status: `bin/docker-credential-atcr status` shows configured accounts
8. Config migration: place an old-format `~/.atcr/device.json`, run any command, verify auto-migration
9. GoReleaser: `CGO_ENABLED=0 go build ./cmd/credential-helper` succeeds
File diff suppressed because it is too large
724
docs/DEVELOPMENT.md
Normal file
@@ -0,0 +1,724 @@
|
||||
# Development Workflow for ATCR
|
||||
|
||||
## The Problem
|
||||
|
||||
**Current development cycle with Docker:**
|
||||
1. Edit CSS, JS, template, or Go file
|
||||
2. Run `docker compose build` (rebuilds entire image)
|
||||
3. Run `docker compose up` (restart container)
|
||||
4. Wait **2-3 minutes** for changes to appear
|
||||
5. Test, find issue, repeat...
|
||||
|
||||
**Why it's slow:**
|
||||
- All assets embedded via `embed.FS` at compile time
|
||||
- Multi-stage Docker build compiles everything from scratch
|
||||
- No development mode exists
|
||||
- Final image uses `scratch` base (no tools, no hot reload)
|
||||
|
||||
## The Solution
|
||||
|
||||
**Development setup combining:**
|
||||
1. **Dockerfile.devel** - Development-focused container (golang base, not scratch)
|
||||
2. **Volume mounts** - Live code editing (changes appear instantly in container)
|
||||
3. **DirFS** - Skip embed, read templates/CSS/JS from filesystem
|
||||
4. **Air** - Auto-rebuild on Go code changes
|
||||
|
||||
**Results:**
|
||||
- CSS/JS/Template changes: **Instant** (0 seconds, just refresh browser)
|
||||
- Go code changes: **2-5 seconds** (vs 2-3 minutes)
|
||||
- Production builds: **Unchanged** (still optimized with embed.FS)
|
||||
|
||||
## How It Works

### Architecture Flow

```
┌─────────────────────────────────────────────────────┐
│ Your Editor (VSCode, etc)                           │
│ Edit: style.css, app.js, *.html, *.go files         │
└─────────────────┬───────────────────────────────────┘
                  │ (files saved to disk)
                  ▼
┌─────────────────────────────────────────────────────┐
│ Volume Mount (docker-compose.dev.yml)               │
│   volumes:                                          │
│     - .:/app  (entire codebase mounted)             │
└─────────────────┬───────────────────────────────────┘
                  │ (changes appear instantly in container)
                  ▼
┌─────────────────────────────────────────────────────┐
│ Container (golang:1.25.7 base, has all tools)       │
│                                                     │
│   ┌──────────────────────────────────────┐          │
│   │ Air (hot reload tool)                │          │
│   │ Watches: *.go, *.html, *.css, *.js   │          │
│   │                                      │          │
│   │ On change:                           │          │
│   │   - *.go → rebuild binary (2-5s)     │          │
│   │   - templates/css/js → restart only  │          │
│   └──────────────────────────────────────┘          │
│                  │                                  │
│                  ▼                                  │
│   ┌──────────────────────────────────────┐          │
│   │ ATCR AppView (ATCR_DEV_MODE=true)    │          │
│   │                                      │          │
│   │ ui.go checks DEV_MODE:               │          │
│   │   if DEV_MODE:                       │          │
│   │     templatesFS = os.DirFS("...")    │          │
│   │     publicFS = os.DirFS("...")       │          │
│   │   else:                              │          │
│   │     use embed.FS (production)        │          │
│   │                                      │          │
│   │ Result: Reads from mounted files     │          │
│   └──────────────────────────────────────┘          │
└─────────────────────────────────────────────────────┘
```
### Change Scenarios

#### Scenario 1: Edit CSS/JS/Templates

```
1. Edit pkg/appview/public/css/style.css in VSCode
2. Save file
3. Change appears in container via volume mount (instant)
4. App uses os.DirFS → reads new file from disk (instant)
5. Refresh browser → see changes
```

**Time:** **Instant** (0 seconds)
**No rebuild, no restart!**

#### Scenario 2: Edit Go Code

```
1. Edit pkg/appview/handlers/home.go
2. Save file
3. Air detects .go file change
4. Air runs: go build -o ./tmp/atcr-appview ./cmd/appview
5. Air kills old process and starts new binary
6. App runs with new code
```

**Time:** **2-5 seconds**
**Fast incremental build!**
## Implementation

### Step 1: Create Dockerfile.devel

Create `Dockerfile.devel` in the project root:

```dockerfile
# Development Dockerfile with hot reload support
FROM golang:1.25.7-trixie

# Install Air for hot reload
# (the module moved from cosmtrek/air to air-verse/air)
RUN go install github.com/air-verse/air@latest

# Install SQLite (required for CGO in ATCR)
RUN apt-get update && apt-get install -y \
    sqlite3 \
    libsqlite3-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy dependency files and download (cached layer)
COPY go.mod go.sum ./
RUN go mod download

# Note: Source code comes from volume mount
# (no COPY . . needed - that's the whole point!)

# Air will handle building and running
CMD ["air", "-c", ".air.toml"]
```
### Step 2: Create docker-compose.dev.yml

Create `docker-compose.dev.yml` in the project root:

```yaml
version: '3.8'

services:
  atcr-appview:
    build:
      context: .
      dockerfile: Dockerfile.devel
    volumes:
      # Mount entire codebase (live editing)
      - .:/app
      # Cache Go modules (faster rebuilds)
      - go-cache:/go/pkg/mod
      # Persist SQLite database
      - atcr-ui-dev:/var/lib/atcr
    environment:
      # Enable development mode (uses os.DirFS)
      ATCR_DEV_MODE: "true"

      # AppView configuration
      ATCR_HTTP_ADDR: ":5000"
      ATCR_BASE_URL: "http://localhost:5000"
      ATCR_DEFAULT_HOLD_DID: "did:web:hold01.atcr.io"

      # Database
      ATCR_UI_DATABASE_PATH: "/var/lib/atcr/ui.db"

      # Auth
      ATCR_AUTH_KEY_PATH: "/var/lib/atcr/auth/private-key.pem"

      # Jetstream (optional)
      # JETSTREAM_URL: "wss://jetstream2.us-east.bsky.network/subscribe"
      # ATCR_BACKFILL_ENABLED: "false"
    ports:
      - "5000:5000"
    networks:
      - atcr-dev

  # Add other services as needed (postgres, hold, etc)
  # atcr-hold:
  #   ...

networks:
  atcr-dev:
    driver: bridge

volumes:
  go-cache:
  atcr-ui-dev:
```
### Step 3: Create .air.toml

Create `.air.toml` in the project root:

```toml
# Air configuration for hot reload
# https://github.com/cosmtrek/air

root = "."
testdata_dir = "testdata"
tmp_dir = "tmp"

[build]
# Arguments to pass to binary (AppView needs "serve")
args_bin = ["serve"]

# Where to output the built binary
bin = "./tmp/atcr-appview"

# Build command
cmd = "go build -o ./tmp/atcr-appview ./cmd/appview"

# Delay before rebuilding (ms) - debounce rapid saves
delay = 1000

# Directories to exclude from watching
exclude_dir = [
  "tmp",
  "vendor",
  "bin",
  ".git",
  "node_modules",
  "testdata"
]

# Files to exclude from watching
exclude_file = []

# Regex patterns to exclude
exclude_regex = ["_test\\.go"]

# Don't rebuild if file content unchanged
exclude_unchanged = false

# Follow symlinks
follow_symlink = false

# Full command to run (leave empty to use cmd + bin)
full_bin = ""

# Directories to include (empty = all)
include_dir = []

# File extensions to watch
include_ext = ["go", "html", "css", "js"]

# Specific files to watch
include_file = []

# Delay before killing old process (s)
kill_delay = "0s"

# Log file for build errors
log = "build-errors.log"

# Use polling instead of fsnotify (for Docker/VM)
poll = false
poll_interval = 0

# Rerun binary if it exits
rerun = false
rerun_delay = 500

# Send interrupt signal instead of kill
send_interrupt = false

# Stop on build error
stop_on_error = false

[color]
# Colorize output
app = ""
build = "yellow"
main = "magenta"
runner = "green"
watcher = "cyan"

[log]
# Show only app logs (not build logs)
main_only = false

# Add timestamp to logs
time = false

[misc]
# Clean tmp directory on exit
clean_on_exit = false

[screen]
# Clear screen on rebuild
clear_on_rebuild = false

# Keep scrollback
keep_scroll = true
```
### Step 4: Modify pkg/appview/ui.go

Add conditional filesystem loading to `pkg/appview/ui.go`:

```go
package appview

import (
	"embed"
	"html/template"
	"io/fs"
	"log"
	"net/http"
	"os"
)

// Embedded assets (used in production).
// Note: go:embed patterns don't support "**", so embed the directories whole.

//go:embed templates
var embeddedTemplatesFS embed.FS

//go:embed static
var embeddedPublicFS embed.FS

// Actual filesystems used at runtime (conditional)
var templatesFS fs.FS
var publicFS fs.FS

func init() {
	// Development mode: read from filesystem for instant updates.
	// Both filesystems are rooted at pkg/appview so that the
	// "templates/..." and "static/..." paths match the embedded layout.
	if os.Getenv("ATCR_DEV_MODE") == "true" {
		log.Println("🔧 DEV MODE: Using filesystem for templates and static assets")
		templatesFS = os.DirFS("pkg/appview")
		publicFS = os.DirFS("pkg/appview")
	} else {
		// Production mode: use embedded assets
		log.Println("📦 PRODUCTION MODE: Using embedded assets")
		templatesFS = embeddedTemplatesFS
		publicFS = embeddedPublicFS
	}
}

// Templates returns parsed HTML templates.
// (template.ParseFS globs don't support "**" either - match one level.)
func Templates() *template.Template {
	tmpl, err := template.ParseFS(templatesFS, "templates/*/*.html")
	if err != nil {
		log.Fatalf("Failed to parse templates: %v", err)
	}
	return tmpl
}

// StaticHandler returns a handler for static files
func StaticHandler() http.Handler {
	sub, err := fs.Sub(publicFS, "static")
	if err != nil {
		log.Fatalf("Failed to create static sub-filesystem: %v", err)
	}
	return http.FileServer(http.FS(sub))
}
```

**Important:** `Templates()` must NOT cache templates in dev mode - reparsing on every request is what makes edits appear instantly. If you're caching templates, wrap the cache with a dev mode check:

```go
var templateCache *template.Template

func Templates() *template.Template {
	// Development: reparse every time (instant updates)
	if os.Getenv("ATCR_DEV_MODE") == "true" {
		tmpl, err := template.ParseFS(templatesFS, "templates/*/*.html")
		if err != nil {
			log.Printf("Template parse error: %v", err)
			return template.New("error")
		}
		return tmpl
	}

	// Production: use cached templates
	if templateCache == nil {
		tmpl, err := template.ParseFS(templatesFS, "templates/*/*.html")
		if err != nil {
			log.Fatalf("Failed to parse templates: %v", err)
		}
		templateCache = tmpl
	}
	return templateCache
}
```
### Step 5: Add to .gitignore

Add Air's temporary directory to `.gitignore`:

```
# Air hot reload
tmp/
build-errors.log
```
## Usage

### Starting Development Environment

```bash
# Build and start dev container
docker compose -f docker-compose.dev.yml up --build

# Or run in background
docker compose -f docker-compose.dev.yml up -d

# View logs
docker compose -f docker-compose.dev.yml logs -f atcr-appview
```

You should see Air starting:

```
atcr-appview | 🔧 DEV MODE: Using filesystem for templates and static assets
atcr-appview |
atcr-appview |   __    _   ___
atcr-appview |  / /\  | | | |_)
atcr-appview | /_/--\ |_| |_| \_ , built with Go
atcr-appview |
atcr-appview | watching .
atcr-appview | !exclude tmp
atcr-appview | building...
atcr-appview | running...
```
### Development Workflow

#### 1. Edit Templates/CSS/JS (Instant Updates)

```bash
# Edit any template, CSS, or JS file
vim pkg/appview/templates/pages/home.html
vim pkg/appview/public/css/style.css
vim pkg/appview/public/js/app.js

# Save file → changes appear instantly
# Just refresh browser (Cmd+R / Ctrl+R)
```

**No rebuild, no restart!** Air might restart the app, but it's instant since no compilation is needed.

#### 2. Edit Go Code (Fast Rebuild)

```bash
# Edit any Go file
vim pkg/appview/handlers/home.go

# Save file → Air detects change
# Air output shows:
#   building...
#   build successful in 2.3s
#   restarting...

# Refresh browser to see changes
```

**2-5 second rebuild** instead of 2-3 minutes!

### Stopping Development Environment

```bash
# Stop containers
docker compose -f docker-compose.dev.yml down

# Stop and remove volumes (fresh start)
docker compose -f docker-compose.dev.yml down -v
```
## Production Builds
|
||||
|
||||
**Production builds are completely unchanged:**
|
||||
|
||||
```bash
|
||||
# Production uses normal Dockerfile (embed.FS, scratch base)
|
||||
docker compose build
|
||||
|
||||
# Or specific service
|
||||
docker compose build atcr-appview
|
||||
|
||||
# Run production
|
||||
docker compose up
|
||||
```
|
||||
|
||||
**Why it works:**
|
||||
- Production doesn't set `ATCR_DEV_MODE=true`
|
||||
- `ui.go` defaults to embedded assets when env var is unset
|
||||
- Production Dockerfile still uses multi-stage build to scratch
|
||||
- No development dependencies in production image
|
||||
|
||||
## Comparison

| Change Type | Before (docker compose) | After (dev setup) | Improvement |
|------------------|-------------------------|-------------------|---------------|
| Edit CSS | 2-3 minutes | **Instant (0s)** | ♾️x faster |
| Edit JS | 2-3 minutes | **Instant (0s)** | ♾️x faster |
| Edit Template | 2-3 minutes | **Instant (0s)** | ♾️x faster |
| Edit Go Code | 2-3 minutes | **2-5 seconds** | 24-90x faster |
| Production Build | Same | **Same** | No change |

## Advanced: Local Development (No Docker)

For even faster development, run locally without Docker:

```bash
# Set environment variables
export ATCR_DEV_MODE=true
export ATCR_HTTP_ADDR=:5000
export ATCR_BASE_URL=http://localhost:5000
export ATCR_DEFAULT_HOLD_DID=did:web:hold01.atcr.io
export ATCR_UI_DATABASE_PATH=/tmp/atcr-ui.db
export ATCR_AUTH_KEY_PATH=/tmp/atcr-auth-key.pem

# Or use a .env file
source .env.appview

# Run with Air
air -c .air.toml

# Or run directly (no hot reload)
go run ./cmd/appview serve
```

**Advantages:**
- Even faster (no Docker overhead)
- Native debugging with Delve
- Direct filesystem access
- Full IDE integration

**Disadvantages:**
- Need to manage dependencies locally (SQLite, etc.)
- May differ from the production environment

## Troubleshooting

### Air Not Rebuilding

**Problem:** Air doesn't detect changes.

**Solution:**
```bash
# Check if Air is actually running
docker compose -f docker-compose.dev.yml logs atcr-appview

# Check that include_ext in .air.toml includes your file type
# Default: ["go", "html", "css", "js"]

# Restart the container
docker compose -f docker-compose.dev.yml restart atcr-appview
```

### Templates Not Updating

**Problem:** Template changes don't appear.

**Solution:**
```bash
# Check that ATCR_DEV_MODE is set
docker compose -f docker-compose.dev.yml exec atcr-appview env | grep DEV_MODE

# Should output: ATCR_DEV_MODE=true

# Check templates aren't cached (see Step 4 above)
# Templates() should reparse in dev mode
```

### Go Build Failing

**Problem:** Air shows build errors.

**Solution:**
```bash
# Check build logs
docker compose -f docker-compose.dev.yml logs atcr-appview

# Or check build-errors.log in the container
docker compose -f docker-compose.dev.yml exec atcr-appview cat build-errors.log

# Fix the Go error and save the file; Air will retry
```

### Volume Mount Not Working

**Problem:** Changes don't appear in the container.

**Solution:**
```bash
# Verify the volume mount
docker compose -f docker-compose.dev.yml exec atcr-appview ls -la /app

# Should show your source files

# On Windows/Mac, check Docker Desktop file sharing settings:
# Settings → Resources → File Sharing → add the project directory
```

### Permission Errors

**Problem:** Cannot write to `/var/lib/atcr`.

**Solution:** In `Dockerfile.devel`, add:

```dockerfile
RUN mkdir -p /var/lib/atcr && chmod 777 /var/lib/atcr
```

Or use named volumes (already in `docker-compose.dev.yml`):

```yaml
volumes:
  - atcr-ui-dev:/var/lib/atcr
```

### Slow Builds Even with Air

**Problem:** Air rebuilds slowly.

**Solution:** Use a Go module cache volume (already in `docker-compose.dev.yml`):

```yaml
volumes:
  - go-cache:/go/pkg/mod
```

Increase the Air delay to debounce rapid saves, in `.air.toml`:

```toml
delay = 2000  # 2 seconds
```

Or check if CGO is slowing builds:

```bash
# AppView needs CGO for SQLite, but you can try:
CGO_ENABLED=0 go build  # (won't work for ATCR, but good to know)
```

## Tips & Tricks

### Browser Auto-Reload (LiveReload)

Add LiveReload for automatic browser refresh:

```bash
# Install a browser extension
# Chrome: https://chrome.google.com/webstore/detail/livereload
# Firefox: https://addons.mozilla.org/en-US/firefox/addon/livereload-web-extension/

# Add livereload to .air.toml (future Air feature)
# Or use a separate tool like browsersync
```

### Database Resets

The development database lives in a named volume:

```bash
# Reset database (fresh start)
docker compose -f docker-compose.dev.yml down -v
docker compose -f docker-compose.dev.yml up

# Or delete the specific volume
docker volume rm atcr_atcr-ui-dev
```

### Multiple Environments

Run dev and production side-by-side:

```bash
# Development on port 5000
docker compose -f docker-compose.dev.yml up -d

# Production on port 5001
docker compose up -d

# Now you can compare behavior
```

### Debugging with Delve

Add Delve to `Dockerfile.devel`:

```dockerfile
RUN go install github.com/go-delve/delve/cmd/dlv@latest

# Change CMD to use delve
CMD ["dlv", "debug", "./cmd/appview", "--headless", "--listen=:2345", "--api-version=2", "--accept-multiclient", "--", "serve"]
```

Then connect with VS Code or GoLand.

## Summary

**Development Setup (One-Time):**
1. Create `Dockerfile.devel`
2. Create `docker-compose.dev.yml`
3. Create `.air.toml`
4. Modify `pkg/appview/ui.go` for conditional DirFS
5. Add `tmp/` to `.gitignore`

**Daily Development:**
```bash
# Start
docker compose -f docker-compose.dev.yml up

# Edit files in your editor
# Changes appear instantly (CSS/JS/templates)
# Or in 2-5 seconds (Go code)

# Stop
docker compose -f docker-compose.dev.yml down
```

**Production (Unchanged):**
```bash
docker compose build
docker compose up
```

**Result:** 100x faster development iteration! 🚀

---

**New file:** `docs/DIRECT_HOLD_ACCESS.md` (304 lines)

# Accessing Hold Data Without AppView

This document explains how to retrieve your data directly from a hold service without going through the ATCR AppView. This is useful for:

- GDPR data export requests
- Backup and migration
- Debugging and development
- Building alternative clients

## Quick Start: App Passwords (Recommended)

The simplest way to authenticate is using an ATProto app password. This avoids the complexity of OAuth + DPoP.

### Step 1: Create an App Password

1. Go to your Bluesky settings: https://bsky.app/settings/app-passwords
2. Create a new app password
3. Save it securely (you'll only see it once)

### Step 2: Get a Session Token

```bash
# Replace with your handle and app password
HANDLE="yourhandle.bsky.social"
APP_PASSWORD="xxxx-xxxx-xxxx-xxxx"

# Create a session with your PDS
SESSION=$(curl -s -X POST "https://bsky.social/xrpc/com.atproto.server.createSession" \
  -H "Content-Type: application/json" \
  -d "{\"identifier\": \"$HANDLE\", \"password\": \"$APP_PASSWORD\"}")

# Extract tokens
ACCESS_JWT=$(echo "$SESSION" | jq -r '.accessJwt')
DID=$(echo "$SESSION" | jq -r '.did')
PDS=$(echo "$SESSION" | jq -r '.didDoc.service[0].serviceEndpoint')

echo "DID: $DID"
echo "PDS: $PDS"
```

### Step 3: Get a Service Token for the Hold

```bash
# The hold DID you want to access (e.g., did:web:hold01.atcr.io)
HOLD_DID="did:web:hold01.atcr.io"

# Get a service token from your PDS
SERVICE_TOKEN=$(curl -s -X GET "$PDS/xrpc/com.atproto.server.getServiceAuth?aud=$HOLD_DID" \
  -H "Authorization: Bearer $ACCESS_JWT" | jq -r '.token')

echo "Service Token: $SERVICE_TOKEN"
```

### Step 4: Call Hold Endpoints

Now you can call any authenticated hold endpoint with the service token:

```bash
# Export your data from the hold
curl -s "https://hold01.atcr.io/xrpc/io.atcr.hold.exportUserData" \
  -H "Authorization: Bearer $SERVICE_TOKEN" | jq .
```

### Complete Script

Here's a complete script that does all of the above:

```bash
#!/bin/bash
# export-hold-data.sh - Export your data from an ATCR hold

set -e

# Configuration
HANDLE="${1:-yourhandle.bsky.social}"
APP_PASSWORD="${2:-xxxx-xxxx-xxxx-xxxx}"
HOLD_DID="${3:-did:web:hold01.atcr.io}"

# Default PDS (Bluesky's main PDS)
DEFAULT_PDS="https://bsky.social"

echo "Authenticating as $HANDLE..."

# Step 1: Create session
SESSION=$(curl -s -X POST "$DEFAULT_PDS/xrpc/com.atproto.server.createSession" \
  -H "Content-Type: application/json" \
  -d "{\"identifier\": \"$HANDLE\", \"password\": \"$APP_PASSWORD\"}")

# Check for errors
if echo "$SESSION" | jq -e '.error' > /dev/null 2>&1; then
  echo "Error: $(echo "$SESSION" | jq -r '.message')"
  exit 1
fi

ACCESS_JWT=$(echo "$SESSION" | jq -r '.accessJwt')
DID=$(echo "$SESSION" | jq -r '.did')

# Try to get the PDS from the didDoc, fall back to the default
PDS=$(echo "$SESSION" | jq -r '.didDoc.service[] | select(.id == "#atproto_pds") | .serviceEndpoint' 2>/dev/null || echo "$DEFAULT_PDS")
if [ "$PDS" = "null" ] || [ -z "$PDS" ]; then
  PDS="$DEFAULT_PDS"
fi

echo "Authenticated as $DID"
echo "PDS: $PDS"

# Step 2: Get a service token for the hold
echo "Getting service token for $HOLD_DID..."
SERVICE_RESPONSE=$(curl -s -X GET "$PDS/xrpc/com.atproto.server.getServiceAuth?aud=$HOLD_DID" \
  -H "Authorization: Bearer $ACCESS_JWT")

if echo "$SERVICE_RESPONSE" | jq -e '.error' > /dev/null 2>&1; then
  echo "Error getting service token: $(echo "$SERVICE_RESPONSE" | jq -r '.message')"
  exit 1
fi

SERVICE_TOKEN=$(echo "$SERVICE_RESPONSE" | jq -r '.token')

# Step 3: Resolve the hold DID to a URL
if [[ "$HOLD_DID" == did:web:* ]]; then
  # did:web:example.com -> https://example.com
  HOLD_HOST="${HOLD_DID#did:web:}"
  HOLD_URL="https://$HOLD_HOST"
else
  echo "Error: Only did:web holds are currently supported for direct resolution"
  exit 1
fi

echo "Hold URL: $HOLD_URL"

# Step 4: Export data
echo "Exporting data from $HOLD_URL..."
curl -s "$HOLD_URL/xrpc/io.atcr.hold.exportUserData" \
  -H "Authorization: Bearer $SERVICE_TOKEN" | jq .
```

Usage:

```bash
chmod +x export-hold-data.sh
./export-hold-data.sh yourhandle.bsky.social xxxx-xxxx-xxxx-xxxx did:web:hold01.atcr.io
```

---

## Available Hold Endpoints

Once you have a service token, you can call these endpoints:

### Data Export (GDPR)

```bash
GET /xrpc/io.atcr.hold.exportUserData
Authorization: Bearer {service_token}
```

Returns all your data stored on that hold:
- Layer records (blobs you've pushed)
- Crew membership status
- Usage statistics
- Whether you're the hold captain

### Quota Information

```bash
GET /xrpc/io.atcr.hold.getQuota?userDid={your_did}
# No auth required - just needs your DID
```

### Blob Download (if you have read access)

```bash
GET /xrpc/com.atproto.sync.getBlob?did={owner_did}&cid={blob_digest}
Authorization: Bearer {service_token}
```

Returns a presigned URL to download the blob directly from storage.

---

## OAuth + DPoP (Advanced)

App passwords are the simplest option, but OAuth with DPoP is the "proper" way to authenticate in ATProto. However, it's significantly more complex because:

1. **DPoP (Demonstrating Proof of Possession)** - Every request requires a cryptographically signed JWT proving you control a specific key
2. **PAR (Pushed Authorization Requests)** - Authorization parameters are sent server-to-server
3. **PKCE (Proof Key for Code Exchange)** - Prevents authorization code interception

### Why DPoP Makes Curl Impractical

Each request requires a fresh DPoP proof JWT with:
- Unique `jti` (request ID)
- Current `iat` timestamp
- HTTP method and URL bound to the request
- Server-provided `nonce`
- Signature using your P-256 private key

Example DPoP proof structure (header, then claims):

```json
{
  "alg": "ES256",
  "typ": "dpop+jwt",
  "jwk": { "kty": "EC", "crv": "P-256", "x": "...", "y": "..." }
}
{
  "htm": "GET",
  "htu": "https://bsky.social/xrpc/com.atproto.server.getServiceAuth",
  "jti": "550e8400-e29b-41d4-a716-446655440000",
  "iat": 1735689100,
  "nonce": "server-provided-nonce"
}
```

### If You Need OAuth

If you need OAuth (e.g., for a production application), you'll want to use a library:

**Go:**

```go
import "github.com/bluesky-social/indigo/atproto/auth/oauth"
```

**TypeScript/JavaScript:**

```bash
npm install @atproto/oauth-client-node
```

**Python:**

```bash
pip install atproto
```

These libraries handle all the DPoP complexity for you.

### High-Level OAuth Flow

For documentation purposes, here's what the flow looks like:

1. **Resolve identity**: `handle` → `DID` → `PDS endpoint`
2. **Discover OAuth server**: `GET {pds}/.well-known/oauth-authorization-server`
3. **Generate DPoP key**: Create a P-256 key pair
4. **PAR request**: Send authorization parameters (with DPoP proof)
5. **User authorization**: Browser-based login
6. **Token exchange**: Exchange the code for tokens (with DPoP proof)
7. **Use tokens**: All subsequent requests include DPoP proofs

Each step after #3 requires generating a fresh DPoP proof JWT, which is why libraries are essential.

---

## Troubleshooting

### "Invalid token" or "Token expired"

Service tokens are only valid for ~60 seconds. Get a fresh one:

```bash
SERVICE_TOKEN=$(curl -s "$PDS/xrpc/com.atproto.server.getServiceAuth?aud=$HOLD_DID" \
  -H "Authorization: Bearer $ACCESS_JWT" | jq -r '.token')
```

### "Session expired"

Your access JWT from `createSession` has expired. Create a new session:

```bash
SESSION=$(curl -s -X POST "$PDS/xrpc/com.atproto.server.createSession" ...)
ACCESS_JWT=$(echo "$SESSION" | jq -r '.accessJwt')
```

### "Audience mismatch"

The service token is scoped to a specific hold. Make sure `HOLD_DID` matches exactly what's in the `aud` claim of your token.

### "Access denied: user is not a crew member"

You don't have access to this hold. You need to either:
- Be the hold captain (owner)
- Be a crew member with appropriate permissions

### Finding Your Hold DID

Check your sailor profile to find your default hold:

```bash
curl -s "https://bsky.social/xrpc/com.atproto.repo.getRecord?repo=$DID&collection=io.atcr.sailor.profile&rkey=self" \
  -H "Authorization: Bearer $ACCESS_JWT" | jq -r '.value.defaultHold'
```

Or check your manifest records for the hold where your images are stored:

```bash
curl -s "https://bsky.social/xrpc/com.atproto.repo.listRecords?repo=$DID&collection=io.atcr.manifest&limit=1" \
  -H "Authorization: Bearer $ACCESS_JWT" | jq -r '.records[0].value.holdDid'
```

---

## Security Notes

- **App passwords** are scoped tokens that can be revoked without changing your main password
- **Service tokens** are short-lived (~60 seconds) and scoped to a specific hold
- **Never share** your app password or access tokens
- Service tokens can only be used for the specific hold they were requested for (`aud` claim)

---

## References

- [ATProto OAuth Specification](https://atproto.com/specs/oauth)
- [DPoP RFC 9449](https://datatracker.ietf.org/doc/html/rfc9449)
- [Bluesky OAuth Guide](https://docs.bsky.app/docs/advanced-guides/oauth-client)
- [ATCR BYOS Documentation](./BYOS.md)

---

**New file:** `docs/HOLD_AS_CA.md` (756 lines)

# Hold-as-Certificate-Authority Architecture

## ⚠️ Important Notice

This document describes an **optional enterprise feature** for X.509 PKI compliance. The hold-as-CA approach introduces **centralization trade-offs** that contradict ATProto's decentralized philosophy.

**Default Recommendation:** Use [plugin-based integration](./INTEGRATION_STRATEGY.md) instead. Only implement hold-as-CA if your organization has specific X.509 PKI compliance requirements.

## Overview

The hold-as-CA architecture allows ATCR to generate Notation/Notary v2-compatible signatures by having hold services act as Certificate Authorities that issue X.509 certificates for users.

### The Problem

- **ATProto signatures** use the K-256 (secp256k1) elliptic curve
- **Notation** only supports the P-256, P-384, and P-521 elliptic curves
- **Cannot convert** K-256 signatures to P-256 (different cryptographic curves)
- **Must re-sign** with P-256 keys for Notation compatibility

### The Solution

Hold services act as trusted Certificate Authorities (CAs):

1. User pushes image → manifest signed by PDS with K-256 (ATProto)
2. Hold verifies the ATProto signature is valid
3. Hold generates an ephemeral P-256 key pair for the user
4. Hold issues an X.509 certificate to the user's DID
5. Hold signs the manifest with the P-256 key
6. Hold creates a Notation signature envelope (JWS format)
7. Both the ATProto and Notation signatures are stored

**Result:** Images have two signatures:
- **ATProto signature** (K-256) - Decentralized, DID-based
- **Notation signature** (P-256) - Centralized, X.509 PKI

## Architecture

### Certificate Chain

```
Hold Root CA Certificate (self-signed, P-256)
  └── User Certificate (issued to DID, P-256)
        └── Image Manifest Signature
```

**Hold Root CA:**

```
Subject:            CN=ATCR Hold CA - did:web:hold01.atcr.io
Issuer:             Self (self-signed)
Key Usage:          Digital Signature, Certificate Sign
Basic Constraints:  CA=true, pathLen=1
Algorithm:          ECDSA P-256
Validity:           10 years
```

**User Certificate:**

```
Subject:            CN=did:plc:alice123
SAN:                URI:did:plc:alice123
Issuer:             Hold Root CA
Key Usage:          Digital Signature
Extended Key Usage: Code Signing
Algorithm:          ECDSA P-256
Validity:           24 hours (short-lived)
```

### Push Flow

```
1. User: docker push atcr.io/alice/myapp:latest
        ↓
2. AppView stores manifest in alice's PDS
   - PDS signs with K-256 (ATProto standard)
   - Signature stored in repository commit
        ↓
3. AppView requests hold to co-sign
   POST /xrpc/io.atcr.hold.coSignManifest
   {
     "userDid": "did:plc:alice123",
     "manifestDigest": "sha256:abc123...",
     "atprotoSignature": {...}
   }
        ↓
4. Hold verifies ATProto signature
   a. Resolve alice's DID → public key
   b. Fetch commit from alice's PDS
   c. Verify K-256 signature
   d. Ensure signature is valid
   If verification fails → REJECT
        ↓
5. Hold generates ephemeral P-256 key pair
   privateKey := ecdsa.GenerateKey(elliptic.P256())
        ↓
6. Hold issues X.509 certificate
   Subject:     CN=did:plc:alice123
   SAN:         URI:did:plc:alice123
   Issuer:      Hold CA
   NotBefore:   now
   NotAfter:    now + 24 hours
   KeyUsage:    Digital Signature
   ExtKeyUsage: Code Signing
   Sign certificate with hold's CA private key
        ↓
7. Hold signs manifest digest
   hash := SHA256(manifestBytes)
   signature := ECDSA_P256(hash, privateKey)
        ↓
8. Hold creates Notation JWS envelope
   {
     "protected": {...},
     "payload": "base64(manifestDigest)",
     "signature": "base64(p256Signature)",
     "header": {
       "x5c": [
         "base64(userCert)",
         "base64(holdCACert)"
       ]
     }
   }
        ↓
9. Hold returns signature to AppView
        ↓
10. AppView stores Notation signature
    - Create ORAS artifact manifest
    - Upload JWS envelope as layer blob
    - Link to image via subject field
    - artifactType: application/vnd.cncf.notary...
```

### Verification Flow

```
User: notation verify atcr.io/alice/myapp:latest
        ↓
1. Notation queries Referrers API
   GET /v2/alice/myapp/referrers/sha256:abc123
   → Discovers Notation signature artifact
        ↓
2. Notation downloads JWS envelope
   - Parses JSON Web Signature
   - Extracts certificate chain from x5c header
        ↓
3. Notation validates certificate chain
   a. User cert issued by Hold CA? ✓
   b. Hold CA cert in trust store? ✓
   c. Certificate not expired? ✓
   d. Key usage correct? ✓
   e. Subject matches policy? ✓
        ↓
4. Notation verifies signature
   a. Extract public key from user certificate
   b. Compute manifest hash: SHA256(manifest)
   c. Verify: ECDSA_P256(hash, sig, pubKey) ✓
        ↓
5. Success: Image verified ✓
   Signed by: did:plc:alice123 (via Hold CA)
```

## Implementation

### Hold CA Certificate Generation

```go
// cmd/hold/main.go - CA initialization
func (h *Hold) initializeCA(ctx context.Context) error {
    caKeyPath := filepath.Join(h.config.DataDir, "ca-private-key.pem")
    caCertPath := filepath.Join(h.config.DataDir, "ca-certificate.pem")

    // Load the existing CA or generate a new one
    if exists(caKeyPath) && exists(caCertPath) {
        h.caKey = loadPrivateKey(caKeyPath)
        h.caCert = loadCertificate(caCertPath)
        return nil
    }

    // Generate a P-256 key pair for the CA
    caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        return fmt.Errorf("failed to generate CA key: %w", err)
    }

    // Create the CA certificate template
    serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))

    template := &x509.Certificate{
        SerialNumber: serialNumber,
        Subject: pkix.Name{
            CommonName: fmt.Sprintf("ATCR Hold CA - %s", h.DID),
        },
        NotBefore: time.Now(),
        NotAfter:  time.Now().AddDate(10, 0, 0), // 10 years

        KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
        IsCA:                  true,
        MaxPathLen:            1, // Can only issue end-entity certificates
    }

    // Self-sign
    certDER, err := x509.CreateCertificate(
        rand.Reader,
        template,
        template, // Self-signed: issuer = subject
        &caKey.PublicKey,
        caKey,
    )
    if err != nil {
        return fmt.Errorf("failed to create CA certificate: %w", err)
    }

    caCert, _ := x509.ParseCertificate(certDER)

    // Save to disk (0600 permissions)
    savePrivateKey(caKeyPath, caKey)
    saveCertificate(caCertPath, caCert)

    h.caKey = caKey
    h.caCert = caCert

    log.Info("Generated new CA certificate", "did", h.DID, "expires", caCert.NotAfter)
    return nil
}
```

### User Certificate Issuance

```go
// pkg/hold/cosign.go
func (h *Hold) issueUserCertificate(userDID string) (*x509.Certificate, *ecdsa.PrivateKey, error) {
    // Generate an ephemeral P-256 key for the user
    userKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        return nil, nil, fmt.Errorf("failed to generate user key: %w", err)
    }

    serialNumber, _ := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))

    // Parse the DID for the SAN
    sanURI, _ := url.Parse(userDID)

    template := &x509.Certificate{
        SerialNumber: serialNumber,
        Subject: pkix.Name{
            CommonName: userDID,
        },
        URIs: []*url.URL{sanURI}, // Subject Alternative Name

        NotBefore: time.Now(),
        NotAfter:  time.Now().Add(24 * time.Hour), // Short-lived: 24 hours

        KeyUsage:              x509.KeyUsageDigitalSignature,
        ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageCodeSigning},
        BasicConstraintsValid: true,
        IsCA:                  false,
    }

    // Sign with the hold's CA key
    certDER, err := x509.CreateCertificate(
        rand.Reader,
        template,
        h.caCert, // Issuer: Hold CA
        &userKey.PublicKey,
        h.caKey, // Sign with the CA private key
    )
    if err != nil {
        return nil, nil, fmt.Errorf("failed to create user certificate: %w", err)
    }

    userCert, _ := x509.ParseCertificate(certDER)

    return userCert, userKey, nil
}
```

### Co-Signing XRPC Endpoint

```go
// pkg/hold/oci/xrpc.go
func (s *Server) handleCoSignManifest(ctx context.Context, req *CoSignRequest) (*CoSignResponse, error) {
    // 1. Verify the caller is authenticated
    did, err := s.auth.VerifyToken(ctx, req.Token)
    if err != nil {
        return nil, fmt.Errorf("authentication failed: %w", err)
    }

    // 2. Verify the ATProto signature
    valid, err := s.verifyATProtoSignature(ctx, req.UserDID, req.ManifestDigest, req.ATProtoSignature)
    if err != nil || !valid {
        return nil, fmt.Errorf("ATProto signature verification failed: %w", err)
    }

    // 3. Issue a certificate for the user
    userCert, userKey, err := s.hold.issueUserCertificate(req.UserDID)
    if err != nil {
        return nil, fmt.Errorf("failed to issue certificate: %w", err)
    }

    // 4. Sign the manifest with the user's key
    manifestHash := sha256.Sum256([]byte(req.ManifestDigest))
    signature, err := ecdsa.SignASN1(rand.Reader, userKey, manifestHash[:])
    if err != nil {
        return nil, fmt.Errorf("failed to sign manifest: %w", err)
    }

    // 5. Create the JWS envelope
    jws, err := s.createJWSEnvelope(signature, userCert, s.hold.caCert, req.ManifestDigest)
    if err != nil {
        return nil, fmt.Errorf("failed to create JWS: %w", err)
    }

    return &CoSignResponse{
        JWS:           jws,
        Certificate:   encodeCertificate(userCert),
        CACertificate: encodeCertificate(s.hold.caCert),
    }, nil
}
```

## Trust Model

### Centralization Analysis

**ATProto Model (Decentralized):**
- Each PDS is independent
- User controls which PDS to use
- Trust attaches to the user's DID, not specific infrastructure
- PDS compromise affects only that PDS's users
- Multiple PDSs provide redundancy

**Hold-as-CA Model (Centralized):**
- Hold acts as a single Certificate Authority
- All users must trust hold's CA certificate
- Hold compromise = attacker can issue certificates for ANY user
- Hold becomes a single point of failure
- Users depend on hold operator honesty

### What Hold Vouches For

When hold issues a certificate, it attests:

✅ **"I verified that [DID] signed this manifest with ATProto"**
- Hold validated the ATProto signature
- Hold confirmed the signature matches the user's DID
- Hold checked the signature at a specific time

❌ **"This image is safe"**
- Hold does NOT audit image contents
- Certificate ≠ vulnerability scan
- Signature ≠ security guarantee

❌ **"I control this DID"**
- Hold does NOT control the user's DID
- DID ownership is independent
- Hold cannot revoke DIDs

### Threat Model

**Scenario 1: Hold Private Key Compromise**

**Attack:**
- Attacker steals hold's CA private key
- Can issue certificates for any DID
- Can sign malicious images as any user

**Impact:**
- **CRITICAL** - All users affected
- Attacker can impersonate any user
- All signatures become untrustworthy

**Detection:**
- Certificate Transparency logs (if implemented)
- Unusual certificate issuance patterns
- Users report unexpected signatures

**Mitigation:**
- Store CA key in Hardware Security Module (HSM)
- Strict access controls
- Audit logging
- Regular key rotation

**Recovery:**
- Revoke compromised CA certificate
- Generate new CA certificate
- Re-issue all active certificates
- Notify all users
- Update trust stores

---

**Scenario 2: Malicious Hold Operator**

**Attack:**
- Hold operator issues certificates without verifying ATProto signatures
- Hold operator signs malicious images
- Hold operator backdates certificates

**Impact:**
- **HIGH** - Trust model broken
- Users receive signed malicious images
- Difficult to detect without ATProto cross-check

**Detection:**
- Compare Notation signature timestamp with ATProto commit time
- Verify ATProto signature exists independently
- Monitor hold's signing patterns

**Mitigation:**
- Audit trail linking certificates to ATProto signatures
- Public transparency logs
- Multi-signature requirements
- Periodically verify ATProto signatures

**Recovery:**
- Identify malicious certificates
- Revoke hold's CA trust
- Switch to different hold
- Re-verify all images

---

**Scenario 3: Certificate Theft**

**Attack:**
- Attacker steals issued user certificate + private key
- Uses it to sign malicious images

**Impact:**
- **LOW-MEDIUM** - Limited scope
- Affects only specific user/image
- Short validity period (24 hours)

**Detection:**
- Unexpected signature timestamps
- Images signed from unknown locations

**Mitigation:**
- Short certificate validity (24 hours)
- Ephemeral keys (not stored long-term)
- Certificate revocation if detected

**Recovery:**
- Wait for certificate expiration (24 hours)
- Revoke specific certificate
- Investigate compromise source

## Certificate Management

### Expiration Strategy

**Short-Lived Certificates (24 hours):**

**Pros:**
- ✅ Minimal revocation infrastructure needed
- ✅ Compromise window is tiny
- ✅ Automatic cleanup
- ✅ Lower CRL/OCSP overhead

**Cons:**
- ❌ Old images become unverifiable quickly
- ❌ Requires re-signing for historical verification
- ❌ Storage: multiple signatures for same image

**Solution: On-Demand Re-Signing**
```
User pulls old image → Notation verification fails (expired cert)
  → User requests re-signing: POST /xrpc/io.atcr.hold.reSignManifest
  → Hold verifies ATProto signature still valid
  → Hold issues new certificate (24 hours)
  → Hold creates new Notation signature
  → User can verify with fresh certificate
```

### Revocation

**Certificate Revocation List (CRL):**
```
Hold publishes CRL at: https://hold01.atcr.io/ca.crl

Notation configured to check CRL:
{
  "trustPolicies": [{
    "name": "atcr-images",
    "signatureVerification": {
      "level": "strict",
      "override": {
        "revocationValidation": "strict"
      }
    }
  }]
}
```

**OCSP (Online Certificate Status Protocol):**
- Hold runs OCSP responder: `https://hold01.atcr.io/ocsp`
- Real-time certificate status checks
- Lower overhead than CRL downloads

**Revocation Triggers:**
- Key compromise detected
- Malicious signing detected
- User request
- DID ownership change

### CA Key Rotation

**Rotation Procedure:**

1. **Generate new CA key pair**
2. **Create new CA certificate**
3. **Cross-sign old CA with new CA** (transition period)
4. **Distribute new CA certificate** to all users
5. **Begin issuing with new CA** for new signatures
6. **Grace period** (30 days): Accept both old and new CA
7. **Retire old CA** after grace period

**Frequency:** Every 2-3 years (longer than short-lived certs)

## Trust Store Distribution

### Problem

Users must add hold's CA certificate to their Notation trust store for verification to work.

### Manual Distribution

```bash
# 1. Download hold's CA certificate
curl https://hold01.atcr.io/ca.crt -o hold01-ca.crt

# 2. Verify fingerprint (out-of-band)
openssl x509 -in hold01-ca.crt -fingerprint -noout
# Compare with published fingerprint

# 3. Add to Notation trust store
notation cert add --type ca --store atcr-holds hold01-ca.crt
```

### Automated Distribution

**ATCR CLI tool:**
```bash
atcr trust add hold01.atcr.io
# → Fetches CA certificate
# → Verifies via HTTPS + DNSSEC
# → Adds to Notation trust store
# → Configures trust policy

atcr trust list
# → Shows trusted holds with fingerprints
```

### System-Wide Trust

**For enterprise deployments:**

**Debian/Ubuntu:**
```bash
# Install CA certificate system-wide
cp hold01-ca.crt /usr/local/share/ca-certificates/atcr-hold01.crt
update-ca-certificates
```

**RHEL/CentOS:**
```bash
cp hold01-ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust
```

**Container images:**
```dockerfile
FROM ubuntu:22.04
COPY hold01-ca.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
```

## Configuration

### Hold Service

**Environment variables:**
```bash
# Enable co-signing feature
HOLD_COSIGN_ENABLED=true

# CA certificate and key paths
HOLD_CA_CERT_PATH=/var/lib/atcr/hold/ca-certificate.pem
HOLD_CA_KEY_PATH=/var/lib/atcr/hold/ca-private-key.pem

# Certificate validity
HOLD_CERT_VALIDITY_HOURS=24

# OCSP responder
HOLD_OCSP_ENABLED=true
HOLD_OCSP_URL=https://hold01.atcr.io/ocsp

# CRL distribution
HOLD_CRL_ENABLED=true
HOLD_CRL_URL=https://hold01.atcr.io/ca.crl
```

### Notation Trust Policy

```json
{
  "version": "1.0",
  "trustPolicies": [{
    "name": "atcr-images",
    "registryScopes": ["atcr.io/*/*"],
    "signatureVerification": {
      "level": "strict",
      "override": {
        "revocationValidation": "strict"
      }
    },
    "trustStores": ["ca:atcr-holds"],
    "trustedIdentities": [
      "x509.subject: CN=did:plc:*",
      "x509.subject: CN=did:web:*"
    ]
  }]
}
```

## When to Use Hold-as-CA

### ✅ Use When

**Enterprise X.509 PKI Compliance:**
- Organization requires standard X.509 certificates
- Existing security policies mandate PKI
- Audit requirements for certificate chains
- Integration with existing CA infrastructure

**Tool Compatibility:**
- Must use standard Notation without plugins
- Cannot deploy custom verification tools
- Existing tooling expects X.509 signatures

**Centralized Trust Acceptable:**
- Organization already uses centralized trust model
- Hold operator is internal/trusted team
- Centralization risk is acceptable trade-off

### ❌ Don't Use When

**Default Deployment:**
- Most users should use [plugin-based approach](./INTEGRATION_STRATEGY.md)
- Plugins maintain decentralization
- Plugins reuse existing ATProto signatures

**Small Teams / Startups:**
- Certificate management overhead too high
- Don't need X.509 compliance
- Prefer simpler architecture

**Maximum Decentralization Required:**
- Cannot accept hold as single trust point
- Must maintain pure ATProto model
- Centralization contradicts project goals

## Comparison: Hold-as-CA vs. Plugins

| Aspect | Hold-as-CA | Plugin Approach |
|--------|------------|-----------------|
| **Standard compliance** | ✅ Full X.509/PKI | ⚠️ Custom verification |
| **Tool compatibility** | ✅ Notation works unchanged | ❌ Requires plugin install |
| **Decentralization** | ❌ Centralized (hold CA) | ✅ Decentralized (DIDs) |
| **ATProto alignment** | ❌ Against philosophy | ✅ ATProto-native |
| **Signature reuse** | ❌ Must re-sign (P-256) | ✅ Reuses ATProto (K-256) |
| **Certificate mgmt** | 🔴 High overhead | 🟢 None |
| **Trust distribution** | 🔴 Must distribute CA cert | 🟢 DID resolution |
| **Hold compromise** | 🔴 All users affected | 🟢 Metadata only |
| **Operational cost** | 🔴 High | 🟢 Low |
| **Use case** | Enterprise PKI | General purpose |

## Recommendations

### Default Approach: Plugins

For most deployments, use plugin-based verification:
- **Ratify plugin** for Kubernetes
- **OPA Gatekeeper provider** for policy enforcement
- **Containerd verifier** for runtime checks
- **atcr-verify CLI** for general purpose

See [Integration Strategy](./INTEGRATION_STRATEGY.md) for details.

### Optional: Hold-as-CA for Enterprise

Only implement hold-as-CA if you have specific requirements:
- Enterprise X.509 PKI mandates
- Cannot use plugins (restricted environments)
- Accept centralization trade-off

**Implement as opt-in feature:**
```bash
# Users explicitly enable co-signing
docker push atcr.io/alice/myapp:latest --sign=notation

# Or via environment variable
export ATCR_ENABLE_COSIGN=true
docker push atcr.io/alice/myapp:latest
```

### Security Best Practices

**If implementing hold-as-CA:**

1. **Store CA key in HSM** - Never on filesystem
2. **Audit all certificate issuance** - Log every cert
3. **Public transparency log** - Publish all certificates
4. **Short certificate validity** - 24 hours max
5. **Monitor unusual patterns** - Alert on anomalies
6. **Regular CA key rotation** - Every 2-3 years
7. **Cross-check ATProto** - Verify both signatures match
8. **Incident response plan** - Prepare for compromise

## See Also

- [ATProto Signatures](./ATPROTO_SIGNATURES.md) - How ATProto signing works
- [Integration Strategy](./INTEGRATION_STRATEGY.md) - Overview of integration approaches
- [Signature Integration](./SIGNATURE_INTEGRATION.md) - Tool-specific integration guides

---

`docs/HOLD_DISCOVERY.md` (1721 lines, new file): diff suppressed because it is too large.

`docs/HOLD_XRPC_ENDPOINTS.md` (119 lines, new file):

# Hold Service XRPC Endpoints

This document lists all XRPC endpoints implemented in the Hold service (`pkg/hold/`).

## PDS Endpoints (`pkg/hold/pds/xrpc.go`)

### Public (No Auth Required)

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/xrpc/_health` | GET | Health check |
| `/xrpc/com.atproto.server.describeServer` | GET | Server metadata |
| `/xrpc/com.atproto.repo.describeRepo` | GET | Repository information |
| `/xrpc/com.atproto.repo.getRecord` | GET | Retrieve a single record |
| `/xrpc/com.atproto.repo.listRecords` | GET | List records in a collection (paginated) |
| `/xrpc/com.atproto.sync.listRepos` | GET | List all repositories |
| `/xrpc/com.atproto.sync.getRecord` | GET | Get record as CAR file |
| `/xrpc/com.atproto.sync.getRepo` | GET | Full repository as CAR file |
| `/xrpc/com.atproto.sync.getRepoStatus` | GET | Repository hosting status |
| `/xrpc/com.atproto.sync.subscribeRepos` | GET | WebSocket firehose |
| `/xrpc/com.atproto.identity.resolveHandle` | GET | Resolve handle to DID |
| `/xrpc/app.bsky.actor.getProfile` | GET | Get actor profile |
| `/xrpc/app.bsky.actor.getProfiles` | GET | Get multiple profiles |
| `/xrpc/io.atcr.hold.listTiers` | GET | List hold's available tiers with quotas and features |
| `/.well-known/did.json` | GET | DID document |
| `/.well-known/atproto-did` | GET | DID for handle resolution |

### Conditional Auth (based on captain.public)

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/xrpc/com.atproto.sync.getBlob` | GET/HEAD | Get blob (routes OCI vs ATProto) |

### Owner/Crew Admin Required

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/xrpc/com.atproto.repo.deleteRecord` | POST | Delete a record |
| `/xrpc/com.atproto.repo.uploadBlob` | POST | Upload ATProto blob |

### Auth Required (Service Token or DPoP)

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/xrpc/io.atcr.hold.requestCrew` | POST | Request crew membership |
| `/xrpc/io.atcr.hold.exportUserData` | GET | GDPR data export (returns user's records) |

### Appview Token Required

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/xrpc/io.atcr.hold.updateCrewTier` | POST | Update a crew member's tier (appview-only) |

---

## OCI Multipart Upload Endpoints (`pkg/hold/oci/xrpc.go`)

All require `blob:write` permission via service token:

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/xrpc/io.atcr.hold.initiateUpload` | POST | Start multipart upload |
| `/xrpc/io.atcr.hold.getPartUploadUrl` | POST | Get presigned URL for part |
| `/xrpc/io.atcr.hold.uploadPart` | PUT | Direct buffered part upload |
| `/xrpc/io.atcr.hold.completeUpload` | POST | Finalize multipart upload |
| `/xrpc/io.atcr.hold.abortUpload` | POST | Cancel multipart upload |
| `/xrpc/io.atcr.hold.notifyManifest` | POST | Notify manifest push (creates layer records + optional Bluesky post) |

---

## ATCR Hold-Specific Endpoints (`io.atcr.hold.*`)

| Endpoint | Method | Auth | Description |
|----------|--------|------|-------------|
| `/xrpc/io.atcr.hold.initiateUpload` | POST | blob:write | Start multipart upload |
| `/xrpc/io.atcr.hold.getPartUploadUrl` | POST | blob:write | Get presigned URL for part |
| `/xrpc/io.atcr.hold.uploadPart` | PUT | blob:write | Direct buffered part upload |
| `/xrpc/io.atcr.hold.completeUpload` | POST | blob:write | Finalize multipart upload |
| `/xrpc/io.atcr.hold.abortUpload` | POST | blob:write | Cancel multipart upload |
| `/xrpc/io.atcr.hold.notifyManifest` | POST | blob:write | Notify manifest push |
| `/xrpc/io.atcr.hold.requestCrew` | POST | auth | Request crew membership |
| `/xrpc/io.atcr.hold.exportUserData` | GET | auth | GDPR data export |
| `/xrpc/io.atcr.hold.getQuota` | GET | none | Get user quota info |
| `/xrpc/io.atcr.hold.getLayersForManifest` | GET | none | Get layer records for a manifest AT-URI |
| `/xrpc/io.atcr.hold.image.getConfig` | GET | none | Get OCI image config record for a manifest digest |
| `/xrpc/io.atcr.hold.listTiers` | GET | none | List hold's available tiers with quotas and features (scanOnPush) |
| `/xrpc/io.atcr.hold.updateCrewTier` | POST | appview token | Update crew member's tier |

---

## Standard ATProto Endpoints (excluding io.atcr.hold.*)

| Endpoint |
|----------|
| /xrpc/_health |
| /xrpc/com.atproto.server.describeServer |
| /xrpc/com.atproto.repo.describeRepo |
| /xrpc/com.atproto.repo.getRecord |
| /xrpc/com.atproto.repo.listRecords |
| /xrpc/com.atproto.repo.deleteRecord |
| /xrpc/com.atproto.repo.uploadBlob |
| /xrpc/com.atproto.sync.listRepos |
| /xrpc/com.atproto.sync.getRecord |
| /xrpc/com.atproto.sync.getRepo |
| /xrpc/com.atproto.sync.getRepoStatus |
| /xrpc/com.atproto.sync.getBlob |
| /xrpc/com.atproto.sync.subscribeRepos |
| /xrpc/com.atproto.identity.resolveHandle |
| /xrpc/app.bsky.actor.getProfile |
| /xrpc/app.bsky.actor.getProfiles |
| /.well-known/did.json |
| /.well-known/atproto-did |

---

## See Also

- [DIRECT_HOLD_ACCESS.md](./DIRECT_HOLD_ACCESS.md) - How to call hold endpoints directly without AppView (app passwords, curl examples)
- [BYOS.md](./BYOS.md) - Bring Your Own Storage architecture
- [OAUTH.md](./OAUTH.md) - OAuth + DPoP authentication details

---

`docs/IMAGE_SIGNING.md` (505 lines, new file):

# Image Signing with ATProto

ATCR provides cryptographic verification of container images through ATProto's native signature system. Every manifest stored in a PDS is cryptographically signed, providing tamper-proof image verification.

## Overview

**Key Fact:** Every image pushed to ATCR is automatically signed via ATProto's repository commit signing. No additional signing tools or steps are required.

When you push an image:
1. Manifest stored in your PDS as an `io.atcr.manifest` record
2. PDS signs the repository commit containing the manifest (ECDSA K-256)
3. Signature is part of the ATProto repository chain
4. Verification proves the manifest came from your DID and hasn't been tampered with

**This document explains:**
- How ATProto signatures work for ATCR images
- How to verify signatures using standard and custom tools
- Integration options for different use cases
- When to use optional X.509 certificates (Hold-as-CA)

## ATProto Signature Model

### How It Works

ATProto uses a **repository commit signing** model similar to Git:

```
1. docker push atcr.io/alice/myapp:latest
        ↓
2. AppView stores manifest in alice's PDS as io.atcr.manifest record
        ↓
3. PDS creates repository commit containing the new record
        ↓
4. PDS signs commit with alice's private key (ECDSA K-256)
        ↓
5. Commit becomes part of alice's cryptographically signed repo chain
```

**What this proves:**
- ✅ Manifest came from alice's PDS (DID-based identity)
- ✅ Manifest content hasn't been tampered with
- ✅ Manifest was created at a specific time (commit timestamp)
- ✅ Manifest is part of alice's verifiable repository history

**Trust model:**
- Public keys distributed via DID documents (PLC directory, did:web)
- Signatures use ECDSA K-256 (secp256k1)
- Verification is decentralized (no central CA required)
- Users control their own DIDs and can rotate keys

### Signature Metadata

In addition to ATProto's native commit signatures, ATCR creates **ORAS signature artifacts** that bridge ATProto signatures to the OCI ecosystem:

```json
{
  "$type": "io.atcr.atproto.signature",
  "version": "1.0",
  "subject": {
    "digest": "sha256:abc123...",
    "mediaType": "application/vnd.oci.image.manifest.v1+json"
  },
  "atproto": {
    "did": "did:plc:alice123",
    "handle": "alice.bsky.social",
    "pdsEndpoint": "https://bsky.social",
    "recordUri": "at://did:plc:alice123/io.atcr.manifest/abc123",
    "commitCid": "bafyreih8...",
    "signedAt": "2025-10-31T12:34:56.789Z"
  },
  "signature": {
    "algorithm": "ECDSA-K256-SHA256",
    "keyId": "did:plc:alice123#atproto",
    "publicKeyMultibase": "zQ3shokFTS3brHcDQrn82RUDfCZESWL1ZdCEJwekUDdo1Ko4Z"
  }
}
```

**Stored as:**
- OCI artifact with `artifactType: application/vnd.atproto.signature.v1+json`
- Linked to image manifest via OCI Referrers API
- Discoverable by standard OCI tools (ORAS, Cosign, Crane)

## Verification

### Quick Verification (Shell Script)

For manual verification, use the provided shell scripts:

```bash
# Verify an image
./examples/verification/atcr-verify.sh atcr.io/alice/myapp:latest

# Output shows:
# - DID and handle of signer
# - PDS endpoint
# - ATProto record URI
# - Signature verification status
```

**See:** [examples/verification/README.md](../examples/verification/README.md) for complete examples including:
- Standalone verification script
- Secure pull wrapper (verify before pull)
- Kubernetes webhook deployment
- CI/CD integration examples

### Standard Tools (Discovery Only)

Standard OCI tools can **discover** ATProto signature artifacts but cannot **verify** them (different signature format):

```bash
# Discover signatures with ORAS
oras discover atcr.io/alice/myapp:latest \
  --artifact-type application/vnd.atproto.signature.v1+json

# Fetch signature metadata
oras pull atcr.io/alice/myapp@sha256:sig789...

# View with Cosign (discovery only)
cosign tree atcr.io/alice/myapp:latest
```

**Note:** Cosign/Notary cannot verify ATProto signatures directly because they use a different signature format and trust model. Use integration plugins or the `atcr-verify` CLI tool instead.

## Integration Options

ATCR supports multiple integration approaches depending on your use case:

### 1. **Plugins (Recommended for Kubernetes)** ⭐

Build plugins for existing policy/verification engines:

**Ratify Verifier Plugin:**
- Integrates with OPA Gatekeeper
- Verifies ATProto signatures using Ratify's plugin interface
- Policy-based enforcement for Kubernetes

**OPA Gatekeeper External Provider:**
- HTTP service that verifies ATProto signatures
- Rego policies call external provider
- Flexible and easy to deploy

**Containerd 2.0 Bindir Plugin:**
- Verifies signatures at containerd level
- Works with any CRI-compatible runtime
- No Kubernetes required

**See:** [docs/SIGNATURE_INTEGRATION.md](./SIGNATURE_INTEGRATION.md) for complete plugin implementation examples

### 2. **CLI Tool (atcr-verify)**

Standalone CLI tool for signature verification:

```bash
# Install
go install github.com/atcr-io/atcr/cmd/atcr-verify@latest

# Verify image
atcr-verify atcr.io/alice/myapp:latest --policy trust-policy.yaml

# Use in CI/CD
atcr-verify $IMAGE --quiet && kubectl apply -f deployment.yaml
```

**Features:**
- Trust policy management (which DIDs to trust)
- Multiple output formats (text, JSON, SARIF)
- Offline verification with cached DID documents
- Library usage for custom integrations

**See:** [docs/ATCR_VERIFY_CLI.md](./ATCR_VERIFY_CLI.md) for complete CLI specification

### 3. **External Services**

Deploy verification as a service:

**GitHub Actions:**
```yaml
- name: Verify image signature
  uses: atcr-io/atcr-verify-action@v1
  with:
    image: atcr.io/alice/myapp:${{ github.sha }}
    policy: .atcr/trust-policy.yaml
```

**GitLab CI, Jenkins, CircleCI:**
- Use `atcr-verify` CLI in pipeline
- Fail build if verification fails
- Enforce signature requirements before deployment

### 4. **X.509 Certificates (Hold-as-CA)** ⚠️

Optional approach where hold services issue X.509 certificates based on ATProto signatures:

**Use cases:**
- Enterprise environments requiring PKI compliance
- Tools that only support X.509 (legacy systems)
- Notation integration (P-256 certificates)

**Trade-offs:**
- ❌ Introduces centralization (hold acts as CA)
- ❌ Trust shifts from DIDs to hold operator
- ❌ Requires hold service infrastructure

**See:** [docs/HOLD_AS_CA.md](./HOLD_AS_CA.md) for complete architecture and security considerations

## Integration Strategy Decision Matrix

Choose the right integration approach:

| Use Case | Recommended Approach | Priority |
|----------|---------------------|----------|
| **Kubernetes admission control** | Ratify plugin or Gatekeeper provider | HIGH |
| **CI/CD verification** | atcr-verify CLI or GitHub Actions | HIGH |
| **Docker/containerd** | Containerd bindir plugin | MEDIUM |
| **Policy enforcement** | OPA Gatekeeper + external provider | HIGH |
| **Manual verification** | Shell scripts or atcr-verify CLI | LOW |
| **Enterprise PKI compliance** | Hold-as-CA (X.509 certificates) | OPTIONAL |
| **Legacy tool support** | Hold-as-CA or external bridge service | OPTIONAL |

**See:** [docs/INTEGRATION_STRATEGY.md](./INTEGRATION_STRATEGY.md) for complete integration planning guide including:
- Architecture layers and data flow
- Tool compatibility matrix (16+ tools)
- Implementation roadmap (4 phases)
- When to use each approach

## Trust Policies

Define which signatures you trust:

```yaml
# trust-policy.yaml
version: 1.0

trustedDIDs:
  did:plc:alice123:
    name: "Alice (DevOps Lead)"
    validFrom: "2024-01-01T00:00:00Z"
    expiresAt: null

  did:plc:bob456:
    name: "Bob (Security Team)"
    validFrom: "2024-06-01T00:00:00Z"
    expiresAt: "2025-12-31T23:59:59Z"

policies:
  - name: production-images
    scope: "atcr.io/*/prod-*"
    require:
      signature: true
      trustedDIDs:
        - did:plc:alice123
        - did:plc:bob456
      minSignatures: 1
    action: enforce  # reject if policy fails

  - name: dev-images
    scope: "atcr.io/*/dev-*"
    require:
      signature: false
    action: audit  # log but don't reject
```

**Use with:**
- `atcr-verify` CLI: `atcr-verify IMAGE --policy trust-policy.yaml`
- Kubernetes webhooks: ConfigMap with policy
- CI/CD pipelines: Fail build if policy not met

## Security Considerations

### What ATProto Signatures Prove

✅ **Identity:** Manifest signed by specific DID (e.g., `did:plc:alice123`)
✅ **Integrity:** Manifest content hasn't been tampered with
✅ **Timestamp:** When the manifest was signed
✅ **Authenticity:** Signature created with private key for that DID

### What They Don't Prove

❌ **Vulnerability-free:** Signature doesn't mean image is safe
❌ **Authorization:** DID ownership doesn't imply permission to deploy
❌ **Key security:** Private key could be compromised
❌ **PDS trustworthiness:** Malicious PDS could create fake records

### Trust Dependencies

When verifying signatures, you're trusting:
1. **DID resolution** (PLC directory, did:web) - public key is correct for DID
2. **PDS integrity** - PDS serves correct records and doesn't forge signatures
3. **Cryptographic primitives** - ECDSA K-256 remains secure
4. **Your trust policy** - DIDs you've chosen to trust are legitimate

### Best Practices

**1. Use Trust Policies**
Don't blindly trust all signatures - define which DIDs you trust:
```yaml
trustedDIDs:
  - did:plc:your-org-team
  - did:plc:your-ci-system
```

**2. Monitor Signature Coverage**
Track which images have signatures:
```bash
atcr-verify --check-coverage namespace/production
```

**3. Enforce in Production**
Use Kubernetes admission control to block unsigned images:
```yaml
# Ratify + Gatekeeper or custom webhook
enforceSignatures: true
failurePolicy: Fail
```

**4. Verify in CI/CD**
Never deploy unsigned images:
```yaml
# GitHub Actions
- name: Verify signature
  run: atcr-verify $IMAGE || exit 1
```

**5. Plan for Compromised Keys**
- Rotate DID keys periodically
- Monitor DID documents for unexpected key changes
- Have incident response plan for key compromise

## Implementation Status

### ✅ Available Now

- **ATProto signatures**: All manifests automatically signed by PDS
- **ORAS artifacts**: Signature metadata stored as OCI artifacts
- **OCI Referrers API**: Discovery via standard OCI endpoints
- **Shell scripts**: Manual verification examples
- **Documentation**: Complete integration guides

### 🔄 In Development

- **atcr-verify CLI**: Standalone verification tool
- **Ratify plugin**: Kubernetes integration
- **Gatekeeper provider**: OPA policy enforcement
- **GitHub Actions**: CI/CD integration

### 📋 Planned

- **Containerd plugin**: Runtime-level verification
- **Hold-as-CA**: X.509 certificate generation (optional)
- **Web UI**: Signature viewer in AppView
- **Offline bundles**: Air-gapped verification


## Comparison with Other Signing Solutions

| Feature | ATCR (ATProto) | Cosign (Sigstore) | Notation (Notary v2) |
|---------|---------------|-------------------|---------------------|
| **Signing** | Automatic (PDS) | Manual or keyless | Manual |
| **Keys** | K-256 (secp256k1) | P-256 or RSA | P-256, P-384, P-521 |
| **Trust** | DID-based | OIDC + Fulcio CA | X.509 PKI |
| **Storage** | ATProto PDS | OCI registry | OCI registry |
| **Centralization** | Decentralized | Centralized (Fulcio) | Configurable |
| **Transparency Log** | ATProto firehose | Rekor | Configurable |
| **Verification** | Custom tools/plugins | Cosign CLI | Notation CLI |
| **Kubernetes** | Plugins (Ratify) | Policy Controller | Policy Controller |

**ATCR advantages:**

- ✅ Decentralized trust (no CA required)
- ✅ Automatic signing (no extra tools)
- ✅ DID-based identity (portable, self-sovereign)
- ✅ Transparent via ATProto firehose

**ATCR trade-offs:**

- ⚠️ Requires custom verification tools/plugins
- ⚠️ K-256 not supported by Notation (needs Hold-as-CA)
- ⚠️ Smaller ecosystem than Cosign/Notation

## Why Not Use Cosign Directly?

**Question:** Why not just integrate with Cosign's keyless signing (OIDC + Fulcio)?

**Answer:** ATProto and Cosign use incompatible authentication models:

| Requirement | Cosign Keyless | ATProto |
|-------------|---------------|---------|
| **Identity protocol** | OIDC | ATProto OAuth + DPoP |
| **Token format** | JWT from OIDC provider | DPoP-bound access token |
| **CA** | Fulcio (Sigstore CA) | None (DID-based PKI) |
| **Infrastructure** | Fulcio + Rekor + TUF | PDS + DID resolver |

**To make Cosign work, we'd need to:**

1. Deploy Fulcio (certificate authority)
2. Deploy Rekor (transparency log)
3. Deploy TUF (metadata distribution)
4. Build an OIDC provider bridge for ATProto OAuth
5. Maintain all of this infrastructure

**Instead:** We leverage ATProto's existing signatures and build lightweight plugins/tools for verification. This is simpler, more decentralized, and aligns with ATCR's design philosophy.

**For tools that need X.509 certificates:** See [Hold-as-CA](./HOLD_AS_CA.md) for an optional centralized approach.

## Getting Started

### Verify Your First Image

```bash
# 1. Check if image has ATProto signature
oras discover atcr.io/alice/myapp:latest \
  --artifact-type application/vnd.atproto.signature.v1+json

# 2. Pull signature metadata
oras pull atcr.io/alice/myapp@sha256:sig789...

# 3. Verify with shell script
./examples/verification/atcr-verify.sh atcr.io/alice/myapp:latest

# 4. Use atcr-verify CLI (when available)
atcr-verify atcr.io/alice/myapp:latest --policy trust-policy.yaml
```

### Deploy Kubernetes Verification

```bash
# 1. Choose an approach
#    Option A: Ratify plugin (recommended)
#    Option B: Gatekeeper external provider
#    Option C: Custom admission webhook

# 2. Follow integration guide
#    See docs/SIGNATURE_INTEGRATION.md for step-by-step

# 3. Enable for namespace
kubectl label namespace production atcr-verify=enabled

# 4. Test with sample pod
kubectl run test --image=atcr.io/alice/myapp:latest -n production
```
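For the Ratify route, the verifier would be wired in through Ratify's configuration file alongside the built-in ORAS store. A sketch, assuming a hypothetical `atcr` verifier plugin name and options (check the plugin's own documentation for the actual names):

```json
{
  "store": {
    "version": "1.0.0",
    "plugins": [
      { "name": "oras" }
    ]
  },
  "verifier": {
    "version": "1.0.0",
    "plugins": [
      {
        "name": "atcr",
        "artifactTypes": "application/vnd.atproto.signature.v1+json",
        "trustPolicy": "/etc/atcr/trust-policy.yaml"
      }
    ]
  }
}
```

The `name`, `artifactTypes`, and `trustPolicy` fields here are assumptions modeled on how Ratify's built-in verifiers are configured, not a published ATCR schema.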

### Integrate with CI/CD

```yaml
# GitHub Actions
- name: Verify signature
  run: |
    curl -LO https://github.com/atcr-io/atcr/releases/latest/download/atcr-verify
    chmod +x atcr-verify
    ./atcr-verify ${{ env.IMAGE }} --policy .atcr/trust-policy.yaml
```

```yaml
# GitLab CI
verify_image:
  script:
    - wget https://github.com/atcr-io/atcr/releases/latest/download/atcr-verify
    - chmod +x atcr-verify
    - ./atcr-verify $IMAGE --policy .atcr/trust-policy.yaml
```

## Documentation

### Core Documentation

- **[ATProto Signatures](./ATPROTO_SIGNATURES.md)** - Technical deep-dive into signature format and verification
- **[Signature Integration](./SIGNATURE_INTEGRATION.md)** - Tool-specific integration guides (Ratify, Gatekeeper, Containerd)
- **[Integration Strategy](./INTEGRATION_STRATEGY.md)** - High-level overview and decision matrix
- **[atcr-verify CLI](./ATCR_VERIFY_CLI.md)** - CLI tool specification and usage
- **[Hold-as-CA](./HOLD_AS_CA.md)** - Optional X.509 certificate approach

### Examples

- **[examples/verification/](../examples/verification/)** - Shell scripts, Kubernetes configs, trust policies
- **[examples/plugins/](../examples/plugins/)** - Plugin skeletons for Ratify, Gatekeeper, Containerd

### External References

- **ATProto:** https://atproto.com/specs/repository (repository commit signing)
- **ORAS:** https://oras.land/ (artifact registry)
- **OCI Referrers API:** https://github.com/opencontainers/distribution-spec/blob/main/spec.md#listing-referrers
- **Ratify:** https://ratify.dev/ (verification framework)
- **OPA Gatekeeper:** https://open-policy-agent.github.io/gatekeeper/

## Support

For questions or issues:

- GitHub Issues: https://github.com/atcr-io/atcr/issues
- Documentation: https://docs.atcr.io
- Security: security@atcr.io

## Summary

**Key Points:**

1. **Automatic signing**: Every ATCR image is automatically signed via ATProto's native signature system
2. **No additional tools**: Signing happens transparently when you push images
3. **Decentralized trust**: DID-based signatures, no central CA required
4. **Standard discovery**: ORAS artifacts and OCI Referrers API for signature metadata
5. **Custom verification**: Use plugins, CLI tools, or shell scripts (not Cosign directly)
6. **Multiple integrations**: Kubernetes (Ratify, Gatekeeper), CI/CD (atcr-verify), containerd
7. **Optional X.509**: Hold-as-CA for enterprise PKI compliance (centralized)

**Next Steps:**

1. Read [examples/verification/README.md](../examples/verification/README.md) for practical examples
2. Choose an integration approach from [INTEGRATION_STRATEGY.md](./INTEGRATION_STRATEGY.md)
3. Implement a plugin or deploy the CLI tool from [SIGNATURE_INTEGRATION.md](./SIGNATURE_INTEGRATION.md)
4. Define a trust policy for your organization
5. Deploy to a test environment first, then production