S3 Presigned URLs Implementation
Overview
Currently, ATCR's hold service acts as a proxy for all blob data, meaning every byte flows through the hold service when uploading or downloading container images. This document describes the implementation of S3 presigned URLs to eliminate this bottleneck, allowing direct data transfer between clients and S3-compatible storage.
Current Architecture (Proxy Mode)
Downloads: Docker → AppView → Hold Service → S3 → Hold Service → AppView → Docker
Uploads: Docker → AppView → Hold Service → S3
Problems:
- All blob data flows through hold service
- Hold service bandwidth = total image bandwidth
- Latency from extra hops
- Hold service becomes bottleneck for large images
Target Architecture (Presigned URLs)
Downloads: Docker → AppView (gets presigned URL) → S3 (direct download)
Uploads: Docker → AppView → S3 (via presigned URL)
Move: AppView → Hold Service → S3 (server-side CopyObject API)
Benefits:
- ✅ Hold service only orchestrates (no data transfer)
- ✅ Blob data never touches hold service
- ✅ Direct S3 uploads/downloads at wire speed
- ✅ Hold service can run on minimal resources
- ✅ Works with all S3-compatible services
How Presigned URLs Work
For Downloads (GET)
1. Docker requests a blob:
   GET /v2/alice/myapp/blobs/sha256:abc123
2. AppView asks the hold service:
   POST /get-presigned-url
   {"did": "did:plc:alice123", "digest": "sha256:abc123"}
3. The hold service generates a presigned URL:
   req, _ := s3Client.GetObjectRequest(&s3.GetObjectInput{
       Bucket: aws.String("my-bucket"),
       Key:    aws.String("blobs/sha256/ab/abc123.../data"),
   })
   url, _ := req.Presign(15 * time.Minute)
   // Returns: https://gateway.storjshare.io/bucket/blobs/...?X-Amz-Signature=...
4. AppView redirects Docker:
   HTTP 307
   Location: <presigned-url>
5. Docker downloads directly from S3 using the presigned URL.
Data path: Docker → S3 (direct)
Hold service bandwidth: ~1KB (API request/response)
For Uploads (PUT)
Small blobs (< 5MB) using Put():
1. Docker sends the blob to AppView:
   PUT /v2/alice/myapp/blobs/uploads/{uuid}
2. AppView asks the hold service:
   POST /put-presigned-url
   {"did": "did:plc:alice123", "digest": "sha256:abc123", "size": 1024}
3. The hold service generates a presigned URL:
   req, _ := s3Client.PutObjectRequest(&s3.PutObjectInput{
       Bucket: aws.String("my-bucket"),
       Key:    aws.String("blobs/sha256/ab/abc123.../data"),
   })
   url, _ := req.Presign(15 * time.Minute)
4. AppView uploads to S3 using the presigned URL.
5. AppView confirms to Docker:
   201 Created
Data path: Docker → AppView → S3 (via presigned URL)
Hold service bandwidth: ~1KB (API request/response)
For Streaming Uploads (Create/Commit)
Large blobs (> 5MB) using streaming:
1. Docker starts the upload:
   POST /v2/alice/myapp/blobs/uploads/
2. AppView creates an upload session with a UUID.
3. AppView gets a presigned URL for a temp location:
   POST /put-presigned-url
   {"did": "...", "digest": "uploads/temp-{uuid}", "size": 0}
4. Docker streams data:
   PATCH /v2/alice/myapp/blobs/uploads/{uuid}
5. AppView streams to S3 via the presigned URL to uploads/temp-{uuid}/data.
6. Docker finalizes:
   PUT /v2/.../uploads/{uuid}?digest=sha256:abc123
7. AppView requests a move:
   POST /move?from=uploads/temp-{uuid}&to=sha256:abc123
8. The hold service executes an S3 server-side copy:
   s3.CopyObject(&s3.CopyObjectInput{
       Bucket:     aws.String("my-bucket"),
       CopySource: aws.String("/my-bucket/uploads/temp-{uuid}/data"),
       Key:        aws.String("blobs/sha256/ab/abc123.../data"),
   })
   s3.DeleteObject(&s3.DeleteObjectInput{
       Bucket: aws.String("my-bucket"),
       Key:    aws.String("uploads/temp-{uuid}/data"),
   })
Data path: Docker → AppView → S3 (temp location)
Move path: S3 internal copy (no data transfer!)
Hold service bandwidth: ~2KB (presigned URL + CopyObject API)
For Chunked Uploads (Multipart Upload)
Large blobs with OCI chunked protocol (Docker PATCH requests):
The OCI Distribution Spec uses chunked uploads via multiple PATCH requests. Single presigned URLs don't support this - we need S3 Multipart Upload.
1. Docker starts the upload:
   POST /v2/alice/myapp/blobs/uploads/
2. AppView initiates a multipart upload:
   POST /start-multipart
   {"did": "...", "digest": "uploads/temp-{uuid}"}
   → Returns: {"upload_id": "xyz123"}
3. Docker sends chunk 1:
   PATCH /v2/.../uploads/{uuid} (5MB data)
4. AppView gets a part URL:
   POST /part-presigned-url
   {"did": "...", "digest": "uploads/temp-{uuid}", "upload_id": "xyz123", "part_number": 1}
   → Returns: {"url": "https://s3.../part?uploadId=xyz123&partNumber=1&..."}
5. AppView uploads part 1 using the presigned URL → gets an ETag.
6. Docker sends chunk 2:
   PATCH /v2/.../uploads/{uuid} (5MB data)
7. Repeat steps 4-5 for part 2 (and subsequent parts).
8. Docker finalizes:
   PUT /v2/.../uploads/{uuid}?digest=sha256:abc123
9. AppView completes the multipart upload:
   POST /complete-multipart
   {"did": "...", "digest": "uploads/temp-{uuid}", "upload_id": "xyz123", "parts": [{"part_number": 1, "etag": "..."}, {"part_number": 2, "etag": "..."}]}
10. AppView requests a move:
    POST /move?from=uploads/temp-{uuid}&to=sha256:abc123
11. The hold service executes the same S3 server-side copy as above.
Data path: Docker → AppView (buffers 5MB) → S3 (via presigned URL per part)
Each PATCH: Independent, non-blocking, immediate response
Hold service bandwidth: ~1KB per part + ~1KB for completion
Why This Fixes "Client Disconnected" Errors:
- Previous implementation: Single presigned URL + pipe → PATCH blocks → Docker timeout
- New implementation: Each PATCH → separate part upload → immediate response → no blocking
Why the Temp → Final Move is Required
This is not an ATCR implementation detail — it's required by the OCI Distribution Specification.
The Problem: Unknown Digest
Docker doesn't know the blob's digest until after uploading:
- Streaming data: A 5GB layer can't be buffered in memory to calculate the digest first
- Stdin pipes: docker build . | docker push generates data on the fly
- Chunked uploads: Multiple PATCH requests; the digest is calculated as data streams in
The Solution: Upload to Temp, Verify, Move
All OCI registries do this:
1. Client: POST /v2/{name}/blobs/uploads/ → get an upload UUID
2. Client: PATCH /v2/{name}/blobs/uploads/{uuid} → stream data to a temp location
3. Client: PUT /v2/{name}/blobs/uploads/{uuid}?digest=sha256:abc → provide the digest
4. Registry: Verify the digest matches the uploaded data
5. Registry: Move uploads/{uuid} → blobs/sha256/abc123...
Docker Hub, GHCR, ECR, Harbor — all use this pattern.
Why It's Efficient with S3
For S3, the move is a CopyObject API call:
// This happens INSIDE S3 servers - no data transfer!
s3.CopyObject(&s3.CopyObjectInput{
Bucket: "my-bucket",
CopySource: "/my-bucket/uploads/temp-12345/data", // 5GB blob
Key: "blobs/sha256/ab/abc123.../data",
})
// S3 copies internally, hold service only sends ~1KB API request
For a 5GB layer:
- Hold service bandwidth: ~1KB (API request/response)
- S3 internal copy: Handled entirely on the S3 side
- No data leaves S3, no network transfer
This is why the move operation is essentially free. (One caveat: a single CopyObject call supports source objects up to 5GB; larger objects must be copied server-side with multipart UploadPartCopy.)
Implementation Details
1. Add S3 Client to Hold Service
File: cmd/hold/main.go
Modify HoldService struct:
type HoldService struct {
driver storagedriver.StorageDriver
config *Config
s3Client *s3.S3 // NEW: S3 client for presigned URLs
bucket string // NEW: Bucket name
s3PathPrefix string // NEW: Path prefix (if any)
}
Add initialization function:
func (s *HoldService) initS3Client() error {
if s.config.Storage.Type() != "s3" {
log.Printf("Storage driver is %s (not S3), presigned URLs disabled", s.config.Storage.Type())
return nil
}
params := s.config.Storage.Parameters()["s3"].(configuration.Parameters)
// Build AWS config
awsConfig := &aws.Config{
Region: aws.String(params["region"].(string)),
Credentials: credentials.NewStaticCredentials(
params["accesskey"].(string),
params["secretkey"].(string),
"",
),
}
// Add custom endpoint for S3-compatible services (Storj, MinIO, etc.)
if endpoint, ok := params["regionendpoint"].(string); ok && endpoint != "" {
awsConfig.Endpoint = aws.String(endpoint)
awsConfig.S3ForcePathStyle = aws.Bool(true) // Required for MinIO, Storj
}
sess, err := session.NewSession(awsConfig)
if err != nil {
return fmt.Errorf("failed to create AWS session: %w", err)
}
s.s3Client = s3.New(sess)
s.bucket = params["bucket"].(string)
log.Printf("S3 presigned URLs enabled for bucket: %s", s.bucket)
return nil
}
Call during service initialization:
func NewHoldService(cfg *Config) (*HoldService, error) {
// ... existing driver creation ...
service := &HoldService{
driver: driver,
config: cfg,
}
// Initialize S3 client for presigned URLs
if err := service.initS3Client(); err != nil {
log.Printf("WARNING: S3 presigned URLs disabled: %v", err)
}
return service, nil
}
2. Implement Presigned URL Generation
For Downloads:
func (s *HoldService) getDownloadURL(ctx context.Context, digest string, did string) (string, error) {
path := blobPath(digest)
// Check if blob exists
if _, err := s.driver.Stat(ctx, path); err != nil {
return "", fmt.Errorf("blob not found: %w", err)
}
// If S3 client available, generate presigned URL
if s.s3Client != nil {
s3Key := strings.TrimPrefix(path, "/")
req, _ := s.s3Client.GetObjectRequest(&s3.GetObjectInput{
Bucket: aws.String(s.bucket),
Key: aws.String(s3Key),
})
url, err := req.Presign(15 * time.Minute)
if err != nil {
log.Printf("WARN: Presigned URL generation failed, falling back to proxy: %v", err)
return s.getProxyDownloadURL(digest, did), nil
}
log.Printf("Generated presigned download URL for %s (expires in 15min)", digest)
return url, nil
}
// Fallback: return proxy URL
return s.getProxyDownloadURL(digest, did), nil
}
func (s *HoldService) getProxyDownloadURL(digest, did string) string {
return fmt.Sprintf("%s/blobs/%s?did=%s", s.config.Server.PublicURL, digest, did)
}
For Uploads:
func (s *HoldService) getUploadURL(ctx context.Context, digest string, size int64, did string) (string, error) {
path := blobPath(digest)
// If S3 client available, generate presigned URL
if s.s3Client != nil {
s3Key := strings.TrimPrefix(path, "/")
req, _ := s.s3Client.PutObjectRequest(&s3.PutObjectInput{
Bucket: aws.String(s.bucket),
Key: aws.String(s3Key),
})
url, err := req.Presign(15 * time.Minute)
if err != nil {
log.Printf("WARN: Presigned URL generation failed, falling back to proxy: %v", err)
return s.getProxyUploadURL(digest, did), nil
}
log.Printf("Generated presigned upload URL for %s (expires in 15min)", digest)
return url, nil
}
// Fallback: return proxy URL
return s.getProxyUploadURL(digest, did), nil
}
func (s *HoldService) getProxyUploadURL(digest, did string) string {
return fmt.Sprintf("%s/blobs/%s?did=%s", s.config.Server.PublicURL, digest, did)
}
3. Multipart Upload Endpoints (Required for Chunked Uploads)
File: cmd/hold/main.go
Start Multipart Upload
func (s *HoldService) HandleStartMultipart(w http.ResponseWriter, r *http.Request) {
var req StartMultipartUploadRequest // {did, digest}
// Validate DID authorization for WRITE
if !s.isAuthorizedWrite(req.DID) {
// Return 403 Forbidden
}
// Initiate S3 multipart upload
result, err := s.s3Client.CreateMultipartUploadWithContext(ctx, &s3.CreateMultipartUploadInput{
Bucket: aws.String(s.bucket),
Key: aws.String(s3Key),
})
// Return upload ID
json.NewEncoder(w).Encode(StartMultipartUploadResponse{
UploadID: *result.UploadId,
ExpiresAt: time.Now().Add(24 * time.Hour),
})
}
Route: POST /start-multipart
Get Part Presigned URL
func (s *HoldService) HandleGetPartURL(w http.ResponseWriter, r *http.Request) {
    var req GetPartURLRequest // {did, digest, upload_id, part_number}
    // Generate presigned URL for the specific part
    // (named partReq to avoid shadowing the decoded request above)
    partReq, _ := s.s3Client.UploadPartRequest(&s3.UploadPartInput{
        Bucket:     aws.String(s.bucket),
        Key:        aws.String(s3Key),
        UploadId:   aws.String(uploadID),
        PartNumber: aws.Int64(int64(partNumber)),
    })
    url, err := partReq.Presign(15 * time.Minute)
    json.NewEncoder(w).Encode(GetPartURLResponse{URL: url})
}
Route: POST /part-presigned-url
Complete Multipart Upload
func (s *HoldService) HandleCompleteMultipart(w http.ResponseWriter, r *http.Request) {
var req CompleteMultipartRequest // {did, digest, upload_id, parts: [{part_number, etag}]}
// Convert parts to S3 format
s3Parts := make([]*s3.CompletedPart, len(req.Parts))
for i, p := range req.Parts {
s3Parts[i] = &s3.CompletedPart{
PartNumber: aws.Int64(int64(p.PartNumber)),
ETag: aws.String(p.ETag),
}
}
// Complete multipart upload
_, err := s.s3Client.CompleteMultipartUploadWithContext(ctx, &s3.CompleteMultipartUploadInput{
Bucket: aws.String(s.bucket),
Key: aws.String(s3Key),
UploadId: aws.String(uploadID),
MultipartUpload: &s3.CompletedMultipartUpload{Parts: s3Parts},
})
}
Route: POST /complete-multipart
Abort Multipart Upload
func (s *HoldService) HandleAbortMultipart(w http.ResponseWriter, r *http.Request) {
var req AbortMultipartRequest // {did, digest, upload_id}
// Abort and cleanup parts
_, err := s.s3Client.AbortMultipartUploadWithContext(ctx, &s3.AbortMultipartUploadInput{
Bucket: aws.String(s.bucket),
Key: aws.String(s3Key),
UploadId: aws.String(uploadID),
})
}
Route: POST /abort-multipart
4. Move Operation (No Changes)
The existing /move endpoint already uses driver.Move(), which for S3:
- Calls s3.CopyObject() (server-side copy)
- Calls s3.DeleteObject() (deletes the source)
- No data transfer through the hold service!
File: cmd/hold/main.go:393 (already exists, no changes needed)
func (s *HoldService) HandleMove(w http.ResponseWriter, r *http.Request) {
// ... existing auth and parsing ...
sourcePath := blobPath(fromPath) // uploads/temp-{uuid}/data
destPath := blobPath(toDigest) // blobs/sha256/ab/abc123.../data
// For S3, this does CopyObject + DeleteObject (server-side)
if err := s.driver.Move(ctx, sourcePath, destPath); err != nil {
// ... error handling ...
}
}
5. AppView Changes (Multipart Upload Implementation)
File: pkg/storage/proxy_blob_store.go:228
Currently streams to hold service proxy URL. Could be optimized to use presigned URL:
// In Create() - line 228
go func() {
defer pipeReader.Close()
tempPath := fmt.Sprintf("uploads/temp-%s", writer.id)
// Try to get presigned URL for temp location
url, err := p.getUploadURL(ctx, digest.FromString(tempPath), 0)
if err != nil {
// Fallback to direct proxy URL
url = fmt.Sprintf("%s/blobs/%s?did=%s", p.storageEndpoint, tempPath, p.did)
}
req, err := http.NewRequestWithContext(uploadCtx, "PUT", url, pipeReader)
// ... rest unchanged
}()
Note: This optimization is optional. The presigned URL will be returned by hold service's getUploadURL() anyway.
S3-Compatible Service Support
Storj
# .env file
STORAGE_DRIVER=s3
AWS_ACCESS_KEY_ID=your-storj-access-key
AWS_SECRET_ACCESS_KEY=your-storj-secret-key
S3_BUCKET=your-bucket-name
S3_REGION=global
S3_ENDPOINT=https://gateway.storjshare.io
Presigned URL example:
https://gateway.storjshare.io/your-bucket/blobs/sha256/ab/abc123.../data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Signature=...
MinIO
STORAGE_DRIVER=s3
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
S3_BUCKET=registry
S3_REGION=us-east-1
S3_ENDPOINT=http://minio.example.com:9000
Backblaze B2
STORAGE_DRIVER=s3
AWS_ACCESS_KEY_ID=your-b2-key-id
AWS_SECRET_ACCESS_KEY=your-b2-application-key
S3_BUCKET=your-bucket-name
S3_REGION=us-west-002
S3_ENDPOINT=https://s3.us-west-002.backblazeb2.com
Cloudflare R2
STORAGE_DRIVER=s3
AWS_ACCESS_KEY_ID=your-r2-access-key-id
AWS_SECRET_ACCESS_KEY=your-r2-secret-access-key
S3_BUCKET=your-bucket-name
S3_REGION=auto
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
All these services support presigned URLs with AWS SDK v1!
Performance Impact
Bandwidth Savings
Before (proxy mode):
- 5GB layer upload: Hold service receives 5GB, sends 5GB to S3 = 10GB bandwidth
- 5GB layer download: S3 sends 5GB to hold, hold sends 5GB to client = 10GB bandwidth
- Total for push+pull: 20GB hold service bandwidth
After (presigned URLs):
- 5GB layer upload: Hold generates URL (1KB), AppView → S3 direct (5GB), CopyObject API (1KB) = ~2KB hold bandwidth
- 5GB layer download: Hold generates URL (1KB), client → S3 direct = ~1KB hold bandwidth
- Total for push+pull: ~3KB hold service bandwidth
Savings: 99.98% reduction in hold service bandwidth!
Latency Improvements
Before:
- Download: Client → AppView → Hold → S3 → Hold → AppView → Client (4 hops)
- Upload: Client → AppView → Hold → S3 (3 hops)
After:
- Download: Client → AppView (redirect) → S3 (1 hop to data)
- Upload: Client → AppView → S3 (2 hops)
- Move: S3 internal (no network hops)
Resource Requirements
Before:
- Hold service bandwidth must cover the sum of all image operations
- For 100 concurrent 1GB pushes: 200GB flows through the hold service (100GB in, 100GB out)
- Expensive, hard to scale
After:
- Hold service needs only minimal CPU for presigned URL signing
- For 100 concurrent 1GB pushes: ~100KB of API traffic through the hold service
- Can run on a $5/month instance!
Security Considerations
Presigned URL Expiration
- Default: 15 minutes expiration
- The presigned URL embeds the access key ID and a request signature in query params (the secret key itself is never exposed)
- After expiry, URL becomes invalid (S3 rejects with 403)
- No long-lived URLs floating around
Authorization Flow
- AppView validates user via ATProto OAuth
- AppView passes DID to hold service in presigned URL request
- Hold service validates DID (owner or crew member)
- Hold service generates presigned URL if authorized
- Client uses presigned URL directly with S3
Security boundary: Hold service controls who gets presigned URLs, S3 validates the URLs.
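The hold-service check in step 3 can be sketched as a simple membership test. All names here are illustrative; the real isAuthorizedWrite referenced earlier presumably consults stored crew records rather than an in-memory map.

```go
package main

import "fmt"

// isAuthorizedWrite sketches the rule described above: presigned write URLs go
// only to the repository owner's DID or to a listed crew member.
func isAuthorizedWrite(ownerDID string, crew map[string]bool, requestDID string) bool {
	return requestDID == ownerDID || crew[requestDID]
}

func main() {
	crew := map[string]bool{"did:plc:bob456": true}
	fmt.Println(isAuthorizedWrite("did:plc:alice123", crew, "did:plc:alice123")) // owner: true
	fmt.Println(isAuthorizedWrite("did:plc:alice123", crew, "did:plc:mallory"))  // stranger: false
}
```

The point of the boundary is that this decision happens once, at URL-generation time; after that, S3 enforces the signature and expiry on every request.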
Fallback Security
If presigned URL generation fails:
- Falls back to proxy URLs (existing behavior)
- Still requires hold service authorization
- Data flows through hold service (original security model)
Testing & Validation
Verify Presigned URLs are Used
1. Check hold service logs:
docker logs atcr-hold | grep -i presigned
# Should see: "Generated presigned download/upload URL for sha256:..."
2. Monitor network traffic:
# Before: Large data transfers to/from hold service
docker stats atcr-hold
# After: Minimal network usage on hold service
docker stats atcr-hold
3. Inspect redirect responses:
# Should see 307 redirect to S3 URL
curl -v http://appview:5000/v2/alice/myapp/blobs/sha256:abc123 \
-H "Authorization: Bearer $TOKEN"
# Look for:
# < HTTP/1.1 307 Temporary Redirect
# < Location: https://gateway.storjshare.io/...?X-Amz-Signature=...
Test Fallback Behavior
1. With filesystem driver (should use proxy URLs):
STORAGE_DRIVER=filesystem docker-compose up atcr-hold
# Logs should show: "Storage driver is filesystem (not S3), presigned URLs disabled"
2. With S3 but invalid credentials (should fall back):
AWS_ACCESS_KEY_ID=invalid docker-compose up atcr-hold
# Logs should show: "WARN: Presigned URL generation failed, falling back to proxy"
Bandwidth Monitoring
Track hold service bandwidth over time:
# Install bandwidth monitoring inside the container
docker exec atcr-hold sh -c "apt-get update && apt-get install -y vnstat"
# Monitor
docker exec atcr-hold vnstat -l
Expected results:
- Before: Bandwidth correlates with image operations
- After: Bandwidth stays minimal regardless of image operations
Migration Guide
For Existing ATCR Deployments
1. Update hold service code (this implementation)
2. No configuration changes needed if already using S3:
# Existing S3 config works automatically
STORAGE_DRIVER=s3
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
S3_BUCKET=...
S3_ENDPOINT=...
3. Restart hold service:
docker-compose restart atcr-hold
4. Verify in logs:
S3 presigned URLs enabled for bucket: my-bucket
5. Test with image push/pull:
docker push atcr.io/alice/myapp:latest
docker pull atcr.io/alice/myapp:latest
6. Monitor bandwidth to confirm reduction
Rollback Plan
If issues arise:
Option 1: Disable presigned URLs via env var (if we add this feature)
PRESIGNED_URLS_ENABLED=false docker-compose restart atcr-hold
Option 2: Revert code changes to previous hold service version
The implementation has automatic fallbacks, so partial failures won't break functionality.
Testing with DISABLE_PRESIGNED_URLS
Environment Variable
Set DISABLE_PRESIGNED_URLS=true to force proxy/buffered mode even when S3 is configured.
Use cases:
- Testing proxy/buffered code paths with S3 storage
- Debugging multipart uploads in buffered mode
- Simulating S3 providers that don't support presigned URLs
- Verifying fallback behavior works correctly
How It Works
When DISABLE_PRESIGNED_URLS=true:
Single blob operations:
- getDownloadURL() returns a proxy URL instead of an S3 presigned URL
- getHeadURL() returns a proxy URL instead of an S3 presigned HEAD URL
- getUploadURL() returns a proxy URL instead of an S3 presigned PUT URL
- The client uses the /blobs/{digest} endpoints (proxied through the hold service)
Multipart uploads:
- StartMultipartUploadWithManager() creates a Buffered session instead of S3Native
- GetPartUploadURL() returns /multipart-parts/{uploadID}/{partNumber} instead of an S3 presigned URL
- Parts are buffered in memory in the hold service
- CompleteMultipartUploadWithManager() assembles the parts and writes them via the storage driver
Testing Example
# Test S3 with forced proxy mode
export STORAGE_DRIVER=s3
export S3_BUCKET=my-bucket
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export DISABLE_PRESIGNED_URLS=true # Force buffered/proxy mode
./bin/atcr-hold
# Push an image - should use proxy mode
docker push atcr.io/yourdid/test:latest
# Check logs for:
# "Presigned URLs disabled, using proxy URL"
# "Presigned URLs disabled (DISABLE_PRESIGNED_URLS=true), using buffered mode"
# "Stored part: uploadID=... part=1 size=..."
Future Enhancements
1. Configurable Expiration
Allow customizing presigned URL expiry:
PRESIGNED_URL_EXPIRY=30m # Default: 15m
2. Presigned URL Caching
Cache presigned URLs for frequently accessed blobs (with shorter TTL).
3. CloudFront/CDN Integration
For downloads, use CloudFront presigned URLs instead of direct S3:
- Better global distribution
- Lower egress costs
- Faster downloads
4. Client-Direct Multipart Upload
Multipart upload with presigned part URLs is implemented above, but parts still flow through the AppView. A further step would be handing part URLs to the client for very large layers (>5GB):
- Generate presigned URLs for each part
- Client uploads parts directly to S3
- Hold service finalizes the multipart upload
5. Metrics & Monitoring
Track presigned URL usage:
- Count of presigned URLs generated
- Fallback rate (proxy vs presigned)
- Bandwidth savings metrics
References
- OCI Distribution Specification - Push
- AWS SDK Go v1 - Presigned URLs
- Storj - Using Presigned URLs
- MinIO - Presigned Upload via Browser
- Cloudflare R2 - Presigned URLs
- Backblaze B2 - S3 Compatible API
Summary
Implementing S3 presigned URLs transforms ATCR's hold service from a data proxy to a lightweight orchestrator:
✅ 99.98% bandwidth reduction for hold service ✅ Direct client → S3 transfers for maximum speed ✅ Works with all S3-compatible services (Storj, MinIO, R2, B2) ✅ OCI-compliant temp → final move pattern ✅ Automatic fallback to proxy mode for non-S3 drivers ✅ No breaking changes to existing deployments
This makes BYOS (Bring Your Own Storage) truly scalable and cost-effective, as users can run hold services on minimal infrastructure while serving arbitrarily large container images.