Mirror of https://codeberg.org/git-pages/git-pages.git, synced 2026-05-14 11:11:35 +00:00

Compare commits: v0.8.1...cat/audit- (17 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 94f51d8138 | |
| | 55f87083e5 | |
| | a9fc5780b1 | |
| | ad92847fa0 | |
| | 3311fb639d | |
| | 93ce4f9671 | |
| | 73e47cd8d5 | |
| | dd7268a657 | |
| | edae862551 | |
| | 5808e90e5a | |
| | 684553ba72 | |
| | 89f672beda | |
| | a233cdfbb8 | |
| | 4d8e620846 | |
| | e8112c1abe | |
| | b0a674abf4 | |
| | f001107056 | |
README.md (20 changed lines)
@@ -92,7 +92,7 @@ Features
 * All updates to site content are atomic (subject to consistency guarantees of the storage backend). That is, there is an instantaneous moment during an update before which the server will return the old content and after which it will return the new content.
 * Files with a certain name, when placed in the root of a site, have special functions:
   - [Netlify `_redirects`][_redirects] file can be used to specify HTTP redirect and rewrite rules. The _git-pages_ implementation currently does not support placeholders, query parameters, or conditions, and may differ from Netlify in other minor ways. If you find that a supported `_redirects` file feature does not work the same as on Netlify, please file an issue. (Note that _git-pages_ does not perform URL normalization; `/foo` and `/foo/` are *not* the same, unlike with Netlify.)
-  - [Netlify `_headers`][_headers] file can be used to specify custom HTTP response headers (if allowlisted by configuration). In particular, this is useful to enable [CORS requests][cors]. The _git-pages_ implementation may differ from Netlify in minor ways; if you find that a `_headers` file feature does not work the same as on Netlify, please file an issue.
+  - [Netlify `_headers`][_headers] file can be used to specify custom HTTP response headers (if allowlisted by configuration). In particular, this is useful to enable [cross-origin isolation (COOP/COEP)][isolation]. The _git-pages_ implementation may differ from Netlify in minor ways; if you find that a `_headers` file feature does not work the same as on Netlify, please file an issue.
   - [Netlify `Basic-Auth:`][basic-auth] pseudo-header in the `_headers` file can be used to password-protect parts of a site, if enabled via the `[limits].allow-basic-auth` configuration option. **This is not a security feature: credentials are stored in cleartext and are accessible to anyone who can update the site. *Only* use it in low-stakes applications, e.g. preventing search engines from indexing parts of a site.** The authors of _git-pages_ shall not be held liable for any unauthorized information disclosures resulting from the use of this feature.
 * Incremental updates can be made using `PUT` or `PATCH` requests where the body contains an archive (both tar and zip are supported).
   - Any archive entry that is a symlink to `/git/blobs/<git-sha256>` is replaced with an existing manifest entry for the same site whose git blob hash matches `<git-sha256>`. If there is no existing manifest entry with the specified git hash, the update fails with a `422 Unprocessable Entity`.
@@ -103,7 +103,7 @@ Features
 [_redirects]: https://docs.netlify.com/manage/routing/redirects/overview/
 [_headers]: https://docs.netlify.com/manage/routing/headers/
 [basic-auth]: https://docs.netlify.com/manage/security/secure-access-to-sites/basic-authentication-with-custom-http-headers/
-[cors]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS
+[isolation]: https://web.dev/articles/cross-origin-isolation-guide
 [go-git-sha256]: https://github.com/go-git/go-git/issues/706
 [whiteout]: https://docs.kernel.org/filesystems/overlayfs.html#whiteouts-and-opaque-directories
@@ -116,22 +116,22 @@ DNS is the primary authorization method, using either TXT records or wildcard ma
 
 The authorization flow for content updates (`PUT`, `PATCH`, `DELETE`, `POST` requests) proceeds sequentially in the following order, with the first of multiple applicable rules taking precedence:
 
 1. **Development Mode:** If the environment variable `PAGES_INSECURE` is set to a truthy value like `1`, the request is authorized.
-2. **DNS Challenge:** If the method is `PUT`, `PATCH`, `DELETE`, `POST`, and a well-formed `Authorization:` header is provided containing a `<token>`, and a TXT record lookup at `_git-pages-challenge.<host>` returns a record whose concatenated value equals `SHA256("<host> <token>")`, the request is authorized.
+2. **DNS Challenge:** If the method is `PUT`, `PATCH`, `DELETE`, `POST`, and a well-formed `Authorization:` header is provided containing a `<token>`, and a TXT record lookup at `_git-pages-challenge.<host>` returns a record whose concatenated value equals `SHA256("<host> <token>")`, and (for `PUT` and `POST` requests) the requested branch is `pages`, the request is authorized.
    - **`Pages` scheme:** Request includes an `Authorization: Pages <token>` header.
    - **`Basic` scheme:** Request includes an `Authorization: Basic <basic>` header, where `<basic>` is equal to `Base64("Pages:<token>")`. (Useful for non-Forgejo forges.)
-3. **DNS Allowlist:** If the method is `PUT` or `POST`, and the request URL is `scheme://<user>.<host>/`, and a TXT record lookup at `_git-pages-repository.<host>` returns a set of well-formed absolute URLs, and (for `PUT` requests) the body contains a repository URL, and the requested clone URL is contained in this set of URLs, the request is authorized.
-4. **Wildcard Match (content):** If the method is `POST`, and a `[[wildcard]]` configuration section exists where the suffix of a hostname (compared label-wise) is equal to `[[wildcard]].domain`, and (for `PUT` requests) the body contains a repository URL, and the requested clone URL is a *matching* clone URL, the request is authorized.
-   - **Index repository:** If the request URL is `scheme://<user>.<host>/`, a *matching* clone URL is computed by templating `[[wildcard]].clone-url` with `<user>` and `<project>`, where `<project>` is computed by templating each element of `[[wildcard]].index-repos` with `<user>`, and `[[wildcard]]` is the section where the match occurred.
-   - **Project repository:** If the request URL is `scheme://<user>.<host>/<project>/`, a *matching* clone URL is computed by templating `[[wildcard]].clone-url` with `<user>` and `<project>`, and `[[wildcard]]` is the section where the match occurred.
-5. **Forge Authorization (wildcard):** If the method is `PUT` or `PATCH` or `DELETE`, and (unless the method is `DELETE`) the body contains an archive, and a `[[wildcard]]` configuration section exists where the suffix of a hostname (compared label-wise) is equal to `[[wildcard]].domain`, and `[[wildcard]].authorization` is non-empty, and the request includes a `Forge-Authorization:` header, and the header (when forwarded as `Authorization:`) grants push permissions to a repository at the *matching* clone URL (as defined above) as determined by an API call to the forge, the request is authorized.
-6. **Forge Authorization (DNS allowlist):** If the method is `PUT` or `PATCH` or `DELETE`, and (unless the method is `DELETE`) the body contains an archive, and the request URL is `scheme://<user>.<host>/`, and a TXT record lookup at `_git-pages-forge-allowlist.<host>` returns a set of well-formed absolute URLs, and the request includes a `Forge-Authorization:` header, and the header (when forwarded as `Authorization:`) grants push permissions to a repository at any of the URLs in the TXT records as determined by an API call to the forge, the request is authorized.
+3. **DNS Allowlist:** If the method is `PUT` or `POST`, and the request URL is `scheme://<user>.<host>/`, and a TXT record lookup at `_git-pages-repository.<host>` returns a set of well-formed absolute URLs, and (for `PUT` requests) the body contains a repository URL or (for `POST` requests) the body contains a GitHub-style webhook payload, and the requested clone URL is contained in this set of URLs, and the requested branch is `pages`, the request is authorized.
+4. **Wildcard Match (content):** If the method is `POST`, and the body contains a GitHub-style webhook payload, and a `[[wildcard]]` configuration section exists such that `[[wildcard]].domain` is a suffix of the site hostname (compared label-wise), and the body contains a repository URL, and the requested clone URL is a *matching* clone URL, and the requested branch is a *matching* branch, the request is authorized.
+   - **Index repository:** If the request URL is `scheme://<user>.<host>/`: a *matching* clone URL is computed by templating `[[wildcard]].clone-url` with `<user>` and `<project>`, where `<project>` is computed by templating `[[wildcard]].index-repo` with `<user>`, and `[[wildcard]]` is the section where the match occurred; and a *matching* branch is specified by `[[wildcard]].index-repo-branch`.
+   - **Project repository:** If the request URL is `scheme://<user>.<host>/<project>/`: a *matching* clone URL is computed by templating `[[wildcard]].clone-url` with `<user>` and `<project>`, and `[[wildcard]]` is the section where the match occurred; and a *matching* branch is `pages`.
+5. **Forge Authorization (wildcard):** If the method is `PUT` or `PATCH` or `DELETE`, and (unless the method is `DELETE`) the body contains an archive, and a `[[wildcard]]` configuration section exists such that `[[wildcard]].domain` is a suffix of the site hostname (compared label-wise), and `[[wildcard]].authorization` is defined, and the request includes a `Forge-Authorization:` header, and the header (when forwarded as `Authorization:`) grants push permissions to a repository at the *matching* clone URL (as defined above) as determined by an API call to the forge, the request is authorized.
+6. **Forge Authorization (DNS allowlist):** If the method is `PUT` or `PATCH` or `DELETE`, and (unless the method is `DELETE`) the body contains an archive, and the request URL is `scheme://<host>/`, and a TXT record lookup at `_git-pages-forge-allowlist.<host>` returns a set of well-formed absolute URLs, and the request includes a `Forge-Authorization:` header, and the header (when forwarded as `Authorization:`) grants push permissions to a repository at any of the URLs in the TXT records as determined by an API call to the forge, the request is authorized.
 7. **Default Deny:** Otherwise, the request is not authorized.
 
 The authorization flow for metadata retrieval (`GET` requests with site paths starting with `.git-pages/`) proceeds in the following order, with the first of multiple applicable rules taking precedence:
 
 1. **Development Mode:** Same as for content updates.
 2. **DNS Challenge:** Same as for content updates.
-3. **Wildcard Match (metadata):** If a `[[wildcard]]` configuration section exists where the suffix of a hostname (compared label-wise) is equal to `[[wildcard]].domain`, the request is authorized.
+3. **Wildcard Match (metadata):** If a `[[wildcard]]` configuration section exists where the suffix of a hostname (compared label-wise) is equal to `[[wildcard]].domain`, and the site never uses the `Basic-Auth:` pseudo-header, the request is authorized.
 4. **Default Deny:** Otherwise, the request is not authorized.
go.mod (3 changed lines)
@@ -5,11 +5,11 @@ go 1.25.0
 require (
 	codeberg.org/git-pages/go-headers v1.1.1
+	codeberg.org/git-pages/go-slog-syslog v0.0.0-20251207093707-892f654e80b7
 	github.com/BurntSushi/toml v1.6.0
 	github.com/KimMachineGun/automemlimit v0.7.5
 	github.com/bits-and-blooms/bloom/v3 v3.7.1
 	github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500
 	github.com/creasty/defaults v1.8.0
-	github.com/dghubble/trie v0.1.0
 	github.com/fatih/color v1.19.0
 	github.com/go-git/go-billy/v6 v6.0.0-20260410103409-85b6241850b5
 	github.com/go-git/go-git/v6 v6.0.0-alpha.2
@@ -18,7 +18,6 @@ require (
 	github.com/klauspost/compress v1.18.5
 	github.com/maypok86/otter/v2 v2.3.0
 	github.com/minio/minio-go/v7 v7.0.100
-	github.com/pelletier/go-toml/v2 v2.3.0
 	github.com/pquerna/cachecontrol v0.2.0
 	github.com/prometheus/client_golang v1.23.2
 	github.com/samber/slog-multi v1.8.0
go.sum (6 changed lines)
@@ -2,6 +2,8 @@ codeberg.org/git-pages/go-headers v1.1.1 h1:fpIBELKo66Z2k+gCeYl5mCEXVQ99Lmx1iup1
 codeberg.org/git-pages/go-headers v1.1.1/go.mod h1:N4gwH0U3YPwmuyxqH7xBA8j44fTPX+vOEP7ejJVBPts=
+codeberg.org/git-pages/go-slog-syslog v0.0.0-20251207093707-892f654e80b7 h1:+rkrAxhNZo/eKEcKOqVOsF6ohAPv5amz0JLburOeRjs=
+codeberg.org/git-pages/go-slog-syslog v0.0.0-20251207093707-892f654e80b7/go.mod h1:8NPSXbYcVb71qqNM5cIgn1/uQgMisLbu2dVD1BNxsUw=
 github.com/BurntSushi/toml v1.6.0 h1:dRaEfpa2VI55EwlIW72hMRHdWouJeRF7TPYhI+AUQjk=
 github.com/BurntSushi/toml v1.6.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
 github.com/KimMachineGun/automemlimit v0.7.5 h1:RkbaC0MwhjL1ZuBKunGDjE/ggwAX43DwZrJqVwyveTk=
 github.com/KimMachineGun/automemlimit v0.7.5/go.mod h1:QZxpHaGOQoYvFhv/r4u3U0JTC2ZcOwbSr11UZF46UBM=
 github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
@@ -31,8 +33,6 @@ github.com/cyphar/filepath-securejoin v0.6.1/go.mod h1:A8hd4EnAeyujCJRrICiOWqjS1
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/dghubble/trie v0.1.0 h1:kJnjBLFFElBwS60N4tkPvnLhnpcDxbBjIulgI8CpNGM=
-github.com/dghubble/trie v0.1.0/go.mod h1:sOmnzfBNH7H92ow2292dDFWNsVQuh/izuD7otCYb1ak=
 github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
 github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
 github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc=
@@ -94,8 +94,6 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
-github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
-github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
 github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM=
 github.com/pelletier/go-toml/v2 v2.3.0/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
 github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
 github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
 github.com/pjbgf/sha1cd v0.5.0 h1:a+UkboSi1znleCDUNT3M5YxjOnN1fz2FhN48FlwCxs0=
@@ -7,6 +7,9 @@ schema = 3
+  [mod."codeberg.org/git-pages/go-slog-syslog"]
+    version = "v0.0.0-20251207093707-892f654e80b7"
+    hash = "sha256-ye+DBIyxqTEOViYRrQPWyGJCaLmyKSDwH5btlqDPizM="
   [mod."github.com/BurntSushi/toml"]
     version = "v1.6.0"
     hash = "sha256-ptdUJvuc21ixeLt+M5way/na3aCnCO4MYHWulWp8NEY="
   [mod."github.com/KimMachineGun/automemlimit"]
     version = "v0.7.5"
     hash = "sha256-lH/ip9j2hbYUc2W/XIYve/5TScQPZtEZe3hu76CY//k="
@@ -43,9 +46,6 @@ schema = 3
   [mod."github.com/davecgh/go-spew"]
     version = "v1.1.1"
     hash = "sha256-nhzSUrE1fCkN0+RL04N4h8jWmRFPPPWbCuDc7Ss0akI="
-  [mod."github.com/dghubble/trie"]
-    version = "v0.1.0"
-    hash = "sha256-hVh7uYylpMCCSPcxl70hJTmzSwaA1MxBmJFBO5Xdncc="
   [mod."github.com/dustin/go-humanize"]
     version = "v1.0.1"
     hash = "sha256-yuvxYYngpfVkUg9yAmG99IUVmADTQA0tMbBXe0Fq0Mc="
@@ -115,9 +115,6 @@ schema = 3
-  [mod."github.com/pbnjay/memory"]
-    version = "v0.0.0-20210728143218-7b4eea64cf58"
-    hash = "sha256-QI+F1oPLOOtwNp8+m45OOoSfYFs3QVjGzE0rFdpF/IA="
   [mod."github.com/pelletier/go-toml/v2"]
     version = "v2.3.0"
     hash = "sha256-3ftKBqSwUp5rs10NigReAJ8RxfnP4Aol45EkP0XRaa4="
   [mod."github.com/philhofer/fwd"]
     version = "v1.2.0"
     hash = "sha256-cGx2/0QQay46MYGZuamFmU0TzNaFyaO+J7Ddzlr/3dI="
@@ -8,10 +8,6 @@
     {
       "matchPackageNames": ["actions/buildah-simple"],
       "enabled": false
     },
-    {
-      "matchPackageNames": ["github.com/pelletier/go-toml/v2"],
-      "enabled": false // added AGENTS.md; v2.3.0 has been manually reviewed
-    }
   ],
   "automerge": false,
src/audit.go (14 changed lines)
@@ -50,6 +50,8 @@ func GetPrincipal(ctx context.Context) *Principal {
 	return nil
 }
 
+var AuditSnowflakeStartTime = time.Date(2025, 12, 1, 0, 0, 0, 0, time.UTC)
+
 type AuditID int64
 
 func GenerateAuditID() AuditID {
@@ -74,6 +76,7 @@ func (id AuditID) String() string {
 
 func (id AuditID) CompareTime(when time.Time) int {
 	idMillis := int64(id) >> (snowflake.MachineIDLength + snowflake.SequenceLength)
+	idMillis += AuditSnowflakeStartTime.UnixMilli()
 	whenMillis := when.UTC().UnixNano() / 1e6
 	return cmp.Compare(idMillis, whenMillis)
 }
@@ -108,6 +111,9 @@ func (record *AuditRecord) DescribePrincipal() string {
 			record.Principal.GetForgeUser().GetHandle(),
 			record.Principal.GetForgeUser().GetId()))
 	}
+	if record.Principal.GetRepoUrl() != "" {
+		items = append(items, record.Principal.GetRepoUrl())
+	}
 	if record.Principal.GetCliAdmin() {
 		items = append(items, "<cli-admin>")
 	}
@@ -129,6 +135,14 @@ func (record *AuditRecord) DescribeResource() string {
 	return desc
 }
 
+func (record *AuditRecord) IsDetachable() bool {
+	return record.GetEvent() == AuditEvent_CommitManifest
+}
+
+func (record *AuditRecord) IsDetached() bool {
+	return record.IsDetachable() && record.Manifest == nil
+}
+
 type AuditRecordScope int
 
 const (
src/auth.go (24 changed lines)
@@ -78,16 +78,25 @@ func GetHost(r *http.Request) (string, error) {
 	return host, nil
 }
 
-func IsValidProjectName(name string) bool {
-	return !strings.HasPrefix(name, ".") && !strings.Contains(name, "%")
+func ValidateProjectName(name string) error {
+	if strings.HasPrefix(name, ".") {
+		return fmt.Errorf("must not start with %q", ".")
+	}
+
+	forbiddenChars := "%*"
+	if strings.ContainsAny(name, forbiddenChars) {
+		return fmt.Errorf("must not contain any of %q", forbiddenChars)
+	}
+
+	return nil
 }
 
 func GetProjectName(r *http.Request) (string, error) {
 	// path must be either `/` or `/foo/` (`/foo` is accepted as an alias)
 	path := strings.TrimPrefix(strings.TrimSuffix(r.URL.Path, "/"), "/")
-	if !IsValidProjectName(path) {
+	if err := ValidateProjectName(path); err != nil {
 		return "", AuthError{http.StatusBadRequest,
-			fmt.Sprintf("directory name %q is reserved", ".index")}
+			fmt.Sprintf("directory name: %v", err)}
 	} else if strings.Contains(path, "/") {
 		return "", AuthError{http.StatusBadRequest,
 			"directories nested too deep"}
@@ -110,6 +119,13 @@ type Authorization struct {
 	forgeUser *ForgeUser
 }
 
+func (auth *Authorization) ForgeRepoURL() string {
+	if auth.forgeUser != nil && len(auth.repoURLs) == 1 {
+		return auth.repoURLs[0]
+	}
+	return ""
+}
+
 func authorizeDNSChallenge(r *http.Request) (*Authorization, error) {
 	host, err := GetHost(r)
 	if err != nil {
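The `ValidateProjectName` rewrite above is self-contained, so its behavior is easy to check in isolation. A standalone copy with hypothetical example inputs:

```go
package main

import (
	"fmt"
	"strings"
)

// validateProjectName mirrors the new validator in the diff: reject names
// starting with "." and names containing "%" or "*".
func validateProjectName(name string) error {
	if strings.HasPrefix(name, ".") {
		return fmt.Errorf("must not start with %q", ".")
	}

	forbiddenChars := "%*"
	if strings.ContainsAny(name, forbiddenChars) {
		return fmt.Errorf("must not contain any of %q", forbiddenChars)
	}

	return nil
}

func main() {
	// Example names are made up; errors feed the "directory name: %v" message.
	for _, name := range []string{"blog", ".index", "a%2fb", "star*"} {
		fmt.Printf("%q: %v\n", name, validateProjectName(name))
	}
}
```

Returning an `error` instead of a `bool` is what lets `GetProjectName` report which rule was violated, rather than the fixed "directory name %q is reserved" message of the old code.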
@@ -159,6 +159,12 @@ type Backend interface {
 
 	// Retrieve audit record contents for given IDs.
 	GetAuditLogRecords(ctx context.Context, ids iter.Seq2[AuditID, error]) iter.Seq2[*AuditRecord, error]
 
+	// Detach an audit record from its blobs.
+	DetachAuditRecord(ctx context.Context, id AuditID) error
+
+	// Delete an audit record with a given ID.
+	ExpireAuditRecord(ctx context.Context, id AuditID) error
 }
 
 func CreateBackend(ctx context.Context, config *StorageConfig) (backend Backend, err error) {
@@ -484,12 +484,16 @@ func (fs *FSBackend) HaveDomainsChanged(ctx context.Context, since time.Time) (b
 	return true, nil // not implemented
 }
 
+func auditDetachedName(id AuditID) string {
+	return fmt.Sprintf("%s.detached", id)
+}
+
 func (fs *FSBackend) AppendAuditLog(ctx context.Context, id AuditID, record *AuditRecord) error {
 	if _, err := fs.auditRoot.Stat(id.String()); err == nil {
 		panic(fmt.Errorf("audit ID collision: %s", id))
 	}
 
-	return fs.auditRoot.WriteFile(id.String(), EncodeAuditRecord(record), 0o644)
+	return fs.auditRoot.WriteFile(id.String(), EncodeAuditRecord(record), 0o444)
 }
 
 func (fs *FSBackend) QueryAuditLog(ctx context.Context, id AuditID) (*AuditRecord, error) {
@@ -498,6 +502,11 @@ func (fs *FSBackend) QueryAuditLog(ctx context.Context, id AuditID) (*AuditRecor
 	} else if record, err := DecodeAuditRecord(data); err != nil {
 		return nil, fmt.Errorf("decode: %w", err)
 	} else {
+		if _, err := fs.auditRoot.Stat(auditDetachedName(id)); err == nil {
+			record.Manifest = nil
+		} else if !errors.Is(err, os.ErrNotExist) {
+			return nil, fmt.Errorf("stat detached marker: %w", err)
+		}
 		return record, nil
 	}
 }
@@ -514,6 +523,8 @@ func (fs *FSBackend) SearchAuditLog(
 		var id AuditID
 		if err != nil {
 			// report error
+		} else if strings.Contains(path, ".") {
+			return nil // skip
 		} else if id, err = ParseAuditID(path); err != nil {
 			// report error
 		} else if !opts.Since.IsZero() && id.CompareTime(opts.Since) < 0 {
@@ -545,3 +556,11 @@ func (fs *FSBackend) GetAuditLogRecords(
 		}
 	}
 }
+
+func (fs *FSBackend) DetachAuditRecord(ctx context.Context, id AuditID) error {
+	return fs.auditRoot.WriteFile(auditDetachedName(id), []byte{}, 0o644)
+}
+
+func (fs *FSBackend) ExpireAuditRecord(ctx context.Context, id AuditID) error {
+	return fs.auditRoot.Remove(id.String())
+}
@@ -827,6 +827,10 @@ func auditObjectName(id AuditID) string {
 	return fmt.Sprintf("audit/%s", id)
 }
 
+func auditDetachedObjectName(id AuditID) string {
+	return fmt.Sprintf("audit/%s.detached", id)
+}
+
 func (s3 *S3Backend) AppendAuditLog(ctx context.Context, id AuditID, record *AuditRecord) error {
 	logc.Printf(ctx, "s3: append audit %s\n", id)
 
@@ -858,7 +862,20 @@ func (s3 *S3Backend) QueryAuditLog(ctx context.Context, id AuditID) (*AuditRecor
 		return nil, err
 	}
 
-	return DecodeAuditRecord(data)
+	record, err := DecodeAuditRecord(data)
+	if err != nil {
+		return nil, err
+	}
+
+	_, err = s3.client.StatObject(ctx, s3.bucket, auditDetachedObjectName(id),
+		minio.StatObjectOptions{})
+	if err == nil {
+		record.Manifest = nil
+	} else if errResp := minio.ToErrorResponse(err); err != nil && errResp.Code != "NoSuchKey" {
+		return nil, err
+	}
+
+	return record, nil
 }
 
 func (s3 *S3Backend) SearchAuditLog(
@@ -878,8 +895,14 @@ func (s3 *S3Backend) SearchAuditLog(
 			var err error
 			if object.Err != nil {
 				err = object.Err
-			} else {
-				id, err = ParseAuditID(strings.TrimPrefix(object.Key, prefix))
+			} else if strings.Contains(object.Key, ".") {
+				continue
+			} else if id, err = ParseAuditID(strings.TrimPrefix(object.Key, prefix)); err != nil {
+				// report error
+			} else if !opts.Since.IsZero() && id.CompareTime(opts.Since) < 0 {
+				continue
+			} else if !opts.Until.IsZero() && id.CompareTime(opts.Until) > 0 {
+				continue
 			}
 			if !yield(id, err) {
 				break
@@ -924,3 +947,18 @@ func (s3 *S3Backend) GetAuditLogRecords(
 		}
 	}
 }
+
+func (s3 *S3Backend) DetachAuditRecord(ctx context.Context, id AuditID) error {
+	logc.Printf(ctx, "s3: detach audit record %s\n", id)
+
+	_, err := s3.client.PutObject(ctx, s3.bucket, auditDetachedObjectName(id),
+		&bytes.Reader{}, 0, minio.PutObjectOptions{})
+	return err
+}
+
+func (s3 *S3Backend) ExpireAuditRecord(ctx context.Context, id AuditID) error {
+	logc.Printf(ctx, "s3: expire audit record %s\n", id)
+
+	return s3.client.RemoveObject(ctx, s3.bucket, auditObjectName(id),
+		minio.RemoveObjectOptions{})
+}
@@ -32,7 +32,7 @@ func ServeCaddy(w http.ResponseWriter, r *http.Request) {
 	// Run a cheap check as to whether we might be serving the domain.
 	var found = domainCache.CheckDomain(r.Context(), domain)
 
-	if !found {
+	if found {
 		// Run an expensive check as to whether we are actually serving the domain.
 		found, err = backend.CheckDomain(r.Context(), domain)
 	}
@@ -12,9 +12,9 @@ import (
 	"strings"
 	"time"
 
+	"github.com/BurntSushi/toml"
 	"github.com/c2h5oh/datasize"
 	"github.com/creasty/defaults"
-	"github.com/pelletier/go-toml/v2"
 )
 
 // For an unknown reason, the standard `time.Duration` type doesn't implement the standard
@@ -309,23 +309,28 @@ func PrintConfigEnvVars() {
 	})
 }
 
+func PrettyTomlKey(key toml.Key) string {
+	if len(key) == 1 {
+		return key.String()
+	} else {
+		// `toml.Key.String()` adds quotes if necessary.
+		return fmt.Sprintf("[%s].%s", key[:len(key)-1].String(), key[len(key)-1:].String())
+	}
+}
+
 func ReadConfigFile(config *Config, tomlPath string) (err error) {
 	if tomlPath != "" {
-		var file *os.File
-		file, err = os.Open(tomlPath)
+		meta, err := toml.DecodeFile(tomlPath, config)
 		if err != nil {
-			return
+			return err
 		}
-
-		defer func(file *os.File) {
-			err = file.Close()
-		}(file)
-
-		decoder := toml.NewDecoder(file)
-		decoder.DisallowUnknownFields()
-		decoder.EnableUnmarshalerInterface()
-		if err = decoder.Decode(&config); err != nil {
-			return
+		unknownKeys := []string{}
+		for _, key := range meta.Undecoded() {
+			unknownKeys = append(unknownKeys, PrettyTomlKey(key))
+		}
+		if len(unknownKeys) > 0 {
+			return fmt.Errorf("unknown keys: %s", strings.Join(unknownKeys, ", "))
 		}
 	}
 	return nil
@@ -5,30 +5,29 @@ import (
 	"fmt"
 
 	"github.com/c2h5oh/datasize"
-	"github.com/dghubble/trie"
 )
 
-func trieReduce(data trie.Trier) (items, total int64) {
-	data.Walk(func(key string, value any) error {
-		items += 1
-		total += *value.(*int64)
-		return nil
-	})
-	return
-}
-
 func TraceGarbage(ctx context.Context) error {
-	allBlobs := trie.NewRuneTrie()
-	liveBlobs := trie.NewRuneTrie()
+	allBlobs := map[string]int64{}
+	liveBlobs := map[string]int64{}
 
-	traceManifest := func(manifestName string, manifest *Manifest) error {
+	reduceBlobs := func(data map[string]int64) (items, total int64) {
+		for _, value := range data {
+			items += 1
+			total += value
+		}
+		return
+	}
+
+	traceManifest := func(manifestKind string, manifestName string, manifest *Manifest) error {
 		for _, entry := range manifest.GetContents() {
 			if entry.GetType() == Type_ExternalFile {
 				blobName := string(entry.Data)
-				if size := allBlobs.Get(blobName); size == nil {
-					return fmt.Errorf("%s: dangling reference %s", manifestName, blobName)
+				if size, ok := allBlobs[blobName]; ok {
+					liveBlobs[blobName] = size
 				} else {
-					liveBlobs.Put(blobName, size)
+					logc.Printf(ctx, "trace manifest: %s/%s: dangling reference %s",
+						manifestKind, manifestName, blobName)
 				}
 			}
 		}
@@ -36,42 +35,44 @@ func TraceGarbage(ctx context.Context) error {
 	}
 
 	// Enumerate all blobs.
+	logc.Printf(ctx, "trace: enumerating blobs")
 	for metadata, err := range backend.EnumerateBlobs(ctx) {
 		if err != nil {
 			return fmt.Errorf("trace blobs err: %w", err)
 		}
-		allBlobs.Put(metadata.Name, &metadata.Size)
+		allBlobs[metadata.Name] = metadata.Size
 	}
 
 	// Enumerate blobs live via site manifests.
+	logc.Printf(ctx, "trace: enumerating manifests")
 	for item, err := range backend.GetAllManifests(ctx) {
 		metadata, manifest := item.Splat()
 		if err != nil {
 			return fmt.Errorf("trace sites err: %w", err)
 		}
-		err = traceManifest(metadata.Name, manifest)
+		err = traceManifest("site", metadata.Name, manifest)
 		if err != nil {
 			return fmt.Errorf("trace sites err: %w", err)
 		}
 	}
 
 	// Enumerate blobs live via audit records.
+	logc.Printf(ctx, "trace: enumerating audit records")
 	auditIDs := backend.SearchAuditLog(ctx, SearchAuditLogOptions{})
 	for record, err := range backend.GetAuditLogRecords(ctx, auditIDs) {
 		if err != nil {
-			logc.Fatalln(ctx, err)
+			return fmt.Errorf("trace audit err: %w", err)
 		}
 		if record.Manifest != nil {
-			err = traceManifest(record.GetAuditID().String(), record.Manifest)
+			err = traceManifest("audit", record.GetAuditID().String(), record.Manifest)
 			if err != nil {
 				return fmt.Errorf("trace audit err: %w", err)
 			}
 		}
 	}
 
-	allBlobsCount, allBlobsSize := trieReduce(allBlobs)
-	liveBlobsCount, liveBlobsSize := trieReduce(liveBlobs)
+	allBlobsCount, allBlobsSize := reduceBlobs(allBlobs)
+	liveBlobsCount, liveBlobsSize := reduceBlobs(liveBlobs)
 	logc.Printf(ctx, "trace all: %d blobs, %s",
 		allBlobsCount, datasize.ByteSize(allBlobsSize).HR())
 	logc.Printf(ctx, "trace live: %d blobs, %s",
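The map-based rewrite of `TraceGarbage` is essentially the mark phase of a mark-and-sweep collector: enumerate all stored blobs, mark those referenced by a manifest as live, and report references to blobs that do not exist. A minimal standalone sketch with made-up blob names:

```go
package main

import "fmt"

// markLive returns the subset of allBlobs referenced by refs, plus any
// dangling references (referenced but not stored). Everything in allBlobs
// but not in the returned live set is garbage.
func markLive(allBlobs map[string]int64, refs []string) (live map[string]int64, dangling []string) {
	live = map[string]int64{}
	for _, name := range refs {
		if size, ok := allBlobs[name]; ok {
			live[name] = size
		} else {
			dangling = append(dangling, name)
		}
	}
	return live, dangling
}

func main() {
	all := map[string]int64{"blob-a": 100, "blob-b": 250, "blob-c": 7}
	live, dangling := markLive(all, []string{"blob-a", "blob-x"})
	fmt.Println(len(live), dangling) // → 1 [blob-x]
}
```

Note the behavioral change in the diff this mirrors: a dangling reference used to abort the whole trace with an error, whereas the new code only logs it and keeps going.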
@@ -27,9 +27,9 @@ func SizeHistogram(ctx context.Context) ([]*DomainStatistics, error) {
 			statisticsMap[domain] = &DomainStatistics{Domain: domain}
 		}
 		statistics := statisticsMap[domain]
-		statistics.OriginalSize += manifest.GetOriginalSize()
-		statistics.CompressedSize += manifest.GetCompressedSize()
-		statistics.StoredSize += manifest.GetStoredSize()
+		statistics.OriginalSize += metadata.Size + manifest.GetOriginalSize()
+		statistics.CompressedSize += metadata.Size + manifest.GetCompressedSize()
+		statistics.StoredSize += metadata.Size + manifest.GetStoredSize()
 	}
 	return slices.Collect(maps.Values(statisticsMap)), nil
 }
src/main.go (116 changed lines)
@@ -18,6 +18,7 @@ import (
|
||||
"path"
|
||||
"runtime/debug"
|
||||
"slices"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
@@ -101,7 +102,7 @@ func configureFallback(_ context.Context) (err error) {

// Thread-unsafe, must be called only during initial configuration.
func configureAudit(_ context.Context) (err error) {
-	snowflake.SetStartTime(time.Date(2025, 12, 1, 0, 0, 0, 0, time.UTC))
+	snowflake.SetStartTime(AuditSnowflakeStartTime)
	snowflake.SetMachineID(config.Audit.NodeID)
	return
}
@@ -190,9 +191,13 @@ func usage() {
	fmt.Fprintf(os.Stderr, "(debug) "+
		"git-pages {-get-blob|-get-manifest|-get-archive|-update-site} <ref> [file]\n")
	fmt.Fprintf(os.Stderr, "(admin) "+
-		"git-pages {-freeze-domain <domain>|-unfreeze-domain <domain>}\n")
+		"git-pages {-freeze-domain|-unfreeze-domain} <domain>\n")
	fmt.Fprintf(os.Stderr, "(audit) "+
-		"git-pages {-audit-log|-audit-read <id>|-audit-server <endpoint> <program> [args...]}\n")
+		"git-pages {-audit-log|-audit-read <id>|-audit-rollback <id>}\n")
+	fmt.Fprintf(os.Stderr, "(audit) "+
+		"git-pages {-audit-expire <days>|-audit-detach <domain>/<project>}\n")
+	fmt.Fprintf(os.Stderr, "(audit) "+
+		"git-pages -audit-server <endpoint> <program> [args...]\n")
	fmt.Fprintf(os.Stderr, "(maint) "+
		"git-pages {-run-migration <name>|-trace-garbage|-size-histogram {original|stored}}\n")
	flag.PrintDefaults()
@@ -234,6 +239,10 @@ func Main(versionInfo string) {
		"extract contents of audit record `id` to files '<id>-*'")
	auditRollback := flag.String("audit-rollback", "",
		"restore site from contents of audit record `id`")
+	auditExpire := flag.String("audit-expire", "",
+		"expire audit records older than `days` old")
+	auditDetach := flag.String("audit-detach", "",
+		"detach all blobs of audit records for a single `site` (or the entire domain with 'domain.tld/*')")
	auditServer := flag.String("audit-server", "",
		"listen for notifications on `endpoint` and spawn a process for each audit event")
	runMigration := flag.String("run-migration", "",
@@ -264,6 +273,8 @@ func Main(versionInfo string) {
		*auditLog,
		*auditRead != "",
		*auditRollback != "",
+		*auditExpire != "",
+		*auditDetach != "",
		*auditServer != "",
		*runMigration != "",
		*sizeHistogram != "",
@@ -276,8 +287,8 @@ func Main(versionInfo string) {
	if cliOperations > 1 {
		logc.Fatalln(ctx, "-list-blobs, -list-manifests, -get-blob, -get-manifest, -get-archive, "+
			"-update-site, -freeze-domain, -unfreeze-domain, -audit-log, -audit-read, "+
-			"-audit-rollback, -audit-server, -run-migration, -size-histogram, "+
-			"and -trace-garbage are mutually exclusive")
+			"-audit-rollback, -audit-expire, -audit-detach, -audit-server, -run-migration, "+
+			"-size-histogram, and -trace-garbage are mutually exclusive")
	}

	if *configTomlPath != "" && *noConfig {
@@ -328,15 +339,15 @@ func Main(versionInfo string) {
		logc.Fatalln(ctx, err)
	}

	if domainCache, err = CreateDomainCache(ctx); err != nil {
		logc.Fatalln(ctx, err)
	}

	// The server has its own logic for creating the backend.
	if cliOperations > 0 {
		if backend, err = CreateBackend(ctx, &config.Storage); err != nil {
			logc.Fatalln(ctx, err)
		}

		if domainCache, err = CreateDomainCache(ctx); err != nil {
			logc.Fatalln(ctx, err)
		}
	}

	switch {
@@ -426,7 +437,7 @@ func Main(versionInfo string) {
		}

		webRoot := webRootArg(*updateSite)
-		result = UpdateFromArchive(ctx, webRoot, contentType, file)
+		result = UpdateFromArchive(ctx, webRoot, "", contentType, file)
	} else {
		branch := "pages"
		if sourceURL.Fragment != "" {
@@ -495,13 +506,19 @@ func Main(versionInfo string) {
		})

		for _, record := range records {
-			fmt.Fprintf(color.Output, "%s %s %s %s %s\n",
+			parts := []string{
				record.GetAuditID().String(),
				color.HiWhiteString(record.GetTimestamp().AsTime().UTC().Format(time.RFC3339)),
-				color.HiMagentaString(record.DescribePrincipal()),
+				fmt.Sprint(record.GetEvent()),
				color.HiGreenString(record.DescribeResource()),
-				record.GetEvent(),
-			)
+				color.HiMagentaString(record.DescribePrincipal()),
+			}
+			if record.IsDetached() {
+				parts = append(parts,
+					color.HiYellowString("(detached)"),
+				)
+			}
+			fmt.Fprintln(color.Output, strings.Join(parts, " "))
		}

	case *auditRead != "":
@@ -547,6 +564,45 @@ func Main(versionInfo string) {
			logc.Fatalln(ctx, err)
		}

+	case *auditDetach != "":
+		domain, project, found := strings.Cut(*auditDetach, "/")
+		if !found || domain == "" || project == "" {
+			logc.Fatalln(ctx, "argument to -audit-detach must be in the form of "+
+				"'domain.tld/project' or 'domain.tld/*'")
+		}
+
+		if project != "*" && project != ".index" {
+			if err := ValidateProjectName(project); err != nil {
+				logc.Fatalf(ctx, "audit detach: project name: %v\n", err)
+			}
+		}
+
+		count := 0
+		ids := backend.SearchAuditLog(ctx, SearchAuditLogOptions{})
+		for record, err := range backend.GetAuditLogRecords(ctx, ids) {
+			if err != nil {
+				logc.Fatalln(ctx, err)
+			}
+			if record.GetDomain() == domain && (project == "*" || record.GetProject() == project) {
+				if !record.IsDetachable() {
+					continue
+				} else if !record.IsDetached() {
+					logc.Printf(ctx, "detaching audit record %s\n", record.GetAuditID())
+					err = backend.DetachAuditRecord(ctx, record.GetAuditID())
+					if err != nil {
+						logc.Fatalln(ctx, err)
+					}
+					count++
+				} else {
+					logc.Printf(ctx, "audit record %s already detached\n", record.GetAuditID())
+				}
+			}
+		}
+
+		if count == 0 {
+			logc.Printf(ctx, "no detachable audit records found for %s/%s", domain, project)
+		}
+
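The `-audit-detach` argument parsing above splits `domain.tld/project` with `strings.Cut`, rejecting inputs where the separator is missing or either half is empty. A standalone sketch of that split (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// splitSite splits "domain.tld/project" at the first slash, mirroring the
// strings.Cut-based validation above; ok is false when the separator is
// missing or either half is empty. The "*" project wildcard passes through
// like any other project value.
func splitSite(arg string) (domain, project string, ok bool) {
	domain, project, found := strings.Cut(arg, "/")
	if !found || domain == "" || project == "" {
		return "", "", false
	}
	return domain, project, true
}

func main() {
	for _, arg := range []string{"example.org/blog", "example.org/*", "example.org"} {
		d, p, ok := splitSite(arg)
		fmt.Printf("%q -> %q %q %v\n", arg, d, p, ok)
	}
}
```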
	case *auditServer != "":
		if flag.NArg() < 1 {
			logc.Fatalln(ctx, "handler path not provided")
@@ -559,6 +615,34 @@ func Main(versionInfo string) {

		serve(ctx, listen(ctx, "audit", *auditServer), ObserveHTTPHandler(processor))

+	case *auditExpire != "":
+		days, err := strconv.ParseInt(*auditExpire, 10, 0)
+		if err != nil {
+			logc.Fatalln(ctx, err)
+		}
+
+		ids := backend.SearchAuditLog(ctx, SearchAuditLogOptions{
+			Until: time.Now().AddDate(0, 0, int(-days)),
+		})
+
+		count := 0
+		for id, err := range ids {
+			if err != nil {
+				logc.Fatalln(ctx, err)
+				continue
+			}
+
+			err = backend.ExpireAuditRecord(ctx, id)
+			if err != nil {
+				logc.Fatalln(ctx, err)
+			} else {
+				logc.Printf(ctx, "audit: expired record %s\n", id)
+				count += 1
+			}
+		}
+
+		logc.Printf(ctx, "audit: expired %d records\n", count)
+
	case *runMigration != "":
		if err = RunMigration(ctx, *runMigration); err != nil {
			logc.Fatalln(ctx, err)
@@ -658,6 +742,10 @@ func Main(versionInfo string) {
	}
	backend = NewObservedBackend(backend)

+	if domainCache, err = CreateDomainCache(ctx); err != nil {
+		logc.Fatalln(ctx, err)
+	}
+
	middleware := chainHTTPMiddleware(
		panicHandler,
		remoteAddrMiddleware,
@@ -397,3 +397,17 @@ func (backend *observedBackend) GetAuditLogRecords(
		span.Finish()
	}
}
+
+func (backend *observedBackend) DetachAuditRecord(ctx context.Context, id AuditID) (err error) {
+	span, ctx := ObserveFunction(ctx, "DetachAuditRecord", "audit.id", id)
+	err = backend.inner.DetachAuditRecord(ctx, id)
+	span.Finish()
+	return
+}
+
+func (backend *observedBackend) ExpireAuditRecord(ctx context.Context, id AuditID) (err error) {
+	span, ctx := ObserveFunction(ctx, "ExpireAuditRecord", "audit.id", id)
+	err = backend.inner.ExpireAuditRecord(ctx, id)
+	span.Finish()
+	return
+}
41 src/pages.go
@@ -65,6 +65,17 @@ func observeSiteUpdate(via string, result *UpdateResult) {
	}
}

+func copyForgeAuthToPrincipal(principal *Principal, auth *Authorization) {
+	if auth.forgeUser != nil {
+		principal.ForgeUser = auth.forgeUser
+	}
+
+	repoURL := auth.ForgeRepoURL()
+	if repoURL != "" {
+		principal.RepoUrl = &repoURL
+	}
+}
+
func normalizeHost(host string) string {
	return strings.ToLower(host)
}
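`principal.RepoUrl = &repoURL` takes the address of a local variable; in Go this is safe because escape analysis moves the variable to the heap. A reduced sketch of that optional-field pattern (the `Record` type is illustrative):

```go
package main

import "fmt"

// Record models an optional string field as a *string, as the generated
// protobuf Principal type does: nil means "unset".
type Record struct {
	RepoUrl *string
}

// setRepoURL only populates the field for a non-empty value, mirroring
// copyForgeAuthToPrincipal above; taking &repoURL is safe because the
// local escapes to the heap and outlives the call.
func setRepoURL(r *Record, repoURL string) {
	if repoURL != "" {
		r.RepoUrl = &repoURL
	}
}

func main() {
	var r Record
	setRepoURL(&r, "")
	fmt.Println(r.RepoUrl == nil) // true: empty value leaves the field unset
	setRepoURL(&r, "https://example.org/repo.git")
	fmt.Println(*r.RepoUrl)
}
```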
@@ -143,7 +154,7 @@ func getPage(w http.ResponseWriter, r *http.Request) error {
	err = nil
	sitePath = strings.TrimPrefix(r.URL.Path, "/")
	if projectName, projectPath, hasProjectSlash := strings.Cut(sitePath, "/"); projectName != "" {
-		if IsValidProjectName(projectName) {
+		if ValidateProjectName(projectName) == nil {
			var projectManifest *Manifest
			var projectMetadata ManifestMetadata
			projectManifest, projectMetadata, err = backend.GetManifest(
@@ -523,19 +534,23 @@ func putPage(w http.ResponseWriter, r *http.Request) error {
		result = UpdateFromRepository(ctx, webRoot, repoURL, branch)

	default:
-		if auth, err := AuthorizeUpdateFromArchive(r); err != nil {
+		auth, err := AuthorizeUpdateFromArchive(r)
+		if err != nil {
			return err
-		} else if auth.forgeUser != nil {
-			GetPrincipal(r.Context()).ForgeUser = auth.forgeUser
		}

+		principal := GetPrincipal(r.Context())
+		copyForgeAuthToPrincipal(principal, auth)
+
+		repoURL := auth.ForgeRepoURL()
+
		if checkDryRun(w, r) {
			return nil
		}

		// request body contains archive
		reader := http.MaxBytesReader(w, r.Body, int64(config.Limits.MaxSiteSize.Bytes()))
-		result = UpdateFromArchive(ctx, webRoot, contentType, reader)
+		result = UpdateFromArchive(ctx, webRoot, repoURL, contentType, reader)
	}

	return reportUpdateResult(w, r, result)
@@ -556,12 +571,14 @@ func patchPage(w http.ResponseWriter, r *http.Request) error {
		return err
	}

-	if auth, err := AuthorizeUpdateFromArchive(r); err != nil {
+	auth, err := AuthorizeUpdateFromArchive(r)
+	if err != nil {
		return err
-	} else if auth.forgeUser != nil {
-		GetPrincipal(r.Context()).ForgeUser = auth.forgeUser
	}

+	principal := GetPrincipal(r.Context())
+	copyForgeAuthToPrincipal(principal, auth)
+
	if checkDryRun(w, r) {
		return nil
	}
@@ -688,12 +705,14 @@ func deletePage(w http.ResponseWriter, r *http.Request) error {
		return err
	}

-	if auth, err := AuthorizeDeletion(r); err != nil {
+	auth, err := AuthorizeDeletion(r)
+	if err != nil {
		return err
-	} else if auth.forgeUser != nil {
-		GetPrincipal(r.Context()).ForgeUser = auth.forgeUser
	}

+	principal := GetPrincipal(r.Context())
+	copyForgeAuthToPrincipal(principal, auth)
+
	if checkDryRun(w, r) {
		return nil
	}

@@ -863,6 +863,7 @@ type Principal struct {
	IpAddress *string `protobuf:"bytes,1,opt,name=ip_address,json=ipAddress" json:"ip_address,omitempty"`
	CliAdmin *bool `protobuf:"varint,2,opt,name=cli_admin,json=cliAdmin" json:"cli_admin,omitempty"`
	ForgeUser *ForgeUser `protobuf:"bytes,3,opt,name=forge_user,json=forgeUser" json:"forge_user,omitempty"`
+	RepoUrl *string `protobuf:"bytes,4,opt,name=repo_url,json=repoUrl" json:"repo_url,omitempty"`
	unknownFields protoimpl.UnknownFields
	sizeCache protoimpl.SizeCache
}
@@ -918,6 +919,13 @@ func (x *Principal) GetForgeUser() *ForgeUser {
	return nil
}

+func (x *Principal) GetRepoUrl() string {
+	if x != nil && x.RepoUrl != nil {
+		return *x.RepoUrl
+	}
+	return ""
+}
+
type ForgeUser struct {
	state protoimpl.MessageState `protogen:"open.v1"`
	Origin *string `protobuf:"bytes,1,opt,name=origin" json:"origin,omitempty"`
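The generated `GetRepoUrl` getter checks both the receiver and the field pointer for nil, which is what makes chained access on possibly-absent protobuf messages panic-free. A self-contained sketch of that nil-receiver-safe getter idiom (the `Message` type is illustrative):

```go
package main

import "fmt"

// Message mimics a generated protobuf type: optional scalar fields are
// pointers, and getters tolerate a nil receiver so that chained access
// such as record.GetPrincipal().GetRepoUrl() never panics.
type Message struct {
	RepoUrl *string
}

// GetRepoUrl is safe to call on a nil *Message: methods with pointer
// receivers may be invoked on nil as long as they do not dereference it.
func (x *Message) GetRepoUrl() string {
	if x != nil && x.RepoUrl != nil {
		return *x.RepoUrl
	}
	return ""
}

func main() {
	var m *Message                     // nil message, e.g. an unset field
	fmt.Printf("%q\n", m.GetRepoUrl()) // "" — no panic on the nil receiver

	url := "https://example.org/repo.git"
	fmt.Println((&Message{RepoUrl: &url}).GetRepoUrl())
}
```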
@@ -1041,14 +1049,15 @@ const file_schema_proto_rawDesc = "" +
	"\x06domain\x18\n" +
	" \x01(\tR\x06domain\x12\x18\n" +
	"\aproject\x18\v \x01(\tR\aproject\x12%\n" +
-	"\bmanifest\x18\f \x01(\v2\t.ManifestR\bmanifest\"r\n" +
+	"\bmanifest\x18\f \x01(\v2\t.ManifestR\bmanifest\"\x8d\x01\n" +
	"\tPrincipal\x12\x1d\n" +
	"\n" +
	"ip_address\x18\x01 \x01(\tR\tipAddress\x12\x1b\n" +
	"\tcli_admin\x18\x02 \x01(\bR\bcliAdmin\x12)\n" +
	"\n" +
	"forge_user\x18\x03 \x01(\v2\n" +
-	".ForgeUserR\tforgeUser\"K\n" +
+	".ForgeUserR\tforgeUser\x12\x19\n" +
+	"\brepo_url\x18\x04 \x01(\tR\arepoUrl\"K\n" +
	"\tForgeUser\x12\x16\n" +
	"\x06origin\x18\x01 \x01(\tR\x06origin\x12\x0e\n" +
	"\x02id\x18\x02 \x01(\x03R\x02id\x12\x16\n" +
@@ -144,6 +144,7 @@ message Principal {
  string ip_address = 1;
  bool cli_admin = 2;
  ForgeUser forge_user = 3;
+  string repo_url = 4;
}

message ForgeUser {
@@ -128,6 +128,7 @@ var errArchiveFormat = errors.New("unsupported archive format")
func UpdateFromArchive(
	ctx context.Context,
	webRoot string,
+	repoURL string,
	contentType string,
	reader io.Reader,
) (result UpdateResult) {
@@ -162,6 +163,10 @@ func UpdateFromArchive(
		logc.Printf(ctx, "update %s err: %s", webRoot, err)
		result = UpdateResult{UpdateError, nil, err}
	} else {
+		if repoURL != "" {
+			newManifest.RepoUrl = &repoURL
+		}
+
		result = Update(ctx, webRoot, oldManifest, newManifest, ModifyManifestOptions{})
	}