Currently you can specify "Branch: HEAD" or "Branch: refs/tags/v1" and
go-git will resolve it to the relevant ref. Given that the HTTP header
is called `Branch`, this is confusing.
Given that this project already depends on zstd, I don't see a reason not to.
Can be tested with libarchive via: `bsdtar -a --options zip:compression=zstd -cf file.zip files...`
Reviewed-on: https://codeberg.org/git-pages/git-pages/pulls/91
Co-authored-by: David Leadbeater <dgl@dgl.cx>
Co-committed-by: David Leadbeater <dgl@dgl.cx>
Previously, you could issue e.g. a `GET /%2e%2e/%2e%2e` and it would
get interpreted as a parent directory path segment in the handler.
This didn't result in a path traversal vulnerability when passed to
the S3 backend because of a `path.Clean()` call indirectly done by
`makeWebRoot()`, but it's prudent to not take chances.
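For illustration, the cleanup that Go's `path.Clean()` performs on such a request path can be sketched in Python with `posixpath.normpath` (the request path here is just the example from above, decoded with the standard library):

```python
import posixpath
from urllib.parse import unquote

raw = "/%2e%2e/%2e%2e"                  # percent-encoded ".." segments, as on the wire
decoded = unquote(raw)                  # "/../.." -- parent-directory segments appear
cleaned = posixpath.normpath(decoded)   # traversal above the root collapses to "/"
print(decoded, "->", cleaned)
```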
This reverts commit 351d0a0c85.
This option does not have any effect at the moment and may potentially
confuse users. It can be easily reintroduced later (by reverting this
commit) once we start logging at any level other than `info`.
Previously, this would disallow all git clones except for those via
wildcard domains. This is highly unintuitive. It also meant that
disabling this function via environment variable was not possible.
It is expected that in most deployments, a reverse proxy server like
Caddy or Nginx will be connecting to git-pages; listening on any address
by default is a privacy and security concern.
It has been tested on Grebedoc (Fly.io servers) and found to work
satisfactorily, though without any apparent benefit. It requires client
opt-in, so enabling it at all times is benign.
The PATCH method has been tested by me and on Codeberg and found
to work satisfactorily.
Because using PATCH causes the git-pages server to store state that
is not necessarily easily reproducible from any single specific source
(i.e. it stores a composition of many disparate requests), it may be
necessary to back it up. For this, the feature `archive-site` is also
stabilized. It has not seen much use, but not providing a backup method
would be a disservice.
To use this function, configure git-pages with e.g.:

    [audit]
    collect = true
    notify-url = "http://localhost:3004/"

and run an audit server with e.g.:

    git-pages -audit-server tcp/:3004 python $(pwd)/process.py
The provided command line is executed after appending two arguments
(audit record ID and event type), and runs in a temporary directory
with the audit record extracted into it. The following files will
be present in this directory:
* `$1-event.json` (always)
* `$1-manifest.json` (if type is `CommitManifest`)
* `$1-archive.tar` (if type is `CommitManifest`)
The script must complete successfully for the event processing to
finish. The notification will keep being re-sent (by the worker) with
exponential backoff until it does.
This acts like `mkdir -p`, making it much less annoying to deploy
e.g. documentation preview generators that use deep paths.
Like before, the site must already exist: we cannot do a CAS on
a non-existent manifest at the moment.
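As a sketch of what this enables, a client can now PATCH a file at an arbitrarily deep path in one request without creating the intermediate directories first (the host name and path below are invented for illustration; this only constructs the request, it does not send it):

```python
from urllib.request import Request

# Hypothetical deep-path upload: every intermediate directory is
# created implicitly, as with `mkdir -p`.
req = Request(
    "http://example.localhost/previews/42/deep/nested/path/index.html",
    data=b"<h1>preview</h1>",
    method="PATCH",
)
print(req.get_method(), req.full_url)
```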