The following panic was triggered when the mc client (which uses
chunked uploads) uploaded a 171764 byte file. This could likely
have been hit with other sizes as well, but this size reliably
reproduced the issue.
panic: runtime error: slice bounds out of range [:2] with capacity 1
goroutine 66 [running]:
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseChunkHeaderBytes(0x14000276200, {0x14000167fff?, 0x14000103180?, 0x200000003?})
versitygw/s3api/utils/signed-chunk-reader.go:372 +0xe54
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseAndRemoveChunkInfo(0x14000276200, {0x14000167fff, 0x1, 0x1})
versitygw/s3api/utils/signed-chunk-reader.go:251 +0x50
github.com/versity/versitygw/s3api/utils.(*ChunkReader).Read(0x14000276200, {0x14000160000, 0x14000056c00?, 0x8000})
versitygw/s3api/utils/signed-chunk-reader.go:126 +0x188
io.(*teeReader).Read(0x140000b09c0, {0x14000160000, 0x105e7b368?, 0x8000})
/usr/local/go/src/io/io.go:628 +0x34
...
The reproducer is:
% truncate -s 171764 testfile
% mc cp testfile gwtest/mybucket/testfile
mc: <ERROR> Failed to copy `/Users/ben/repo/s3perf/tools/testfile`. Put "http://127.0.0.1:7070/mybucket/testfile": dial tcp 127.0.0.1:7070: connect: connection refused
The panic can happen because the capacity of the header (`[]byte`) at
the point of the debug log line can be less than 2, while we were
unconditionally sending the first 2 bytes to the debug log.
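A minimal sketch of the guard, with `fmt` standing in for the project's debug logger; the point is clamping the slice length to the bytes actually present before slicing:

```go
package main

import "fmt"

// logHeaderPrefix clamps to the bytes actually present before slicing,
// avoiding the out-of-range panic when the header holds fewer than 2 bytes.
func logHeaderPrefix(header []byte) {
	n := min(2, len(header)) // min builtin requires Go 1.21+
	fmt.Printf("[DEBUG] chunk header prefix: %q\n", header[:n])
}

func main() {
	logHeaderPrefix([]byte{0x0d})       // 1 byte: logs just that byte
	logHeaderPrefix([]byte("\r\nbody")) // logs "\r\n"
}
```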
The debuglogger should be a top-level module, since we expect all
modules within the project to make use of it. If it's hidden in
s3api, contributors are less likely to use it outside of s3api.
Closes #1221
Adds debug logging for the `signed`/`unsigned` chunk readers.
Adds the `debuglogger.Infof` log method, which prints green info logs with an `[INFO]:` prefix.
The debug logging includes some chunk details: size, signature, trailers. It also prints out stash/release-stash operations.
The error cases are logged with the standard yellow `[DEBUG]:` prefix.
The `String to sign` block in the signed chunk reader is logged with purple horizontal borders and a title.
Fixes #1238
The signed chunk reader stashes the header bytes if it can't fully parse the chunk header. On the next `io.Reader` call, the stash is combined with the new buffer data to attempt parsing the header again. The stashing logic was broken due to the premature removal of the first two header bytes (`\r\n`). As a result, the stash was incomplete, leading to parsing issues on subsequent calls.
These changes fix the stashing logic and correct the buffer offset calculation in `parseChunkHeaderBytes`.
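A reduced sketch of the stash-and-retry pattern; the names and framing here are illustrative stand-ins, not the actual `ChunkReader` fields:

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
)

var errIncompleteHeader = errors.New("incomplete chunk header")

// chunkReader is an illustrative stand-in for the real ChunkReader.
type chunkReader struct {
	stash []byte // header bytes carried over from the previous Read call
}

// parseHeader tries to split one "<size>;chunk-signature=...\r\n" header off
// the front of buf. If the header is incomplete, the entire prefix is stashed
// untouched (the original bug stripped the leading "\r\n" too early), so the
// next call can retry once more data arrives.
func (cr *chunkReader) parseHeader(buf []byte) (header, rest []byte, err error) {
	if len(cr.stash) > 0 {
		buf = append(cr.stash, buf...) // recombine the stash with new data
		cr.stash = nil
	}
	i := bytes.Index(buf, []byte("\r\n"))
	if i < 0 {
		cr.stash = append([]byte(nil), buf...) // stash everything, not a suffix
		return nil, nil, errIncompleteHeader
	}
	return buf[:i], buf[i+2:], nil
}

func main() {
	cr := &chunkReader{}
	// The chunk header arrives split across two reads.
	if _, _, err := cr.parseHeader([]byte("400;chunk-sig")); err != nil {
		fmt.Println(err)
	}
	hdr, rest, _ := cr.parseHeader([]byte("nature=abc\r\ndata"))
	fmt.Printf("header=%q rest=%q\n", hdr, rest)
}
```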
Added missing debug logs in the `front-end` and `utility` functions.
Enhanced debug logging with the following improvements:
- Each debug message is now prefixed with `[DEBUG]` and appears in color.
- The full request URL is printed at the beginning of each debug log block.
- Request/response details are wrapped in framed sections for better readability.
- Headers are displayed in a colored box.
- XML request/response bodies are pretty-printed with indentation and color (one way to do this is sketched after this list).
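A sketch of the XML pretty-printing using only the standard library; the actual logger implementation may differ:

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"io"
)

// prettyXML re-encodes an XML document with two-space indentation by
// streaming tokens from a decoder into an indenting encoder.
func prettyXML(in []byte) ([]byte, error) {
	dec := xml.NewDecoder(bytes.NewReader(in))
	var out bytes.Buffer
	enc := xml.NewEncoder(&out)
	enc.Indent("", "  ")
	for {
		tok, err := dec.Token()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if err := enc.EncodeToken(tok); err != nil {
			return nil, err
		}
	}
	if err := enc.Flush(); err != nil {
		return nil, err
	}
	return out.Bytes(), nil
}

func main() {
	body := []byte(`<ListBucketResult><Name>mybucket</Name></ListBucketResult>`)
	pretty, _ := prettyXML(body)
	fmt.Println(string(pretty))
}
```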
Fixes #1165
The signed chunk encoding with trailers should return an API error for:
1. Invalid checksum - `(InvalidRequest) Value for x-amz-checksum-x trailing header is invalid.`
2. Incorrect checksum - `(BadDigest) The x you specified did not match the calculated checksum.`
where `x` can be `crc32`, `crc32c`, `sha1`, and so on.
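A hedged sketch of the distinction between the two cases; the error strings mirror the S3 responses above, while the real gateway maps them onto its own API error types:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// checkTrailerChecksum distinguishes a malformed trailer value
// (InvalidRequest) from a well-formed but mismatched one (BadDigest).
func checkTrailerChecksum(algo, declared, computed string) error {
	if _, err := base64.StdEncoding.DecodeString(declared); err != nil {
		return fmt.Errorf("InvalidRequest: Value for x-amz-checksum-%s trailing header is invalid", algo)
	}
	if declared != computed {
		return fmt.Errorf("BadDigest: The %s you specified did not match the calculated checksum", algo)
	}
	return nil
}

func main() {
	// Malformed base64 -> InvalidRequest; valid but wrong -> BadDigest.
	fmt.Println(checkTrailerChecksum("crc32", "!!!", "AAAAAA=="))
	fmt.Println(checkTrailerChecksum("crc32", "BBBBBB==", "AAAAAA=="))
}
```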
Closes #1159. Fixes #1161.
Implements signed chunk encoding with trailers in the gateway.
The signed encoding (both with and without trailers) is now handled by the `ChunkReader`.
Fixes the `ChunkReader` implementation to validate encoding headers byte by byte.
The chunk encoding with trailers follows the general signed chunk encoding pattern, but the final chunk includes the trailing signature (`x-amz-trailer-signature`) and the checksum header (`x-amz-checksum-x`, where `x` can be `crc32`, `crc32c`, `sha1`, `sha256`, or `crc64nvme`).
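For illustration, the final chunk in this encoding looks roughly like the following; values are placeholders and CRLF framing is shown explicitly:

```go
// Hedged illustration of the final chunk in signed aws-chunked
// encoding with trailers.
const finalChunkWithTrailer = "0;chunk-signature=<final-sig>\r\n" +
	"x-amz-checksum-crc32:<base64-checksum>\r\n" +
	"x-amz-trailer-signature:<trailer-sig>\r\n" +
	"\r\n"
```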
Adds validation for the `X-Amz-Trailer` header.
Fixes #1147
In the signed chunk encoding implementation, the final chunk header with 0 length contains the last signature.
Added verification of this last signature in the signed chunk encoding without trailers.
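For comparison, a hedged illustration of that final zero-length chunk without trailers, whose signature is now verified:

```go
// Placeholder value; the last signature rides on the 0-length chunk.
const finalChunk = "0;chunk-signature=<last-sig>\r\n\r\n"
```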
We were getting errors such as:
2025/02/07 19:24:28 Internal Error, write object data: write exceeds content length 87
whenever the chunk encoding did not have the correct chunk
signatures. The issue was that the chunk encoding reader read from
the underlying reader and then passed the full buffer back to the
caller whenever the underlying reader returned an error. This meant
that we were not processing the chunk headers within the buffer due
to the higher-level error, and could hand the longer, unprocessed
chunk-encoded stream back to the caller, which in turn tried to
write it to the object file, exceeding the content length limit.
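A reduced sketch of the corrected `Read` flow, with stand-in types; the point is that bytes returned alongside an error must still pass through chunk-header parsing (`io.Reader` explicitly permits returning both data and an error):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// chunkReader is an illustrative stand-in for the real ChunkReader.
type chunkReader struct {
	r io.Reader
}

// parseAndRemoveChunkInfo stands in for the real framing/signature logic.
func (cr *chunkReader) parseAndRemoveChunkInfo(buf []byte) (int, error) {
	// ... strip "<size>;chunk-signature=...\r\n" framing, verify signatures ...
	return len(buf), nil
}

func (cr *chunkReader) Read(p []byte) (int, error) {
	n, err := cr.r.Read(p)
	// Old behavior: when err != nil, p[:n] was handed back unprocessed.
	// Fixed behavior: always run the bytes read through chunk parsing first.
	m, perr := cr.parseAndRemoveChunkInfo(p[:n])
	if perr != nil {
		return m, perr
	}
	return m, err // surface the underlying error only after processing
}

func main() {
	cr := &chunkReader{r: strings.NewReader("some chunked payload")}
	buf := make([]byte, 64)
	n, err := cr.Read(buf)
	fmt.Println(n, err)
}
```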
Fixes #1056
An invalid chunk encoding, or parse errors that lead to parsing
invalid data, can cause a server panic if the remaining chunk header
is determined to be larger than the max buffer size. This was
previously seen when a client used chunk trailer checksums without
server-side support for this encoding. Example panic:
panic: runtime error: slice bounds out of range [4088:1024]
goroutine 5 [running]:
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseChunkHeaderBytes(0xc0003c4280, {0xc0000e6000?, 0x3000?, 0x423525?})
/home/tester/s3api/utils/chunk-reader.go:242 +0x492
github.com/versity/versitygw/s3api/utils.(*ChunkReader).parseAndRemoveChunkInfo(0xc0003c4280, {0xc0000e6000, 0x3000, 0x8000})
/home/tester/s3api/utils/chunk-reader.go:170 +0x20b
github.com/versity/versitygw/s3api/utils.(*ChunkReader).Read(0xc0003c4280, {0xc0000e6000, 0xc0000b41e0?, 0x8000})
/home/tester/s3api/utils/chunk-reader.go:91 +0x11e
This fix validates the data length before copying into the
temporary buffer, preventing the panic and instead returning an
error.
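A hedged sketch of the added check; names are illustrative, not the actual `parseChunkHeaderBytes` internals:

```go
package main

import "fmt"

// stashHeader validates the header length against the fixed-size temporary
// buffer before copying, returning an error instead of letting the slice
// expression panic with "slice bounds out of range".
func stashHeader(tmp, header []byte) (int, error) {
	if len(header) > len(tmp) {
		return 0, fmt.Errorf("chunk header length %d exceeds max buffer size %d",
			len(header), len(tmp))
	}
	return copy(tmp, header), nil
}

func main() {
	tmp := make([]byte, 1024)
	if _, err := stashHeader(tmp, make([]byte, 4096)); err != nil {
		fmt.Println(err) // malformed encoding now errors instead of panicking
	}
}
```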