The scoutfs filesystem allows setting project IDs on files and
directories for project-level accounting. This adds the
option to set the project ID for the following:
- create bucket
- put object
- put part
- complete multipart upload
The project ID will only be set if all of the following are true:
- set project id option enabled
- filesystem format version supports projects (version >1)
- account project id > 0
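A minimal sketch of that gating logic; the names here are illustrative,
not the gateway's actual symbols:

```go
// shouldSetProjectID mirrors the three conditions above.
func shouldSetProjectID(optEnabled bool, fsFormatVersion, projectID uint64) bool {
	return optEnabled && // set project id option enabled
		fsFormatVersion > 1 && // format version supports projects
		projectID > 0 // account project id > 0
}
```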
Some systems may choose to allow non-AWS-compliant bucket names
and/or handle the bucket name validation in the backend instead.
This adds the option to turn off the strict bucket name validation
checks in the frontend API handlers.
When frontend bucket name validation is disabled, we need to do
sanity checks for POSIX-compliant names in the posix/scoutfs
backends. This is automatically enabled when strict bucket
name validation is disabled.
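As a rough sketch of the kind of backend sanity check involved,
assuming a bucket maps to a single directory entry (the function
name is illustrative):

```go
import "strings"

// validPosixBucketName rejects names that cannot be a single, safe
// path component on a POSIX filesystem.
func validPosixBucketName(name string) bool {
	if name == "" || name == "." || name == ".." {
		return false
	}
	// A bucket becomes one directory entry, so path separators and
	// NUL bytes must be rejected.
	return !strings.ContainsAny(name, "/\x00")
}
```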
Fixes #1564
The debuglogger should be a top level module since we expect
all modules within the project to make use of this. If it's
hidden in s3api, then contributors are less likely to make
use of it outside of s3api.
Implements a middleware that validates incoming bucket and object names before authentication. This helps prevent malicious attacks that attempt to access restricted or unreachable data in `POSIX`.
Adds test cases to cover such attack scenarios, including false negatives where encoded paths are used to try accessing resources outside the intended bucket.
Removes bucket validation from all other layers, including `controllers` and both the `POSIX` and `ScoutFS` backends, by moving the logic entirely into the middleware layer.
When multiple uploads with the same object key are racing, we can
end up with an EEXIST when trying to link the final object into
the namespace. When this happens, we should just remove the
existing file and try again since the semantics are that the
last upload should win.
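A sketch of that last-writer-wins retry loop; paths and the function
name are illustrative:

```go
import (
	"errors"
	"io/fs"
	"os"
)

// linkLastWins links the completed upload into the namespace,
// removing an existing entry and retrying on EEXIST so the last
// upload wins.
func linkLastWins(tmpPath, objPath string) error {
	for {
		err := os.Link(tmpPath, objPath)
		if !errors.Is(err, fs.ErrExist) {
			return err // nil on success, or some other real error
		}
		// A racing upload already linked this name: remove it and retry.
		if err := os.Remove(objPath); err != nil && !errors.Is(err, fs.ErrNotExist) {
			return err
		}
	}
}
```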
The ETag may or may not be quoted, so the check that verifies the
part ETag must strip the quotes before checking for equality. This
check now matches the posix implementation.
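The normalization amounts to something like this (a minimal sketch):

```go
import "strings"

// etagsEqual compares ETags regardless of whether either side is
// wrapped in double quotes.
func etagsEqual(a, b string) bool {
	return strings.Trim(a, `"`) == strings.Trim(b, `"`)
}
```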
The xml encoding for the s3.CompleteMultipartUploadOutput response
type was not producing exactly the right field names for the
expected complete multipart upload result.
This change follows the pattern we have had to do for other xml
responses to create our own type that will encode better to the
expected response.
This will change the backend.Backend interface, so plugins and
other backends will have to make the corresponding changes.
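The pattern looks roughly like this; the field set shown is
illustrative of the expected CompleteMultipartUploadResult element,
not the gateway's exact type:

```go
import "encoding/xml"

// CompleteMultipartUploadResult encodes with the exact element names
// S3 clients expect, rather than whatever the SDK output type produces.
type CompleteMultipartUploadResult struct {
	XMLName  xml.Name `xml:"CompleteMultipartUploadResult"`
	Location string   `xml:"Location"`
	Bucket   string   `xml:"Bucket"`
	Key      string   `xml:"Key"`
	ETag     string   `xml:"ETag"`
}
```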
Added missing debug logs in the `front-end` and `utility` functions.
Enhanced debug logging with the following improvements:
- Each debug message is now prefixed with [DEBUG] and appears in color.
- The full request URL is printed at the beginning of each debug log block.
- Request/response details are wrapped in framed sections for better readability.
- Headers are displayed in a colored box.
- XML request/response bodies are pretty-printed with indentation and color.
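The prefix and color convention is along these lines (ANSI escape
codes; the real logger's helpers may differ):

```go
import "fmt"

const (
	cyan  = "\033[36m"
	reset = "\033[0m"
)

// debugf prints a colored [DEBUG]-prefixed message; a sketch of the
// convention, not the actual logger implementation.
func debugf(format string, args ...any) {
	fmt.Printf(cyan+"[DEBUG]"+reset+" "+format+"\n", args...)
}
```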
We had put the error handling in for read-only filesystems
when O_TMPFILE is supported, but missed the CreateTemp() fallback
case. This change fixes that case to also return the method-not-allowed
error.
This also adds the error handling for the scoutfs case as well.
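A hedged sketch of the fallback-path check; the error value below is a
placeholder for the gateway's real method-not-allowed S3 error:

```go
import (
	"errors"
	"os"
	"syscall"
)

// errMethodNotAllowed is a placeholder for the gateway's real error.
var errMethodNotAllowed = errors.New("method not allowed")

// createTempFallback maps a read-only filesystem error from the
// CreateTemp() fallback to the same error as the O_TMPFILE path.
func createTempFallback(dir, pattern string) (*os.File, error) {
	f, err := os.CreateTemp(dir, pattern)
	if errors.Is(err, syscall.EROFS) {
		return nil, errMethodNotAllowed
	}
	return f, err
}
```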
Fixes #1195
This syncs recent posix updates into the scoutfs backend, including
the extra metadata such as Content-Disposition,
Content-Language, Cache-Control and Expires. This also fixes the
directory object listings that have a double trailing slash due
to the change in the backend.Walk().
This also simplifies head-object to call the posix one and then
post-process for glacier changes. This allows keeping in closer
sync with posix head-object over time.
The part files for multipart uploads are considered temporary
files and should not be archived by default. This adds the
noarchive attribute to the part files to prevent scoutam from
trying to archive these.
There is a new parameter, disablenoarchive, that will prevent
adding the noarchive attribute to these files for the case
where there is a desire to archive these temp files.
This was previously not including the bucket directory for the
multipart temp file cleanup. This fixes leftover files in the tmp
directories after multipart uploads complete.
Fixes #1004
Fixes #1122
Fixes #1120
Separates `GetObject` and `UploadPartCopy` range parsing/validation.
`GetObject` returns a successful response if acceptRange is invalid.
Adjusts the range upper limit if it exceeds the actual object's size for `GetObject`.
Corrects the `ContentRange` in the `GetObject` response.
Fixes the `UploadPartCopy` action copy source range parsing/validation.
`UploadPartCopy` returns `InvalidArgument` if the copy source range is not valid.
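A simplified sketch of the GetObject side of that clamping; parsing of
the "bytes=start-end" header is elided and the names are illustrative:

```go
import "fmt"

// clampRange caps the requested end offset at the object size and
// builds the corrected Content-Range header value.
func clampRange(start, end, size int64) (int64, int64, string) {
	if end < 0 || end >= size {
		end = size - 1 // adjust the upper limit to the actual object size
	}
	return start, end, fmt.Sprintf("bytes %d-%d/%d", start, end, size)
}
```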
The filesystem will return ENOTDIR if we try to access a file path
where a parent directory within the path already exists as a file.
In this case we need to return a standard 404 no such key since
the requested object does not exist within the filesystem.
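The mapping is roughly as follows, with a placeholder standing in for
the gateway's NoSuchKey response:

```go
import (
	"errors"
	"syscall"
)

// errNoSuchKey is a placeholder for the gateway's 404 NoSuchKey error.
var errNoSuchKey = errors.New("NoSuchKey")

// mapPathErr turns ENOTDIR (a parent component is a regular file)
// into a standard missing-key error.
func mapPathErr(err error) error {
	if errors.Is(err, syscall.ENOTDIR) {
		return errNoSuchKey
	}
	return err
}
```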
Fixes #942
We had 0755 hard-coded for newly created directories before. This
adds a user option to configure the default mode permissions
for newly created directories.
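Parsing such an option is simple; a sketch assuming the mode arrives
as an octal string like "0755":

```go
import (
	"io/fs"
	"strconv"
)

// parseDirMode converts an octal string such as "0755" into a file
// mode for newly created directories.
func parseDirMode(s string) (fs.FileMode, error) {
	m, err := strconv.ParseUint(s, 8, 32)
	if err != nil {
		return 0, err
	}
	return fs.FileMode(m), nil
}
```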
Fixes #878
This fixes the cases for racing uploads with the same object names.
Before, we were making some bad assumptions about what would cause
an error when trying to link/rename the final object name into
the namespace, but missed the case where another upload for the
same name could be racing with this upload, causing an incorrect
error.
This also changes the order of setting metadata to prevent
accidental setting of metadata for the current upload to another
racing upload.
This also fixes auth.CheckObjectAccess() when objects are removed
while it runs.
Fixes #854
It is better if we let the s3response module handle the XML
formatting specifics, and let the backends not worry
about how to format the time fields. This should help to
prevent any future backend modifications or additions from
accidental incorrect time formatting.
Changed the ListObjectsV2 and ListObjects actions' return types from
*s3.ListObjects(V2)Output to s3response.ListObjects(V2)Result.
Changed the listing objects timestamp to RFC3339 to match AWS
S3 objects timestamp.
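One way to centralize this is a wrapper type whose XML marshaling
always emits RFC3339; a sketch of the pattern, not the exact
s3response type:

```go
import (
	"encoding/xml"
	"time"
)

// rfc3339Time marshals as an RFC3339 timestamp so backends can hand
// over plain time.Time values and never touch the format.
type rfc3339Time struct{ time.Time }

func (t rfc3339Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
	return e.EncodeElement(t.UTC().Format(time.RFC3339), start)
}
```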
Fixes #752
The posix limits won't exactly match up with the AWS key length
limits because posix has component length limits as well as path
length limits.
This now responds with the AWS-compatible KeyTooLongError under
these conditions.
Note that delete object returns success even in the error cases.
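The mapping amounts to something like this, with a placeholder error
value; both component and full-path length violations surface as
ENAMETOOLONG:

```go
import (
	"errors"
	"syscall"
)

// errKeyTooLong is a placeholder for the AWS-compatible KeyTooLongError.
var errKeyTooLong = errors.New("KeyTooLongError")

// mapKeyErr converts the posix name/path length error into the
// S3-style response.
func mapKeyErr(err error) error {
	if errors.Is(err, syscall.ENAMETOOLONG) {
		return errKeyTooLong
	}
	return err
}
```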
Fixes #755
The API handlers and backend were stripping trailing "/" in object
paths. So if an object existed and a request came in for head/get/delete/copy
for that same name but with a trailing "/" indicating the request should
be for a directory object, the "/" would be stripped and the request
would be handled for the incorrect file object.
This fix adds checks to handle the case with the trailing "/"
in the request.
Fixes #709
For large directories, the treewalk can take longer than the
client request timeout. If the client times out the request
then we need to stop walking the filesystem and just return
the context error.
This should prevent the gateway from consuming system resources
unnecessarily after an incoming request is terminated.
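A sketch of threading the request context through the walk:

```go
import (
	"context"
	"io/fs"
	"path/filepath"
)

// walkBucket stops the tree walk as soon as the request context is
// canceled, returning the context error up through WalkDir.
func walkBucket(ctx context.Context, root string, visit fs.WalkDirFunc) error {
	return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if cerr := ctx.Err(); cerr != nil {
			return cerr // client gave up; stop consuming resources
		}
		return visit(path, d, err)
	})
}
```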
This adds the ability to treat symlinks to directories at the top
level gateway directory as buckets the same as normal directories.
This could be a potential security issue allowing traversal into
other filesystems within the system, so is defaulted to off. This
can be enabled when specifically needed for both posix and scoutfs
backend systems.
Fixes #644
The restore object api request handler was incorrectly trying to
unmarshal the request body, but for the standard (all?) case the
request body is empty. We only need the bucket and object params
for now.
This also adds a fix to actually honor the enable glacier mode
in scoutfs.
Add meta.MetadataStorer compatibility to scoutfs so that scoutfs
is using the same interface as posix. This fixes the metadata
retrieval and adds the recently supported object lock compatibility
as well.
When the filesystem quota is exceeded, the gateway will now return
the error:
S3 error: 403 (QuotaExceeded):
Your request was denied due to quota exceeded.
This will help clients better detect upload errors caused by
exceeded quotas.
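The detection side is a straightforward errno mapping; a sketch with
a placeholder for the gateway's QuotaExceeded error:

```go
import (
	"errors"
	"syscall"
)

// errQuotaExceeded stands in for the 403 QuotaExceeded S3 error.
var errQuotaExceeded = errors.New("QuotaExceeded")

// mapWriteErr surfaces filesystem quota errors as the S3 error above.
func mapWriteErr(err error) error {
	if errors.Is(err, syscall.EDQUOT) {
		return errQuotaExceeded
	}
	return err
}
```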
Fixes #483
We had some duplicated code that we can bring into the backend
package to remove the duplication. This moves the mkdir
implementation into backend so that both posix and scoutfs can
call the same implementation.
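Illustratively, the shared helper might look like this; the ownership
handling is a guess at the kind of logic being deduplicated, and the
signature is not the actual backend package API:

```go
import (
	"io/fs"
	"os"
)

// MkdirAll creates the directory tree and sets ownership on the leaf
// directory so posix and scoutfs share one implementation.
func MkdirAll(path string, uid, gid int, mode fs.FileMode) error {
	if err := os.MkdirAll(path, mode); err != nil {
		return err
	}
	// Intermediate directory ownership is elided in this sketch.
	return os.Chown(path, uid, gid)
}
```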