mirror of https://github.com/seaweedfs/seaweedfs.git (synced 2026-05-14 05:41:29 +00:00)
* fix(s3tests): wire lifecycle worker for expiration suite
The upstream s3-tests `test_lifecycle_expiration` / `test_lifecyclev2_expiration`
exercise the "set rule, wait, verify deletion" path. Phase 4 (#9367) intentionally
stripped the PUT-time back-stamp, so pre-existing objects no longer pick up TtlSec
on a freshly-applied rule. The bare-bones `weed -s3` in the s3tests CI had
nothing left driving expiration.
Three changes that work together:
- Engine scales `Days` by `util.LifeCycleInterval`. Production keeps the 24h day;
the `s3tests` build tag shrinks it to 10s so a `Days: 1` rule completes inside
the suite's 30s polling window. Exported `DaysToDuration` so sibling-package
tests pin to the same scale.
- Scheduler/dispatcher tick defaults split into `_default` / `_s3tests` files.
Production stays 5s/30s/5m; the test build runs at 500ms/2s/2s so deletions
land within a couple ticks of becoming due.
- s3tests.yml spawns `weed shell s3.lifecycle.run-shard -shards 0-15 -events 0
-runtime 1800s` alongside the s3 server in both the basic and SQL blocks; the
shell command runs the full pipeline (reader + scheduler + dispatcher) for the
duration of the suite. `test_lifecycle_expiration_versioning_enabled` is left
out for now — versioned-bucket expiration via the worker still needs its own
pass.
Drive-by: bump `TestWorkerDefaultJobTypes` to 7 to match the registered
handler count (8b87ceb0d updated `mini_plugin_test.go` for the s3_lifecycle
plugin but missed this twin test).
Two retention-gate engine tests `t.Skip` under the s3tests build because they
rely on absolute lookback-vs-retention math that the day-rescale collapses; the
prod build still covers them.
* review: harden lifecycle worker spawn + assert handler identity
- Workflow: aliveness check on the backgrounded `weed shell` (a bad command
  exits in under 1s, and the suite would otherwise just time out with no
  signal); move
worker/server teardown into a `trap cleanup EXIT` so failure paths still
print the worker log and reap the data dir.
- worker_test: check the actual job-type set by name, not just the count.
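Comparing by name means a drifted registration fails with the missing handler spelled out rather than an off-by-one count. A hedged sketch of that check; the function and the job-type names below are invented for illustration, not the worker's real identifiers:

```go
package main

import (
	"fmt"
	"sort"
)

// missing reports which expected job types are absent from the registered
// set, so a failing test names the handler instead of printing "7 != 8".
func missing(want, registered []string) []string {
	have := map[string]bool{}
	for _, r := range registered {
		have[r] = true
	}
	var out []string
	for _, w := range want {
		if !have[w] {
			out = append(out, w)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	// Hypothetical job-type names for illustration only.
	want := []string{"vacuum", "ec_encode", "s3_lifecycle"}
	registered := []string{"vacuum", "ec_encode"}
	fmt.Println(missing(want, registered)) // names the absent handler
}
```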
* fix(shell): keep s3.lifecycle.run-shard alive when no rules exist yet
The s3-tests CI runs the worker BEFORE any test creates a bucket, so
LoadCompileInputs returns empty and the shell command was bailing out
with "no buckets with enabled lifecycle rules found" within ~1s. The
aliveness check then fired exit 1 before tox ever started.
Two changes:
- Don't early-exit on empty inputs. Compile against the empty set, log a
one-liner, and let the pipeline run normally — the meta-log subscription
is already up, so events for buckets created later DO arrive; they just
need the engine to know about them when they do.
- Add `-refresh <duration>` (default 5m, 2s in s3tests CI) that
periodically re-runs LoadCompileInputs + engine.Compile so rules added
after startup land in the snapshot the dispatcher reads on its next
tick. Production deployments keep the 5m default; only the CI workflow
drops to 2s.
Workflow passes `-refresh 2s` in both basic and SQL blocks.
* fix(shell): backfill pre-rule entries via bootstrap walker
The reader-driven path only sees meta-log events created AFTER its
engine snapshot knows the rule. The s3-tests CI scenario PUTs objects
first, then PUTs the lifecycle config, so by the time the engine
refresh picks up the new bucket the object events have already been
seen-and-dropped (BucketActionKeys returned empty for the bucket).
Wire bootstrap.Walk into the shell command:
- bucketBootstrapper tracks buckets seen so far. kickOffNew spawns one
loop goroutine per fresh bucket.
- Each goroutine re-walks the bucket every walkInterval (defaults to
the same value as -refresh, i.e. 2s in s3tests CI, 5m in prod) and
feeds each entry through bootstrap.Walk; due actions dispatch via a
direct LifecycleDelete RPC. Not-yet-due entries are silently skipped
and picked up on a later iteration once they age past their (rescaled
or real) threshold.
- LifecycleDelete is called with no expected_identity; the server-side
identityMatches treats nil as "skip CAS", which is the right call
for bootstrap (the bootstrap entry doesn't carry chunk fid /
extended hash anyway).
The dispatcher's pkg-private toProtoActionKind is duplicated in the
shell file rather than exported, since the shape is six lines and the
reverse import would pull a proto dep into the s3lifecycle root.
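The seen-set logic behind `kickOffNew` can be sketched like this; the type and method names mirror the description above, but the shapes are illustrative, not the real shell-command code, and `walk` stands in for the `bootstrap.Walk`-driven loop:

```go
package main

import (
	"fmt"
	"sync"
)

// bucketBootstrapper tracks buckets seen so far; each fresh bucket gets
// exactly one walker goroutine, no matter how many engine refreshes
// re-report it.
type bucketBootstrapper struct {
	mu   sync.Mutex
	seen map[string]bool
}

func (b *bucketBootstrapper) kickOffNew(buckets []string, walk func(bucket string)) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, bucket := range buckets {
		if b.seen[bucket] {
			continue // already has a walker
		}
		b.seen[bucket] = true
		go walk(bucket)
	}
}

func main() {
	var wg sync.WaitGroup
	var mu sync.Mutex
	started := map[string]int{}
	b := &bucketBootstrapper{seen: map[string]bool{}}
	walk := func(bucket string) {
		mu.Lock()
		started[bucket]++
		mu.Unlock()
		wg.Done()
	}
	wg.Add(2)
	b.kickOffNew([]string{"a", "b"}, walk)
	b.kickOffNew([]string{"a", "b"}, walk) // second refresh: no new walkers
	wg.Wait()
	fmt.Println(started["a"], started["b"]) // each bucket walked once
}
```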
* refactor(s3/lifecycle): hoist bucket bootstrapper into scheduler pkg
The shell command got the backfill in the previous commit but the worker
plugin (weed/worker/tasks/s3_lifecycle/handler.go) drives Scheduler.Run
directly and missed it — same root cause: the reader-driven path only
sees events created after the rule lands, so a daily cron picking up a
freshly-PUT rule wouldn't expire any pre-rule object.
Move the looping bucket walker into scheduler.BucketBootstrapper:
- Scheduler.Run now constructs one and calls KickOffNew on every engine
refresh. Per-bucket goroutines re-walk every BootstrapWalkInterval
(defaults to RefreshInterval — 5m in prod, 2s under s3tests).
- The shell command consumes the same struct instead of its own copy
so the two paths can't drift in semantics.
* refactor(s3/lifecycle): walk-once + schedule via event injection
Previous per-bucket walker re-listed every WalkInterval forever. For a
bucket with N objects under a long rule, the worker did O(N * runtime /
walkInterval) listings even when nothing was newly due — way too much
for production-scale buckets.
New approach: walk each bucket exactly once on first sight, synthesize
one *reader.Event per existing entry, push it onto Pipeline.events.
Router.Route builds a Match with DueTime=mtime+delay; future-due matches
sit in the per-shard Schedule and fire when their DueTime arrives.
Currently-due matches fire on the very next dispatch tick.
Wiring:
- dispatcher.Pipeline lifts its events channel into a struct field
with sync.Once init, and exposes InjectEvent(ctx, ev). Reader no
longer closes the channel — the dispatch goroutine exits on runCtx
cancellation, which works the same as channel-close did.
- scheduler.BucketBootstrapper drops the WalkInterval ticker. KickOffNew
spawns one walker goroutine per fresh bucket; the goroutine lists,
synthesizes events, then exits.
- scheduler.Scheduler builds its pipelines up front and exposes a
pipelineFanout (shard -> Pipeline) as the EventInjector, so a multi-
worker scheduler routes each synthesized event to the pipeline that
owns its shard.
- Shell command's single-pipeline path passes pipeline.InjectEvent
directly.
Synthesized events carry TsNs=0; dispatcher.advance treats that as a
no-op so the reader's persisted cursor isn't ratcheted past unprocessed
meta-log events. Identity (HeadFid + ExtendedHash) is still computed
from the real filer entry, so the server's identity-CAS catches an
overwrite between bootstrap and dispatch.
* debug(s3tests): make lifecycle worker progress visible in CI logs
The previous CI failure dumped an empty $LC_LOG even though the worker
was running. Two reasons:
1. weed shell suppresses glog by default (logtostderr / alsologtostderr
set to false). Pass `-debug` so the bootstrapper's V(0) lines reach
stderr instead of disappearing into /tmp/weed.*.log.
2. cleanup used `kill -9` which skips Go's stdout flush. SIGTERM first
with a 1s grace, then SIGKILL the holdout, then read the log.
While here: bump the bootstrap walker's two informational logs to V(0)
so the diagnosis from CI doesn't require -v=1 on the worker.
* fix(s3/lifecycle/dispatcher): refresh snap on every event
Pipeline.Run captured snap at startup and only refreshed it on the
dispatch tick. With bootstrap event injection, the walker pushes events
seconds after engine.Compile sees the bucket — typically WITHIN the
same dispatch interval. Routing against the cached (empty) snap then
silently dropped every match because BucketActionKeys returned nil for
the bucket-not-yet-in-snapshot case.
Re-fetch on each event. Engine.Snapshot is an atomic.Pointer.Load, so
the cost is negligible. The dispatch-tick branch keeps using a fresh
local read for its own loop, so its semantics are unchanged.
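The per-event re-fetch is cheap because the engine publishes its compiled snapshot through an `atomic.Pointer`. A minimal sketch with illustrative types (not the real `Engine.Snapshot` shape) showing why routing against a startup-captured snap drops matches while a per-event load does not:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// snapshot stands in for the engine's compiled rule set.
type snapshot struct{ buckets map[string]bool }

// current is the atomically-published snapshot; Load is a single pointer
// read, so doing it per event costs essentially nothing.
var current atomic.Pointer[snapshot]

// route re-fetches the snapshot on every event instead of using a copy
// captured at startup.
func route(bucket string) bool {
	snap := current.Load()
	return snap != nil && snap.buckets[bucket]
}

func main() {
	current.Store(&snapshot{buckets: map[string]bool{}})
	fmt.Println(route("b1")) // false: bucket not compiled into the snapshot yet
	current.Store(&snapshot{buckets: map[string]bool{"b1": true}})
	fmt.Println(route("b1")) // true: the per-event Load sees the refresh
}
```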
1216 lines
60 KiB
YAML
name: "Ceph S3 tests"
|
|
|
|
on:
|
|
push:
|
|
branches: [ master ]
|
|
pull_request:
|
|
branches: [ master ]
|
|
|
|
concurrency:
|
|
group: ${{ github.head_ref }}/s3tests
|
|
cancel-in-progress: true
|
|
|
|
permissions:
|
|
contents: read
|
|
|
|
jobs:
|
|
basic-s3-tests:
|
|
name: Basic S3 tests (KV store)
|
|
runs-on: ubuntu-22.04
|
|
timeout-minutes: 15
|
|
steps:
|
|
- name: Check out code into the Go module directory
|
|
uses: actions/checkout@v6
|
|
|
|
- name: Set up Go 1.x
|
|
uses: actions/setup-go@v6
|
|
with:
|
|
go-version-file: 'go.mod'
|
|
id: go
|
|
|
|
- name: Set up Python
|
|
uses: actions/setup-python@v6
|
|
with:
|
|
python-version: '3.9'
|
|
|
|
- name: Clone s3-tests
|
|
run: |
|
|
git clone https://github.com/ceph/s3-tests.git
|
|
cd s3-tests
|
|
sudo apt-get update -qq
|
|
sudo apt-get install -y -qq libxml2-dev libxslt1-dev zlib1g-dev
|
|
pip install -r requirements.txt
|
|
pip install tox
|
|
pip install -e .
|
|
|
|
- name: Fix S3 tests bucket creation conflicts
|
|
run: |
|
|
python3 test/s3/fix_s3_tests_bucket_conflicts.py
|
|
env:
|
|
S3_TESTS_PATH: s3-tests
|
|
|
|
- name: Run Basic S3 tests
|
|
timeout-minutes: 15
|
|
env:
|
|
S3TEST_CONF: ../docker/compose/s3tests.conf
|
|
shell: bash
|
|
run: |
|
|
cd weed
|
|
go install -tags s3tests -buildvcs=false
|
|
set -x
|
|
# Create clean data directory for this test run
|
|
export WEED_DATA_DIR="/tmp/seaweedfs-s3tests-$(date +%s)"
|
|
mkdir -p "$WEED_DATA_DIR"
|
|
weed -v 3 server -filer -filer.maxMB=64 -s3 -ip.bind 0.0.0.0 \
|
|
-dir="$WEED_DATA_DIR" \
|
|
-master.raftHashicorp -master.electionTimeout 1s -master.volumeSizeLimitMB=100 \
|
|
-volume.max=100 -volume.preStopSeconds=1 \
|
|
-master.port=9333 -volume.port=8080 -filer.port=8888 -s3.port=8000 -metricsPort=9324 \
|
|
-s3.allowDeleteBucketNotEmpty=true -s3.config="$GITHUB_WORKSPACE/docker/compose/s3.json" -master.peers=none &
|
|
pid=$!
|
|
|
|
# Wait for all SeaweedFS components to be ready
|
|
echo "Waiting for SeaweedFS components to start..."
|
|
for i in {1..30}; do
|
|
if curl -s http://localhost:9333/cluster/status > /dev/null 2>&1; then
|
|
echo "Master server is ready"
|
|
break
|
|
fi
|
|
echo "Waiting for master server... ($i/30)"
|
|
sleep 2
|
|
done
|
|
|
|
for i in {1..30}; do
|
|
if curl -s http://localhost:8080/status > /dev/null 2>&1; then
|
|
echo "Volume server is ready"
|
|
break
|
|
fi
|
|
echo "Waiting for volume server... ($i/30)"
|
|
sleep 2
|
|
done
|
|
|
|
for i in {1..30}; do
|
|
if curl -s http://localhost:8888/ > /dev/null 2>&1; then
|
|
echo "Filer is ready"
|
|
break
|
|
fi
|
|
echo "Waiting for filer... ($i/30)"
|
|
sleep 2
|
|
done
|
|
|
|
for i in {1..30}; do
|
|
if curl -s http://localhost:8000/ > /dev/null 2>&1; then
|
|
echo "S3 server is ready"
|
|
break
|
|
fi
|
|
echo "Waiting for S3 server... ($i/30)"
|
|
sleep 2
|
|
done
|
|
|
|
echo "All SeaweedFS components are ready!"
|
|
cd ../s3-tests
|
|
sed -i "s/assert prefixes == \['foo%2B1\/', 'foo\/', 'quux%20ab\/'\]/assert prefixes == \['foo\/', 'foo%2B1\/', 'quux%20ab\/'\]/" s3tests/functional/test_s3.py
|
|
|
|
# Debug: Show the config file contents
|
|
echo "=== S3 Config File Contents ==="
|
|
cat ../docker/compose/s3tests.conf
|
|
echo "=== End Config ==="
|
|
|
|
# Additional wait for S3-Filer integration to be fully ready
|
|
echo "Waiting additional 10 seconds for S3-Filer integration..."
|
|
sleep 10
|
|
|
|
# Test S3 connection before running tests
|
|
echo "Testing S3 connection..."
|
|
for i in {1..10}; do
|
|
if curl -s -f http://localhost:8000/ > /dev/null 2>&1; then
|
|
echo "S3 connection test successful"
|
|
break
|
|
fi
|
|
echo "S3 connection test failed, retrying... ($i/10)"
|
|
sleep 2
|
|
done
|
|
|
|
echo "✅ S3 server is responding, starting tests..."
|
|
|
|
# Spawn the lifecycle worker so test_lifecycle_expiration etc. have
|
|
# something driving deletions. The s3tests build tag rescales one
|
|
# day to LifeCycleInterval=10s, so a 1d rule fires within ~10s of
|
|
# the upload's mtime; -dispatch / -checkpoint defaults are already
|
|
# tightened under the same build tag.
|
|
LC_LOG=/tmp/lifecycle-worker.log
|
|
# -debug routes glog to stderr so the bootstrap walker's progress
|
|
# shows up in $LC_LOG; without it weed shell silences glog.
|
|
(echo "s3.lifecycle.run-shard -shards 0-15 -s3 localhost:18000 -events 0 -runtime 1800s -refresh 2s" && echo exit) \
|
|
| weed shell -debug -master=localhost:9333 \
|
|
> "$LC_LOG" 2>&1 &
|
|
lc_pid=$!
|
|
# Aliveness check: a bad shell command exits in <1s and the suite
|
|
# would otherwise just timeout the expiration tests with no signal.
|
|
sleep 2
|
|
if ! kill -0 "$lc_pid" 2>/dev/null; then
|
|
echo "lifecycle worker died on startup"
|
|
tail -50 "$LC_LOG" 2>/dev/null || true
|
|
exit 1
|
|
fi
|
|
echo "lifecycle worker pid=$lc_pid"
|
|
|
|
# bash -e exits the step on the first tox failure, so move teardown
|
|
# into a trap to guarantee the worker log + data dir reach the runner.
|
|
cleanup() {
|
|
status=$?
|
|
# SIGTERM first so the worker's stdout flushes; SIGKILL is the
|
|
# bash fallback if it ignores TERM. Reading the log AFTER the
|
|
# graceful-stop window catches the bootstrap walker's progress.
|
|
kill -TERM "$lc_pid" 2>/dev/null || true
|
|
kill -TERM "$pid" 2>/dev/null || true
|
|
sleep 1
|
|
if [ "$status" -ne 0 ]; then
|
|
echo "=== lifecycle worker log (tail) ==="
|
|
tail -200 "$LC_LOG" 2>/dev/null || true
|
|
fi
|
|
kill -9 "$lc_pid" 2>/dev/null || true
|
|
kill -9 "$pid" 2>/dev/null || true
|
|
rm -rf "$WEED_DATA_DIR" 2>/dev/null || true
|
|
}
|
|
trap cleanup EXIT
|
|
|
|
tox -- \
|
|
s3tests/functional/test_s3.py::test_bucket_list_empty \
|
|
s3tests/functional/test_s3.py::test_bucket_list_distinct \
|
|
s3tests/functional/test_s3.py::test_bucket_list_many \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_many \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_basic \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_basic \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_encoding_basic \
|
|
s3tests/functional/test_s3.py::test_bucket_list_encoding_basic \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_prefix \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_prefix \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_prefix_ends_with_delimiter \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_prefix_ends_with_delimiter \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_alt \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_alt \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_prefix_underscore \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_prefix_underscore \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_percentage \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_percentage \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_whitespace \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_whitespace \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_dot \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_dot \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_unreadable \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_unreadable \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_empty \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_empty \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_none \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_none \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_list_delimiter_not_skip_special \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_basic \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_basic \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_alt \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_alt \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_prefix_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_prefix_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_delimiter_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_delimiter_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_prefix_delimiter_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_prefix_delimiter_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_fetchowner_notempty \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_fetchowner_defaultempty \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_fetchowner_empty \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_basic \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_basic \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_alt \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_alt \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_empty \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_empty \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_none \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_none \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_not_exist \
|
|
s3tests/functional/test_s3.py::test_bucket_list_prefix_unreadable \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_prefix_unreadable \
|
|
s3tests/functional/test_s3.py::test_bucket_list_maxkeys_one \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_maxkeys_one \
|
|
s3tests/functional/test_s3.py::test_bucket_list_maxkeys_zero \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_maxkeys_zero \
|
|
s3tests/functional/test_s3.py::test_bucket_list_maxkeys_none \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_maxkeys_none \
|
|
s3tests/functional/test_s3.py::test_bucket_list_unordered \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_unordered \
|
|
s3tests/functional/test_s3.py::test_bucket_list_maxkeys_invalid \
|
|
s3tests/functional/test_s3.py::test_bucket_list_marker_none \
|
|
s3tests/functional/test_s3.py::test_bucket_list_marker_empty \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_continuationtoken_empty \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_continuationtoken \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_both_continuationtoken_startafter \
|
|
s3tests/functional/test_s3.py::test_bucket_list_marker_unreadable \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_startafter_unreadable \
|
|
s3tests/functional/test_s3.py::test_bucket_list_marker_not_in_list \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_startafter_not_in_list \
|
|
s3tests/functional/test_s3.py::test_bucket_list_marker_after_list \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_startafter_after_list \
|
|
s3tests/functional/test_s3.py::test_bucket_list_return_data \
|
|
s3tests/functional/test_s3.py::test_bucket_list_objects_anonymous \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_objects_anonymous \
|
|
s3tests/functional/test_s3.py::test_bucket_list_objects_anonymous_fail \
|
|
s3tests/functional/test_s3.py::test_bucket_listv2_objects_anonymous_fail \
|
|
s3tests/functional/test_s3.py::test_bucket_list_long_name \
|
|
s3tests/functional/test_s3.py::test_bucket_list_special_prefix \
|
|
s3tests/functional/test_s3.py::test_bucket_delete_notexist \
|
|
s3tests/functional/test_s3.py::test_bucket_create_delete \
|
|
s3tests/functional/test_s3.py::test_object_read_not_exist \
|
|
s3tests/functional/test_s3.py::test_multi_object_delete \
|
|
s3tests/functional/test_s3.py::test_multi_objectv2_delete \
|
|
s3tests/functional/test_s3.py::test_object_head_zero_bytes \
|
|
s3tests/functional/test_s3.py::test_object_write_check_etag \
|
|
s3tests/functional/test_s3.py::test_object_write_cache_control \
|
|
s3tests/functional/test_s3.py::test_object_write_expires \
|
|
s3tests/functional/test_s3.py::test_object_write_read_update_read_delete \
|
|
s3tests/functional/test_s3.py::test_object_metadata_replaced_on_put \
|
|
s3tests/functional/test_s3.py::test_object_write_file \
|
|
s3tests/functional/test_s3.py::test_post_object_invalid_date_format \
|
|
s3tests/functional/test_s3.py::test_post_object_no_key_specified \
|
|
s3tests/functional/test_s3.py::test_post_object_missing_signature \
|
|
s3tests/functional/test_s3.py::test_post_object_condition_is_case_sensitive \
|
|
s3tests/functional/test_s3.py::test_post_object_expires_is_case_sensitive \
|
|
s3tests/functional/test_s3.py::test_post_object_missing_expires_condition \
|
|
s3tests/functional/test_s3.py::test_post_object_missing_conditions_list \
|
|
s3tests/functional/test_s3.py::test_post_object_upload_size_limit_exceeded \
|
|
s3tests/functional/test_s3.py::test_post_object_missing_content_length_argument \
|
|
s3tests/functional/test_s3.py::test_post_object_invalid_content_length_argument \
|
|
s3tests/functional/test_s3.py::test_post_object_upload_size_below_minimum \
|
|
s3tests/functional/test_s3.py::test_post_object_empty_conditions \
|
|
s3tests/functional/test_s3.py::test_get_object_ifmatch_good \
|
|
s3tests/functional/test_s3.py::test_get_object_ifnonematch_good \
|
|
s3tests/functional/test_s3.py::test_get_object_ifmatch_failed \
|
|
s3tests/functional/test_s3.py::test_get_object_ifnonematch_failed \
|
|
s3tests/functional/test_s3.py::test_get_object_ifmodifiedsince_good \
|
|
s3tests/functional/test_s3.py::test_get_object_ifmodifiedsince_failed \
|
|
s3tests/functional/test_s3.py::test_get_object_ifunmodifiedsince_failed \
|
|
s3tests/functional/test_s3.py::test_bucket_head \
|
|
s3tests/functional/test_s3.py::test_bucket_head_notexist \
|
|
s3tests/functional/test_s3.py::test_object_raw_authenticated \
|
|
s3tests/functional/test_s3.py::test_object_raw_authenticated_bucket_acl \
|
|
s3tests/functional/test_s3.py::test_object_raw_authenticated_object_acl \
|
|
s3tests/functional/test_s3.py::test_object_raw_authenticated_object_gone \
|
|
s3tests/functional/test_s3.py::test_object_raw_get_x_amz_expires_out_range_zero \
|
|
s3tests/functional/test_s3.py::test_object_anon_put \
|
|
s3tests/functional/test_s3.py::test_object_put_authenticated \
|
|
s3tests/functional/test_s3.py::test_bucket_recreate_overwrite_acl \
|
|
s3tests/functional/test_s3.py::test_bucket_recreate_new_acl \
|
|
s3tests/functional/test_s3.py::test_buckets_create_then_list \
|
|
s3tests/functional/test_s3.py::test_buckets_list_ctime \
|
|
s3tests/functional/test_s3.py::test_list_buckets_invalid_auth \
|
|
s3tests/functional/test_s3.py::test_list_buckets_bad_auth \
|
|
s3tests/functional/test_s3.py::test_bucket_create_naming_good_contains_period \
|
|
s3tests/functional/test_s3.py::test_bucket_create_naming_good_contains_hyphen \
|
|
s3tests/functional/test_s3.py::test_bucket_list_special_prefix \
|
|
s3tests/functional/test_s3.py::test_object_copy_zero_size \
|
|
s3tests/functional/test_s3.py::test_object_copy_same_bucket \
|
|
s3tests/functional/test_s3.py::test_object_copy_to_itself \
|
|
s3tests/functional/test_s3.py::test_object_copy_diff_bucket \
|
|
s3tests/functional/test_s3.py::test_object_copy_canned_acl \
|
|
s3tests/functional/test_s3.py::test_object_copy_bucket_not_found \
|
|
s3tests/functional/test_s3.py::test_object_copy_key_not_found \
|
|
s3tests/functional/test_s3.py::test_multipart_copy_small \
|
|
s3tests/functional/test_s3.py::test_multipart_copy_without_range \
|
|
s3tests/functional/test_s3.py::test_multipart_copy_special_names \
|
|
s3tests/functional/test_s3.py::test_multipart_copy_multiple_sizes \
|
|
s3tests/functional/test_s3.py::test_multipart_get_part \
|
|
s3tests/functional/test_s3.py::test_multipart_upload \
|
|
s3tests/functional/test_s3.py::test_multipart_upload_empty \
|
|
s3tests/functional/test_s3.py::test_multipart_upload_multiple_sizes \
|
|
s3tests/functional/test_s3.py::test_multipart_upload_contents \
|
|
s3tests/functional/test_s3.py::test_multipart_upload_overwrite_existing_object \
|
|
s3tests/functional/test_s3.py::test_multipart_upload_size_too_small \
|
|
s3tests/functional/test_s3.py::test_multipart_resend_first_finishes_last \
|
|
s3tests/functional/test_s3.py::test_multipart_upload_resend_part \
|
|
s3tests/functional/test_s3.py::test_multipart_upload_missing_part \
|
|
s3tests/functional/test_s3.py::test_multipart_upload_incorrect_etag \
|
|
s3tests/functional/test_s3.py::test_abort_multipart_upload \
|
|
s3tests/functional/test_s3.py::test_list_multipart_upload \
|
|
s3tests/functional/test_s3.py::test_atomic_read_1mb \
|
|
s3tests/functional/test_s3.py::test_atomic_read_4mb \
|
|
s3tests/functional/test_s3.py::test_atomic_read_8mb \
|
|
s3tests/functional/test_s3.py::test_atomic_write_1mb \
|
|
s3tests/functional/test_s3.py::test_atomic_write_4mb \
|
|
s3tests/functional/test_s3.py::test_atomic_write_8mb \
|
|
s3tests/functional/test_s3.py::test_atomic_dual_write_1mb \
|
|
s3tests/functional/test_s3.py::test_atomic_dual_write_4mb \
|
|
s3tests/functional/test_s3.py::test_atomic_dual_write_8mb \
|
|
s3tests/functional/test_s3.py::test_atomic_multipart_upload_write \
|
|
s3tests/functional/test_s3.py::test_ranged_request_response_code \
|
|
s3tests/functional/test_s3.py::test_ranged_big_request_response_code \
|
|
s3tests/functional/test_s3.py::test_ranged_request_skip_leading_bytes_response_code \
|
|
s3tests/functional/test_s3.py::test_ranged_request_return_trailing_bytes_response_code \
|
|
s3tests/functional/test_s3.py::test_copy_object_ifmatch_good \
|
|
s3tests/functional/test_s3.py::test_copy_object_ifnonematch_failed \
|
|
s3tests/functional/test_s3.py::test_copy_object_ifmatch_failed \
|
|
s3tests/functional/test_s3.py::test_copy_object_ifnonematch_good \
|
|
s3tests/functional/test_s3.py::test_lifecycle_set \
|
|
s3tests/functional/test_s3.py::test_lifecycle_get \
|
|
s3tests/functional/test_s3.py::test_lifecycle_set_filter \
|
|
s3tests/functional/test_s3.py::test_lifecycle_expiration \
|
|
s3tests/functional/test_s3.py::test_lifecyclev2_expiration
|
|
# cleanup() trap handles worker/server kill + data dir wipe.
|
|
|
|
versioning-tests:
|
|
name: S3 Versioning & Object Lock tests
|
|
runs-on: ubuntu-22.04
|
|
timeout-minutes: 15
|
|
steps:
|
|
- name: Check out code into the Go module directory
|
|
uses: actions/checkout@v6
|
|
|
|
- name: Set up Go 1.x
|
|
uses: actions/setup-go@v6
|
|
with:
|
|
go-version-file: 'go.mod'
|
|
id: go
|
|
|
|
- name: Set up Python
|
|
uses: actions/setup-python@v6
|
|
with:
|
|
python-version: '3.9'
|
|
|
|
- name: Clone s3-tests
|
|
run: |
|
|
git clone https://github.com/ceph/s3-tests.git
|
|
cd s3-tests
|
|
sudo apt-get update -qq
|
|
sudo apt-get install -y -qq libxml2-dev libxslt1-dev zlib1g-dev
|
|
pip install -r requirements.txt
|
|
pip install tox
|
|
pip install -e .
|
|
|
|
- name: Fix S3 tests bucket creation conflicts
|
|
run: |
|
|
python3 test/s3/fix_s3_tests_bucket_conflicts.py
|
|
env:
|
|
S3_TESTS_PATH: s3-tests
|
|
|
|
- name: Run S3 Object Lock, Retention, and Versioning tests
|
|
timeout-minutes: 15
|
|
shell: bash
|
|
run: |
|
|
cd weed
|
|
go install -buildvcs=false
|
|
set -x
|
|
# Create clean data directory for this test run
|
|
export WEED_DATA_DIR="/tmp/seaweedfs-objectlock-versioning-$(date +%s)"
|
|
mkdir -p "$WEED_DATA_DIR"
|
|
|
|
# Verify S3 config file exists
|
|
echo "Checking S3 config file: $GITHUB_WORKSPACE/docker/compose/s3.json"
|
|
ls -la "$GITHUB_WORKSPACE/docker/compose/s3.json"
|
|
weed -v 0 server -filer -filer.maxMB=64 -s3 -ip.bind 0.0.0.0 \
|
|
-dir="$WEED_DATA_DIR" \
|
|
-master.raftHashicorp -master.electionTimeout 1s -master.volumeSizeLimitMB=100 \
|
|
-volume.max=100 -volume.preStopSeconds=1 \
|
|
-master.port=9334 -volume.port=8081 -filer.port=8889 -s3.port=8001 -metricsPort=9325 \
|
|
-s3.allowDeleteBucketNotEmpty=true -s3.config="$GITHUB_WORKSPACE/docker/compose/s3.json" -master.peers=none &
|
|
pid=$!
|
|
|
|
# Wait for all SeaweedFS components to be ready
|
|
echo "Waiting for SeaweedFS components to start..."
|
|
for i in {1..30}; do
|
|
if curl -s http://localhost:9334/cluster/status > /dev/null 2>&1; then
|
|
echo "Master server is ready"
|
|
break
|
|
fi
|
|
echo "Waiting for master server... ($i/30)"
|
|
sleep 2
|
|
done
|
|
|
|
for i in {1..30}; do
|
|
if curl -s http://localhost:8081/status > /dev/null 2>&1; then
|
|
echo "Volume server is ready"
|
|
break
|
|
fi
|
|
echo "Waiting for volume server... ($i/30)"
|
|
sleep 2
|
|
done
|
|
|
|
for i in {1..30}; do
|
|
if curl -s http://localhost:8889/ > /dev/null 2>&1; then
|
|
echo "Filer is ready"
|
|
break
|
|
fi
|
|
echo "Waiting for filer... ($i/30)"
|
|
sleep 2
|
|
done
|
|
|
|
for i in {1..30}; do
|
|
if curl -s http://localhost:8001/ > /dev/null 2>&1; then
|
|
echo "S3 server is ready"
|
|
break
|
|
fi
|
|
echo "Waiting for S3 server... ($i/30)"
|
|
sleep 2
|
|
done
|
|
|
|
echo "All SeaweedFS components are ready!"
|
|
cd ../s3-tests
|
|
sed -i "s/assert prefixes == \['foo%2B1\/', 'foo\/', 'quux%20ab\/'\]/assert prefixes == \['foo\/', 'foo%2B1\/', 'quux%20ab\/'\]/" s3tests/functional/test_s3.py
|
|
# Create and update s3tests.conf to use port 8001
|
|
cp ../docker/compose/s3tests.conf ../docker/compose/s3tests-versioning.conf
|
|
sed -i 's/port = 8000/port = 8001/g' ../docker/compose/s3tests-versioning.conf
|
|
          sed -i 's/:8000/:8001/g' ../docker/compose/s3tests-versioning.conf
          sed -i 's/localhost:8000/localhost:8001/g' ../docker/compose/s3tests-versioning.conf
          sed -i 's/127\.0\.0\.1:8000/127.0.0.1:8001/g' ../docker/compose/s3tests-versioning.conf
          # Use the configured bucket prefix from config and do not override with unique prefixes
          # This avoids mismatch in tests that rely on a fixed provided name
          export S3TEST_CONF=../docker/compose/s3tests-versioning.conf

          # Debug: Show the config file contents
          echo "=== S3 Config File Contents ==="
          cat ../docker/compose/s3tests-versioning.conf
          echo "=== End Config ==="

          # Additional wait for S3-Filer integration to be fully ready
          echo "Waiting additional 10 seconds for S3-Filer integration..."
          sleep 10

          # Test S3 connection before running tests
          echo "Testing S3 connection..."
          for i in {1..10}; do
            if curl -s -f http://localhost:8001/ > /dev/null 2>&1; then
              echo "S3 connection test successful"
              break
            fi
            echo "S3 connection test failed, retrying... ($i/10)"
            sleep 2
          done

          # Force cleanup any existing buckets to avoid conflicts
          echo "Cleaning up any existing buckets..."
          python3 -c "
          import boto3
          from botocore.exceptions import ClientError
          try:
              s3 = boto3.client('s3',
                  endpoint_url='http://localhost:8001',
                  aws_access_key_id='0555b35654ad1656d804',
                  aws_secret_access_key='h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==')
              buckets = s3.list_buckets()['Buckets']
              for bucket in buckets:
                  bucket_name = bucket['Name']
                  print(f'Deleting bucket: {bucket_name}')
                  try:
                      # Delete all objects first
                      objects = s3.list_objects_v2(Bucket=bucket_name)
                      if 'Contents' in objects:
                          for obj in objects['Contents']:
                              s3.delete_object(Bucket=bucket_name, Key=obj['Key'])
                      # Delete all versions if versioning enabled
                      versions = s3.list_object_versions(Bucket=bucket_name)
                      if 'Versions' in versions:
                          for version in versions['Versions']:
                              s3.delete_object(Bucket=bucket_name, Key=version['Key'], VersionId=version['VersionId'])
                      if 'DeleteMarkers' in versions:
                          for marker in versions['DeleteMarkers']:
                              s3.delete_object(Bucket=bucket_name, Key=marker['Key'], VersionId=marker['VersionId'])
                      # Delete bucket
                      s3.delete_bucket(Bucket=bucket_name)
                  except ClientError as e:
                      print(f'Error deleting bucket {bucket_name}: {e}')
          except Exception as e:
              print(f'Cleanup failed: {e}')
          " || echo "Cleanup completed with some errors (expected)"

          # Run versioning and object lock tests once (avoid duplicates)
          tox -- s3tests/functional/test_s3.py -k "object_lock or versioning" --tb=short
          kill -9 $pid || true
          # Clean up data directory
          rm -rf "$WEED_DATA_DIR" || true

  cors-tests:
    name: S3 CORS tests
    runs-on: ubuntu-22.04
    timeout-minutes: 10
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v6

      - name: Set up Go 1.x
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
        id: go

      - name: Set up Python
        uses: actions/setup-python@v6
        with:
          python-version: '3.9'

      - name: Clone s3-tests
        run: |
          git clone https://github.com/ceph/s3-tests.git
          cd s3-tests
          sudo apt-get update -qq
          sudo apt-get install -y -qq libxml2-dev libxslt1-dev zlib1g-dev
          pip install -r requirements.txt
          pip install tox
          pip install -e .

      - name: Run S3 CORS tests
        timeout-minutes: 10
        shell: bash
        run: |
          cd weed
          go install -buildvcs=false
          set -x
          # Create clean data directory for this test run
          export WEED_DATA_DIR="/tmp/seaweedfs-cors-test-$(date +%s)"
          mkdir -p "$WEED_DATA_DIR"
          weed -v 0 server -filer -filer.maxMB=64 -s3 -ip.bind 0.0.0.0 \
            -dir="$WEED_DATA_DIR" \
            -master.raftHashicorp -master.electionTimeout 1s -master.volumeSizeLimitMB=100 \
            -volume.max=100 -volume.preStopSeconds=1 \
            -master.port=9335 -volume.port=8082 -filer.port=8890 -s3.port=8002 -metricsPort=9326 \
            -s3.allowDeleteBucketNotEmpty=true -s3.config="$GITHUB_WORKSPACE/docker/compose/s3.json" -master.peers=none &
          pid=$!

          # Wait for all SeaweedFS components to be ready
          echo "Waiting for SeaweedFS components to start..."
          for i in {1..30}; do
            if curl -s http://localhost:9335/cluster/status > /dev/null 2>&1; then
              echo "Master server is ready"
              break
            fi
            echo "Waiting for master server... ($i/30)"
            sleep 2
          done

          for i in {1..30}; do
            if curl -s http://localhost:8082/status > /dev/null 2>&1; then
              echo "Volume server is ready"
              break
            fi
            echo "Waiting for volume server... ($i/30)"
            sleep 2
          done

          for i in {1..30}; do
            if curl -s http://localhost:8890/ > /dev/null 2>&1; then
              echo "Filer is ready"
              break
            fi
            echo "Waiting for filer... ($i/30)"
            sleep 2
          done

          for i in {1..30}; do
            if curl -s http://localhost:8002/ > /dev/null 2>&1; then
              echo "S3 server is ready"
              break
            fi
            echo "Waiting for S3 server... ($i/30)"
            sleep 2
          done

          echo "All SeaweedFS components are ready!"
          cd ../s3-tests
          sed -i "s/assert prefixes == \['foo%2B1\/', 'foo\/', 'quux%20ab\/'\]/assert prefixes == \['foo\/', 'foo%2B1\/', 'quux%20ab\/'\]/" s3tests/functional/test_s3.py
          # Create and update s3tests.conf to use port 8002
          cp ../docker/compose/s3tests.conf ../docker/compose/s3tests-cors.conf
          sed -i 's/port = 8000/port = 8002/g' ../docker/compose/s3tests-cors.conf
          sed -i 's/:8000/:8002/g' ../docker/compose/s3tests-cors.conf
          sed -i 's/localhost:8000/localhost:8002/g' ../docker/compose/s3tests-cors.conf
          sed -i 's/127\.0\.0\.1:8000/127.0.0.1:8002/g' ../docker/compose/s3tests-cors.conf
          export S3TEST_CONF=../docker/compose/s3tests-cors.conf

          # Debug: Show the config file contents
          echo "=== S3 Config File Contents ==="
          cat ../docker/compose/s3tests-cors.conf
          echo "=== End Config ==="

          # Additional wait for S3-Filer integration to be fully ready
          echo "Waiting additional 10 seconds for S3-Filer integration..."
          sleep 10

          # Test S3 connection before running tests
          echo "Testing S3 connection..."
          for i in {1..10}; do
            if curl -s -f http://localhost:8002/ > /dev/null 2>&1; then
              echo "S3 connection test successful"
              break
            fi
            echo "S3 connection test failed, retrying... ($i/10)"
            sleep 2
          done
          # Run CORS-specific tests from s3-tests suite
          tox -- s3tests/functional/test_s3.py -k "cors" --tb=short || echo "No CORS tests found in s3-tests suite"
          # If no specific CORS tests exist, run bucket configuration tests that include CORS
          tox -- s3tests/functional/test_s3.py::test_put_bucket_cors || echo "No put_bucket_cors test found"
          tox -- s3tests/functional/test_s3.py::test_get_bucket_cors || echo "No get_bucket_cors test found"
          tox -- s3tests/functional/test_s3.py::test_delete_bucket_cors || echo "No delete_bucket_cors test found"
          kill -9 $pid || true
          # Clean up data directory
          rm -rf "$WEED_DATA_DIR" || true

  copy-tests:
    name: SeaweedFS Custom S3 Copy tests
    runs-on: ubuntu-22.04
    timeout-minutes: 10
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v6

      - name: Set up Go 1.x
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
        id: go

      - name: Run SeaweedFS Custom S3 Copy tests
        timeout-minutes: 10
        shell: bash
        run: |
          cd weed
          go install -buildvcs=false
          # Create clean data directory for this test run
          export WEED_DATA_DIR="/tmp/seaweedfs-copy-test-$(date +%s)"
          mkdir -p "$WEED_DATA_DIR"
          set -x
          weed -v 0 server -filer -filer.maxMB=64 -s3 -ip.bind 0.0.0.0 \
            -dir="$WEED_DATA_DIR" \
            -master.raftHashicorp -master.electionTimeout 1s -master.volumeSizeLimitMB=100 \
            -volume.max=100 -volume.preStopSeconds=1 \
            -master.port=9336 -volume.port=8083 -filer.port=8891 -s3.port=8003 -metricsPort=9327 \
            -s3.allowDeleteBucketNotEmpty=true -s3.config="$GITHUB_WORKSPACE/docker/compose/s3.json" -master.peers=none &
          pid=$!

          # Wait for all SeaweedFS components to be ready
          echo "Waiting for SeaweedFS components to start..."
          for i in {1..30}; do
            if curl -s http://localhost:9336/cluster/status > /dev/null 2>&1; then
              echo "Master server is ready"
              break
            fi
            echo "Waiting for master server... ($i/30)"
            sleep 2
          done

          for i in {1..30}; do
            if curl -s http://localhost:8083/status > /dev/null 2>&1; then
              echo "Volume server is ready"
              break
            fi
            echo "Waiting for volume server... ($i/30)"
            sleep 2
          done

          for i in {1..30}; do
            if curl -s http://localhost:8891/ > /dev/null 2>&1; then
              echo "Filer is ready"
              break
            fi
            echo "Waiting for filer... ($i/30)"
            sleep 2
          done

          for i in {1..30}; do
            if curl -s http://localhost:8003/ > /dev/null 2>&1; then
              echo "S3 server is ready"
              break
            fi
            echo "Waiting for S3 server... ($i/30)"
            sleep 2
          done

          echo "All SeaweedFS components are ready!"
          cd ../test/s3/copying
          # Patch Go tests to use the correct S3 endpoint (port 8003)
          sed -i 's/http:\/\/127\.0\.0\.1:8000/http:\/\/127.0.0.1:8003/g' s3_copying_test.go

          # Debug: Show what endpoint the Go tests will use
          echo "=== Go Test Configuration ==="
          grep -n "127.0.0.1" s3_copying_test.go || echo "No IP configuration found"
          echo "=== End Configuration ==="

          # Additional wait for S3-Filer integration to be fully ready
          echo "Waiting additional 10 seconds for S3-Filer integration..."
          sleep 10

          # Test S3 connection before running tests
          echo "Testing S3 connection..."
          for i in {1..10}; do
            if curl -s -f http://localhost:8003/ > /dev/null 2>&1; then
              echo "S3 connection test successful"
              break
            fi
            echo "S3 connection test failed, retrying... ($i/10)"
            sleep 2
          done

          go test -v
          kill -9 $pid || true
          # Clean up data directory
          rm -rf "$WEED_DATA_DIR" || true

  sql-store-tests:
    name: Basic S3 tests (SQL store)
    runs-on: ubuntu-22.04
    timeout-minutes: 15
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v6

      - name: Set up Go 1.x
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
        id: go

      - name: Set up Python
        uses: actions/setup-python@v6
        with:
          python-version: '3.9'

      - name: Clone s3-tests
        run: |
          git clone https://github.com/ceph/s3-tests.git
          cd s3-tests
          sudo apt-get update -qq
          sudo apt-get install -y -qq libxml2-dev libxslt1-dev zlib1g-dev
          pip install -r requirements.txt
          pip install tox
          pip install -e .

      - name: Run Ceph S3 tests with SQL store
        timeout-minutes: 15
        shell: bash
        run: |
          cd weed

          # Debug: Check for port conflicts before starting
          echo "=== Pre-start Port Check ==="
          netstat -tulpn | grep -E "(9337|8085|8892|8004|9328)" || echo "Ports are free"

          # Kill any existing weed processes that might interfere
          echo "=== Cleanup existing processes ==="
          pkill -f weed || echo "No weed processes found"

          # More aggressive port cleanup using multiple methods
          for port in 9337 8085 8892 8004 9328; do
            echo "Cleaning port $port..."

            # Method 1: lsof
            pid=$(lsof -ti :$port 2>/dev/null || echo "")
            if [ -n "$pid" ]; then
              echo "Found process $pid using port $port (via lsof)"
              kill -9 $pid 2>/dev/null || echo "Failed to kill $pid"
            fi

            # Method 2: netstat + ps (for cases where lsof fails)
            netstat_pids=$(netstat -tlnp 2>/dev/null | grep ":$port " | awk '{print $7}' | cut -d'/' -f1 | grep -v '^-$' || echo "")
            for npid in $netstat_pids; do
              if [ -n "$npid" ] && [ "$npid" != "-" ]; then
                echo "Found process $npid using port $port (via netstat)"
                kill -9 $npid 2>/dev/null || echo "Failed to kill $npid"
              fi
            done

            # Method 3: fuser (if available)
            if command -v fuser >/dev/null 2>&1; then
              fuser -k ${port}/tcp 2>/dev/null || echo "No process found via fuser for port $port"
            fi

            sleep 1
          done

          # Wait for ports to be released
          sleep 5

          echo "=== Post-cleanup Port Check ==="
          netstat -tulpn | grep -E "(9337|8085|8892|8004|9328)" || echo "All ports are now free"

          # If any ports are still in use, fail fast
          if netstat -tulpn | grep -E "(9337|8085|8892|8004|9328)" >/dev/null 2>&1; then
            echo "❌ ERROR: Some ports are still in use after aggressive cleanup!"
            echo "=== Detailed Port Analysis ==="
            for port in 9337 8085 8892 8004 9328; do
              echo "Port $port:"
              netstat -tlnp 2>/dev/null | grep ":$port " || echo "  Not in use"
              lsof -i :$port 2>/dev/null || echo "  No lsof info"
            done
            exit 1
          fi

          go install -tags "sqlite s3tests" -buildvcs=false
          # Create clean data directory for this test run with unique timestamp and process ID
          export WEED_DATA_DIR="/tmp/seaweedfs-sql-test-$(date +%s)-$$"
          mkdir -p "$WEED_DATA_DIR"
          chmod 777 "$WEED_DATA_DIR"

          # SQLite-specific configuration
          export WEED_LEVELDB2_ENABLED="false"
          export WEED_SQLITE_ENABLED="true"
          export WEED_SQLITE_DBFILE="$WEED_DATA_DIR/filer.db"

          echo "=== SQL Store Configuration ==="
          echo "Data Dir: $WEED_DATA_DIR"
          echo "SQLite DB: $WEED_SQLITE_DBFILE"
          echo "LEVELDB2_ENABLED: $WEED_LEVELDB2_ENABLED"
          echo "SQLITE_ENABLED: $WEED_SQLITE_ENABLED"

          set -x
          weed -v 1 server -filer -filer.maxMB=64 -s3 -ip.bind 0.0.0.0 \
            -dir="$WEED_DATA_DIR" \
            -master.raftHashicorp -master.electionTimeout 1s -master.volumeSizeLimitMB=100 \
            -volume.max=100 -volume.preStopSeconds=1 \
            -master.port=9337 -volume.port=8085 -filer.port=8892 -s3.port=8004 -metricsPort=9328 \
            -s3.allowDeleteBucketNotEmpty=true -s3.config="$GITHUB_WORKSPACE/docker/compose/s3.json" \
            -master.peers=none \
            > /tmp/seaweedfs-sql-server.log 2>&1 &
          pid=$!

          echo "=== Server started with PID: $pid ==="

          # Wait for all SeaweedFS components to be ready
          echo "Waiting for SeaweedFS components to start..."

          # Check if server process is still alive before waiting
          if ! kill -0 $pid 2>/dev/null; then
            echo "❌ Server process died immediately after start"
            echo "=== Immediate Log Check ==="
            tail -20 /tmp/seaweedfs-sql-server.log 2>/dev/null || echo "No log available"
            exit 1
          fi

          sleep 5  # Give SQLite more time to initialize

          for i in {1..30}; do
            if curl -s http://localhost:9337/cluster/status > /dev/null 2>&1; then
              echo "Master server is ready"
              break
            fi
            echo "Waiting for master server... ($i/30)"
            # Check if server process is still alive
            if ! kill -0 $pid 2>/dev/null; then
              echo "❌ Server process died while waiting for master"
              tail -20 /tmp/seaweedfs-sql-server.log 2>/dev/null
              exit 1
            fi
            sleep 2
          done

          for i in {1..30}; do
            if curl -s http://localhost:8085/status > /dev/null 2>&1; then
              echo "Volume server is ready"
              break
            fi
            echo "Waiting for volume server... ($i/30)"
            if ! kill -0 $pid 2>/dev/null; then
              echo "❌ Server process died while waiting for volume"
              tail -20 /tmp/seaweedfs-sql-server.log 2>/dev/null
              exit 1
            fi
            sleep 2
          done

          for i in {1..30}; do
            if curl -s http://localhost:8892/ > /dev/null 2>&1; then
              echo "Filer (SQLite) is ready"
              break
            fi
            echo "Waiting for filer (SQLite)... ($i/30)"
            if ! kill -0 $pid 2>/dev/null; then
              echo "❌ Server process died while waiting for filer"
              tail -20 /tmp/seaweedfs-sql-server.log 2>/dev/null
              exit 1
            fi
            sleep 2
          done

          # Extra wait for SQLite filer to fully initialize
          echo "Giving SQLite filer extra time to initialize..."
          sleep 5

          for i in {1..30}; do
            if curl -s http://localhost:8004/ > /dev/null 2>&1; then
              echo "S3 server is ready"
              break
            fi
            echo "Waiting for S3 server... ($i/30)"
            if ! kill -0 $pid 2>/dev/null; then
              echo "❌ Server process died while waiting for S3"
              tail -20 /tmp/seaweedfs-sql-server.log 2>/dev/null
              exit 1
            fi
            sleep 2
          done

          echo "All SeaweedFS components are ready!"
          cd ../s3-tests
          sed -i "s/assert prefixes == \['foo%2B1\/', 'foo\/', 'quux%20ab\/'\]/assert prefixes == \['foo\/', 'foo%2B1\/', 'quux%20ab\/'\]/" s3tests/functional/test_s3.py
          # Create and update s3tests.conf to use port 8004
          cp ../docker/compose/s3tests.conf ../docker/compose/s3tests-sql.conf
          sed -i 's/port = 8000/port = 8004/g' ../docker/compose/s3tests-sql.conf
          sed -i 's/:8000/:8004/g' ../docker/compose/s3tests-sql.conf
          sed -i 's/localhost:8000/localhost:8004/g' ../docker/compose/s3tests-sql.conf
          sed -i 's/127\.0\.0\.1:8000/127.0.0.1:8004/g' ../docker/compose/s3tests-sql.conf
          export S3TEST_CONF=../docker/compose/s3tests-sql.conf

          # Debug: Show the config file contents
          echo "=== S3 Config File Contents ==="
          cat ../docker/compose/s3tests-sql.conf
          echo "=== End Config ==="

          # Additional wait for S3-Filer integration to be fully ready
          echo "Waiting additional 10 seconds for S3-Filer integration..."
          sleep 10

          # Test S3 connection before running tests
          echo "Testing S3 connection..."

          # Debug: Check if SeaweedFS processes are running
          echo "=== Process Status ==="
          ps aux | grep -E "(weed|seaweedfs)" | grep -v grep || echo "No SeaweedFS processes found"

          # Debug: Check port status
          echo "=== Port Status ==="
          netstat -tulpn | grep -E "(8004|9337|8085|8892)" || echo "Ports not found"

          # Debug: Check server logs
          echo "=== Recent Server Logs ==="
          echo "--- SQL Server Log ---"
          tail -20 /tmp/seaweedfs-sql-server.log 2>/dev/null || echo "No SQL server log found"
          echo "--- Other Logs ---"
          ls -la /tmp/seaweedfs-*.log 2>/dev/null || echo "No other log files found"

          for i in {1..10}; do
            if curl -s -f http://localhost:8004/ > /dev/null 2>&1; then
              echo "S3 connection test successful"
              break
            fi
            echo "S3 connection test failed, retrying... ($i/10)"

            # Debug: Try different HTTP methods
            echo "Debug: Testing different endpoints..."
            curl -s -I http://localhost:8004/ || echo "HEAD request failed"
            curl -s http://localhost:8004/status || echo "Status endpoint failed"

            sleep 2
          done

          # Spawn the lifecycle worker (see basic-tests block for context).
          LC_LOG=/tmp/lifecycle-worker-sql.log
          (echo "s3.lifecycle.run-shard -shards 0-15 -s3 localhost:18004 -events 0 -runtime 1800s -refresh 2s" && echo exit) \
            | weed shell -debug -master=localhost:9337 \
            > "$LC_LOG" 2>&1 &
          lc_pid=$!
          sleep 2
          if ! kill -0 "$lc_pid" 2>/dev/null; then
            echo "lifecycle worker died on startup"
            tail -50 "$LC_LOG" 2>/dev/null || true
            exit 1
          fi
          echo "lifecycle worker pid=$lc_pid"

          cleanup() {
            status=$?
            # SIGTERM first so the worker's stdout flushes; SIGKILL is the
            # bash fallback if it ignores TERM. Reading the log AFTER the
            # graceful-stop window catches the bootstrap walker's progress.
            kill -TERM "$lc_pid" 2>/dev/null || true
            kill -TERM "$pid" 2>/dev/null || true
            sleep 1
            if [ "$status" -ne 0 ]; then
              echo "=== lifecycle worker log (tail) ==="
              tail -200 "$LC_LOG" 2>/dev/null || true
            fi
            kill -9 "$lc_pid" 2>/dev/null || true
            kill -9 "$pid" 2>/dev/null || true
            rm -rf "$WEED_DATA_DIR" 2>/dev/null || true
          }
          trap cleanup EXIT

          tox -- \
            s3tests/functional/test_s3.py::test_bucket_list_empty \
            s3tests/functional/test_s3.py::test_bucket_list_distinct \
            s3tests/functional/test_s3.py::test_bucket_list_many \
            s3tests/functional/test_s3.py::test_bucket_listv2_many \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_basic \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_basic \
            s3tests/functional/test_s3.py::test_bucket_listv2_encoding_basic \
            s3tests/functional/test_s3.py::test_bucket_list_encoding_basic \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_prefix \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_prefix \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_prefix_ends_with_delimiter \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_prefix_ends_with_delimiter \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_alt \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_alt \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_prefix_underscore \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_prefix_underscore \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_percentage \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_percentage \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_whitespace \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_whitespace \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_dot \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_dot \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_unreadable \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_unreadable \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_empty \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_empty \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_none \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_none \
            s3tests/functional/test_s3.py::test_bucket_listv2_delimiter_not_exist \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_not_exist \
            s3tests/functional/test_s3.py::test_bucket_list_delimiter_not_skip_special \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_basic \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_basic \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_alt \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_alt \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_prefix_not_exist \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_prefix_not_exist \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_delimiter_not_exist \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_delimiter_not_exist \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_delimiter_prefix_delimiter_not_exist \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_delimiter_prefix_delimiter_not_exist \
            s3tests/functional/test_s3.py::test_bucket_listv2_fetchowner_notempty \
            s3tests/functional/test_s3.py::test_bucket_listv2_fetchowner_defaultempty \
            s3tests/functional/test_s3.py::test_bucket_listv2_fetchowner_empty \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_basic \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_basic \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_alt \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_alt \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_empty \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_empty \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_none \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_none \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_not_exist \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_not_exist \
            s3tests/functional/test_s3.py::test_bucket_list_prefix_unreadable \
            s3tests/functional/test_s3.py::test_bucket_listv2_prefix_unreadable \
            s3tests/functional/test_s3.py::test_bucket_list_maxkeys_one \
            s3tests/functional/test_s3.py::test_bucket_listv2_maxkeys_one \
            s3tests/functional/test_s3.py::test_bucket_list_maxkeys_zero \
            s3tests/functional/test_s3.py::test_bucket_listv2_maxkeys_zero \
            s3tests/functional/test_s3.py::test_bucket_list_maxkeys_none \
            s3tests/functional/test_s3.py::test_bucket_listv2_maxkeys_none \
            s3tests/functional/test_s3.py::test_bucket_list_unordered \
            s3tests/functional/test_s3.py::test_bucket_listv2_unordered \
            s3tests/functional/test_s3.py::test_bucket_list_maxkeys_invalid \
            s3tests/functional/test_s3.py::test_bucket_list_marker_none \
            s3tests/functional/test_s3.py::test_bucket_list_marker_empty \
            s3tests/functional/test_s3.py::test_bucket_listv2_continuationtoken_empty \
            s3tests/functional/test_s3.py::test_bucket_listv2_continuationtoken \
            s3tests/functional/test_s3.py::test_bucket_listv2_both_continuationtoken_startafter \
            s3tests/functional/test_s3.py::test_bucket_list_marker_unreadable \
            s3tests/functional/test_s3.py::test_bucket_listv2_startafter_unreadable \
            s3tests/functional/test_s3.py::test_bucket_list_marker_not_in_list \
            s3tests/functional/test_s3.py::test_bucket_listv2_startafter_not_in_list \
            s3tests/functional/test_s3.py::test_bucket_list_marker_after_list \
            s3tests/functional/test_s3.py::test_bucket_listv2_startafter_after_list \
            s3tests/functional/test_s3.py::test_bucket_list_return_data \
            s3tests/functional/test_s3.py::test_bucket_list_objects_anonymous \
            s3tests/functional/test_s3.py::test_bucket_listv2_objects_anonymous \
            s3tests/functional/test_s3.py::test_bucket_list_objects_anonymous_fail \
            s3tests/functional/test_s3.py::test_bucket_listv2_objects_anonymous_fail \
            s3tests/functional/test_s3.py::test_bucket_list_long_name \
            s3tests/functional/test_s3.py::test_bucket_list_special_prefix \
            s3tests/functional/test_s3.py::test_bucket_delete_notexist \
            s3tests/functional/test_s3.py::test_bucket_create_delete \
            s3tests/functional/test_s3.py::test_object_read_not_exist \
            s3tests/functional/test_s3.py::test_multi_object_delete \
            s3tests/functional/test_s3.py::test_multi_objectv2_delete \
            s3tests/functional/test_s3.py::test_object_head_zero_bytes \
            s3tests/functional/test_s3.py::test_object_write_check_etag \
            s3tests/functional/test_s3.py::test_object_write_cache_control \
            s3tests/functional/test_s3.py::test_object_write_expires \
            s3tests/functional/test_s3.py::test_object_write_read_update_read_delete \
            s3tests/functional/test_s3.py::test_object_metadata_replaced_on_put \
            s3tests/functional/test_s3.py::test_object_write_file \
            s3tests/functional/test_s3.py::test_post_object_invalid_date_format \
            s3tests/functional/test_s3.py::test_post_object_no_key_specified \
            s3tests/functional/test_s3.py::test_post_object_missing_signature \
            s3tests/functional/test_s3.py::test_post_object_condition_is_case_sensitive \
            s3tests/functional/test_s3.py::test_post_object_expires_is_case_sensitive \
            s3tests/functional/test_s3.py::test_post_object_missing_expires_condition \
            s3tests/functional/test_s3.py::test_post_object_missing_conditions_list \
            s3tests/functional/test_s3.py::test_post_object_upload_size_limit_exceeded \
            s3tests/functional/test_s3.py::test_post_object_missing_content_length_argument \
            s3tests/functional/test_s3.py::test_post_object_invalid_content_length_argument \
            s3tests/functional/test_s3.py::test_post_object_upload_size_below_minimum \
            s3tests/functional/test_s3.py::test_post_object_empty_conditions \
            s3tests/functional/test_s3.py::test_get_object_ifmatch_good \
            s3tests/functional/test_s3.py::test_get_object_ifnonematch_good \
            s3tests/functional/test_s3.py::test_get_object_ifmatch_failed \
            s3tests/functional/test_s3.py::test_get_object_ifnonematch_failed \
            s3tests/functional/test_s3.py::test_get_object_ifmodifiedsince_good \
            s3tests/functional/test_s3.py::test_get_object_ifmodifiedsince_failed \
            s3tests/functional/test_s3.py::test_get_object_ifunmodifiedsince_failed \
            s3tests/functional/test_s3.py::test_bucket_head \
            s3tests/functional/test_s3.py::test_bucket_head_notexist \
            s3tests/functional/test_s3.py::test_object_raw_authenticated \
            s3tests/functional/test_s3.py::test_object_raw_authenticated_bucket_acl \
            s3tests/functional/test_s3.py::test_object_raw_authenticated_object_acl \
            s3tests/functional/test_s3.py::test_object_raw_authenticated_object_gone \
            s3tests/functional/test_s3.py::test_object_raw_get_x_amz_expires_out_range_zero \
            s3tests/functional/test_s3.py::test_object_anon_put \
            s3tests/functional/test_s3.py::test_object_put_authenticated \
            s3tests/functional/test_s3.py::test_bucket_recreate_overwrite_acl \
            s3tests/functional/test_s3.py::test_bucket_recreate_new_acl \
            s3tests/functional/test_s3.py::test_buckets_create_then_list \
            s3tests/functional/test_s3.py::test_buckets_list_ctime \
            s3tests/functional/test_s3.py::test_list_buckets_invalid_auth \
            s3tests/functional/test_s3.py::test_list_buckets_bad_auth \
            s3tests/functional/test_s3.py::test_bucket_create_naming_good_contains_period \
            s3tests/functional/test_s3.py::test_bucket_create_naming_good_contains_hyphen \
            s3tests/functional/test_s3.py::test_bucket_list_special_prefix \
            s3tests/functional/test_s3.py::test_object_copy_zero_size \
            s3tests/functional/test_s3.py::test_object_copy_same_bucket \
            s3tests/functional/test_s3.py::test_object_copy_to_itself \
            s3tests/functional/test_s3.py::test_object_copy_diff_bucket \
            s3tests/functional/test_s3.py::test_object_copy_canned_acl \
            s3tests/functional/test_s3.py::test_object_copy_bucket_not_found \
            s3tests/functional/test_s3.py::test_object_copy_key_not_found \
            s3tests/functional/test_s3.py::test_multipart_copy_small \
            s3tests/functional/test_s3.py::test_multipart_copy_without_range \
            s3tests/functional/test_s3.py::test_multipart_copy_special_names \
            s3tests/functional/test_s3.py::test_multipart_copy_multiple_sizes \
            s3tests/functional/test_s3.py::test_multipart_get_part \
            s3tests/functional/test_s3.py::test_multipart_upload \
            s3tests/functional/test_s3.py::test_multipart_upload_empty \
            s3tests/functional/test_s3.py::test_multipart_upload_multiple_sizes \
            s3tests/functional/test_s3.py::test_multipart_upload_contents \
            s3tests/functional/test_s3.py::test_multipart_upload_overwrite_existing_object \
            s3tests/functional/test_s3.py::test_multipart_upload_size_too_small \
            s3tests/functional/test_s3.py::test_multipart_resend_first_finishes_last \
            s3tests/functional/test_s3.py::test_multipart_upload_resend_part \
            s3tests/functional/test_s3.py::test_multipart_upload_missing_part \
            s3tests/functional/test_s3.py::test_multipart_upload_incorrect_etag \
            s3tests/functional/test_s3.py::test_abort_multipart_upload \
            s3tests/functional/test_s3.py::test_list_multipart_upload \
            s3tests/functional/test_s3.py::test_atomic_read_1mb \
            s3tests/functional/test_s3.py::test_atomic_read_4mb \
            s3tests/functional/test_s3.py::test_atomic_read_8mb \
            s3tests/functional/test_s3.py::test_atomic_write_1mb \
            s3tests/functional/test_s3.py::test_atomic_write_4mb \
            s3tests/functional/test_s3.py::test_atomic_write_8mb \
            s3tests/functional/test_s3.py::test_atomic_dual_write_1mb \
            s3tests/functional/test_s3.py::test_atomic_dual_write_4mb \
            s3tests/functional/test_s3.py::test_atomic_dual_write_8mb \
            s3tests/functional/test_s3.py::test_atomic_multipart_upload_write \
            s3tests/functional/test_s3.py::test_ranged_request_response_code \
            s3tests/functional/test_s3.py::test_ranged_big_request_response_code \
            s3tests/functional/test_s3.py::test_ranged_request_skip_leading_bytes_response_code \
            s3tests/functional/test_s3.py::test_ranged_request_return_trailing_bytes_response_code \
            s3tests/functional/test_s3.py::test_copy_object_ifmatch_good \
            s3tests/functional/test_s3.py::test_copy_object_ifnonematch_failed \
            s3tests/functional/test_s3.py::test_copy_object_ifmatch_failed \
            s3tests/functional/test_s3.py::test_copy_object_ifnonematch_good \
            s3tests/functional/test_s3.py::test_lifecycle_set \
            s3tests/functional/test_s3.py::test_lifecycle_get \
            s3tests/functional/test_s3.py::test_lifecycle_set_filter \
            s3tests/functional/test_s3.py::test_lifecycle_expiration \
            s3tests/functional/test_s3.py::test_lifecyclev2_expiration
          # cleanup() trap handles worker/server kill + data dir wipe.