diff --git a/tests/README.md b/tests/README.md
index 870b85f4..8ba5fb03 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -18,6 +18,8 @@
 ### Posix Backend
 
+> **Note**: many of the required libraries, and a good rundown of the installation procedure, can be found in the `Dockerfile_test_bats` dockerfile in the root folder.
+
 1. Build the `versitygw` binary.
 2. Install the command-line interface(s) you want to test if unavailable on your machine.
    * **aws cli**: Instructions are [here](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
@@ -50,7 +52,7 @@
 To preserve buckets while running tests, set `RECREATE_BUCKETS` to `false`. Two utility functions are included, if needed, to create, and delete buckets for this: `tests/setup_static.sh` and `tests/remove_static.sh`. Note that this creates a bucket with object lock enabled, and some tests may fail if the bucket being tested doesn't have object lock enabled.
 
-### S3 Backend
+### ~~S3 Backend~~ (Not Working)
 
 Instructions are mostly the same; however, testing with the S3 backend requires two S3 accounts. Ideally, these are two real accounts, but one can also be a dummy account that versity uses internally.
@@ -65,7 +67,7 @@ To set up the latter:
 To communicate directly with s3, in order to compare the gateway results to direct results:
 1. Create an AWS profile with the direct connection info. Set `AWS_PROFILE` to this.
-2. Set `RUN_VERSITYGW` to false.
+2. Set `RUN_VERSITYGW` to **false**.
 3. Set `AWS_ENDPOINT_URL` to the typical endpoint location (usually `https://s3.amazonaws.com`).
 4. If testing **s3cmd**, create a new `s3cfg.local` file with `host_base` and `host_bucket` set to `s3.amazonaws.com`.
 5. If testing **mc**, change the `MC_ALIAS` value to a new value such as `versity-direct`.
@@ -75,7 +77,7 @@ To communicate directly with s3, in order to compare the gateway results to dire
 1. Copy `.secrets.default` to `.secrets` in the `tests` folder and change the parameters and add the additional s3 fields explained in the **S3 Backend** section above if running with the s3 backend.
 2. By default, the dockerfile uses the **arm** architecture (usually modern Mac). If using **amd** (usually earlier Mac or Linux), you can either replace the corresponding `ARG` values directly, or with `arg="="` Also, you can determine which is used by your OS with `uname -a`.
 3. Build and run the `Dockerfile_test_bats` file. Change the `SECRETS_FILE` and `CONFIG_FILE` parameters to point to your secrets and config file, respectively, if not using the defaults. Example: `docker build -t --build-arg="SECRETS_FILE=" --build-arg="CONFIG_FILE=" -f tests/Dockerfile_test_bats .`.
-4. To run the entire suite, run `docker run -it `. To run an individual suite, pass in the name of the suite as defined in `tests/run.sh` (e.g. REST tests -> `docker run -it rest`). Also, multiple specific suites can be run, if separated by comma.
+4. To run the entire suite, run `docker run -it `. This is not recommended due to the sheer number of tests. To run an individual suite, pass in the name of the suite as defined in `tests/run.sh` (e.g. REST tests -> `docker run -it rest`). Multiple specific suites can also be run, separated by commas.
 
 ## Instructions - Running with docker-compose
 
@@ -108,24 +110,28 @@ A single instance can be run with `docker-compose -f docker-compose-bats.yml up
 
 **ACL_AWS_CANONICAL_ID**: for direct mode, the canonical ID for the user to test ACL changes and access by non-owners
 
-**ACL_AWS_ACCESS_KEY_ID**, **ACL_AWS_ACCESS_SECRET_KEY**: for direct mode, the ID and key for the S3 user in the **ACL_AWS_CANONICAL_ID** account.
+**ACL_AWS_ACCESS_KEY_ID**, **ACL_AWS_SECRET_ACCESS_KEY**: for direct mode, the ID and key for the S3 user in the **ACL_AWS_CANONICAL_ID** account.
+
+**ACL_AWS_ACCESS_KEY_ID_TWO**, **ACL_AWS_SECRET_ACCESS_KEY_TWO**: if running a second versitygw application, the user ID and secret key for this application.
 
 **USER_ID_{role}_{id}**, **USERNAME_{role}_{id}**, **PASSWORD_{role}_{id}**: for setup_user_v2 non-autocreated users, the format for the user.
   * example: USER_ID_USER_1={name}: user ID corresponding to the first user with **user** permissions in the test.
 
-####
-
 ### Non-Secret
 
 **VERSITY_EXE**: location of the versity executable relative to test folder.
 
 **RUN_VERSITYGW**: whether to run the versitygw executable, should be set to **false** when running tests directly against **s3**.
 
+**PORT**: port to run the versity app on; defaults to **7070** if not specified.
+
+**PORT_TWO**: port to run the second versity app on, if running two versity applications simultaneously. If not specified, defaults to **7071**.
+
 **BACKEND**: the storage backend type for the gateway, e.g. **posix** or **s3**.
 
 **LOCAL_FOLDER**: if running with a **posix** backend, the backend storage folder.
 
-**BUCKET_ONE_NAME**, **BUCKET_TWO_NAME**: test bucket names.
+**BUCKET_ONE_NAME**, **BUCKET_TWO_NAME**: test bucket names. In newer tests, and when `RECREATE_BUCKETS` is set to **true**, these are prefixes and the suffixes are autogenerated.
 
 **RECREATE_BUCKETS**: whether to delete buckets between tests. If set to false, the bucket will be restored to an original state for the purpose of ensuring consistent tests, but not deleted.
 
@@ -143,6 +149,8 @@ A single instance can be run with `docker-compose -f docker-compose-bats.yml up
 
 **USERS_FOLDER**: folder to use if storing IAM data in a folder.
 
+**USERS_BUCKET**: bucket to use if storing IAM data in an S3 bucket.
+
 **IAM_TYPE**: how to store IAM data (**s3** or **folder**).
 
 **TEST_LOG_FILE**: log file location for these bats tests.
@@ -173,9 +181,9 @@ A single instance can be run with `docker-compose -f docker-compose-bats.yml up
 
 **AUTOGENERATE_USERS**: setup_user_v2, whether or not to autocreate users for tests. If set to **false**, users must be pre-created (see `Secret` section above).
 
-**USER_AUTOGENERATION_PREFIX**: setup_user_v2, if **AUTOCREATE_USERS** is set to **true**, the prefix for the autocreated username.
+**USER_AUTOGENERATION_PREFIX**: setup_user_v2, if **AUTOGENERATE_USERS** is set to **true**, the prefix for the autocreated username.
 
-**CREATE_STATIC_USERS_IF_NONEXISTENT**: setup_user_v2, if **AUTOCREATE_USERS** is set to **false**, generate non-existing users if they don't exist, but don't delete them, as with user autogeneration
+**CREATE_STATIC_USERS_IF_NONEXISTENT**: setup_user_v2, if **AUTOGENERATE_USERS** is set to **false**, create the users if they don't exist but, unlike autogenerated users, don't delete them afterward.
 
 **DIRECT_POST_COMMAND_DELAY**: in v1 direct mode, time to wait before sending new commands to try to prevent propagation delay issues
 
@@ -189,6 +197,16 @@ A single instance can be run with `docker-compose -f docker-compose-bats.yml up
 
 **COVERAGE_LOG**: if set, where to write test or test suite coverage data
 
+**PYTHON_ENV_FOLDER**: where to place or use the Python environment that calculates certain AWS checksums. The default is `env` in the `TEST_FILE_FOLDER`.
+
+**TEMPLATE_MATRIX_FILE**: YAML file location used, in some cases, to retrieve the templates for expected responses.
+
+**SKIP_POLICY**: set to **true** to skip tests involving policies.
+
+**SKIP_BUCKET_OWNERSHIP_CONTROLS**: set to **true** to avoid bucket ownership operations. This is needed to properly set up and clean the buckets if these operations are not supported by the server.
+
+**BYPASS_ENV_FILE**: skip loading the `.env` file on startup; default is **false**.
+
 ## REST Scripts
 
 REST scripts are included for calls to S3's REST API in the `./tests/rest_scripts/` folder.
 To call a script, the following parameters are needed:
diff --git a/tests/drivers/user.sh b/tests/drivers/user.sh
index 487178a3..7f6e4c2a 100644
--- a/tests/drivers/user.sh
+++ b/tests/drivers/user.sh
@@ -170,3 +170,16 @@ reset_bucket() {
     return 1
   fi
 }
+
+get_user_id() {
+  if [ "$DIRECT" == "true" ]; then
+    if [ "$DIRECT_AWS_USER_ID" == "" ]; then
+      log 2 "DIRECT_AWS_USER_ID is empty or not defined"
+      return 1
+    fi
+    echo "$DIRECT_AWS_USER_ID"
+    return 0
+  fi
+  echo "$AWS_ACCESS_KEY_ID"
+  return 0
+}
\ No newline at end of file
diff --git a/tests/test_rest_delete_bucket.sh b/tests/test_rest_delete_bucket.sh
index 0f71dc1a..39b9fd4d 100755
--- a/tests/test_rest_delete_bucket.sh
+++ b/tests/test_rest_delete_bucket.sh
@@ -142,7 +142,11 @@ source ./tests/drivers/put_object/put_object_rest.sh
   run setup_bucket "$bucket_name"
   assert_success
 
-  run send_rest_go_command "204" "-method" "DELETE" "-bucketName" "$bucket_name" "-signedParams" "x-amz-expected-bucket-owner:$AWS_USER_ID"
+  run get_user_id
+  assert_success
+  user_id=$output
+
+  run send_rest_go_command "204" "-method" "DELETE" "-bucketName" "$bucket_name" "-signedParams" "x-amz-expected-bucket-owner:$user_id"
   assert_success
 }
diff --git a/tests/test_rest_head_bucket.sh b/tests/test_rest_head_bucket.sh
index 5849f1d3..8eae10e4 100755
--- a/tests/test_rest_head_bucket.sh
+++ b/tests/test_rest_head_bucket.sh
@@ -64,6 +64,10 @@ source ./tests/drivers/create_bucket/create_bucket_rest.sh
   run setup_bucket "$bucket_name"
   assert_success
 
-  run head_bucket_rest_expect_success "$bucket_name" "EXPECTED_OWNER=$AWS_USER_ID"
+  run get_user_id
+  assert_success
+  user_id=$output
+
+  run head_bucket_rest_expect_success "$bucket_name" "EXPECTED_OWNER=$user_id"
   assert_success
 }
diff --git a/tests/test_rest_put_object.sh b/tests/test_rest_put_object.sh
index 5b32d190..8e21a30b 100755
--- a/tests/test_rest_put_object.sh
+++ b/tests/test_rest_put_object.sh
@@ -55,7 +55,7 @@ export RUN_USERS=true
 }
 
 @test "REST - PutObject with user permission - admin user" {
-  if [ "$SKIP_USERS_TEST" == "true" ]; then
+  if [ "$SKIP_USERS_TESTS" == "true" ]; then
     skip "skipping versity-specific users tests"
   fi
   run get_bucket_name "$BUCKET_ONE_NAME"
@@ -73,7 +73,7 @@ export RUN_USERS=true
 }
 
 @test "REST - PutObject with no permission - 'user' user" {
-  if [ "$SKIP_USERS_TEST" == "true" ]; then
+  if [ "$SKIP_USERS_TESTS" == "true" ]; then
     skip "skipping versity-specific users tests"
   fi
   run get_bucket_name "$BUCKET_ONE_NAME"
@@ -538,7 +538,7 @@ export RUN_USERS=true
   run bash -c "echo -n \"$payload_content\" > $TEST_FILE_FOLDER/$test_file"
   assert_success
 
-  run send_openssl_go_command "200" "-method" "PUT" "-payload" "$payload_content" "-bucketName" "$bucket_name" "-objectKey" "$test_file"
+  run send_openssl_go_command "200" "-method" "PUT" "-bucketName" "$bucket_name" "-objectKey" "$test_file" "-payload" "$payload_content"
   assert_success
 
   run download_and_compare_file "$TEST_FILE_FOLDER/$test_file" "$bucket_name" "$test_file" "$TEST_FILE_FOLDER/${test_file}_downloaded"
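
For reviewers, here is a standalone sketch of what the new `get_user_id` helper does: in direct mode it returns the canonical AWS user ID (and fails if it is unset), otherwise the access key ID doubles as the owner ID. The suite's `log` helper is stubbed, comparisons are relaxed to POSIX `=` so the sketch runs under plain `sh` as well as bash, and the key/ID values are made up for illustration.

```shell
# Stub of the test suite's log helper so the function runs standalone.
log() { echo "log $1: $2" >&2; }

# Mirrors the get_user_id helper added in tests/drivers/user.sh.
get_user_id() {
  if [ "$DIRECT" = "true" ]; then
    # Direct mode: tests run against real S3, so the canonical user ID is required.
    if [ "$DIRECT_AWS_USER_ID" = "" ]; then
      log 2 "DIRECT_AWS_USER_ID is empty or not defined"
      return 1
    fi
    echo "$DIRECT_AWS_USER_ID"
    return 0
  fi
  # Gateway mode: the access key ID serves as the bucket owner/user ID.
  echo "$AWS_ACCESS_KEY_ID"
  return 0
}

# Gateway mode (illustrative values):
DIRECT="false"
AWS_ACCESS_KEY_ID="gateway-key"
gateway_id=$(get_user_id)
echo "gateway: $gateway_id"   # prints "gateway: gateway-key"

# Direct mode (illustrative values):
DIRECT="true"
DIRECT_AWS_USER_ID="canonical-id-123"
direct_id=$(get_user_id)
echo "direct: $direct_id"     # prints "direct: canonical-id-123"
```

In the bats tests above, the helper is consumed via the usual `run get_user_id; assert_success; user_id=$output` pattern, which is what lets the same assertion pass both against the gateway and in direct mode.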