We have to account for moved objects as well
as copied objects so that they are balanced by
the respective `del_live_object` calls made
by the destructor.
However, since chunked_vector requires the
value_type to be nothrow_move_constructible,
the move constructor only counts the additional
live object; it does not modify _countdown or
throw an exception, as those apply only
to the default and copy constructors.
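For illustration, a minimal sketch of this counting pattern with a hypothetical test
value type (not the actual helper in the tree): only the default and copy constructors
consult the countdown and may throw, while the move constructor merely registers the
extra live object.
```
#include <cstddef>
#include <stdexcept>
#include <utility>

// Hypothetical value type sketching the rule described above.
struct counted_value {
    static inline std::size_t live_objects = 0;
    static inline std::size_t countdown = 0;   // stand-in for _countdown

    static void add_live_object() {
        if (countdown && --countdown == 0) {
            throw std::runtime_error("injected constructor failure");
        }
        ++live_objects;
    }

    counted_value() { add_live_object(); }                        // may throw
    counted_value(const counted_value&) { add_live_object(); }    // may throw
    counted_value(counted_value&&) noexcept { ++live_objects; }   // count only, never throw
    ~counted_value() { --live_objects; }                          // del_live_object equivalent
};

int main() {
    { counted_value a; counted_value b(std::move(a)); }   // both objects are destroyed
    return static_cast<int>(counted_value::live_objects); // 0 when constructions balance
}
```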
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Currently, if the fill ctor throws an exception,
the destructor won't be called, as the object is not
fully constructed yet.
Call the default ctor first (which doesn't throw)
to make sure the destructor will be called on exception.
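A minimal sketch of the pattern, with a hypothetical type rather than the actual
chunked_vector code: once the delegated-to default constructor completes, the object
counts as fully constructed, so its destructor runs even when the fill constructor
body throws.
```
#include <cstddef>
#include <vector>

struct buffer {
    buffer() noexcept = default;                   // never throws

    buffer(std::size_t n, char fill) : buffer() {  // delegate to the default ctor first
        _data.assign(n, fill);                     // may throw; ~buffer() still runs
    }

    ~buffer() {
        // cleanup that must not be skipped when the fill ctor throws
    }

private:
    std::vector<char> _data;
};
```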
Fixes scylladb/scylladb#18635
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Even when configured to not do any validation at all, the validator still did some. This small series fixes this and adds a test to check that validation levels in general are respected and that the validator doesn't validate more than it is asked to.
Fixes: #18662
(cherry picked from commit f6511ca1b0)
(cherry picked from commit e7b07692b6)
(cherry picked from commit 78afb3644c)
Refs #18667
Closes scylladb/scylladb#18724
* github.com:scylladb/scylladb:
test/boost/mutation_fragment_test.cc: add test for validator validation levels
mutation: mutation_fragment_stream_validating_filter: fix validation_level::none
mutation: mutation_fragment_stream_validating_filter: add raises_error ctor parameter
Despite its name, this validation level still did some validation. Fix
this by short-circuiting the catch-all operator(), preventing any
validation when the user asked for none.
(cherry picked from commit e7b07692b6)
When set to false, no exceptions will be raised from the validator on
validation errors. Instead, it will just return false from the respective
validator methods. This makes testing simpler, as asserting on exceptions
is clunky.
When true (the default), the previous behaviour remains: any validation
error will invoke on_internal_error(), resulting in either std::abort()
or an exception.
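A toy model of the two behaviours from this series, using illustrative names rather
than the real mutation_fragment_stream_validating_filter API: validation_level::none
short-circuits every check, and raises_error selects between raising an error and
returning false.
```
#include <cassert>
#include <stdexcept>

enum class validation_level { none, partition_region, clustering_key };

struct validating_filter {
    validation_level level;
    bool raises_error = true;   // new parameter: false => report failures via return value

    bool operator()(bool fragment_is_valid) const {
        if (level == validation_level::none) {
            return true;                    // no validation at all when none was requested
        }
        if (!fragment_is_valid) {
            if (raises_error) {
                throw std::runtime_error("invalid mutation fragment stream");
            }
            return false;                   // testing-friendly, non-throwing path
        }
        return true;
    }
};

int main() {
    validating_filter off{validation_level::none};
    assert(off(false));                     // nothing is validated
    validating_filter lenient{validation_level::clustering_key, false};
    assert(!lenient(false));                // failure reported without throwing
}
```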
(cherry picked from commit f6511ca1b0)
when migrating to the uuid-based identifiers, the mapping from the
integer-based generation to the shard-id is preserved. we used to have
"gen % smp_count" for calculating the shard which is responsible for
hosting a given sstable. although this is not documented behavior, it is
handy when we try to correlate an sstable to a shard, typically when
looking at a performance issue.
in this change, a new subcommand is added to expose the connection
between an sstable and its "owner" shards.
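for illustration only (made-up numbers): the legacy mapping the subcommand exposes is
simply the integer generation modulo the number of shards.
```
#include <cstdio>

int main() {
    const unsigned smp_count = 8;            // shards on the node (illustrative)
    const long generations[] = {40, 41, 47};
    for (long gen : generations) {
        std::printf("generation %ld -> shard %ld\n", gen, static_cast<long>(gen % smp_count));
    }
}
```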
Fixes #16343
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes https://github.com/scylladb/scylladb/pull/16345
(cherry picked from commit 273ee36bee)
Fixes #18381
- [x] need to backport, because in production we need to figure out the mapping from an sstable identifier to the shard which "owns" it.
Closes scylladb/scylladb#18681
* github.com:scylladb/scylladb:
tools: Make sstable shard-of efficient by loading minimum to compute owners
test/cql-pytest/test_tools.py: test shard-of with a single partition
tools/scylla-sstable: add `scylla sstable shard-of` command
Getting the token() function first tries to find a schema for the underlying
table and continues with nullptr if there is none. Later, when creating
token_fct, the schema is passed as-is and referenced. If it's null, a crash
happens.
It used to throw before 5983e9e7b2 (cql3: test_assignment: pass optional
schema everywhere) on missing schema, but this commit changed the way
schema is looked up, so nullptr is now possible.
fixes: #18637
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
(cherry picked from commit df8a446437)
Closes scylladb/scylladb#18698
when migrating to the uuid-based identifiers, the mapping from the
integer-based generation to the shard-id is preserved. we used to have
"gen % smp_count" for calculating the shard which is responsible for
hosting a given sstable. although this is not documented behavior, it is
handy when we try to correlate an sstable to a shard, typically when
looking at a performance issue.
in this change, a new subcommand is added to expose the connection
between an sstable and its "owner" shards.
Fixes #16343
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#16345
(cherry picked from commit 273ee36bee)
When a compaction strategy uses garbage collected sstables to track
expired tombstones, do not use complete partition estimates for them;
instead, use a fraction of them based on the droppable tombstone ratio
estimate.
Fixes #18283
Signed-off-by: Lakshmi Narayanan Sreethar <lakshmi.sreethar@scylladb.com>
Closes scylladb/scylladb#18465
(cherry picked from commit d39adf6438)
Closes scylladb/scylladb#18656
The default limit of open file descriptors
per process may be too small for iotune on
certain machines with a large number of cores.
In such a case iotune reports a failure due to
the inability to create files or to set up the
seastar framework.
This change configures the limit of open file
descriptors before running iotune to ensure
that the failure does not occur.
The limit is set via 'resource.setrlimit()' in
the parent process. The limit is then inherited
by the child process.
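For reference, a sketch of the underlying POSIX call that Python's
resource.setrlimit() wraps (the exact limit value chosen by the script is
not quoted here); limits set in the parent are inherited by forked children.
```
#include <cstdio>
#include <sys/resource.h>

int main() {
    rlimit lim{};
    if (getrlimit(RLIMIT_NOFILE, &lim) != 0) {
        std::perror("getrlimit");
        return 1;
    }
    lim.rlim_cur = lim.rlim_max;            // raise the soft limit up to the hard limit
    if (setrlimit(RLIMIT_NOFILE, &lim) != 0) {
        std::perror("setrlimit");
        return 1;
    }
    return 0;                               // child processes started from here inherit it
}
```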
Signed-off-by: Patryk Wrobel <patryk.wrobel@scylladb.com>
(cherry picked from commit ec820e214c)
Closes scylladb/scylladb#18655
Mostly a set of fixes in the area of SSL handling.
* tools/cqlsh 99b2b777...9d49b385 (21):
> cqlshlib/sslhandling: fix logic of `ssl_check_hostname`
> cqlshlib/sslhandling.py: don't use empty userkey/usercert
> Dockerfile: noninteractive isn't enough for answering yet on apt-get
> fix cqlsh version print
> cqlshlib/sslhandling: change `check_hostname` default to False
> Introduce new ssl configuration for disabling check_hostname
> set the hostname in ssl_options.server_hostname when SSL is used
> issue-73 Fixed a bug where username and password from the credentials file were ignored.
> issue-73 Fixed a bug where username and password from the credentials file were ignored.
> issue-73
> github actions: update `cibuildwheel==v2.16.5`
> dist/debian: fix the trailer line format
> `COPY TO STDOUT` shouldn't put None where a function is expected
> Make cqlsh work with unix domain sockets
> Bump python-driver version
> dist/debian: add trailer line
> dist/debian: wrap long line
> Draft: explicit build-time package dependencies
> stop returning status_code=2 on schema disagreement
> Fix minor typos in the code
> Dockerfile: apt-get update and apt-get upgrade to get latest OS packages
Ref: #18590
Closes scylladb/scylladb#18652
This PR removes the incorrect information that the ScyllaDB Rust Driver is not GA.
In addition, it replaces "Scylla" with "ScyllaDB".
Fixes https://github.com/scylladb/scylladb/issues/16178
Closes scylladb/scylladb#16199
* github.com:scylladb/scylladb:
doc: remove the "preview" label from Rust driver
doc: fix Rust Driver release information
(cherry picked from commit 56c3515751)
Currently the default task_ttl_in_seconds is 0, but scylla.yaml changes
the value to 10.
Change task_ttl_in_seconds in scylla.yaml to 0, so that the defaults
are consistent. Comment it out.
Fixes: #16714.
(cherry picked from commit 67bbaad62e)
Closes scylladb/scylladb#18584
The direct failure detector design is simplistic. It sends pings
sequentially and times out listeners that reached the threshold (i.e.
didn't hear from a given endpoint for too long) in-between pings.
Given the sequential nature, the previous ping must finish so the next
ping can start. We time out pings that take too long. The timeout was
hardcoded and set to 300ms. This is too low for wide-area setups --
latencies across the Earth can indeed go up to 300ms. Three subsequent
timed-out pings to a given node were sufficient for the Raft listener to
"mark server as down" (the listener used a threshold of 1s).
Increase the ping timeout to 600ms, which should be enough even for
pinging the opposite side of the Earth, and make it tunable.
Increase the Raft listener threshold from 1s to 2s. Without the
increased threshold, one timed-out ping would be enough to mark the
server as down. Increasing it to 2s requires three timed-out pings, which
makes it more robust in the presence of transient network hiccups.
In the future we'll most likely want to decrease the Raft listener
threshold again, if we use Raft for data path -- so leader elections
start quickly after leader failures. (Faster than 2s). To do that we'll
have to improve the design of the direct failure detector.
Ref: scylladb/scylladb#16410
Fixes: scylladb/scylladb#16607
---
I tested the change manually using `tc qdisc ... netem delay`, setting
network delay on local setup to ~300ms with jitter. Without the change,
the result is as observed in scylladb/scylladb#16410: interleaving
```
raft_group_registry - marking Raft server ... as dead for Raft groups
raft_group_registry - marking Raft server ... as alive for Raft groups
```
happening once every few seconds. The "marking as dead" happens whenever
we get three subsequent failed pings, which happens with a certain (high)
probability depending on the latency jitter. Then as soon as we get a
successful ping, we mark the server back as alive.
With the change, the phenomenon no longer appears.
(cherry picked from commit 8df6d10e88)
Closes scylladb/scylladb#18559
More than three years ago, in issue #7949, we noticed that trying to
set a `map<ascii, int>` from JSON input (i.e., using INSERT JSON or the
fromJson() function) fails - the ascii key is incorrectly parsed.
We fixed that issue in commit 75109e9519
but unfortunately, did not do our due diligence: We did not write enough
tests inspired by this bug, and failed to discover that actually we have
the same bug for many other key types, not just for "ascii". Specifically,
the following key types have exactly the same bug:
* blob
* date
* inet
* time
* timestamp
* timeuuid
* uuid
Other types, like numbers or booleans, worked "by accident" - instead of
parsing them as a normal string, we asked the JSON parser to parse them
again after removing the quotes, and because unquoted numbers and
unquoted true/false happen to work in JSON, this didn't fail.
The fix here is very simple - for all *native* types (i.e., not
collections or tuples), the encoding of the key in JSON is simply a
quoted string - and removing the quotes is all we need to do and there's
no need to run the JSON parser a second time. Only for more elaborate
types - collections and tuples - do we need to run the JSON parser a
second time on the key string to build the more elaborate object.
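A minimal sketch of that idea with a hypothetical helper (not the actual Scylla code):
for native key types, the JSON map key is just a quoted string, so stripping the quotes
is sufficient.
```
#include <cassert>
#include <stdexcept>
#include <string>
#include <string_view>

std::string native_key_from_json(std::string_view quoted) {
    if (quoted.size() < 2 || quoted.front() != '"' || quoted.back() != '"') {
        throw std::invalid_argument("JSON object keys are always quoted strings");
    }
    return std::string(quoted.substr(1, quoted.size() - 2));  // the type parses the value itself
}

int main() {
    assert(native_key_from_json("\"2011-02-03\"") == "2011-02-03");  // e.g. a date key
    assert(native_key_from_json("\"127.0.0.1\"") == "127.0.0.1");    // e.g. an inet key
}
```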
This patch also includes tests for fromJson() reading a map with all
native key types, confirming that all the aforementioned key types
were broken before this patch, and that all key types (including the numbers
and booleans which worked even before this patch) work with this patch.
Fixes #18477.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
(cherry picked from commit 21557cfaa6)
Closes scylladb/scylladb#18522
The current text seems to suggest that `USING TIMEOUT` doesn't work with `DELETE` and `BATCH`. But that's wrong.
Closes scylladb/scylladb#18424
(cherry picked from commit c1146314a1)
The event is used in a loop.
Found by clang-tidy:
```
streaming/stream_result_future.cc:80:49: warning: 'event' used after it was moved [bugprone-use-after-move]
listener->handle_stream_event(std::move(event));
^
streaming/stream_result_future.cc:80:39: note: move occurred here
listener->handle_stream_event(std::move(event));
^
streaming/stream_result_future.cc:80:49: note: the use happens in a later loop iteration than the move
listener->handle_stream_event(std::move(event));
^
```
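A self-contained illustration of the pattern behind the warning, with stand-in names
rather than the streaming code: moving the event inside the loop leaves it moved-from
for every listener after the first, so it has to be passed without moving.
```
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> listeners{"l1", "l2", "l3"};
    std::string event = "stream session completed";
    for (const auto& l : listeners) {
        // Buggy form: handle(std::move(event)) would leave `event` moved-from
        // (typically empty) for the second and third iterations.
        std::cout << l << " <- " << event << '\n';
    }
}
```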
Fixes #18332
Closes scylladb/scylladb#18333
(cherry picked from commit 1ca779d287)
When reclaiming memory from bloom filters, do not remove them from
_recognised_components, as that leads to the on-disk filter component
being left back on disk when the SSTable is deleted.
Fixes #18398
Signed-off-by: Lakshmi Narayanan Sreethar <lakshmi.sreethar@scylladb.com>
Closes scylladb/scylladb#18400
(cherry picked from commit 6af2659b57)
When a view update has both a local and remote target endpoint,
it extends the lifetime of its memory tracking semaphore units
only until the end of the local update, while the resources are
actually used until the remote update finishes.
This patch changes the semaphore transferring so that in case
of both local and remote endpoints, both view updates share the
units, causing them to be released only after the update that
takes longer finishes.
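The shape of the fix, sketched with standard library types instead of the actual
view-update code: both the local and the remote update paths hold the same shared
handle to the units, so they are released only when the last of the two finishes.
```
#include <cstdio>
#include <memory>

struct units {
    ~units() { std::puts("memory units released"); }
};

int main() {
    auto shared = std::make_shared<units>();
    auto local_update  = [u = shared] { std::puts("local update done"); };
    auto remote_update = [u = shared] { std::puts("remote update done"); };
    local_update();    // units still held by the remote path
    remote_update();   // released only after the last holder goes away
}
```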
Fixes #17890
(cherry picked from commit 9789a3dc7c)
Refs #17891
Closes scylladb/scylladb#18108
```
sstables/storage.cc:152:21: warning: 'file_path' used after it was moved [bugprone-use-after-move]
remove_file(file_path).get();
^
sstables/storage.cc:145:64: note: move occurred here
auto w = file_writer(output_stream<char>(std::move(sink)), std::move(file_path));
```
It's a regression hit when a TOC is found for a new sstable and we try to delete the temporary TOC.
courtesy of clang-tidy.
Fixes #18323.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
(cherry picked from commit 2fba1f936d)
Closes scylladb/scylladb#18382
in handler.cc, `make_non_overlapping_ranges()` references a moved-from
instance of `ColumnSlice` when formatting the error message of an
exception raised on unexpected input. the move constructor of
`ColumnSlice` is default-generated, so the members' move constructors
are used to construct the new instance, and dereferencing the moved-from
instance could lead to undefined behavior.
in this change, in order to avoid the use-after-move, let's keep
a copy of the referenced member variables and reference them when
formatting the error message in the exception.
this use-after-move issue was introduced in 822a315dfa, which implemented
the `get_multi_slice` verb and this piece in the first place. since both 5.2
and 5.4 include this commit, we should backport this change to them.
Refs 822a315dfa
Fixes #18356
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
(cherry picked from commit 1ad3744edc)
Closes scylladb/scylladb#18374
The new MX-native validator, which validates the index in tandem with the data file, was discovered to print false-positive errors related to range tombstones and promoted-index positions.
This series fixes that. But first, it refactors the scrub-related tests. These are currently dominated by boiler-plate code. They are hard to read and hard to write. In the first half of the series, a new scrub test framework is introduced, which moves all the boiler-plate to a central place, allowing the tests to focus on just the aspect of scrub being tested.
Then, all the found bugs in validate() are fixed, and finally a new test, checking validate() with a valid sstable, is introduced.
This PR backports https://github.com/scylladb/scylladb/pull/16327.
Fixes: https://github.com/scylladb/scylladb/issues/16326
Closes scylladb/scylladb#18404
* github.com:scylladb/scylladb:
test/boost/sstable_compaction_test: add validation test with valid sstable
sstables/mx/reader: validate(): print trace message when finishing the PI block
sstables/mx/reader: validate(): make index-data PI position check message consistent
sstables/mx/reader: validate(): only load the next PI block if current is exhausted
sstables/mx/reader: validate(): reset the current PI block on partition-start
sstables/mx/reader: validate(): consume_range_tombstone(): check for finished clustering block
sstables/mx/reader: validate(): fix validator for range tombstone end bounds
test/boost/sstable_compaction_test: drop write_corrupt_sstable() helper
test/boost/sstable_compaction_test: fix indentation
test/boost/sstable_compaction_test: use scrub_test_framework in sstable_scrub_quarantine_mode_test
test/boost/sstable_compaction_test: use scrub_test_framework in sstable_scrub_segregate_mode_test
test/boost/sstable_compaction_test: use scrub_test_framework in sstable_scrub_skip_mode_test
test/boost/sstable_compaction_test: use scrub_test_framework in sstable_scrub_validate_mode_test
test/boost/sstable_compaction_test: introduce scrub_test_framework
test/lib/random_schema: add uncompatible_timestamp_generator()
Add a positive test, as it turns out we had some false-positive
validation bugs in the validator and we need a regression test for this.
(cherry picked from commit 2335f42b2b)
The message says "index-data" but when printing the position, the data
position is printed first, causing confusion. Fix this and while at it,
also print the position of the partition start.
(cherry picked from commit 677be168c4)
validate() consumes the content of partitions in a consume loop.
Every time the consumer asks for a "break", the next PI block is loaded
and set on the validator, so it can validate that further clustering
elements are indeed from this block.
This loop assumed the consumer would only request interruption when the
current clustering block is finished. This is wrong: the consumer can
also request interruption when yielding is needed. When this is the
case, the next PI block doesn't have to be loaded yet, as the current one
is not exhausted yet. Check this condition before loading the next PI
block, to prevent false-positive errors due to a mismatched PI block
and clustering elements from the sstable.
(cherry picked from commit 5bff7c40d3)
It is possible that the next partition has no PI and thus there won't be
a new PI block to overwrite the old one. This will result in
false-positive messages about rows being outside of the finished PI
block.
(cherry picked from commit e073df1dbb)
Promoted index entries can be written on any clustering elements,
including range tombstones. So the validating consumer also has to check
whether the current expected clustering block is finished when
consuming a range tombstone. If it is, consumption has to be
interrupted, so that the outer loop can load up the next promoted index
block before moving on to the next clustering element.
(cherry picked from commit 2737899c21)
For range tombstone end-bounds, validate_fragment_order() should be
passed a null tombstone, not a disengaged optional. The latter means no
change in the current tombstone. This caused the end bound of range
tombstones to not make it to the validator, and the latter complained
later on partition-end that the partition has an unclosed range tombstone.
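A toy model of the distinction, with a stand-in tombstone type: a disengaged optional
means "leave the current tombstone unchanged", while an engaged optional holding a null
tombstone tells the validator the range tombstone has been closed.
```
#include <optional>

struct tombstone { long timestamp = 0; };            // timestamp == 0 models a null tombstone

int main() {
    std::optional<tombstone> no_change;              // bug: the end bound never reaches the validator
    std::optional<tombstone> closed = tombstone{};   // fix: the end bound is recorded, so no
                                                     // "unclosed range tombstone" at partition-end
    return no_change.has_value() || !closed.has_value();  // 0
}
```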
(cherry picked from commit f46b458f0d)
The test becomes a lot shorter and it now uses random schema and random
data.
Indentation is left broken, to be fixed in a future patch.
(cherry picked from commit c35092aff6)
The test becomes a lot shorter and it now uses random schema and random
data.
Indentation is left broken, to be fixed in a future patch.
(cherry picked from commit 3f76aad609)
The test becomes a lot shorter and it now uses random schema and random
data. The test is also split in two: one test for abort mode and one for
skip mode.
Indentation is left broken, to be fixed in a future patch.
(cherry picked from commit 5237e8133b)
The test becomes a lot shorter and it now uses random schema and random
data.
Indentation is left broken, to be fixed in a future patch.
(cherry picked from commit 76785baf43)
Scrub tests require a lot of boilerplate code to work. This has a lot of
disadvantages:
* Tests are long
* The "meat" of the test is lost between all the boiler-plate; it is
hard to glean what a test actually does
* Tests are hard to write, so we have only a few of them and they test
multiple things.
* The boiler-plate differs slightly from test to test.
To solve this, this patch introduces a new class, `scrub_test_framework`,
which is a central place for all the boiler-plate code needed to write
scrub-related tests. In the next patches, we will migrate scrub-related
tests to this class.
(cherry picked from commit b6f0c4efa0)
Currently, when dividing memory tracked for a batch of updates
we do not take into account the overhead that we have for processing
every update. This patch adds the overhead for single updates
and joins the memory calculation path for batches and their parts
so that both use the same overhead.
Fixes #17854
(cherry picked from commit efcb718e0a)
Closes scylladb/scylladb#18107
Currently, we use the sum of the estimated_partitions from each
participant node as the estimated_partitions for sstable produced by
repair. This way, the estimated_partitions is the biggest possible
number of partitions repair would write.
Since repair will write only the difference between repair participant
nodes, using the biggest possible estimation will overestimate the
partitions written by repair, most of the time.
The problem is that overestimated partitions make the bloom filter
consume more memory. This has been observed to cause OOM in the field.
This patch changes the estimation to use a fraction of the average
partitions per node instead of the sum. It is still not a perfect estimation,
but it already improves memory usage significantly.
Fixes #18140
Closes scylladb/scylladb#18141
(cherry picked from commit 642f9a1966)
In testing, we've observed multiple cases where nodes would fail to
observe updated application states of other nodes in gossiper.
For example:
- in scylladb/scylladb#16902, a node would finish bootstrapping and enter
NORMAL state, propagating this information through gossiper. However,
other nodes would never observe that the node entered NORMAL state,
still thinking that it is in joining state. This would lead to further
bad consequences down the line.
- in scylladb/scylladb#15393, a node got stuck in bootstrap, waiting for
schema versions to converge. Convergence would never be achieved and the
test eventually timed out. The node was observing outdated schema state
of some existing node in gossip.
I created a test that would bootstrap 3 nodes, then wait until they all
observe each other as NORMAL, with timeout. Unfortunately, thousands of
runs of this test on different machines failed to reproduce the problem.
After banging my head against the wall failing to reproduce, I decided
to sprinkle randomized sleeps across multiple places in gossiper code
and finally: the test started catching the problem in about 1 in 1000
runs.
With additional logging and additional head-banging, I determined
the root cause.
The following scenario can happen, 2 nodes are sufficient, let's call
them A and B:
- Node B calls `add_local_application_state` to update its gossiper
state, for example, to propagate its new NORMAL status.
- `add_local_application_state` takes a copy of the endpoint_state, and
updates the copy:
```
auto local_state = *ep_state_before;
for (auto& p : states) {
auto& state = p.first;
auto& value = p.second;
value = versioned_value::clone_with_higher_version(value);
local_state.add_application_state(state, value);
}
```
`clone_with_higher_version` bumps `version` inside
gms/version_generator.cc.
- `add_local_application_state` calls `gossiper.replicate(...)`
- `replicate` works in 2 phases to achieve exception safety: in the first
phase it copies the updated `local_state` to all shards into a
separate map. In the second phase the values from the separate map are used to
overwrite the endpoint_state map used for gossiping.
Due to the cross-shard calls of the first phase, there is a yield before
the second phase. *During this yield* the following happens:
- `gossiper::run()` loop on B executes and bumps node B's `heart_beat`.
This uses the monotonic version_generator, so it uses a higher version
than the ones we used for the states added above. Let's call this new version
X. Note that X is larger than the versions used by application_states
added above.
- now node B handles a SYN or ACK message from node A, creating
an ACK or ACK2 message in response. This message contains:
- old application states (NOT including the update described above,
because `replicate` is still sleeping before phase 2),
- but bumped heart_beat == X from `gossiper::run()` loop,
and sends the message.
- node A receives the message and remembers that the max
version across all states (including heart_beat) of node B is X.
This means that it will no longer request or apply states from node B
with versions smaller than X.
- `gossiper.replicate(...)` on B wakes up, and overwrites
endpoint_state with the ones it saved in phase 1. In particular it
reverts heart_beat back to a smaller value, but the larger problem is that it
saves updated application_states that use versions smaller than X.
- now when node B sends the updated application_states in ACK or ACK2
message to node A, node A will ignore them, because their versions are
smaller than X. Or node B will never send them, because whenever node
A requests states from node B, it only requests states with versions >
X. Either way, node A will fail to observe new states of node B.
If I understand correctly, this is a regression introduced in
38c2347a3c, which introduced a yield in
`replicate`. Before that, the updated state would be saved atomically on
shard 0, there could be no `heart_beat` bump in-between making a copy of
the local state, updating it, and then saving it.
With the description above, it's easy to make a consistent
reproducer for the problem -- introduce a longer sleep in
`add_local_application_state` before second phase of replicate, to
increase the chance that gossiper loop will execute and bump heart_beat
version during the yield. Further commit adds a test based on that.
The fix is to bump the heart_beat under local endpoint lock, which is
also taken by `replicate`.
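A toy model of the fix's shape, using a plain mutex instead of the gossiper's
per-endpoint lock: because the heart_beat bump and replicate() now serialize on the
same lock, a version assigned to an application state can no longer be overtaken by a
heart_beat generated between replicate's two phases.
```
#include <mutex>

struct local_endpoint {
    std::mutex lock;          // taken by both paths below
    long version_gen = 0;     // monotonic version generator
    long heart_beat = 0;
    long app_state_version = 0;

    void bump_heart_beat() {                   // gossiper::run() loop
        std::lock_guard<std::mutex> g(lock);
        heart_beat = ++version_gen;
    }

    void replicate_new_app_state() {           // add_local_application_state()
        std::lock_guard<std::mutex> g(lock);
        long v = ++version_gen;
        // ...copy to all shards (phase 1), then overwrite (phase 2); heart_beat
        // cannot advance past v while this lock is held...
        app_state_version = v;
    }
};

int main() {
    local_endpoint b;
    b.replicate_new_app_state();
    b.bump_heart_beat();
    return b.heart_beat > b.app_state_version ? 0 : 1;  // versions stay consistently ordered
}
```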
The PR also adds a regression test.
Fixes: scylladb/scylladb#15393
Fixes: scylladb/scylladb#15602
Fixes: scylladb/scylladb#16668
Fixes: scylladb/scylladb#16902
Fixes: scylladb/scylladb#17493
Fixes: scylladb/scylladb#18118
Ref: scylladb/scylla-enterprise#3720
(cherry picked from commit a0b331b310)
(cherry picked from commit 72955093eb)
Refs scylladb/scylladb#18184
Closes scylladb/scylladb#18245
* github.com:scylladb/scylladb:
test: reproducer for missing gossiper updates
gossiper: lock local endpoint when updating heart_beat
Added support to track and limit the memory usage by sstable components. A reclaimable component of an SSTable is one from which memory can be reclaimed. SSTables and their managers now track such reclaimable memory and limit the component memory usage accordingly. A new configuration variable defines the memory reclaim threshold. If the total memory of the reclaimable components exceeds this limit, memory will be reclaimed to keep the usage under the limit. This PR considers only the bloom filters as reclaimable and adds support to track and limit them as required.
The feature can be manually verified by doing the following :
1. run a single-node single-shard 1GB cluster
2. create a table with bloom-filter-false-positive-chance of 0.001 (to intentionally cause large bloom filter)
3. populate with tiny partitions
4. watch the bloom filter metrics get capped at 100MB
The default value of the `components_memory_reclaim_threshold` config variable which controls the reclamation process is `.1`. This can also be reduced further during manual tests to easily hit the threshold and verify the feature.
Fixes https://github.com/scylladb/scylladb/issues/17747
Backported from #17771 to 5.4.
Closes scylladb/scylladb#18248
* github.com:scylladb/scylladb:
test_bloom_filter.py: disable reclaiming memory from components
sstable_datafile_test: add tests to verify auto reclamation of components
test/lib: allow overriding available memory via test_env_config
sstables_manager: support reclaiming memory from components
sstables_manager: store available memory size
sstables_manager: add variable to track component memory usage
db/config: add a new variable to limit memory used by table components
sstable_datafile_test: add testcase to verify reclamation from sstables
sstables: support reclaiming memory from components
Regression test for scylladb/scylladb#17493.
(cherry picked from commit 72955093eb)
Backport note: removed `timeout` parameter passed to `server_add`,
missing on this branch. (If server adding hangs, it will timeout after
`TOPOLOGY_TIMEOUT` from scylla_cluster.py)
Removed `force_gossip_join_boot` error injection from test, not present
in this branch. Starting nodes with `experimental_features` disabled.
Added missing `handle_state_normal.*finished` message.
By default, the suitename in the junit files generated by pytest
is `pytest` for all suites instead of the real suite name, e.g. `topology_experimental_raft`.
With this change, the junit files will use the real suitename.
This change doesn't affect the Test Report in Jenkins, but it was
raised as part of the other task of publishing the test results to
elasticsearch (https://github.com/scylladb/scylla-pkg/pull/3950),
where we parse the XMLs and need the correct suitename.
Closes scylladb/scylladb#18172
(cherry picked from commit 223275b4d1)