Commit Graph

43684 Commits

Author SHA1 Message Date
Tomasz Grabiec
8fbfd595bb locator: load_sketch: Optimize pick()/unload()
They are executed frequently during tablet scheduling. Currently, they
have a time complexity of O(N*log(N)) in terms of shard count. With
large shard counts, that incurs significant overhead.

This patch optimizes them down to O(log(N)).
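
For illustration, a minimal sketch of one way to get O(log(N)) pick()/unload(), assuming shards are kept in an ordered-by-load set (names are illustrative, not the actual locator::load_sketch code):

```cpp
#include <set>
#include <cstdint>

// Illustrative only: shards ordered by load, so finding the least-loaded
// shard and updating a shard's load are both O(log N) tree operations.
struct shard_load {
    unsigned shard;
    uint64_t load;
    bool operator<(const shard_load& o) const {
        return load != o.load ? load < o.load : shard < o.shard;
    }
};

class load_sketch {
    std::set<shard_load> _by_load; // unique (load, shard) pairs
public:
    explicit load_sketch(unsigned shards) {
        for (unsigned s = 0; s < shards; ++s) {
            _by_load.insert({s, 0});
        }
    }
    unsigned pick() { // O(log N): take the least-loaded shard, bump its load
        auto it = _by_load.begin();
        shard_load s = *it;
        _by_load.erase(it);
        s.load += 1; // account for the tablet we just placed
        _by_load.insert(s);
        return s.shard;
    }
    void unload(unsigned shard, uint64_t load) { // O(log N): undo one placement
        auto it = _by_load.find({shard, load});
        if (it != _by_load.end()) {
            shard_load s = *it;
            _by_load.erase(it);
            s.load -= 1;
            _by_load.insert(s);
        }
    }
};
```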
2024-07-31 11:38:17 +02:00
Tomasz Grabiec
d0b0f95849 locator: load_sketch: Introduce load_type 2024-07-31 11:38:17 +02:00
Tomasz Grabiec
8f3b623144 test: perf: tablet_load_balancing: Report total tablet counts 2024-07-31 11:38:17 +02:00
Tomasz Grabiec
662a0ff038 test: perf: tablet_load_balancing: Print run parameters in the single simulation case too 2024-07-31 11:38:16 +02:00
Tomasz Grabiec
a040404875 test: perf: tablet_load_balancing: Report time it took to schedule migrations 2024-07-31 11:38:16 +02:00
Tomasz Grabiec
ae7fd80554 tablets: load_balancer: Log table load stats after each migration 2024-07-31 11:38:16 +02:00
Tomasz Grabiec
b8996a0f59 tablets: load_balancer: Log per-shard load distribution in debug level 2024-07-31 11:38:16 +02:00
Tomasz Grabiec
469e2f3f90 tablets: load_balancer: Improve per-table balance
The tablet load balancer tries to equalize tablet load between shards
by moving tablets. Currently, the load balancer assumes that each
tablet has the same hotness. This may not be true, and some tables may
be hotter than others. If some nodes end up getting more tablets of
the hot table, we can end up with request load imbalance and reduced
performance.

In 79d0711c7e we implemented a
mitigation for the problem by randomly choosing the table whose tablet
replica should be moved. This should improve fairness of
movement. However, this proved to not be enough to get a good
distribution of tablets.

This change improves candidate selection to not rely on randomness
but rather to evaluate candidates with respect to their impact on load
imbalance.  Also, if there is no good candidate, we consider picking
other source shards, not just the most-loaded one. This is helpful
because when finishing a node drain we get just a few candidates per
shard, all of which may belong to a single table, and the destination
may already be overloaded with that table. Another shard may contain
tablets of another table which is not yet overloaded on the
destination. And the shards may have similar load, so it doesn't
matter much which shard we choose to unload.

We also consider other destinations, not just the least-loaded
one. This helps when draining nodes and the source node has few shard
candidates. Shards on the destination may have similar load, so there
is more than one good destination candidate. By limiting ourselves to
a single shard, we increase the chance that we'll overload the table on
that shard.
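
To illustrate the idea (hypothetical names; the real balancer evaluates richer state than this), candidate moves can be scored by their simulated effect on imbalance and the best one taken:

```cpp
#include <vector>
#include <algorithm>

// Hypothetical: a candidate move and its simulated effect on per-table
// shard imbalance if applied.
struct candidate {
    unsigned src_shard;
    unsigned dst_shard;
    unsigned table;
    double imbalance_delta; // negative = the move reduces imbalance
};

// Pick the move that reduces imbalance the most, instead of a random
// table moved from the most-loaded shard to the least-loaded shard.
// Assumes the candidate list is non-empty.
const candidate& pick_best(const std::vector<candidate>& candidates) {
    return *std::min_element(candidates.begin(), candidates.end(),
        [](const candidate& a, const candidate& b) {
            return a.imbalance_delta < b.imbalance_delta;
        });
}
```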

The algorithm was evaluated using "scylla perf-load-balancing", which
simulates a sequence of 8 node bootstraps and decommissions for
different node and shard counts, RFs, and tablet counts.

For example, for the following parameters:

  params: {iterations=8, nodes=5, tablets1=128 (2.4/sh), tablets2=512 (9.6/sh), rf1=3, rf2=3, shards=32}

The results are:

After:

  Overcommit       : init : {table1={shard=1.25 (best=1.25), node=1.00}, table2={shard=1.04 (best=1.04), node=1.00}}
  Overcommit       : worst: {table1={shard=1.50 (best=1.25), node=1.02}, table2={shard=1.12 (best=1.04), node=1.01}}
  Overcommit       : last : {table1={shard=1.25 (best=1.25), node=1.00}, table2={shard=1.04 (best=1.04), node=1.00}}

Before:

  Overcommit (old) : init : {table1={shard=1.25 (best=1.25), node=1.00}, table2={shard=1.04 (best=1.04), node=1.00}}
  Overcommit (old) : worst: {table1={shard=4.00 (best=1.25), node=1.81}, table2={shard=1.25 (best=1.04), node=1.11}}
  Overcommit (old) : last : {table1={shard=2.50 (best=1.25), node=1.41}, table2={shard=1.25 (best=1.04), node=1.05}}

So shard overcommit for table1 was reduced from 4 to 1.5. Overcommit
of 4 means that the most-loaded shard has 4 times more tablets than
the average per-shard load in the cluster.
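
For reference, a small sketch of the metric as described here (not the test's actual reporting code):

```cpp
#include <vector>
#include <algorithm>
#include <numeric>
#include <cstddef>

// Shard overcommit: the most-loaded shard's tablet count divided by
// the average per-shard load in the cluster. Assumes a non-empty input.
double shard_overcommit(const std::vector<size_t>& tablets_per_shard) {
    size_t max_load = *std::max_element(tablets_per_shard.begin(),
                                        tablets_per_shard.end());
    double avg = std::accumulate(tablets_per_shard.begin(),
                                 tablets_per_shard.end(), 0.0)
               / tablets_per_shard.size();
    return max_load / avg; // 4.0 => hottest shard holds 4x the average
}
```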

Also, node overcommit for table1 was reduced from 1.81 to 1.02.

The magnitude of improvement depends greatly on the test configuration,
i.e. on topology and tablet distribution.

The algorithm is not perfect; it finds a local optimum. In the above
test, the overcommit of 1.5 is not the best possible (1.25).

One of the reasons why the current algorithm doesn't achieve the best
distribution is that it works with a single movement at a time, and
replication constraints limit the choice of destinations. Viable
destinations for remaining candidates may be only on nodes which are
not least-loaded, so we won't be able to fill the least-loaded
node. Doing so would require a more complex movement: moving a
tablet from one of the destination nodes which doesn't have a replica
on the least-loaded node, and then replacing it with the candidate from
the source node.

Another limitation is that the algorithm can only fix balance by
moving tablets away from the most-loaded nodes, and it only does so
when there is imbalance between nodes. So it cannot fix imbalance
which is already present within nodes if there is not much to move
because node loads are similar. It is designed not to make the
imbalance worse, so it works well if we started in good shape.

Fixes #16824
2024-07-31 11:38:16 +02:00
Tomasz Grabiec
b7661aa6c9 tablets: load_balancer: Extract check_convergence()
Will be reused when evaluating different targets for migration in later
stages.

The refactoring drops the updating of _stats.for_dc(dc).stop_no_candidates,
and we now update _stats.for_dc(dc).stop_load_inversion in both cases
where the convergence check may fail. The reason is that stat updates
must happen outside check_convergence(), since the new use case should
not update those stats (it doesn't stop balancing, it just drops
candidates). Propagating the information needed to distinguish the two
cases would be a burden. But it's not necessary, since both cases are
actually load-inversion cases, one pre-migration and the other
post-migration, so we don't need the distinction.

It's actually wrong to increment stop_no_candidates, since there may
still be candidates; it's the load which is inverted.
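
A sketch of the resulting shape, with illustrative names and simplified signatures: check_convergence() only reports, and the callers bump the load-inversion counter themselves:

```cpp
// Illustrative shape only; the real signatures in the load balancer differ.
struct stats {
    unsigned stop_load_inversion = 0;
};

// check_convergence() is side-effect free: it only reports whether the
// move would still reduce imbalance.
bool check_convergence(double src_load, double dst_load) {
    return dst_load < src_load;
}

void balance_step(stats& st, double src_load, double dst_load) {
    if (!check_convergence(src_load, dst_load)) {
        st.stop_load_inversion++; // both failure sites are load-inversion cases
        return;
    }
    // ... schedule the migration ...
}
```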
2024-07-31 11:26:11 +02:00
Tomasz Grabiec
41e643ddb9 tablets: load_balancer: Extract nodes_by_load_cmp
Will be reused in a different place.
2024-07-31 11:26:11 +02:00
Tomasz Grabiec
8a7257971d tablets: load_balancer: Maintain tablet count per table 2024-07-31 11:26:11 +02:00
Tomasz Grabiec
4e4f13ac9d tablets: load_balancer: Reuse src_node_info 2024-07-31 11:26:11 +02:00
Tomasz Grabiec
71b8d6b7aa test: perf: tablet_load_balancing: Print warnings about bad overcommit 2024-07-31 11:26:11 +02:00
Tomasz Grabiec
0d50a028a5 test: perf: tablet_load_balancing: Allow running a single simulation 2024-07-31 11:26:11 +02:00
Tomasz Grabiec
3f3660c3fe test: perf: tablet_load_balancing: Report best possible shard overcommit 2024-07-31 11:26:11 +02:00
Tomasz Grabiec
c89a320925 test: perf: tablet_load_balancing: Report global shard overcommit
Rather than maximum per-node shard overcommit. Global shard overcommit
is a better metric since we want to equalize global load not just
per-node load.
2024-07-31 11:26:11 +02:00
Pavel Emelyanov
9214aecbe7 storage_service: Remove orphan forward declaration of a method
The start_sys_dist_ks() itself was removed by bc051387c5

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes scylladb/scylladb#19928
2024-07-30 16:17:49 +03:00
Benny Halevy
e58ca8c44b service_level_controller: stop: always call subscription on_abort
We want to call `service_level_controller::do_abort()` in all cases.
The current code (introduced in
535e5f4ae7)
calls do_abort only if abort was not yet requested; however, since
it does so by checking the subscription's bool operator,
it misses the case where abort was already requested
before the subscription took place (in the service_level_controller
ctor).

With scylladb/seastar@470b539b1c and
scylladb/seastar@8ecce18c51
we can just unconditionally call the subscription's `on_abort`
method, which ensures only-once semantics even if abort
was already requested at subscription time.
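
A self-contained model of the only-once semantics this relies on (illustrative; the real types are seastar's abort_source::subscription and the controller's do_abort()):

```cpp
#include <functional>
#include <utility>

// Minimal model of a subscription whose on_abort() runs the callback
// at most once, mirroring the seastar behavior referenced above.
class subscription {
    std::function<void()> _on_abort;
public:
    explicit subscription(std::function<void()> f) : _on_abort(std::move(f)) {}
    void on_abort() {
        if (auto f = std::exchange(_on_abort, nullptr)) {
            f(); // first call runs the callback; later calls are no-ops
        }
    }
};

// stop() can now call on_abort() unconditionally: if abort was already
// requested (and the callback already consumed), this does nothing.
void stop(subscription& sub) {
    sub.on_abort();
}
```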

Fixes scylladb/scylladb#19075

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes scylladb/scylladb#19929
2024-07-30 13:23:17 +03:00
Kefu Chai
35394c3f9a docs/dev: fix a typo
remove the extraneous "is".

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#19902
2024-07-30 10:46:25 +03:00
Pavel Emelyanov
97154b0671 Merge 'mapreduce_service: complete coroutinization' from Avi Kivity
mapreduce_service was previously coroutinized, but only partially. This
series completes the coroutinization and eliminates the remaining continuation chains.

None of this code is performance sensitive as it runs at the super-coordinator level
and is amortized over a full scan of the entire table.

No backport needed as this is a cleanup.

Closes scylladb/scylladb#19913

* github.com:scylladb/scylladb:
  mapreduce_service: reindent
  mapreduce_service: coroutinize retrying_dispatcher::dispatch_to_node()
  mapreduce_service: coroutinize dispatch() inner lambda
2024-07-30 10:44:34 +03:00
Nadav Har'El
d293a5787f alternator: exclude CDC log table from ListTables
The Alternator command ListTables is supposed to list actual tables
created with CreateTable, and should not list things like materialized
views (created for GSI or LSI) or CDC log tables.

We already properly excluded materialized views from the list - and
had the tests to prove it - but forgot both the exclusion and the testing
for CDC log tables - so creating a table xyz with streams enabled would
cause ListTables to also list "xyz_scylla_cdc_log".

This patch fixes both oversights: it adds the code to exclude CDC logs
from the output of ListTables, and adds a test which reproduces the bug
before this fix and verifies that the fix works.
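
A hypothetical sketch of the exclusion (helper names are illustrative; Scylla's actual CDC helpers differ):

```cpp
#include <string>
#include <vector>
#include <algorithm>

// Illustrative: CDC log tables carry a fixed suffix on the base name.
bool is_cdc_log_name(const std::string& name) {
    static const std::string suffix = "_scylla_cdc_log";
    return name.size() > suffix.size()
        && name.compare(name.size() - suffix.size(), suffix.size(), suffix) == 0;
}

// Hide anything that is not a user-created table: both materialized
// views and CDC log tables are filtered out of the listing.
std::vector<std::string> list_tables(const std::vector<std::string>& all,
                                     const std::vector<std::string>& views) {
    std::vector<std::string> out;
    for (const auto& t : all) {
        bool is_view = std::find(views.begin(), views.end(), t) != views.end();
        if (!is_view && !is_cdc_log_name(t)) {
            out.push_back(t);
        }
    }
    return out;
}
```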

Fixes #19911.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#19914
2024-07-30 10:43:29 +03:00
Nadav Har'El
ca8b91f641 test: increase timeouts for /localnodes test
In commit bac7c33313 we introduced a new
test for the Alternator "/localnodes" request, checking that a node
that is still joining does not get returned. The test used what I
thought were "very high" timeouts - we had a timeout of 10 seconds
for starting a single node, and injected a 20-second sleep to leave
us 10 seconds after the first sleep.

But the test failed in one extremely slow run (a debug build on
aarch64), where starting just a single node took more than 15 seconds!

So in this patch I increase the timeouts significantly: we increase
the wait for the node to 60 seconds, and the sleep injection to
120 seconds. These should definitely be enough for anyone (famous
last words...).

The test doesn't actually wait for these timeouts, so the ridiculously
high timeouts shouldn't affect the normal runtime of this test.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>

Closes scylladb/scylladb#19916
2024-07-30 10:41:48 +03:00
Avi Kivity
52ee6127dd Merge 'Use boto3 in object_store test to list bucket' from Pavel Emelyanov
There's a test in the object_store suite that verifies the contents of a bucket. It does so with a plain HTTP request, but unfortunately this doesn't work -- even local minio uses a restricted bucket, and a plain HTTP request results in a 403 (Forbidden) error code. The test doesn't check this and continues working with an empty list of objects, which, in turn, is what it expects to see.

The fix is to use boto3. With it, the access-key/secret pair is picked up and listing the bucket finally works.

Closes scylladb/scylladb#19889

* github.com:scylladb/scylladb:
  test/object_store: Use boto3.resource to list bucket
  test/object_store: Add get_s3_resource() helper
2024-07-29 13:49:50 +03:00
Pavel Emelyanov
8b1a106b62 test/object_store: Use boto3.resource to list bucket
Instead of a plain HTTP request, use the power of the boto3 package. The
recently added get_s3_resource() facilitates creating one.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-07-29 12:29:16 +03:00
Pavel Emelyanov
172e1cb0da test/object_store: Add get_s3_resource() helper
It creates a boto3.resource object that points to the endpoint maintained
by the s3_server argument (which tests obtain via a fixture). This allows
using boto3 to access the S3 bucket on the local minio server.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-07-29 12:25:57 +03:00
Kefu Chai
1094c71282 cql3/statement: use compile-time format string
instead of using fmt::runtime, use compile-time format strings in
order to detect bad format strings, missing format arguments,
or arguments which are not formattable, at compile time.
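
a small example of the difference (generic fmt usage, not the patched Scylla code):

```cpp
#include <fmt/core.h>
#include <string>

int main() {
    // Literal format strings are validated at compile time (with C++20):
    // fmt::format("{} and {}", 1);      // would fail to compile
    std::string a = fmt::format("{} and {}", 1, 2);

    // fmt::runtime() opts out of the compile-time check; errors in the
    // string surface only at run time, as fmt::format_error.
    std::string pattern = "{} and {}";
    std::string b = fmt::format(fmt::runtime(pattern), 1, 2);
    return a == b ? 0 : 1;
}
```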

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#19901
2024-07-28 21:54:43 +03:00
Benny Halevy
be880ab22c Update seastar submodule
* seastar 67065040...a7d81328 (30):
  > reactor: Initialize _aio_pollfd later
  > abortable_fifo: fix a typo in comment
  > net: Expose DNS error category
  > pollable_fd_state: use default-generated dtor
  > perftune: tune tcp_mem
  > scripts/perftune.py: clock source tweaking: special case Amazon and Google KVM virtualizations
  > abort_source: subscription: keep callback function alive after abort
  > github: disable ccache when building with C++ modules
  > github: add enable-ccache input to test.yaml
  > pollable_fd_state: Mark destructor protected and make non-virtual
  > reactor: Mark .configure() private
  > reactor: Set aio_nowait_supported once
  > reactor: Add .no_poll_aio to reactor_config
  > reactor: Move .max_poll_time on reactor_config
  > reactor: Move .task_quota on reactor_config
  > reactor: Move .strict_o_direct on reactor_config
  > reactor: Move .bypass_fsync on reactor_config
  > reactor: Move .max_task_backlog on reactor_config
  > reactor: Move .force_io_getevents_syscall on reactor_config
  > reactor: Move .have_aio_fsync on reactor_config
  > reactor: Move .kernel_page_cache on reactor_config
  > reactor: Move .handle_sigint on reactor_config
  > reactor_backend: Construct _polling_io from reactor config
  > reactor: Move config when constructing
  > reactor: Use designated initializers to set up reactor_config
  > native-stack: use queue::pop_eventually() in listener::accept()
  > abort_source: subscription: allow calling on_abort explicitly
  > file: document that close() returns the file object to uninitialized state
  > code-cleanup: do not include 'smp.hh' in 'reactor.hh'
  > code-cleanup: remove redundant includes of smp.hh

Closes scylladb/scylladb#19912
2024-07-28 21:04:45 +03:00
Kefu Chai
36f5032b2d db: correct the doxygen comment
the parameter names do not match the ones we are using.
these comments were inherited from Origin, but we failed to update
them accordingly.

in this change, the comments are updated to reflect the function
signatures.

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#19900
2024-07-28 18:24:57 +03:00
Kefu Chai
67e07bee25 build: cmake: use per-mode build dir
The build_unified.sh script accepts a --build-dir option, which
specifies the directory used for storing temporary files extracted
from tarballs defined by the --pkgs option. When performing parallel
builds of multiple modes, it's crucial that each build uses a unique
build directory. Reusing the same build directory for different modes
can lead to conflicts, resulting in build failures or, more seriously,
the creation of tarballs containing corrupted files.

so, in this change, we specify a different directory for each mode,
so that they don't share the same one.

Refs scylladb/scylladb#2717
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#19905
2024-07-28 18:11:37 +03:00
Avi Kivity
149a47088e mapreduce_service: reindent 2024-07-28 17:55:51 +03:00
Avi Kivity
0dd03789f3 mapreduce_service: coroutinize retrying_dispatcher::dispatch_to_node()
Simplify the function by converting it to a coroutine.

Note that while the final co_return co_await looks like it's in a loop
(and therefore each await would introduce an O(n) allocation), it really
isn't - we retry at most once.
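
A self-contained model of that shape (ordinary functions standing in for the seastar coroutine; names are illustrative):

```cpp
#include <stdexcept>
#include <string>

struct transport_error : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Stand-in for the RPC call; in Scylla this is a seastar coroutine and
// the calls below are co_await-ed.
std::string do_dispatch(int node, const std::string& q) {
    return "result from node " + std::to_string(node) + " for " + q;
}

// The retry-at-most-once shape: the second call is the last statement,
// so despite looking loop-like it cannot repeat.
std::string dispatch_to_node(int node, const std::string& q) {
    try {
        return do_dispatch(node, q); // coroutine form: co_return co_await ...
    } catch (const transport_error&) {
        return do_dispatch(node, q); // retry once after a transport failure
    }
}
```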
2024-07-28 17:54:01 +03:00
Avi Kivity
b019927a0e mapreduce_service: coroutinize dispatch() inner lambda
dispatch() is a coroutine, but the inner lambda that is executed per
node is still a continuation chain. Make it uniform by converting to
a coroutine.
2024-07-28 17:36:08 +03:00
Kefu Chai
ee80742c39 cql3: do not include unused headers
these unused includes were identified by clangd. see
https://clangd.llvm.org/guides/include-cleaner#unused-include-warning
for more details on the "Unused include" warning.

Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>

Closes scylladb/scylladb#19906
2024-07-28 17:29:07 +03:00
Benny Halevy
26abad23d9 sstable_directory: delete_atomically: allow sstables from multiple prefixes
Currently, delete_atomically can be called with
a list of sstables from mixed prefixes in two cases:
1. truncate: where we delete all the sstables in the table directory
2. tablet cleanup: similar to truncate but restricted to sstables in a
   single tablet replica

In both cases, it is possible that sstables in staging (or quarantine)
are mixed with sstables in the base directory.

Until a more comprehensive fix is in place
(see https://github.com/scylladb/scylladb/pull/19555),
this change just lifts the ban on atomic deletion
of sstables from different prefixes, acknowledging
that the implementation is not atomic across
prefixes.  This is better than crashing for now,
and can be backported more easily to branches
that support tablets, so that tablet migration can
be done safely in the presence of repair of
tables with views.

Refs scylladb/scylladb#18862

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes scylladb/scylladb#19816
2024-07-28 17:26:31 +03:00
Lakshmi Narayanan Sreethar
27b305b9d1 boost/bloom_filter_test: wait for total memory reclaimed update
The testcase `test_bloom_filter_reclaim_during_reload` checks the
SSTable manager's `_total_memory_reclaimed` against an expected value to
verify that a Bloom filter was reloaded. However, it does not wait for
the manager to update the variable, causing the check to fail if the
update has not occurred yet. Fix it by making the testcase wait until
the variable is updated to the expected value.
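
A minimal model of the fix (the real test is a seastar coroutine and yields with co_await rather than sleeping a thread; names are illustrative):

```cpp
#include <atomic>
#include <chrono>
#include <thread>
#include <cstddef>

// Poll until the manager has accounted the reload, instead of
// asserting on the counter immediately.
void wait_for_reclaimed(const std::atomic<size_t>& total_memory_reclaimed,
                        size_t expected) {
    while (total_memory_reclaimed.load() != expected) {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}
```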

Fixes #19879

Signed-off-by: Lakshmi Narayanan Sreethar <lakshmi.sreethar@scylladb.com>

Closes scylladb/scylladb#19883
2024-07-26 08:15:11 +03:00
Tomasz Grabiec
851da230c8 Merge 'db/view: drop view updates to replaced node marked as left' from Piotr Dulikowski
When a node that is permanently down is replaced, it is marked as "left", but it can still be a replica of some tablets. We also don't keep the IPs of nodes that have left, and the `node` structure for such a node returns an empty IP (all zeros) as the address.

This interacts badly with the view update logic. The base replica paired with the left node might decide to generate a view update. Because the storage proxy still uses IPs and not host IDs, the view update logic needs to obtain the view replica's IP and tell the storage proxy to write a view update to that node - so it chooses 0.0.0.0. The storage proxy then decides to write a hint towards this address - hinted handoff, on the other hand, operates on host IDs and not IPs, so it attempts to translate the IP back, which triggers an assertion since there is no replica with IP 0.0.0.0.

As a quick workaround for this issue, just drop view updates towards nodes whose IPs are all zeros. It would be more proper to keep the view updates as hints and replay them later to the new paired replica, but achieving that right now would require much more significant changes. For now, fixing a crash is more important than keeping views consistent with base replicas.
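
In miniature, the workaround looks like this (stand-in types; the real code uses gms::inet_address and the view-update path):

```cpp
#include <array>
#include <vector>
#include <cstdint>

using inet_address = std::array<uint8_t, 4>; // stand-in for gms::inet_address

// Filter out view-update targets whose address is all zeros
// (a replaced node that has "left" and whose IP is no longer known).
std::vector<inet_address> viable_targets(const std::vector<inet_address>& targets) {
    std::vector<inet_address> out;
    for (const auto& ip : targets) {
        if (ip == inet_address{}) {
            continue; // 0.0.0.0: drop the update rather than hint to nowhere
        }
        out.push_back(ip);
    }
    return out;
}
```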

In addition to the fix, this PR also includes a regression test heavily based on the test that @kbr-scylla prepared during his investigation of the issue.

Fixes: scylladb/scylladb#19439

This issue can cause multiple nodes to crash at once and the fix is quite small, so I think this justifies backporting it to all affected versions. 6.0 and 6.1 are affected. No need to backport to 5.4 as this issue only happens with tablets, and tablets are experimental there.

Closes scylladb/scylladb#19765

* github.com:scylladb/scylladb:
  test: regression test for MV crash with tablets during decommission
  db/view: drop view updates to replaced node marked as left
2024-07-25 11:47:14 +02:00
Botond Dénes
1bfe73c2ea Merge 'Order API endpoints registration in main' from Pavel Emelyanov
There are a few api::set_foo()-s left in main that are placed in ~~random~~ legacy order. This PR fixes that and makes a few more associated cleanups.

refs: #2737

Closes scylladb/scylladb#19682

* github.com:scylladb/scylladb:
  api: Unset cache_service endpoints on stop
  main: Don't ignore set_cache_service() future
  api: Move storage API few steps above
  api: Register token-metadata API next to token-metadata itself
  api: Do not return zero local host-id
  api: Move snitch API registration next to snitch itself
2024-07-25 09:59:38 +03:00
Pavel Emelyanov
456dbc122b api: Unset cache_service endpoints on stop
They currently stay registered long after the dependent services get
stopped. There's a need for batch unsetting (scylladb/seastar#1620), so
for now we have only this explicit listing :(

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-07-24 18:51:32 +03:00
Pavel Emelyanov
61fb0ad996 main: Don't ignore set_cache_service() future
The call itself seems to be in the wrong place -- there's no "cache service";
the API actually works on the database and snapshot_ctl. So it deserves
more cleanup, but at least don't throw the returned future<> away.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-07-24 18:51:32 +03:00
Pavel Emelyanov
e1eb48f9c2 api: Move storage API few steps above
The sequence currently is

sharded<storage_service>.start()
sharded<query_processor>.invoke_on_all(start_remote)
api::set_server_storage_service()

The last two steps can be safely swapped to keep the storage service API
next to its service.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-07-24 18:51:32 +03:00
Pavel Emelyanov
6ae09cc6bf api: Register token-metadata API next to token-metadata itself
Right now API registration happens quite late, because it waits for the
storage service to register its "function" first. This can be done
beforehand, and the t.m. API can be moved to where it belongs.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-07-24 18:51:32 +03:00
Pavel Emelyanov
10566256fd api: Do not return zero local host-id
The local host id is read from the local token metadata and returned to the
caller as a string. The t.m. itself starts with a default-constructed host
id value which is updated later. However, even such an "unset" host id
value can be rendered as a string without errors. This makes the correct
operation of the API endpoint depend on the initialization sequence, which
may (spoiler: it will) change in the future.
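
An illustrative guard implied by the message (names are assumptions, not the actual API code):

```cpp
#include <stdexcept>
#include <string>

// Refuse to serve a default-constructed ("unset") host id instead of
// rendering it as a string; here an empty string stands in for the
// zero host id value.
std::string render_local_host_id(const std::string& host_id) {
    if (host_id.empty()) {
        throw std::runtime_error("local host id is not initialized yet");
    }
    return host_id;
}
```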

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-07-24 18:51:32 +03:00
Pavel Emelyanov
29738f0cb6 api: Move snitch API registration next to snitch itself
Once sharded<snitch> is started, it can register its handlers

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2024-07-24 18:51:07 +03:00
Botond Dénes
6337372b9d test/boost/reader_concurrency_semaphore_test: un-flake test admission
The admission test has a section which tests admission when the
semaphore has inactive reads. This section (and therefore the entire
test) became flaky lately, after a seemingly unrelated seastar upgrade
which improved timers.
The cause of the flakiness is the permit which is made inactive later:
this permit is created with a 0 timeout (it times out immediately). For
some time now, when the timeout timer of a permit fires, if the permit
is inactive, it is evicted. This is what makes the test fail: the
inactive read times out and ends up evicting this permit, which is not
expected by the test. The reason this was not a problem before is that
the test usually finishes very quickly, before the timer could even be
polled by the reactor. The recent seastar changes changed this, and now
the timer sometimes gets polled and fires, failing the test.

Fixes: #19801

Closes scylladb/scylladb#19859
2024-07-24 13:04:50 +03:00
Takuya ASADA
02b20089cb scylla_raid_setup: install update-initramfs when it's not available
scylla_raid_setup may fail on the Ubuntu minimal image, since it calls
update-initramfs without first installing it.

Closes scylladb/scylladb#19651
2024-07-24 11:55:16 +03:00
Pavel Emelyanov
b02d20d12d Merge 'Minor improvements around compaction groups' from Raphael "Raph" Carvalho
Minor changes, no backporting needed.

Closes scylladb/scylladb#19723

* github.com:scylladb/scylladb:
  replica: rename for_each_const_compaction_group()
  replica: Fix comment about compaction group
  replica: remove unused compaction_group_vector
2024-07-24 11:22:24 +03:00
Nadav Har'El
edc5bca6b1 alternator: do not allow authentication with a non-"login" role
Alternator allows authentication into the existing CQL roles, but
roles which have the flag "login=false" should be refused in
authentication, and this patch adds the missing check.

The patch also adds a regression test for this feature in the
test/alternator test framework, in a new test file
test/alternator/cql_rbac.py. This test file will later include more
tests of how the CQL RBAC commands (CREATE ROLE, GRANT, REVOKE)
affect authentication and authorization in Alternator.
In particular, these tests need to use not just the DynamoDB API but
also CQL, so this new test file includes the "cql" fixture that allows
us to run CQL commands, to create roles, to retrieve their secret keys,
and so on.

Fixes scylladb/scylladb#19735

Closes scylladb/scylladb#19740
2024-07-24 08:20:23 +02:00
Botond Dénes
84db147c58 Merge 'tasks: introduce virtual tasks' from Aleksandra Martyniuk
Introduce virtual tasks - task manager tasks which cover
cluster-wide operations.

Virtual tasks aren't kept in memory; instead, their statuses
are retrieved from the associated service when users request
them with the task manager API. From API users' perspective,
virtual tasks behave similarly to regular tasks, but they can
be queried from any node in the cluster.

Virtual tasks cannot have a parent task. They can have
children on each node in the cluster, but do not keep references
to them. So, if a direct child of a virtual task is unregistered
from the task manager, it will no longer be shown in the parent's
children vector.

The virtual_task class corresponds to all virtual tasks in one
group. If users want to list all tasks in a module, a virtual_task
returns all recent supported operations; if they request a virtual
task's status, info about the one specified operation is
presented. Time to live, the number of tracked operations, etc.
depend on the implementation of the individual virtual_task.
All virtual_tasks are kept only on shard 0.
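
An assumed shape of the abstraction, inferred from the description above (the actual task_manager::virtual_task API may differ):

```cpp
#include <vector>

// Illustrative placeholders for task-manager types.
struct task_id { unsigned long value; };
struct task_status { /* state, progress, children, ... */ };

class virtual_task {
public:
    virtual ~virtual_task() = default;
    // Nothing is cached: status is fetched from the owning service
    // at request time.
    virtual task_status get_status(task_id id) = 0;
    // All recent operations of this group, for module-wide listing.
    virtual std::vector<task_id> list() = 0;
};
```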

Refs: https://github.com/scylladb/scylladb/issues/15852

New feature, no backport needed.

Closes scylladb/scylladb#16374

* github.com:scylladb/scylladb:
  docs: describe virtual tasks
  db: node_ops: filter topology request entries
  test: add a topology suite for testing tasks
  node_ops: service: create streaming tasks
  node_ops: register node_ops_virtual_task in task manager
  service: node_ops: keep node ops module in storage service
  node_ops: implement node_ops_virtual_task methods
  db: service: modify methods to get topology_requests data
  db: service: add request type column to topology_requests
  node_ops: add task manager module and node_ops_virtual_task
  tasks: api: add virtual task support to get_task_status_recursively
  tasks: api: add virtual task support
  tasks: api: add virtual tasks support to get_tasks
  tasks: add task_handler to hide task and virtual_task differences from user
  tasks: modify invoke_on_task
  tasks: implement task_manager::virtual_task::impl::get_children
  tasks: keep virtual tasks in task manager
  tasks: introduce task_manager::virtual_task
2024-07-24 08:34:28 +03:00
Botond Dénes
0bb6413ea5 Merge 'github: disable scheduled workflow on forks' from Kefu Chai
as these workflows are scheduled periodically, and if they fail, notifications are sent to the repo's owner. to minimize surprises for contributors using github, let's disable these workflows on fork repos.

Closes scylladb/scylladb#19736

* github.com:scylladb/scylladb:
  github: do not run clang-tidy as a cron job
  github: disable scheduled workflow on forks
2024-07-24 07:50:39 +03:00
Avi Kivity
3c930a61c9 Merge 'test: scylla_cluster: support more test scenarios' from Patryk Jędrzejczak
We modify `ScyllaCluster.server_start` so that it changes seeds of the
starting node to all currently running nodes. This allows writing tests like
```python
s1 = await manager.server_add(start=False)
await manager.server_add()
await manager.server_start(s1.server_id)
```
However, it disallows writing tests that start multiple clusters. To fix this,
we add the `seeds` parameter to `server_start`.

We also improve the logic in `ScyllaCluster.add_server` to allow writing
tests like
```python
await manager.server_add(expected_error="...")
await manager.server_add()
```

This PR only adds improvements to the `test.py` framework, no need
to backport it.

Closes scylladb/scylladb#19847

* github.com:scylladb/scylladb:
  test: scylla_cluster: improve expected_error in add_server
  test: scylla_cluster: support more test scenarios
  test: scylla_cluster: correctly change seeds in server_start
2024-07-23 22:05:31 +03:00