Commit Graph

992 Commits

Avi Kivity
8576502c48 Merge 'raft topology: ban left nodes from the cluster' from Kamil Braun
Use the new Seastar functionality for storing references to connections to implement banning hosts that have left the cluster (either decommissioned or using removenode) in raft-topology mode. Any attempts at communication from those nodes will be rejected.

This works not only for nodes that restart, but also for nodes that were running behind a network partition when we removed them. Even after the partition resolves, the existing nodes will effectively keep that node firewalled off.

Some changes to the decommission algorithm had to be introduced for it to work with node banning. As a side effect a pre-existing problem with decommission was fixed. Read the "introduce `left_token_ring` state" and "prepare decommission path for node banning" commits for details.

Closes #13850

* github.com:scylladb/scylladb:
  test: pylib: increase checking period for `get_alive_endpoints`
  test: add node banning test
  test: pylib: manager_client: `get_cql()` helper
  test: pylib: ScyllaCluster: server pause/unpause API
  raft topology: ban left nodes
  raft topology: skip `left_token_ring` state during `removenode`
  raft topology: prepare decommission path for node banning
  raft topology: introduce `left_token_ring` state
  raft topology: `raft_topology_cmd` implicit constructor
  messaging_service: implement host banning
  messaging_service: exchange host IDs and map them to connections
  messaging_service: store the node's host ID
  messaging_service: don't use parameter defaults in constructor
  main: move messaging_service init after system_keyspace init
2023-06-21 20:16:45 +03:00
Kamil Braun
87f65d01b8 messaging_service: store the node's host ID 2023-06-20 13:03:46 +02:00
Kamil Braun
7f3ad6bd25 main: move messaging_service init after system_keyspace init 2023-06-20 13:03:46 +02:00
Kamil Braun
8b152361f4 Merge 'raft topology: fixes after #13884' from Gusev Petr
This PR fixes some problems found after #13884 was merged:
  * missed `node_to_work_on` assignment in `handle_topology_transition`;
  * change error reporting in `update_fence_version` from `on_internal_error` to regular exceptions, since those exceptions can happen during normal operation;
  * `update_fence_version` has been moved after `group0_service.setup_group0_if_exist` in `main.cc`; otherwise we would use an uninitialized `token_metadata::version` and get an error.

Fixes: #14303

Closes #14292

* github.com:scylladb/scylladb:
  main.cc: move update_fence_version after group0_service.setup_group0_if_exist
  shared_token_metadata: update_fence_version: on_internal_error -> throw
  storage_service: handle_topology_transition: fix missed node assignment
2023-06-20 13:02:17 +02:00
Petr Gusev
41b950dd21 main.cc: move update_fence_version after group0_service.setup_group0_if_exist
Otherwise, the validation
new_fence_version <= token_metadata::version
inside update_fence_version will use an uninitialized
token_metadata::version == 0
and we will get an error.

The test_topology_ops was improved to
catch this problem.

Fixes: #14303
2023-06-20 13:40:01 +04:00
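The ordering bug above can be illustrated with a simplified model (hypothetical names; the real check lives in `shared_token_metadata`): the fence validation is sound only after `token_metadata::version` has been loaded, so calling it before `setup_group0_if_exist` makes every real fence version fail.

```cpp
#include <cassert>
#include <stdexcept>

// Simplified model of the validation inside update_fence_version.
struct token_metadata_model {
    long version = 0; // 0 == not yet initialized
};

void update_fence_version(token_metadata_model& tm, long new_fence_version) {
    // The bug in #14303: calling this before setup_group0_if_exist left
    // tm.version at its uninitialized value 0, so any real (positive)
    // fence version tripped this check.
    if (new_fence_version > tm.version) {
        throw std::runtime_error("fence version ahead of token metadata version");
    }
}
```

With `version` properly loaded first, only genuinely stale-ahead fence versions are rejected.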
Kamil Braun
028183c793 main, cql_test_env: simplify system_keyspace initialization
Initialization of `system_keyspace` is now all done at once instead of
being spread out through the entire procedure. This is doable because
`query_processor` is now available early. A couple of FIXMEs have been
resolved.
2023-06-18 13:39:27 +02:00
Kamil Braun
33c19baabc db: system_keyspace: take simpler service references in make
Take references to services which are initialized earlier. The
references to `gossiper`, `storage_service` and `raft_group0_registry`
are no longer needed.

This will allow us to move the `make` step right after starting
`system_keyspace`.
2023-06-18 13:39:27 +02:00
Kamil Braun
b34605d161 db: system_keyspace: call initialize_virtual_tables from main
`initialize_virtual_tables` was called from `system_keyspace::make`,
which caused this `make` function to take a bunch of references to
late-initialized services (`gossiper`, `storage_service`).

Call it from `main`/`cql_test_env` instead.

Note: `system_keyspace::make` is called from
`distributed_loader::init_system_keyspace`. The latter function contains
additional steps: populate the system keyspaces (with data from
sstables) and mark their tables ready for writes.

None of these steps apply to virtual tables.

There exists at least one writable virtual table, but writes into
virtual tables are special and the implementation of writes is
virtual-table specific. The existing writable virtual table
(`db_config_table`) only updates in-memory state when written to. If a
virtual table would like to create sstables, or populate itself with
sstable data on startup, it will have to handle this in its own
initialization function.

Separating `initialize_virtual_tables` like this will allow us to
simplify `system_keyspace` initialization, making it independent of
services used for distributed communication.
2023-06-18 13:39:27 +02:00
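The point about writable virtual tables can be sketched as follows (hypothetical types, not Scylla's actual virtual-table API): a write to a table like `db_config_table` only mutates in-memory state, so the sstable populate/mark-ready steps of `init_system_keyspace` never apply to it.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of a writable virtual table: writes update a
// backing in-memory structure; no sstables are ever created or read.
class config_virtual_table {
    std::map<std::string, std::string>& _live_config; // in-memory state
public:
    explicit config_virtual_table(std::map<std::string, std::string>& cfg)
        : _live_config(cfg) {}

    void apply_write(const std::string& key, const std::string& value) {
        _live_config[key] = value; // the "write" is just a memory update
    }

    std::string read(const std::string& key) const {
        return _live_config.at(key);
    }
};
```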
Pavel Emelyanov
900c609269 Merge 'Initialize query_processor early, without messaging_service or gossiper' from Kamil Braun
In https://github.com/scylladb/scylladb/pull/14231 we split `storage_proxy` initialization into two phases: for local and remote parts. Here we do the same with `query_processor`. This allows performing queries for local tables early in the Scylla startup procedure, before we initialize services used for cluster communication such as `messaging_service` or `gossiper`.

Fixes: #14202

As a follow-up we will simplify `system_keyspace` initialization, making it available earlier as well.

Closes #14256

* github.com:scylladb/scylladb:
  main, cql_test_env: start `query_processor` early
  cql3: query_processor: split `remote` initialization step
  cql3: query_processor: move `migration_manager&`, `forwarder&`, `group0_client&` to a `remote` object
  cql3: query_processor: make `forwarder()` private
  cql3: query_processor: make `get_group0_client()` private
  cql3: strongly_consistent_modification_statement: fix indentation
  cql3: query_processor: make `get_migration_manager` private
  tracing: remove `qp.get_migration_manager()` calls
  table_helper: remove `qp.get_migration_manager()` calls
  thrift: handler: move implementation of `execute_schema_command` to `query_processor`
  data_dictionary: add `get_version`
  cql3: statements: schema_altering_statement: move `execute0` to `query_processor`
  cql3: statements: pass `migration_manager&` explicitly to `prepare_schema_mutations`
  main: add missing `supervisor::notify` message
2023-06-16 17:41:08 +03:00
Kamil Braun
9f9f4c224b main, cql_test_env: start query_processor early
Start it right after `storage_proxy`.

We also need to start `cql_config` earlier
because `query_processor` uses it.
2023-06-16 14:29:59 +02:00
Kamil Braun
c212370cf1 cql3: query_processor: split remote initialization step
Pass `migration_manager&`, `forward_service&` and `raft_group0_client&`
in the remote init step which happens after the constructor.

Add a corresponding uninit remote step.
Make sure that any use of the `remote` services is finished before we
destroy the `remote` object by using a gate.

Thanks to this in a later commit we'll be able to move the construction
of `query_processor` earlier in the Scylla initialization procedure.
2023-06-16 14:29:59 +02:00
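The gate pattern mentioned above can be modeled in plain C++ (hypothetical names; Seastar's `gate` additionally suspends the closer until outstanding uses drain, which this sketch only asserts): every use of the `remote` services enters the gate, and the uninit step may destroy `remote` only once the gate is closed and empty.

```cpp
#include <cassert>
#include <stdexcept>

// Plain-C++ model of the gate guarding the `remote` object.
class gate {
    int _count = 0;
    bool _closed = false;
public:
    void enter() {
        if (_closed) {
            throw std::runtime_error("gate closed"); // no new uses of remote
        }
        ++_count;
    }
    void leave() { --_count; }
    void close() { _closed = true; }
    // Destroying `remote` is safe only when closed and fully drained.
    bool safe_to_destroy() const { return _closed && _count == 0; }
};
```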
Petr Gusev
f6b019c229 raft topology: add fence_version
It's stored outside of topology table,
since it's updated not through RAFT, but
with a new 'fence' raft command.
The current value is cached in shared_token_metadata.
An initial fence version is loaded in main
during storage_service initialisation.
2023-06-15 15:48:00 +04:00
Kamil Braun
59d4bb3787 tracing: remove qp.get_migration_manager() calls
Pass `migration_manager&` from top-level instead.
2023-06-15 09:48:54 +02:00
Kamil Braun
817aff6615 main: add missing supervisor::notify message 2023-06-15 09:48:54 +02:00
Botond Dénes
a5ce2d5fb4 Merge 'Initialize storage_proxy early, without messaging_service and gossiper' from Kamil Braun
Move the initialization of `storage_proxy` early in the startup procedure, before starting
`system_keyspace`, `messaging_service`, `gossiper`, `storage_service` and more.

As a follow-up, we'll be able to move initialization of `query_processor` right
after `storage_proxy` (but this requires a bit of refactoring in
`query_processor` too).

Local queries through `storage_proxy` can be done after the early initialization step.
In a follow-up, when we do a similar thing for `query_processor`, we'll be able
to perform local CQL queries early as well. (Before starting `gossiper` etc.)

Closes #14231

* github.com:scylladb/scylladb:
  main, cql_test_env: initialize `storage_proxy` early
  main, cql_test_env: initialize `database` early
  storage_proxy: rename `init_messaging_service` to `start_remote`
  storage_proxy: don't pass `gossiper&` and `messaging_service&` during initialization
  storage_proxy: prepare for missing `remote`
  storage_proxy: don't access `remote` during local queries in `query_partition_key_range_concurrent`
  db: consistency_level: remove overload of `filter_for_query`
  storage_proxy: don't access `remote` when calculating target replicas for local queries
  storage_proxy: introduce const version of `remote()`
  replica: table: introduce `get_my_hit_rate`
  storage_proxy: `endpoint_filter`: remove gossiper dependency
2023-06-14 15:37:33 +03:00
Tomasz Grabiec
87bbd2614b raft: Populate address mapping from system.peers early
Currently, the mapping is initialized from the gossiper state when
group0 server is started and updated from a gossiper change
listener. Gossiper state is restored from system.peers in
storage_service::join_cluster(), which is later than
setup_group0_if_exists() is called.

The restarted server will hang in
group0_service.setup_group0_if_exist(), which waits for snapshot
loading, which waits for storage_service::topology_state_load(), which
waits for IP mapping for servers mentioned in the topology, and
produces logs like this:

  WARN  2023-06-12 15:45:21,369 [shard 0] storage_service - (rate limiting dropped 196 similar messages) raft topology: cannot map c94ae68f-869d-4727-8b2f-d40814e395f0 to ip, retrying.

This is a regression after f26179c, where group0 server is initialized
before the gossiper is started.

The fix is to load the mapping from system.peers before group0 is
started. Gossiper state is not available at this point, so we read the
mapping directly from system keyspace. This change will also be needed
to implement messaging by host id, even if raft is disabled, where we
will need to restore the mapping early.

Fixes #14217

Closes #14220
2023-06-14 11:52:47 +02:00
Kamil Braun
b23cc9b441 main, cql_test_env: initialize storage_proxy early
This is another part of splitting Scylla initialization into two phases:
local and remote parts. Performing queries is done with `storage_proxy`,
so for local queries we want to initialize it before we initialize
services specific to cluster communication such as `gossiper`,
`messaging_service`, `storage_service`.

`system_keyspace` should also be initialized after `storage_proxy` (and
is after this patch) so in the future we'll be able to merge the
multiple initialization steps of `system_keyspace` into one (it only
needs the local part to work).
2023-06-14 11:41:36 +02:00
Kamil Braun
a8f6afc2fd main, cql_test_env: initialize database early
We want to separate two phases of Scylla service initialization: first
we initialize the local part, which allows performing local queries,
then a remote part, which requires contacting other nodes in a cluster
and allows performing distributed queries.

The `database` object is crucial for both remote and local queries, but it
was created pretty late, after services such as `gossiper` or
`storage_service` which are used for distributed operations.

Fortunately we can easily move `database` initialization and all of its
prerequisites early in the init procedure.
2023-06-14 11:41:36 +02:00
Kamil Braun
a740fbf58a storage_proxy: rename init_messaging_service to start_remote
The function now has more responsibilities than before, rename it
and add a comment to better illustrate this.
2023-06-14 11:41:36 +02:00
Kamil Braun
f26e98c3be storage_proxy: don't pass gossiper& and messaging_service& during initialization
These services are now passed during `init_messaging_service`, and
that's when the `remote` object is constructed.

The `remote` object is then destroyed in `uninit_messaging_service`.

Also, `migration_manager*` became `migration_manager&` in
`init_messaging_service`.
2023-06-14 11:41:36 +02:00
Michał Sala
e0855b1de2 forward_service: introduce shutdown checks
This commit introduces a new boolean flag, `shutdown`, to the
forward_service, along with a corresponding shutdown method. It also
adds checks throughout the forward_service to verify the value of the
shutdown flag before retrying or invoking functions that might use the
messaging service under the hood.

The flag is set before messaging service shutdown, by invoking
forward_service::shutdown in main. By checking the flag before each call
that potentially involves the messaging service, we can ensure that the
messaging service is still operational. If the flag is false, indicating
that the messaging service is still active, we can proceed with the
call. In the event that the messaging service is shutdown during the
call, appropriate exceptions should be thrown somewhere down in called
functions, avoiding potential hangs.

This fix should resolve the issue where forward_service retries could
block the shutdown.

Fixes #12604

Closes #13922
2023-06-13 13:44:33 +03:00
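The shutdown-flag pattern described above can be sketched in plain C++ (hypothetical names; the real flag lives in `forward_service` and is set before `messaging_service` shutdown): each retry checks the flag first, so retries cannot outlive shutdown and hang.

```cpp
#include <atomic>

// Flag set by the shutdown method before the messaging service stops.
std::atomic<bool> shutdown_flag{false};

void shutdown() { shutdown_flag.store(true); }

// Model of a retry loop that would invoke the messaging service on each
// attempt. Returns the number of attempts actually made; once the
// shutdown flag is set, no further attempts touch the messaging service.
int retry_forward(int max_attempts) {
    int attempts = 0;
    for (int i = 0; i < max_attempts; ++i) {
        if (shutdown_flag.load()) {
            break; // messaging service is going down: stop retrying
        }
        ++attempts;
        // ... real code would send the forward request here; if the
        // messaging service shuts down mid-call, exceptions from below
        // unwind the call instead of hanging ...
    }
    return attempts;
}
```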
Kamil Braun
2dbf6f32cd Merge 'Fix crash during restart of a single node with topology over raft' from Gleb
This is a regression introduced in f26179cd27.

Fixes: #14136

* 'gleb/set_group0' of github.com:scylladb/scylla-dev:
  test: restart first node to see if it can boot after restart
  service: move setting of group0 point in storage_service earlier
2023-06-07 10:21:17 +02:00
Pavel Emelyanov
66e43912d6 code: Switch to seastar API level 7
At that level no io_priority_class-es exist. Instead, all IO happens
in the context of the current sched group. The file API no longer accepts a prio
class argument (and makes the io_intent arg mandatory for impls).

So the change consists of
- removing all usage of io_priority_class
- patching file_impl's inheritants to updated API
- priority manager goes away altogether
- IO bandwidth update is performed on respective sched group
- tune-up scylla-gdb.py io_queues command

The first change is huge and was made semi-automatically by:
- grep io_priority_class | default_priority_class
- remove all calls, found methods' args and class' fields

Patching file_impl-s is smaller, but also mechanical:
- replace io_priority_class& argument with io_intent* one
- pass intent to lower file (if applicable)

Dropping the priority manager is:
- git-rm .cc and .hh
- sed out all the #include-s
- fix configure.py and cmakefile

The scylla-gdb.py update is a bit hairy -- it needs to use the task queues
list for IO class names and shares, but to detect whether it should, it
checks whether the "commitlog" group is present.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes #13963
2023-06-06 13:29:16 +03:00
Gleb Natapov
8598cebb11 service: move setting of group0 point in storage_service earlier
group0 pointer in storage_service should be set when group0 starts.
After f26179cd27 we start group0 earlier,
so we need to move setting of the group0 pointer as well.
2023-06-06 12:12:48 +03:00
Kamil Braun
8be69fc3a0 Merge 'Initialize group0 server on boot before allowing incoming requests' from Gleb
The series includes mostly cleanups and one bug fix.

The fix is for the race where messages that need to access the group0 server
arrive before the server is initialized.

* 'gleb/group0-sp-mm-race-v2' of github.com:scylladb/scylla-dev:
  service: raft: fix typo
  service: raft: split off setup_group0_if_exist from setup_group0
  storage_service: do not allow override_decommission flag if consistent cluster management is enabled
  storage_service: fix indentation after the previous patch
  storage_service: co-routinize storage_service::join_cluster() function
  storage_service: do not reload topology from peers table if topology over raft is enabled
  storage_service: optimize debug logging code in case debug log is not enabled
2023-06-01 17:37:58 +02:00
Gleb Natapov
f26179cd27 service: raft: split off setup_group0_if_exist from setup_group0
Currently setup_group0 is responsible for starting the existing group0 on
restart, or creating a new one and joining the cluster with it during
bootstrap. We want to create the server for the existing group0 earlier,
before we start to accept messages, because some messages may assume that
the server already exists. For that we split creation of the existing
group0 server into a separate function and call it on restart before the
messaging service starts accepting messages.

Fixes: #13887
2023-05-31 11:00:41 +03:00
Pavel Emelyanov
b0525e20d5 main: Ignore sleep_aborted exception in main
When scylla starts it may go to sleep along the way before the "serving"
message appears. If SIGINT is sent at that time, the whole thing unrolls
and the main code ends up catching the sleep_aborted exception, printing
the error in the logs and exiting with a non-zero code. However, that's
not an error; the start was just interrupted earlier than expected by the
stop_signal thing.

fixes: #12898

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes #14034
2023-05-29 23:03:25 +03:00
Botond Dénes
57758ec3e1 Merge 'Put streaming sched group onto stream manager' from Pavel Emelyanov
The manager is in charge of updating IO bandwidth on the respective prio class. Nowadays it uses the global priority manager, but the effort to unify sched classes will require it to use a non-global streaming sched group. After the patch the sched class field is unused, but this is preparation for the huge (really huge) "switch to seastar API level 7" patch.

ref: #13963

Closes #13997

* github.com:scylladb/scylladb:
  stream_manager: Add streaming sched group copy
  cql_test_env: Move sched groups initialization up
2023-05-24 09:27:30 +03:00
Pavel Emelyanov
5aea6938ae commitlog: Introduce and use commitlog sched group
Nowadays all commitlog code runs in whatever sched group it's kicked
from. Since IO prio classes are going to be inherited from the current
sched group the commitlog IO loops should be moved into commitlog sched
group, not inherit a "random" one.

There are currently two places that need correct context for IO -- the
.cycle() method and segments replenisher.

`$ perf-simple-query --write -c2` results

--- Before the patch ---
194898.36 tps ( 56.3 allocs/op,  12.7 tasks/op,   54307 insns/op,        0 errors)
199286.23 tps ( 56.2 allocs/op,  12.7 tasks/op,   54375 insns/op,        0 errors)
199815.84 tps ( 56.2 allocs/op,  12.7 tasks/op,   54377 insns/op,        0 errors)
198260.98 tps ( 56.3 allocs/op,  12.7 tasks/op,   54380 insns/op,        0 errors)
198572.86 tps ( 56.2 allocs/op,  12.7 tasks/op,   54371 insns/op,        0 errors)

median 198572.86 tps ( 56.2 allocs/op,  12.7 tasks/op,   54371 insns/op,        0 errors)
median absolute deviation: 713.36
maximum: 199815.84
minimum: 194898.36

--- After the patch ---
194751.80 tps ( 56.3 allocs/op,  12.7 tasks/op,   54331 insns/op,        0 errors)
199084.70 tps ( 56.2 allocs/op,  12.7 tasks/op,   54389 insns/op,        0 errors)
195551.47 tps ( 56.3 allocs/op,  12.7 tasks/op,   54385 insns/op,        0 errors)
197953.47 tps ( 56.3 allocs/op,  12.7 tasks/op,   54386 insns/op,        0 errors)
198710.00 tps ( 56.3 allocs/op,  12.7 tasks/op,   54387 insns/op,        0 errors)

median 197953.47 tps ( 56.3 allocs/op,  12.7 tasks/op,   54386 insns/op,        0 errors)
median absolute deviation: 1131.24
maximum: 199084.70
minimum: 194751.80

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>

Closes #14005
2023-05-23 21:25:57 +03:00
Pavel Emelyanov
678f8fb1b7 stream_manager: Add streaming sched group copy
The manager in question is responsible for maintaining the streaming
class IO bandwidth update. Nowadays it does it via priority manager's
global streaming IO priority class field, but it will need to switch to
streaming sched group.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-05-23 14:31:23 +03:00
Tomasz Grabiec
493e7fc3de main: Load tablet metadata after schema commit log replay
There could be system.tablet mutations in the schema commit log. We
need to see them before loading sstables of user tables because we
need sharding information.
2023-05-21 18:50:11 +03:00
Kamil Braun
13df85ea11 Merge 'Cut feature_service -> system_keyspace dependency' from Pavel Emelyanov
This implicit link is pretty bad, because the feature service is a low-level
one which lots of other services depend on. The system keyspace is the
opposite -- a high-level one that needs e.g. the query processor and
database to operate. This inverse dependency is created by the feature
service's need to commit enabled features' names into the system keyspace
on cluster join. And it uses the qctx thing for that in a best-effort
manner (doing nothing if it's null).

The dependency can be cut. The only place where enabled features are
committed is when the gossiper enables features on join or on receiving
state changes from other nodes. By that time the
sharded<system_keyspace> is up and running and can be used.

Although the gossiper already has a system keyspace dependency, it's better
not to overload it with the need to mess with enabling and persisting
features. Instead, the feature_enabler instance is equipped with the needed
dependencies and takes care of it. Eventually the enabler is also moved
to feature_service.cc, where it naturally belongs.

Fixes: #13837

Closes #13172

* github.com:scylladb/scylladb:
  gossiper: Remove features and sysks from gossiper
  system_keyspace: De-static save_local_supported_features()
  system_keyspace: De-static load_|save_local_enabled_features()
  system_keyspace: Move enable_features_on_startup to feature_service (cont)
  system_keyspace: Move enable_features_on_startup to feature_service
  feature_service: Open-code persist_enabled_feature_info() into enabler
  gms: Move feature enabler to feature_service.cc
  gms: Move gossiper::enable_features() to feature_service::enable_features_on_join()
  gms: Persist features explicitly in features enabler
  feature_service: Make persist_enabled_feature_info() return a future
  system_keyspace: De-static load_peer_features()
  gms: Move gossiper::do_enable_features to persistent_feature_enabler::enable_features()
  gossiper: Enable features and register enabler from outside
  gms: Add feature_service and system_keyspace to feature_enabler
2023-05-18 18:21:06 +02:00
Piotr Dulikowski
760651b4ad error injection: allow enabling injections via config
Currently, error injections can be enabled either through HTTP or CQL.
While these mechanisms are effective for injecting errors after a node
has already started, they can't be reliably used to trigger failures
shortly after node start. In order to support this use case, this commit
adds the possibility to enable some error injections via config.

A configuration option `error_injections_at_startup` is added. This
option uses our existing configuration framework, so it is possible to
supply it either via CLI or in the YAML configuration file.

- When passed in commandline, the option is parsed as a
  semicolon-separated list of error injection names that should be
  enabled. Those error injections are enabled in non-oneshot mode.

  The CLI option is marked as not used in release mode and does not
  appear in the option list.

  Example:

      --error-injections-at-startup failure_point1;failure_point2

- When provided in YAML config, the option is parsed as a list of items.
  Each item is either a string or a map of parameters. This method is
  more flexible, as it allows providing parameters for each injection
  point. At this time, the only benefit is that it allows enabling
  points in oneshot mode, but more parameters can be added in the future
  if needed.

  Explanatory example:

      error_injections_at_startup:
      - failure_point1 # enabled in non-oneshot mode
      - name: failure_point2 # enabled in oneshot mode
        one_shot: true       # due to one_shot optional parameter

The primary goal of this feature is to facilitate testing of raft-based
cluster features. An error injection will be used to enable an
additional feature to simulate node upgrade.

Tests: manual

Closes #13861
2023-05-15 09:14:07 +03:00
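The string-or-map shape of the YAML entries above can be sketched with a small normalization step (hypothetical types; the real code goes through Scylla's configuration framework): a bare string enables a non-oneshot injection, while a map may carry the optional `one_shot` parameter.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <variant>

// Normalized form of one error_injections_at_startup entry.
struct injection {
    std::string name;
    bool one_shot = false;
};

// A raw YAML entry is either a plain string or a map of parameters.
using raw_entry = std::variant<std::string, std::map<std::string, std::string>>;

injection normalize(const raw_entry& e) {
    if (auto* s = std::get_if<std::string>(&e)) {
        return {*s, false}; // plain string: enabled in non-oneshot mode
    }
    const auto& m = std::get<std::map<std::string, std::string>>(e);
    injection inj{m.at("name"), false};
    if (auto it = m.find("one_shot"); it != m.end()) {
        inj.one_shot = (it->second == "true"); // optional parameter
    }
    return inj;
}
```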
Tomasz Grabiec
a91e83fad6 Merge "issue raft read barrier before pulling schema" from Gleb
Schema pull may fail because the pull does not contain everything that
is needed to instantiate a schema pointer. For instance, it does not
contain a keyspace. This series changes the code to issue a raft read
barrier before the pull, which guarantees that the keyspace is created
before the actual schema pull is performed.
2023-05-14 14:14:24 +03:00
Pavel Emelyanov
2153751d45 sstables: Introduce sharded<storage_manager>
The manager in question keeps track of whatever sstables_manager needs
to work with the storage (spoiler: only the S3 one). It's a main-local
sharded peering service, so that the container() call can be used by the
next patches.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-05-11 19:36:01 +03:00
Gleb Natapov
7caf1d26fb migration manager: Make schema pull abortable.
Now that a schema pull may issue a raft read barrier, it may get stuck
if a majority is not available. Make the operation abortable and abort
it during queries when the timeout is reached.
2023-05-11 16:31:23 +03:00
Avi Kivity
1d351dde06 Merge 'Make S3 client work with real S3' from Pavel Emelyanov
The current S3 client was tested over minio, and it takes a few more touches to work with Amazon S3.

The main challenge here is to support signed requests. The AWS S3 server explicitly bans unsigned multipart-upload requests, which in turn are an essential part of the sstables S3 backend, so we do need signing. Signing a request has many options and requirements; one of them is that the request _body_ may or may not be included in the signature calculations. This is called "(un)signed payload". Requests sent over plain HTTP require payload signing (i.e. the request body must be included in the signature calculations), which can be a bit troublesome, so instead the PR uses unsigned payload (i.e. it doesn't include the request body in the signature calculation, only the necessary headers and query parameters), but thus also needs HTTPS.

So what this set does is make the existing S3 client code sign requests. In order to sign a request the code needs to get the AWS key and secret (and region) from somewhere, and this somewhere is the conf/object_storage.yaml config file. The signature-generating code was previously merged (moved from alternator code) and updated to suit the S3 client's needs.

In order to properly support HTTPS, the PR adds a special connection factory to be used with the seastar http client. The factory does DNS resolving of AWS endpoint names and configures gnutls system trust.

fixes: #13425

Closes #13493

* github.com:scylladb/scylladb:
  doc: Add a document describing how to configure S3 backend
  s3/test: Add ability to run boost test over real s3
  s3/client: Sign requests if configured
  s3/client: Add connection factory with DNS resolve and configurable HTTPS
  s3/client: Keep server port on config
  s3/client: Construct it with config
  s3/client: Construct it with sstring endpoint
  sstables: Make s3_storage with endpoint config
  sstables_manager: Keep object storage configs onboard
  code: Introduce conf/object_storage.yaml configuration file
2023-05-04 18:08:54 +03:00
Tomasz Grabiec
e385ce8a2b Merge "fix stack use after free during shutdown" from Gleb
storage_service uses raft_group0, but during shutdown the latter is
destroyed before the former is stopped. This series moves raft_group0
destruction to after storage_service is stopped. For the move to work,
some existing dependencies of raft_group0 are dropped, since they are
not really needed during the object's creation.

Fixes #13522
2023-05-04 15:14:18 +02:00
Gleb Natapov
dc6c3b60b4 init: move raft_group0 creation before storage_service
storage_service uses raft_group0, so the latter needs to exist until
the former is stopped.
2023-05-04 13:03:18 +03:00
Gleb Natapov
e9fb885e82 service/raft: raft_group0: drop dependency on cdc::generation_service
raft_group0 does not really depend on cdc::generation_service; it needs
it only transiently, so pass it to the appropriate methods of raft_group0
instead of during its creation.
2023-05-04 13:03:07 +03:00
Pavel Emelyanov
98b9c205bb s3/client: Sign requests if configured
If the endpoint config specifies the AWS key, secret and region, all
S3 requests get signed. The signature should have all the x-amz-... headers
included and should contain at least three of them. This patch includes the
x-amz-date, x-amz-content-sha256 and host headers in the signing list.
The content can be unsigned when sent over HTTPS, and this is what this
patch does.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-05-03 20:23:37 +03:00
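The signing-list idea above follows AWS Signature Version 4, where the signed headers are advertised as a lowercase, sorted, ';'-joined list. A minimal sketch of just that step (not the real client code, which also builds the canonical request and HMAC chain):

```cpp
#include <algorithm>
#include <cctype>
#include <string>
#include <vector>

// Build the SigV4 "SignedHeaders" value for a set of header names:
// lowercase each name, sort them, and join with ';'.
std::string signed_headers(std::vector<std::string> names) {
    for (auto& n : names) {
        std::transform(n.begin(), n.end(), n.begin(),
                       [](unsigned char c) { return std::tolower(c); });
    }
    std::sort(names.begin(), names.end());
    std::string out;
    for (const auto& n : names) {
        if (!out.empty()) {
            out += ';';
        }
        out += n;
    }
    return out;
}
```

For the three headers named in the patch this yields `host;x-amz-content-sha256;x-amz-date`.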
Pavel Emelyanov
3dd82485f6 s3/client: Add connection factory with DNS resolve and configurable HTTPS
Existing seastar's factories work on socket_address, but in S3 we have
endpoint name which's a DNS name in case of real S3. So this patch
creates the http client for S3 with the custom connection factory that
does two things.

First, it resolves the provided endpoint name into address.
Second, it loads trust-file from the provided file path (or sets system
trust if configured that way).

Since s3 client creation is no-waiting code currently, the above
initialization is spawned in afiber and before creating the connection
this fiber is waited upon.

This code probably deserves living in seastar, but for now it can land
next to utils/s3/client.cc.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-05-03 20:23:19 +03:00
Pavel Emelyanov
3bec5ea2ce s3/client: Keep server port on config
Currently the code temporarily assumes that the endpoint port is 9000.
This is what the tests' local minio is started with. This patch keeps
the port number on the endpoint config and makes the test get the port
number from the minio starting code via the environment.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-05-03 20:19:43 +03:00
Pavel Emelyanov
2f6aa5b52e code: Introduce conf/object_storage.yaml configuration file
In order to access a real S3 bucket, the client should use signed requests
over https. Partially this is due to security considerations; partially
it is unavoidable, because multipart uploading is banned for unsigned
requests on S3. Also, signed requests over plain http require signing
the payload as well, which is a bit troublesome, so it's better to stick
to secure https and keep the payload unsigned.

To prepare signed requests the code needs to know three things:
- aws key
- aws secret
- aws region name

The latter could be derived from the endpoint URL, but it's simpler to
configure it explicitly, especially since there is an option to use S3
URLs without the region name in them, which we may want to use some time.

To keep the described configuration the proposed place is the
object_storage.yaml file with the format

endpoints:
  - name: a.b.c
    port: 443
    aws_key: 12345
    aws_secret: abcdefghijklmnop
    ...

When loaded, the map gets into db::config and later will be propagated
down to sstables code (see next patch).

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-05-03 20:19:15 +03:00
Kamil Braun
30cc07b40d Merge 'Introduce tablets' from Tomasz Grabiec
This PR introduces an experimental feature called "tablets". Tablets are
a way to distribute data in the cluster, which is an alternative to the
current vnode-based replication. The vnode-based replication strategy tries
to evenly distribute the global token space shared by all tables among
nodes and shards. With tablets, the aim is to start from the other
side: divide the resources of each replica shard into tablets, with the goal
of having a fixed target tablet size, and then assign those tablets to
serve fragments of tables (also called tablets). This will allow us to
balance the load in a more flexible manner, by moving individual tablets
around. Also, unlike with vnode ranges, tablet replicas live on a
particular shard on a given node, which will allow us to bind raft
groups to tablets. Those goals are not yet achieved by this PR, but it
lays the groundwork for them.

Things achieved in this PR:

  - You can start a cluster and create a keyspace whose tables will use
    tablet-based replication. This is done by setting `initial_tablets`
    option:

    ```
        CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy',
                        'replication_factor': 3,
                        'initial_tablets': 8};
    ```

    All tables created in such a keyspace will be tablet-based.

    Tablet-based replication is a trait, not a separate replication
    strategy. Tablets don't change the spirit of the replication strategy;
    they just alter the way in which data ownership is managed. In theory, we
    could use it for other strategies as well, like
    EverywhereReplicationStrategy. Currently, only NetworkTopologyStrategy
    is augmented to support tablets.

  - You can create and drop tablet-based tables (no DDL language changes)

  - DML / DQL work with tablet-based tables

    Replicas for tablet-based tables are chosen from tablet metadata
    instead of token metadata

Things which are not yet implemented:

  - handling of views, indexes, CDC created on tablet-based tables
  - sharding is done using the old method; it ignores the shard allocated in tablet metadata
  - node operations (topology changes, repair, rebuild) are not handling tablet-based tables
  - not integrated with compaction groups
  - tablet allocator piggy-backs on tokens to choose replicas.
    Eventually we want to allocate based on current load, not statically

Closes #13387

* github.com:scylladb/scylladb:
  test: topology: Introduce test_tablets.py
  raft: Introduce 'raft_server_force_snapshot' error injection
  locator: network_topology_strategy: Support tablet replication
  service: Introduce tablet_allocator
  locator: Introduce tablet_aware_replication_strategy
  locator: Extract maybe_remove_node_being_replaced()
  dht: token_metadata: Introduce get_my_id()
  migration_manager: Send tablet metadata as part of schema pull
  storage_service: Load tablet metadata when reloading topology state
  storage_service: Load tablet metadata on boot and from group0 changes
  db, migration_manager: Notify about tablet metadata changes via migration_listener::on_update_tablet_metadata()
  migration_notifier: Introduce before_drop_keyspace()
  migration_manager: Make prepare_keyspace_drop_announcement() return a future<>
  test: perf: Introduce perf-tablets
  test: Introduce tablets_test
  test: lib: Do not override table id in create_table()
  utils, tablets: Introduce external_memory_usage()
  db: tablets: Add printers
  db: tablets: Add persistence layer
  dht: Use last_token_of_compaction_group() in split_token_range_msb()
  locator: Introduce tablet_metadata
  dht: Introduce first_token()
  dht: Introduce next_token()
  storage_proxy: Improve trace-level logging
  locator: token_metadata: Fix confusing comment on ring_range()
  dht, storage_proxy: Abstract token space splitting
  Revert "query_ranges_to_vnodes_generator: fix for exclusive boundaries"
  db: Exclude keyspace with per-table replication in get_non_local_strategy_keyspaces_erms()
  db: Introduce get_non_local_vnode_based_strategy_keyspaces()
  service: storage_proxy: Avoid copying keyspace name in write handler
  locator: Introduce per-table replication strategy
  treewide: Use replication_strategy_ptr as a shorter name for abstract_replication_strategy::ptr_type
  locator: Introduce effective_replication_map
  locator: Rename effective_replication_map to vnode_effective_replication_map
  locator: effective_replication_map: Abstract get_pending_endpoints()
  db: Propagate feature_service to abstract_replication_strategy::validate_options()
  db: config: Introduce experimental "TABLETS" feature
  db: Log replication strategy for debugging purposes
  db: Log full exception on error in do_parse_schema_tables()
  db: keyspace: Remove non-const replication strategy getter
  config: Reformat
2023-04-27 09:40:18 +02:00
Pavel Emelyanov
9bb4ee160f gossiper: Remove features and sysks from gossiper
Now the gossiper doesn't need those two as dependencies, so they can be
removed, making the code shorter and the dependency graph simpler.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-04-25 17:06:06 +03:00
Pavel Emelyanov
858db9f706 system_keyspace: Move enable_features_on_startup to feature_service
This code belongs to the feature service; the system keyspace shouldn't be
aware of any peculiarities of enabling features on startup, only of loading
and saving the feature lists.

For now the move happens only in terms of code declarations, the
implementation is kept in its old place to reduce the patch churn.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2023-04-25 17:00:30 +03:00
Gleb Natapov
9849409c2a service/raft: raft_group0: drop dependency on migration_manager
raft_group0 does not really depend on migration_manager; it needs it only
transiently, so pass it to the appropriate methods of raft_group0 instead
of at its creation.
2023-04-25 12:38:01 +03:00
Gleb Natapov
d5d156d474 service/raft: raft_group0: drop dependency on query_processor
raft_group0 does not really depend on query_processor; it needs it only
transiently, so pass it to the appropriate methods of raft_group0 instead
of at its creation.
2023-04-25 12:35:57 +03:00
Gleb Natapov
029f1737ef service/raft: raft_group0: drop dependency on storage_service
raft_group0 does not really depend on storage_service; it needs it only
transiently, so pass it to the appropriate methods of raft_group0 instead
of at its creation.
2023-04-25 11:07:47 +03:00