Commit Graph

29966 Commits

Tomasz Grabiec
00a9326ae7 Merge "raft: let modify_config finish on a follower that removes itself" from Kamil
When forwarding a reconfiguration request from follower to a leader in
`modify_config`, there is no reason to wait for the follower's commit
index to be updated. The only useful information is that the leader
committed the configuration change - so `modify_config` should return as
soon as we know that.

There is a reason *not* to wait for the follower's commit index to be
updated: if the configuration change removes the follower, the follower
will never learn about it, so a local waiter will never be resolved.

`execute_modify_config` - the part of `modify_config` executed on the
leader - is thus modified to finish when the configuration change is
fully complete (including the dummy entry appended at the end), and
`modify_config` - which does the forwarding - no longer creates a local
waiter, but returns as soon as the RPC call to the leader confirms that
the entry was committed on the leader.

We still return an `entry_id` from `execute_modify_config` but that's
just an artifact of the implementation.
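The fix above can be modeled in a few lines. This is an illustrative Python sketch with hypothetical names, not Scylla's raft API: the leader path confirms only after the change is fully committed, and the forwarding path trusts that confirmation instead of waiting on the removed follower's local commit index.

```python
# Illustrative model (hypothetical names, not Scylla's raft API).
class Leader:
    def __init__(self, members):
        self.members = set(members)

    def execute_modify_config(self, remove):
        # The leader replies only once the configuration change (and the
        # trailing dummy entry) is fully committed on the leader.
        self.members.discard(remove)
        return "committed"

def modify_config(leader, remove):
    # Forwarding path: return as soon as the leader's RPC reply confirms
    # the commit. Waiting on the *follower's* commit index would hang
    # forever when the follower removes itself, since a removed follower
    # never learns about the committed entry.
    return leader.execute_modify_config(remove)

leader = Leader({"A", "B", "C"})
assert modify_config(leader, remove="C") == "committed"
assert "C" not in leader.members
```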

Fixes #9981.

A regression test was also added in randomized_nemesis_test.

* kbr/modify-config-finishes-v1:
  test: raft: randomized_nemesis_test: regression test for #9981
  raft: server: don't create local waiter in `modify_config`
2022-01-31 20:14:50 +01:00
Kamil Braun
97ff98f3a7 service: migration_manager: retry schema change command on transient failures
The call to `raft::server::add_entry` in `announce_with_raft` may fail
e.g. due to a leader change happening when we try to commit the entry.
In cases like this it makes sense to retry the command so we don't
prematurely report an error to the client.

This may result in double application of the command. Fortunately, the schema
change command is idempotent thanks to the group 0 state ID mechanism
(originally used to prevent conflicting concurrent changes from happening).
Indeed, once a command passes the state ID check, it changes the group 0
history last state ID, causing all later applications of that same
command to fail the check. Similarly, once a command fails the state ID
check, it means that the last state ID is different than the one
observed when the command was being constructed, so all further
applications of the command will also fail the check (it is not possible
for the last state ID to change from X to Y then back to X).

Note that this reasoning only works for commands with `prev_state_id`
engaged, such as the ones which we're using in
`migration_manager::announce_with_raft`. It would not work with
"unconditional commands" where `prev_state_id` is `nullopt` - for those
commands no state ID check is performed. It could still be safe to retry
those commands if they are idempotent for a different reason.

(Note: actually, our schema commands are already idempotent even without
the state ID check, because they simply apply a set of mutations, and
applying the same mutations twice is the same as applying them once.)
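The idempotency argument above can be sketched as a toy model (hypothetical names, not Scylla's group 0 types): a conditional command carries the state ID observed at construction time; a successful apply advances the ID, so any retry of the same command fails the check instead of applying twice.

```python
import uuid

class Group0State:
    def __init__(self):
        self.last_state_id = uuid.uuid4()

    def apply(self, cmd):
        # Conditional command: reject unless the observed state ID still
        # matches; unconditional commands (prev_state_id=None) skip this.
        if cmd["prev_state_id"] is not None and \
                cmd["prev_state_id"] != self.last_state_id:
            return False
        self.last_state_id = cmd["new_state_id"]
        return True

state = Group0State()
cmd = {"prev_state_id": state.last_state_id, "new_state_id": uuid.uuid4()}
assert state.apply(cmd) is True    # first application advances the ID
assert state.apply(cmd) is False   # a retried duplicate fails the check
```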
Message-Id: <20220131152926.18087-1-kbraun@scylladb.com>
2022-01-31 19:49:31 +01:00
Takuya ASADA
218dd3851c scylla_swap_setup: add --swap-size-bytes
Currently, --swap-size cannot specify an exact file size because
the option takes its parameter only in GB.
To lift this limitation, let's add --swap-size-bytes to specify the swap
size in bytes.
We need this to implement swapfile preallocation while building IaaS
images.

see scylladb/scylla-machine-image#285

Closes #9971
2022-01-31 18:32:32 +02:00
Benny Halevy
4272dd0b28 storage_proxy: mutate_counter_on_leader_and_replicate: use container to get to shard proxy
Rather than using the global helper, get_local_storage_proxy.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20220131151516.3461049-2-bhalevy@scylladb.com>
2022-01-31 18:14:31 +02:00
Benny Halevy
8acdc6ebdc storage_proxy: paxos: don't use global storage_proxy
Rather than calling get_local_storage_proxy(),
use paxos_response_handler::_proxy.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20220131151516.3461049-1-bhalevy@scylladb.com>
2022-01-31 18:14:31 +02:00
Calle Wilund
445e1d3e41 commitlog: Ensure we never have more than one new_segment call at a time
Refs #9896

Found by @eliransin. The call to new_segment was wrapped in with_timeout.
This means that if the primary caller timed out, we would leave the
new_segment call running, but potentially issue new ones for the next
caller.

This could lead to the reserve segment queue being read simultaneously,
which is not what we want.

Change all callers to always wait on the shared_future, and clear it
only on a result (exception or segment).
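The pattern this commit describes can be illustrated with a small asyncio model (hypothetical names; Scylla uses Seastar's shared_future, not asyncio): every caller waits on one shared in-flight task, and a caller's timeout abandons only its wait, never the underlying call.

```python
import asyncio

class SegmentManager:
    def __init__(self):
        self._pending = None   # shared future for the in-flight call
        self.calls = 0

    async def _new_segment(self):
        self.calls += 1
        await asyncio.sleep(0.01)  # simulated allocation work
        return "segment"

    async def active_segment(self, timeout=None):
        if self._pending is None:
            self._pending = asyncio.ensure_future(self._new_segment())
            self._pending.add_done_callback(
                lambda _: setattr(self, "_pending", None))
        # shield(): a timed-out caller abandons its wait, but the
        # underlying new_segment call keeps running for the others
        return await asyncio.wait_for(asyncio.shield(self._pending), timeout)

async def main():
    mgr = SegmentManager()
    results = await asyncio.gather(*(mgr.active_segment() for _ in range(5)))
    return mgr.calls, results

calls, results = asyncio.run(main())
assert calls == 1                   # only one new_segment in flight
assert results == ["segment"] * 5
```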

Closes #10001
2022-01-31 16:50:22 +02:00
Nadav Har'El
8a745593a2 Merge 'alternator: fill UnprocessedKeys for failed batch reads' from Piotr Sarna
DynamoDB protocol specifies that when getting items in a batch
failed only partially, unprocessed keys can be returned so that
the user can perform a retry.
Alternator used to fail the whole request if any of the reads failed,
but now it instead produces the list of unprocessed keys
and returns it to the user, as long as at least one read was
successful.
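A minimal sketch of these semantics (illustrative, not Alternator's code): partial failures go into UnprocessedKeys, and only a total failure produces an error response.

```python
# Illustrative model of the BatchGetItem semantics described above.
def batch_get(read_fn, keys):
    responses, unprocessed = [], []
    for key in keys:
        try:
            responses.append(read_fn(key))
        except Exception:
            unprocessed.append(key)   # let the user retry these
    if not responses and unprocessed:
        raise RuntimeError("all reads failed")  # total failure: error out
    return {"Responses": responses, "UnprocessedKeys": unprocessed}

def flaky(key):
    if key == "k2":
        raise RuntimeError("injected error")
    return {"key": key}

result = batch_get(flaky, ["k1", "k2", "k3"])
assert result["UnprocessedKeys"] == ["k2"]
assert len(result["Responses"]) == 2
```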

This series comes with a test based on Scylla's error injection mechanism, and is thus only useful in modes with error injection compiled in. In release mode, expect to see the following message:
SKIPPED (Error injection not enabled in Scylla - try compiling in dev/debug/sanitize mode)

Fixes #9984

Closes #9986

* github.com:scylladb/scylla:
  test: add total failure case for GetBatchItem
  test: add error injection case for GetBatchItem
  test: add a context manager for error injection to alternator
  alternator: add error injection to BatchGetItem
  alternator: fill UnprocessedKeys for failed batch reads
2022-01-31 15:28:24 +02:00
Piotr Sarna
c87126198d test: add total failure case for GetBatchItem
The test verifies that if all reads from a batch operation
failed, the result is an error, and not a success response
with UnprocessedKeys parameter set to all keys.
2022-01-31 14:21:55 +01:00
Piotr Sarna
e79c2943fc test: add error injection case for GetBatchItem
The new test case is based on Scylla error injection mechanism
and forces a partial read by failing some requests from the batch.
2022-01-31 14:21:55 +01:00
Piotr Sarna
99c5bec0e2 test: add a context manager for error injection to alternator
With the new context manager it's now easier to request an error
to be injected via REST API. Note that error injection is only
enabled in certain build modes (dev, debug, sanitize)
and the test case will be skipped if it's not possible to use
this mechanism.
2022-01-31 14:21:55 +01:00
Tomasz Grabiec
8297ae531d Merge "Automatically retry CQL DDL statements in presence of concurrent changes" from Kamil
Schema changes on top of Raft do not allow concurrent changes.
If two changes are attempted concurrently, one of them gets a
`group0_concurrent_modification` exception.

Catch the exception in CQL DDL statement execution function and retry.

In addition, improve the description of CQL DDL statements
in group 0 history table.

Add a test which checks that group 0 history grows iff a schema change does
not throw `group0_concurrent_modification`. Also check that the retry
mechanism works as expected.
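The retry mechanism can be sketched as follows (hypothetical names, not the actual CQL execution path): re-execute the statement whenever the concurrent-modification exception is thrown, so the race loser transparently retries on top of the winner's change.

```python
class Group0ConcurrentModification(Exception):
    pass

def execute_with_retry(statement, max_attempts=10):
    for _ in range(max_attempts):
        try:
            return statement()
        except Group0ConcurrentModification:
            continue  # another change won the race; retry from scratch
    raise Group0ConcurrentModification

attempts = {"n": 0}
def ddl():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise Group0ConcurrentModification  # simulated lost races
    return "applied"

assert execute_with_retry(ddl) == "applied"
assert attempts["n"] == 3
```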

* kbr/ddl-retry-v1:
  test: unit test for group 0 concurrent change protection and CQL DDL retries
  cql3: statements: schema_altering_statement: automatically retry in presence of concurrent changes
2022-01-31 14:12:35 +01:00
Tomasz Grabiec
b78bab7286 Merge "raft: fixes and improvements to the library and nemesis test" from Kamil
The Raft randomized nemesis test was improved by adding more
chaos: randomizing the network delay, server configuration, and
ticking speed of servers.

This allowed us to catch a serious bug, which is fixed in the first patch.

The patchset also fixes bugs in the test itself and adds quality-of-life
improvements such as better diagnostics when an inconsistency is detected.

* kbr/nemesis-random-v1:
  test: raft: randomized_nemesis_test: print state of each state machine when detecting inconsistency
  test: raft: randomized_nemesis_test: print details when detecting inconsistency
  test: raft: randomized_nemesis_test: print snapshot details when taking/loading snapshots in `impure_state_machine`
  test: raft: randomized_nemesis_test: keep server id in impure_state_machine
  test: raft: randomized_nemesis_test: frequent snapshotting configuration
  test: raft: randomized_nemesis_test: tick servers at different speeds in generator test
  test: raft: randomized_nemesis_test: simplify ticker
  test: raft: randomized_nemesis_test: randomize network delay
  test: raft: randomized_nemesis_test: fix use-after-free in `environment::crash()`
  test: raft: randomized_nemesis_test: fix use-after-free in two-way rpc functions
  test: raft: randomized_nemesis_test: rpc: don't propagate `gate_closed_exception` outside
  test: raft: randomized_nemesis_test: fix obsolete comment
  raft: fsm: print configuration entries appearing in the log
  raft: `operator<<(ostream&, ...)` implementation for `server_address` and `configuration`
  raft: server: abort snapshot applications before waiting for rpc abort
  raft: server: logging fix
  raft: fsm: don't advance commit index beyond matched entries
2022-01-31 13:25:27 +01:00
Calle Wilund
7ca72ffd19 database: Make wrapped version of timed_out_error a timed_out_error
Refs #9919

In a6202ae, throw_commitlog_add_error was added to ensure we have more
info on errors generated when writing to the commit log.

However, several call sites catch timed_out_error explicitly, without
checking for nested exceptions etc.
97bb1be and 868b572 tried to deal with this by using check routines.
It turns out there are call sites left, and while these should be
changed, it is safer and quicker for now to just ensure that
iff we have a timed_out_error, we throw yet another timed_out_error.
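The trick can be illustrated in a few lines (hypothetical names, not Scylla's exception types): make the wrapping exception inherit from the original timeout type, so existing catch sites keep working without auditing every one.

```python
class TimedOutError(Exception):
    """Stand-in for the timeout type call sites catch explicitly."""

class CommitlogAddError(Exception):
    """Stand-in for the wrapper carrying commitlog context."""

class CommitlogTimedOutError(CommitlogAddError, TimedOutError):
    """Carries the extra context but still *is* a timed_out_error."""

def wrap(exc):
    # Wrap, preserving timeout-ness so unmodified catch sites still match.
    if isinstance(exc, TimedOutError):
        return CommitlogTimedOutError(str(exc))
    return CommitlogAddError(str(exc))

caught = False
try:
    raise wrap(TimedOutError("commitlog write timed out"))
except TimedOutError:  # an unmodified call site still catches it
    caught = True
assert caught
```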

Closes #10002
2022-01-31 14:15:23 +02:00
Piotr Sarna
d50ed944f2 alternator: add error injection to BatchGetItem
When error injection is enabled at compile time, it's now possible
to inject an error into BatchGetItem in order to produce a partial
read, i.e. when only part of the items were retrieved successfully.
2022-01-31 12:56:00 +01:00
Piotr Sarna
31f4f062a2 alternator: fill UnprocessedKeys for failed batch reads
DynamoDB protocol specifies that when getting items in a batch
failed only partially, unprocessed keys can be returned so that
the user can perform a retry.
Alternator used to fail the whole request if any of the reads failed,
but now it instead produces the list of unprocessed keys
and returns it to the user, as long as at least one read was
successful.

NOTE: tested manually by compiling Scylla with error injection,
which fails every nth request. It's rather hard to figure out
an automatic test case for this scenario.

Fixes #9984
2022-01-31 12:56:00 +01:00
Mikołaj Sielużycki
93d6eb6d51 compacting_reader: Support fast_forward_to position range.
Fast forwarding is delegated to the underlying reader and is assumed to
be supported there. The only corner case requiring special handling that
has shown up in the tests is producing the partition-start fragment in
the forwarding case when there are no other fragments.

The compacting state keeps track of the uncompacted partition start, but
doesn't emit it by default. If end of stream is reached without producing
a mutation fragment, the partition start is not emitted. This is invalid
behaviour in the forwarding case, so I've added a public method to the
compacting state to force marking the partition as non-empty. I don't
like this solution, as it feels like breaking an abstraction, but I
didn't come across a better idea.
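A toy model of the corner case (hypothetical names, not the actual compaction code): when a forwarded window yields no fragments, the buffered partition start must be forced out explicitly.

```python
class CompactionState:
    def __init__(self, partition_start):
        self._partition_start = partition_start
        self._emitted = False

    def _maybe_emit_start(self, out):
        if not self._emitted:
            out.append(self._partition_start)
            self._emitted = True

    def consume(self, fragments, out):
        # partition start is emitted only once some fragment survives
        for frag in fragments:
            self._maybe_emit_start(out)
            out.append(frag)

    def force_partition_start(self, out):
        # public hook for the forwarding case: no fragment survived,
        # but the consumer still needs the partition start
        self._maybe_emit_start(out)

out = []
st = CompactionState("partition-start")
st.consume([], out)            # empty forwarded window: nothing emitted
assert out == []
st.force_partition_start(out)  # ...so it has to be forced explicitly
assert out == ["partition-start"]
```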

Tests: unit(dev, debug, release)

Message-Id: <20220128131021.93743-1-mikolaj.sieluzycki@scylladb.com>
2022-01-31 13:37:36 +02:00
Nadav Har'El
a25e265373 test/alternator: improve comment on why we need "global_random"
Improve the comment that explains why we needed to use an explicitly
shared random sequence instead of the usual "random". We now understand
that we need this workaround to undo what the pytest-randomly plugin does.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20220130155557.1181345-1-nyh@scylladb.com>
2022-01-31 10:07:56 +01:00
Nadav Har'El
59fe6a402c test/cql-pytest: use unique keys instead of random keys
Some of the tests in test/cql-pytest share the same table but use
different keys to ensure they don't collide. Before this patch we used a
random key, which was usually fine, but we recently noticed that the
pytest-randomly plugin may cause different tests to run through the *same*
sequence of random numbers and ruin our intent that different tests use
different keys.

So instead of using a *random* key, let's use a *unique* key. We can
achieve this uniqueness trivially - using a counter variable - because
anyway the uniqueness is only needed inside a single temporary table -
which is different in every run.
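The counter-based scheme can be sketched as (illustrative):

```python
import itertools

_counter = itertools.count()

def unique_key(prefix="k"):
    # deterministic and collision-free within one run, unlike random
    # keys that pytest-randomly may make identical across tests
    return f"{prefix}{next(_counter)}"

keys = {unique_key() for _ in range(1000)}
assert len(keys) == 1000  # unique by construction
```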

Another benefit is that it will now be clearer that the tests are
deterministic and not random - the intent of a random_string() key
was never to randomly walk the entire key space (random_string()
anyway had a pretty narrow idea of what a random string looks like) -
it was just to get a unique key.

Refs #9988 (fixes it for cql-pytest, but not for test/alternator)

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2022-01-31 09:01:23 +02:00
Tomasz Grabiec
b734615f51 util: cached_file: Fix corruption after memory reclamation was triggered from population
If memory reclamation is triggered inside _cache.emplace(), the _cache
btree can get corrupted. Reclaimers erase from it, and emplace()
assumes that the tree is not modified during its execution. It first
locates the target node and then does memory allocation.

Fix by running emplace() under allocating section, which disables
memory reclamation.
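The fix pattern can be sketched as follows (hypothetical names, not the logalloc API): the allocating section disables reclamation for its duration, so reclaimers cannot erase from the tree between locating the insertion point and allocating.

```python
from contextlib import contextmanager

class Region:
    def __init__(self):
        self._reclaim_disabled = 0  # nesting depth of allocating sections

    @contextmanager
    def allocating_section(self):
        # structure-modifying operations run inside this section
        self._reclaim_disabled += 1
        try:
            yield
        finally:
            self._reclaim_disabled -= 1

    def try_reclaim(self):
        # reclaimers must refuse to run while a section is active
        return self._reclaim_disabled == 0

region = Region()
with region.allocating_section():
    assert region.try_reclaim() is False  # emplace() runs safely here
assert region.try_reclaim() is True
```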

The bug manifests as assertion failures, e.g.:

./utils/bptree.hh:1699: void bplus::node<unsigned long, cached_file::cached_page, cached_file::page_idx_less_comparator, 12, bplus::key_search::linear, bplus::with_debug::no>::refill(Less) [Key = unsigned long, T = cached_file::cached_page, Less = cached_file::page_idx_less_comparator, NodeSize = 12, Search = bplus::key_search::linear, Debug = bplus::with_debug::no]: Assertion `p._kids[i].n == this' failed.

Fixes #9915

Message-Id: <20220130175639.15258-1-tgrabiec@scylladb.com>
2022-01-30 19:57:35 +02:00
Benny Halevy
3cee0f8bd9 shared_token_metadata: mutate_token_metadata: bump cloned copy ring_version
Currently this is done only in
storage_service::get_mutable_token_metadata_ptr
but it needs to be done here as well for code paths
calling mutate_token_metadata directly.

Currently, it is only called from network_topology_strategy_test.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20220130152157.2596086-1-bhalevy@scylladb.com>
2022-01-30 18:15:08 +02:00
Piotr Sarna
471205bdcf test/alternator: use a global random generator for all test cases
It was observed (perhaps it depends on the Python implementation)
that an identical seed was used for multiple test cases,
which violated the assumption that generated values are in fact
unique. Using a global generator instead ensures that it is
only seeded once.
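A minimal sketch of the approach (illustrative): one module-level generator, seeded exactly once at import time, so successive test cases continue a single sequence instead of restarting identically reseeded ones.

```python
import random

# one shared generator, seeded exactly once at import time
global_random = random.Random(0)

a = [global_random.randrange(10**9) for _ in range(3)]
b = [global_random.randrange(10**9) for _ in range(3)]
assert a != b  # successive draws continue the sequence; no repeats
```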

Tests: unit(dev) # alternator tests used to fail for me locally
  before this patch was applied
Message-Id: <315d372b4363f449d04b57f7a7d701dcb9a6160a.1643365856.git.sarna@scylladb.com>
2022-01-30 16:40:20 +02:00
Tomasz Grabiec
3e31126bdf Merge "Brush up the initial tokens generation code" from Pavel Emelyanov
On start the storage_service sets up initial tokens. Some dangling
variables, checks and code duplication had accumulated over time.

* xemul/br-storage-service-bootstrap-leftovers:
  dht: Use db::config to generate initial tokens
  database, dht: Move get_initial_tokens()
  storage_service: Factor out random/config tokens generation
  storage_service: No extra get_replace_address checks
  storage_service: Remove write-only local variable
2022-01-28 15:54:45 +01:00
Pavel Emelyanov
89a7c750ea Merge "Deglobalize repair_meta_map" from Benny
This series moves the static thread_local repair_meta_map instances
into the repair_service shards.

Refs #9809

Test: unit(release) (including scylla-gdb)
Dtest: repair_additional_test.py::TestRepairAdditional::{test_repair_disjoint_row_2nodes,test_repair_joint_row_3nodes_2_diff_shard_count} replace_address_test.py::TestReplaceAddress::test_serve_writes_during_bootstrap[rbo_enabled](release)

* git@github.com:bhalevy/scylla.git deglobalize-repair_meta_map-v1
  repair_service: deglobalize get_next_repair_meta_id
  repair_service: deglobalize repair_meta_map
  repair_service: pass reference to service to row_level_repair_gossip_helper
  repair_meta: define repair_meta_ptr
  repair_meta: move static repair_meta map functions out of line
  repair_meta: make get_set_diff a free function
  repair: repair_meta: no need to keep sharded<netw::messaging_service>
  repair: repair_meta: derive subordinate services from repair_service
  repair: pass repair_service to repair_meta
2022-01-28 14:12:33 +02:00
Avi Kivity
34252eda26 Update seastar submodule
* seastar 5524f229b...0d250d15a (6):
  > core: memory: Avoid current_backtrace() on alloc failure when logging suppressed
Fixes #9982
  > Merge "Enhance io-tester and its rate-limited job" from Pavel E
  > queue: pop: assert that the queue is not empty
  > io_queue: properly declare io_queue_for_tests
  > reactor: Fix off-by-end-of-line misprint in legacy configuration
  > fair_queue: Fix move constructor
2022-01-28 14:12:33 +02:00
Tomasz Grabiec
7ee79fa770 logalloc: Add more logging
Message-Id: <20220127232009.314402-1-tgrabiec@scylladb.com>
2022-01-28 14:12:33 +02:00
Kamil Braun
d10b508380 test: raft: randomized_nemesis_test: regression test for #9981 2022-01-27 17:50:40 +01:00
Kamil Braun
28b5792481 raft: server: don't create local waiter in modify_config
When forwarding a reconfiguration request from follower to a leader in
`modify_config`, there is no reason to wait for the follower's commit
index to be updated. The only useful information is that the leader
committed the configuration change - so `modify_config` should return as
soon as we know that.

There is a reason *not* to wait for the follower's commit index to be
updated: if the configuration change removes the follower, the follower
will never learn about it, so a local waiter will never be resolved.

`execute_modify_config` - the part of `modify_config` executed on the
leader - is thus modified to finish when the configuration change is
fully complete (including the dummy entry appended at the end), and
`modify_config` - which does the forwarding - no longer creates a local
waiter, but returns as soon as the RPC call to the leader confirms that
the entry was committed on the leader.

We still return an `entry_id` from `execute_modify_config` but that's
just an artifact of the implementation.

Fixes #9981.
2022-01-27 17:49:40 +01:00
Pavel Emelyanov
1525c04db3 dht: Use db::config to generate initial tokens
The replica::database is passed into the helper just to get the
config from it. Better to use the config directly without involving
the database.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-01-27 16:41:29 +03:00
Pavel Emelyanov
77532a6a36 database, dht: Move get_initial_tokens()
The helper in question has nothing to do with replica/database and
is only used by dht to convert a config option to a set of tokens.
It sounds like the helper deserves to live where it's needed.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-01-27 16:41:29 +03:00
Pavel Emelyanov
50170366ea storage_service: Factor out random/config tokens generation
There's a place in the normal node start path that parses the
initial_token option or generates num_tokens random tokens. This code has
been used almost unchanged since being ported from its Java version.
Later, dht::get_bootstrap_token() appeared with the same internal logic.

This patch generalizes these two places. Logging messages are unified
too (dtests do not seem to check those).

The change improves a corner case. The normal node startup code doesn't
check whether initial_token is empty and num_tokens is 0, generating an
empty bootstrap_tokens set. It then fails later with an obscure
'remove_endpoint should be used instead' message.
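The unified logic can be sketched roughly as (hypothetical names and signature, not Scylla's code): use initial_token when set, otherwise generate num_tokens random tokens, and fail early when the result would be empty.

```python
import random

def get_initial_tokens(initial_token, num_tokens, rng=random.Random(0)):
    # initial_token wins when set; otherwise generate random tokens
    if initial_token:
        return {t.strip() for t in initial_token.split(",")}
    tokens = {str(rng.randrange(-2**63, 2**63)) for _ in range(num_tokens)}
    if not tokens:
        # fail early instead of the obscure downstream error
        raise ValueError("no tokens: set initial_token or num_tokens > 0")
    return tokens

assert get_initial_tokens("1, 2, 3", 0) == {"1", "2", "3"}
assert len(get_initial_tokens("", 8)) == 8
try:
    get_initial_tokens("", 0)
    assert False, "empty token set must be rejected"
except ValueError:
    pass
```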

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-01-27 16:41:29 +03:00
Pavel Emelyanov
7b521405e4 storage_service: No extra get_replace_address checks
get_replace_address() returns optional<inet_address>, but in many
cases it's used under the if (is_replacing()) branch which, in turn,
returns bool(get_replace_address()), so this code only runs when the
returned optional is engaged.

The extra checks can be removed, making the code a tiny bit shorter.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-01-27 16:41:29 +03:00
Pavel Emelyanov
330f2cfcfc storage_service: Remove write-only local variable
The set of tokens used to be used after being filled, but now
it's write-only.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-01-27 16:41:25 +03:00
Kamil Braun
4a52b802ac test: unit test for group 0 concurrent change protection and CQL DDL retries
Check that group 0 history grows iff a schema change does not throw
`group0_concurrent_modification`. Check that the CQL DDL statement retry
mechanism works as expected.
2022-01-27 11:26:15 +01:00
Kamil Braun
edd8344706 cql3: statements: schema_altering_statement: automatically retry in presence of concurrent changes
Schema changes on top of Raft do not allow concurrent changes.
If two changes are attempted concurrently, one of them gets a
`group0_concurrent_modification` exception.

Catch the exception in CQL DDL statement execution function and retry.

In addition, the description of CQL DDL statements in group 0 history
table was improved.
2022-01-27 11:26:14 +01:00
Benny Halevy
f8db9e1bd8 repair_service: deglobalize get_next_repair_meta_id
Rather than using a static uint32_t next_id,
move the next_id variable into repair_service shard 0
and manage it there.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-01-27 11:34:21 +02:00
Benny Halevy
90ba9013be repair_service: deglobalize repair_meta_map
Move the static repair_meta_map into the repair_service
and expose it from there.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-01-27 11:01:47 +02:00
Benny Halevy
e6b6fdc9a0 repair_service: pass reference to service to row_level_repair_gossip_helper
Note that we can't pass the repair_service container()
from its ctor since it's not populated until all shards start.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-01-27 11:00:26 +02:00
Benny Halevy
3008ecfd4e repair_meta: define repair_meta_ptr
Keep repair_meta in repair_meta_map as shared_ptr<repair_meta>
rather than lw_shared_ptr<repair_meta> so the map can be defined
in the header file using only a forward-declared repair_meta class.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-01-27 09:18:14 +02:00
Benny Halevy
fdc0a9602c repair_meta: move static repair_meta map functions out of line
Define the static {get,insert,remove}_repair_meta functions outside
the repair_meta class definition, on the way to moving them,
along with the repair_meta_map itself, to repair_service.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-01-27 09:15:09 +02:00
Benny Halevy
b5427cc6d1 repair_meta: make get_set_diff a free function
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-01-27 09:13:09 +02:00
Benny Halevy
224e7497e0 repair: repair_meta: no need to keep sharded<netw::messaging_service>
All repair_meta needs is the local instance.
If need be, it's a peering service, so container()
can be used.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-01-27 09:13:09 +02:00
Benny Halevy
c4ac92b2b7 repair: repair_meta: derive subordinate services from repair_service
Use repair_service as the authoritative source for
the database, messaging_service, system_distributed_keyspace,
and view_update_generator, similar to repair_info.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-01-27 09:12:53 +02:00
Benny Halevy
a71d6333e4 repair: pass repair_service to repair_meta
Prepare for holding the repair_meta_map in repair_service.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-01-27 09:12:51 +02:00
Tomasz Grabiec
ba6c02b38a Merge "Clear old entries from group 0 history when performing schema changes" from Kamil
When performing a change through group 0 (which right now means schema
changes), clear entries from group 0 history table which are older
than one week.

This is done by including an appropriate range tombstone in the group 0
history table mutation.

* kbr/g0-history-gc-v2:
  idl: group0_state_machine: fix license blurb
  test: unit test for clearing old entries in group0 history
  service: migration_manager: clear old entries from group 0 history when announcing
2022-01-26 16:12:40 +01:00
Kamil Braun
95ac8ead4f test: raft: randomized_nemesis_test: print state of each state machine when detecting inconsistency 2022-01-26 16:09:41 +01:00
Kamil Braun
e249ea5aef test: raft: randomized_nemesis_test: print details when detecting inconsistency
If the returned result is inconsistent with the constructed model, print
the differences in detail instead of just failing an assertion.
2022-01-26 16:09:41 +01:00
Kamil Braun
1170e47af4 test: raft: randomized_nemesis_test: print snapshot details when taking/loading snapshots in impure_state_machine
Useful for debugging.
2022-01-26 16:09:41 +01:00
Kamil Braun
b8158e0b43 test: raft: randomized_nemesis_test: keep server id in impure_state_machine
Will be used for logging.
2022-01-26 16:09:41 +01:00
Kamil Braun
3c01449472 test: raft: randomized_nemesis_test: frequent snapshotting configuration
With probability 1/2, run the test with a configuration that causes
servers to take snapshots frequently.
2022-01-26 16:09:41 +01:00
Kamil Braun
7546a9ebb5 test: raft: randomized_nemesis_test: tick servers at different speeds in generator test
Previously all servers were ticked at the same moment, every 10
network/timer ticks.

Now we tick each server with probability 1/10 on each network/timer
tick. Thus, on average, every server is ticked once per 10 ticks.
But now we're able to obtain more interesting behaviors.
E.g. we can now observe servers which are stalling for as long as 10 ticks
and servers which temporarily speed up to tick once per each network tick.
2022-01-26 16:09:41 +01:00