Add precompiled header support to CMakeLists.txt and configure.py -
it improves compilation time by approximately 10%.
A new header, `stdafx.hh`, is added; don't include it manually -
the compiler will include it for you. The header contains includes from
external libraries used by Scylla - seastar, the standard library,
linux headers and zlib.
The feature is enabled by default; disable it with the CMake option
`Scylla_USE_PRECOMPILED_HEADER` or with configure.py --disable-precompiled-header.
The feature should be disabled when checking headers - otherwise
you might get false negatives on missing includes from seastar / abseil and so on.
Note: the following configuration needs to be added to ccache.conf:
sloppiness = pch_defines,time_macros,include_file_mtime,include_file_ctime
Closes scylladb/scylladb#26617
In the new API the biggest change is that sinks implement only the
data_sink_impl::put(span<temporary_buffer>) overload.
The encrypted file impl and the sstables compress sink use the fallback_put()
helper that generates a chain of continuations, each holding a buffer.
The counting_data_sink in transport had mostly been brought to a correct
implementation by the previous patch; the change here is to replace the
vector argument with a span.
Most other sinks just re-implement their put(vector<temporary_buffer>)
overload by iterating over the span and non-preemptively grabbing buffers
from it.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The former helper is implemented like this:
```
future<> invoke_with_counting(fn) {
    if (not_needed) {
        return fn();
    }
    return futurize_invoke(something).then([fn] {
        return fn();
    }).finally(something_else);
}
```
and all put() overloads look like:
```
future<> put(arg) {
    return invoke_with_counting([this, arg] {
        return lower_sink.put(arg);
    });
}
```
The problem is that with seastar API level 9, the put() overload will
have to move the passed buffers into stable storage before preempting.
In the current implementation, when counting is needed,
invoke_with_counting links the lower_sink.put() invocation to the
futurize_invoke(something) future. Even though "something" is
non-preempting, and futurize_invoke() on it returns a ready future, in
debug mode ready_future.then() does preempt, and the API level 9 put()
contract will be violated.
To facilitate the switch to the new API level, this patch rewrites one of
the put() overloads to look like:
```
future<> put(arg) {
    if (not_needed) {
        return lower_sink.put(arg);
    }
    something;
    return lower_sink.put(arg).finally(something_else);
}
```
Other put()-s will be removed by the next patch anyway, but this put() will
be patched and will call lower_sink.put() without preemption.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The API to put a scattered_message into an output_stream is gone in seastar
API level 9; transport is the only place in Scylla that still uses it.
The change is to put the response as a sequence of temporary_buffer-s.
This preserves the zero-copy-ness of the reply, but a few things need
to be taken care of.
First, the response header frame needs to be put as a zero-copy buffer
too. Even though output_stream supports a semi-mixed mode, where z.c.
buffers can follow the buffered writes, it doesn't apply here. The socket
is flushed in batched mode, so even if the first reply populates the
stream with data and flushes it, the next response may happen to start
putting its header frame before the delayed flush took place.
Second, because the socket is flushed in the batch-flush poller, the temporary
buffers that are put into it must hold the foreign_ptr with the response
object. With scattered_message this was implemented with the help of a
deleter attached to the message; now the deleter is shared
between all buffers.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Before `sl:driver` was introduced, service levels were assigned as
follows:
1. New connections were processed in `main`.
2. After user authentication was completed, the connection's SL was
changed to the user's SL (or `sl:default` if the user had no SL).
This commit introduces `service_level_state` to `client_state` and
implements the following logic in `transport/server`:
1. If `sl:driver` is not present in the system (for example, it was
removed), service levels behave as described above.
2. If `sl:driver` is present, the flow is:
I. New connections use `sl:driver`.
II. After user authentication is completed, the connection's SL is
changed to the user's SL (or `sl:default`).
    III. If a REGISTER (for events) request is handled, the connection is
         the client's control connection. We mark the client_state
         to permanently use `sl:driver`.
The aforementioned state `2.III` is represented by
`_control_connection` flag in `client_state`.
Fixes: scylladb/scylladb#24411
Before this change, unauthorized connections stayed in the `main`
scheduling group. This is not ideal: in such a case `sl:default`
should be used instead, for behavior consistent with the scenario
where a user is authenticated but has no service level assigned.
This commit adds a call to `update_scheduling_group` at the end of
connection creation for an unauthenticated user, to make sure the
service level is switched to `sl:default`.
Fixes: scylladb/scylladb#26040
Before this change, new connections were handled in a default
scheduling group (`main`), because before the user is authenticated
we do not know which service level should be used. With the new
`sl:driver` service level, creation of new connections can be moved to
`sl:driver`.
We switch the service level as early as possible, in `do_accepts`.
There is a possibility that `sl:driver` does not exist yet, for
instance in specific upgrade cases, or if it was removed. Therefore,
we also switch to `sl:driver` after a connection is accepted.
Refs: scylladb/scylladb#24411
Moved files:
- generic_server.hh
- generic_server.cc
- protocol_server.hh
Fixes: #22112
This is a cleanup, no need to backport
Closes scylladb/scylladb#25090
The latter is recommended in seastar, and the former was left as a
compatibility alias. Latest seastar explicitly marks it as deprecated, so
once the submodule is updated, compilation logs will explode.
Most of the patch is generated with
```
for f in $(git grep -l '\<distributed<[A-Za-z0-9:_]*>') ; do sed -e 's/\<distributed<\([A-Za-z0-9:_]*\)>/sharded<\1>/g' -i $f; done
for f in $(git grep -l distributed.hh); do sed -e 's/distributed.hh/sharded.hh/' -i $f ; done
```
and a small manual change in test/perf/perf.hh
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes scylladb/scylladb#26136
Depends on https://github.com/scylladb/seastar/pull/2651
These columns have probably been missing forever -
they were added to the schema but never assigned any value:
```
cqlsh> select * from system.clients;
------------------+------------------------
...
ssl_cipher_suite | null
ssl_enabled | null
ssl_protocol | null
...
```
This patch sets values of these columns:
- with a TLS connection, the 3 TLS-related fields are filled in,
- without TLS, `ssl_enabled` is set to `false` and other columns are
`null`,
- if there's an error while inspecting TLS values, the connection is
dropped.
We want to save the TLS info of a connection just after accepting it,
but without waiting for a TLS handshake to complete, so once the
connection is accepted, we inspect it in the background, letting the
server accept the next connections immediately.
Later, when we construct the system.clients virtual table, the previously
saved data can be instantaneously assigned to client_data, which is a
struct representing a row in the system.clients table. This way we don't
slow down constructing this table more than necessary, which is
relevant for cases with plenty of connections.
Fixes: #9216
Closes scylladb/scylladb#22961
`cql_transport::response::make_frame` used to throw `protocol_exception`.
With this change it will return `result_with_exception_ptr<sstring>` instead.
Code changes are propagated to `cql_transport::cql_server::response::make_message`
and from there to `cql_transport::cql_server::connection::write_response`.
`write_response` continuation calling `make_message` used to transform
the exception from `make_message` to an exception future, and now the
logic stays the same, just explicitly stated at this code layer,
so the behavior is not changed.
Refs: #24567
As part of the effort to replace `protocol_exception` throws, the throw in
`validate_utf8` from `cql_transport::request_reader` is replaced
with returning `utils::result_with_exception_ptr`. This change affects
only the three places it is called from in the same file,
`transport/request.hh`.
Refs: #24567
Replace throwing `protocol_exception` with returning it as a result
or an exceptional future in the transport server module. The goal is
to improve performance.
Most of the `protocol_exception` throws were made from
`fragmented_temporary_buffer` module, by passing `exception_thrower()`
to its `read*` methods. `fragmented_temporary_buffer` is changed so
that it now accepts an exception creator, not exception thrower.
The `fragmented_temporary_buffer_concepts::ExceptionCreator` concept replaces
`fragmented_temporary_buffer_concepts::ExceptionThrower`, and all
methods that used to throw now return a failed result of type
`utils::result_with_exception_ptr`. This change is then propagated to the callers.
The scope of this patch is `protocol_exception`, so commitlog just calls the
`.value()` method on the result. If the result failed, that throws the
exception from the result, as defined by `utils::result_with_exception_ptr_throw_policy`.
This means that the behavior of the commitlog module stays the same.
The transport server module handles results gracefully. All the caller functions
that return a non-future value `T` now return `utils::result_with_exception_ptr<T>`.
When the caller is a function that returns a future and it receives a
failed result, `make_exception_future(std::move(failed_result).value())`
is returned. The rest of the callstack up to the transport server `handle_error`
function already works without throwing, and that's how zero throws are
achieved.
Fixes: #24567
When shutting down in `generic_server`, connections are now closed in two steps.
First, only the RX (receive) side is shut down. Then, after all ongoing requests
have completed, or a timeout has expired, the connections are fully closed.
Fixes scylladb/scylladb#24481
In PR #23156, a new function `sleep_until_timeout_passes` was introduced
to wait until a read request times out or completes. However, the function
did not handle cases where the sleep is aborted via _abort_source, which
could result in WARN messages like "Exceptional future is ignored" during
shutdown.
This change adds proper handling for that exception, eliminating the warning.
`protocol_exception` is thrown in several places. This has become a performance issue, especially when starting/restarting a server. To alleviate this issue, throwing the exception has to be replaced with returning it as a result or an exceptional future.
This PR replaces throws in the `transport/server` module. This is achieved by using result_with_exception, and in some places, where suitable, just by creating and returning an exceptional future.
There are four commits in this PR. The first commit introduces tests in `test/cqlpy`. The second commit refactors transport server `handle_error` to not rethrow exceptions. The third commit refactors reusable buffer writer callbacks. The fourth commit replaces throwing `protocol_exception` to returning it.
Based on the comments on an issue linked in https://github.com/scylladb/scylladb/issues/24567, the main culprit on the protocol-exception side is the invalid-protocol-version one, so I tested that exception for performance.
In order to see if there is a measurable difference, a modified version of the `test_protocol_version_mismatch` Python test is used, with 100'000 runs across 10 processes (not threads, to avoid the Python GIL). One test run consisted of 1 warm-up run and 5 measured runs. The first test run was executed on the current code, with throwing protocol exceptions. The second test run was executed on the new code, with returning protocol exceptions. The performance report is in https://github.com/scylladb/scylladb/pull/24738#issuecomment-3051611069. It shows ~10% gains in real, user, and sys time for this test.
Testing
Build: `release`
Test file: `test/cqlpy/test_protocol_exceptions.py`
Test name: `test_protocol_version_mismatch` (modified for mass connection requests)
Test arguments:
```
max_attempts=100'000
num_parallel=10
```
Throwing `protocol_exception` results:
```
real=1:26.97 user=10:00.27 sys=2:34.55 cpu=867%
real=1:26.95 user=9:57.10 sys=2:32.50 cpu=862%
real=1:26.93 user=9:56.54 sys=2:35.59 cpu=865%
real=1:26.96 user=9:54.95 sys=2:32.33 cpu=859%
real=1:26.96 user=9:53.39 sys=2:33.58 cpu=859%
real=1:26.95 user=9:56.85 sys=2:34.11 cpu=862% # average
```
Returning `protocol_exception` as `result_with_exception` or an exceptional future:
```
real=1:18.46 user=9:12.21 sys=2:19.08 cpu=881%
real=1:18.44 user=9:04.03 sys=2:17.91 cpu=869%
real=1:18.47 user=9:12.94 sys=2:19.68 cpu=882%
real=1:18.49 user=9:13.60 sys=2:19.88 cpu=883%
real=1:18.48 user=9:11.76 sys=2:17.32 cpu=878%
real=1:18.47 user=9:10.91 sys=2:18.77 cpu=879% # average
```
This PR replaced `transport/server` throws of `protocol_exception` with returns. There are a few other places where protocol exceptions are thrown, and there are many places where `invalid_request_exception` is thrown. That is out of scope of this single PR, so the PR only references, and does not resolve, issue #24567.
Refs: #24567
This PR improves performance in cases when protocol exceptions happen, for example during connection storms. It will require backporting.
Closes scylladb/scylladb#24738
* github.com:scylladb/scylladb:
test/cqlpy: add cpp exception metric test conditions
transport/server: replace protocol_exception throws with returns
utils/reusable_buffer: accept non-throwing writer callbacks via result_with_exception
transport/server: avoid exception-throw overhead in handle_error
test/cqlpy: add protocol_exception tests
Replace throwing protocol_exception with returning it as a result
or an exceptional future in the transport server module. This
improves performance, for example during connection storms and
server restarts, where protocol exceptions are more frequent.
In functions already returning a future, protocol exceptions are
propagated using an exceptional future. In functions not already
returning a future, result_with_exception is used.
A notable change is checking v.failed() before calling v.get() in the
process_request function, to avoid throwing in case of an
exceptional future.
Refs: #24567
Make make_bytes_ostream and make_fragmented_temporary_buffer accept
writer callbacks that return utils::result_with_exception instead of
forcing them to throw on error. This lets callers propagate failures
by returning an error result rather than throwing an exception.
Introduce buffer_writer_for, bytes_ostream_writer, and fragmented_buffer_writer
concepts to simplify and document the template requirements on writer callbacks.
This patch does not modify the actual callbacks passed, except for the syntax
changes needed for successful compilation, without changing the logic.
Refs: #24567
Previously, connection::handle_error always called f.get() inside a try/catch,
forcing every failed future to throw and immediately catch an exception just to
classify it. This change eliminates that extra throw/catch cycle by first checking
f.failed(), getting the stored std::exception_ptr via f.get_exception(), and
then dispatching on its type via utils::try_catch<T>(eptr).
The error-response logic is not changed - cassandra_exception, std::exception,
and unknown exceptions are caught and processed, and any exceptions thrown by
write_response while handling those exceptions continue to escape handle_error.
Refs: #24567
It's not a good usage as there is only one non-empty implementation.
Also we need to change it further in the following commit which
makes it incompatible with listener code.
cql_server_config
This also repeats the previous patch for another updateable_value. The thing
here is that this config option is passed further to generic_server, but
not used by transport::server itself.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
to cql_server_config
Similarly to the previous patch -- move yet another updateable_value to let
transport::server eventually stop messing with db::config.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This is an updateable_value that's initialized from a db::config named_value
to tackle its shard-unsafety. However, the cql_server_config is created
by the controller using the sharded_parameter() helper, so that it can be safely
passed to the server.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
C++20 introduced two new attributes - [[likely]] and [[unlikely]] - that
function as a built-in replacement for the __builtin_expect intrinsic implemented
in various compilers. Since they make code easier to read and are
an integral part of the language, there's no reason not to use them
instead.
Closes scylladb/scylladb#24786
This reverts commit 0b516da95b, reversing
changes made to 30199552ac. It breaks
cluster.random_failures.test_random_failures.test_random_failures
in debug mode (at least).
Fixes #24513
It has been observed to generate ~200 kiB allocations.
Since we have already been made aware of that, we can silence the warning
to clean up the logs.
Closes scylladb/scylladb#24360
This change prepares the ground for state update unification for raft-bound subsystems. It introduces schema_applier, which in the future will become a generic interface for applying mutations in raft.
Pulling `database::apply()` out of the schema merging code will allow batching changes to subsystems. Future generic code will first call `prepare()` on all implementations, then a single `database::apply()`, then `update()` on all implementations; then on each shard it will call `commit()` for all implementations, without preemption, so that the change is observed as atomic across all subsystems, and then `post_commit()`.
Backport: no, it's a new feature
Fixes: https://github.com/scylladb/scylladb/issues/19649
Closes scylladb/scylladb#20853
* github.com:scylladb/scylladb:
storage_service: always wake up load balancer on update tablet metadata
db: schema_applier: call destroy also when exception occurs
db: replica: simplify seeding ERM during shema change
db: remove cleanup from add_column_family
db: abort on exception during schema commit phase
db: make user defined types changes atomic
replica: db: make keyspace schema changes atomic
db: atomically apply changes to tables and views
replica: make truncate_table_on_all_shards get whole schema from table_shards
service: split update_tablet_metadata into two phases
service: pull out update_tablet_metadata from migration_listener
db: service: add store_service dependency to schema_applier
service: simplify load_tablet_metadata and update_tablet_metadata
db: don't perform move on tablet_hint reference
replica: split add_column_family_and_make_directory into steps
replica: db: split drop_table into steps
db: don't move map references in merge_tables_and_views()
db: introduce commit_on_shard function
db: access types during schema merge via special storage
replica: make non-preemptive keyspace create/update/delete functions public
replica: split update keyspace into two phases
replica: split creating keyspace into two functions
db: rename create_keyspace_from_schema_partition
db: decouple functions and aggregates schema change notification from merging code
db: store functions and aggregates change batch in schema_applier
db: decouple tables and views schema change notifications from merging code
db: store tables and views schema diff in schema_applier
db: decouple user type schema change notifications from types merging code
service: unify keyspace notification functions arguments
db: replica: decouple keyspace schema change notifications to a separate function
db: add class encapsulating schema merging
The patch removes connection advertising functions and moves the logic to constructors and destructors, providing a more robust way of counting connections. This change was also necessary to allow skipping the connection process function during shedding, as the active connections counter needs to be decremented.
The patch doesn't fix any active bug, just improves the flow.
Backport: none, it's a cosmetic change
Closes scylladb/scylladb#23890
* github.com:scylladb/scylladb:
generic_server: make shutdown() return void
generic_server: skip connection processing logic after shedding the connection
transport: generic_server: remove no longer used connection advertising code
transport: move new connection trace logs into connection class ctor/dtor
transport: move cql connections counting into connection class ctor/dtor
This is a step towards replacing advertise_new_connection/unadvertise_connection
by RAII which is less error prone. Advertising will be removed in subsequent commit.
Metadata id was introduced in CQLv5 to keep the metadata of a prepared
statement consistent between the driver and the database. This commit introduces
a protocol extension that allows using the same mechanism in CQLv4.
This change:
- Introduces the SCYLLA_USE_METADATA_ID protocol extension for CQLv4
- Introduces the METADATA_CHANGED flag in RESULT. The flag comes directly
from the CQLv5 binary protocol. In CQLv4, the bit was never used, so we
assume it is safe to reuse it.
- Implements handling of metadata_id and METADATA_CHANGED in RESULT rows
- Implements returning metadata_id in RESULT prepared
- Implements reading metadata_id from EXECUTE
- Adds a description of SCYLLA_USE_METADATA_ID to the documentation
Metadata_id is wrapped in cql_metadata_id_wrapper because we need to
distinguish the following situations:
- Metadata_id is not supported by the protocol (e.g. CQLv4 without the
extension is used)
- Metadata_id is supported by the protocol but not set - e.g. a PREPARE
query is being handled: it doesn't contain metadata_id in the
request, but the reply (RESULT prepared) must contain metadata_id
- Metadata_id is supported by the protocol and set; any number of
bytes >= 0 is allowed, according to the CQLv5 protocol specification
Fixes scylladb/scylladb#20860
Before this change, if a read executor had just enough targets to
achieve the query's CL, and there was a connection drop (e.g. node failure),
the read executor waited for the entire request timeout to give drivers
time to execute a speculative read in the meantime. Such behavior doesn't
work well when a very long query timeout (e.g. 1800s) is set, because
the unfinished request blocks topology changes.
This change implements a mechanism to throw a new
read_failure_exception_with_timeout in the aforementioned scenario.
The exception is caught by the CQL server, which conducts the waiting after
the ERM is released. The new exception inherits from read_failure_exception,
because layers that don't catch the exception (such as the mapreduce
service) should handle it just like a regular read_failure.
However, when the CQL server catches the exception, it returns
read_timeout_exception to the client, because after the additional waiting
such an error message is more appropriate (read_timeout_exception was
also returned before this change was introduced).
This change:
- Rewrite cql_server::connection::process_request_one to use
seastar::futurize_invoke and try_catch<> instead of utils::result_try
- Add new read_failure_exception_with_timeout and throws it in storage_proxy
- Add sleep in CQL server when the new exception is caught
- Catch local exceptions in Mapreduce Service and convert them
to std::runtime_error.
- Add get_cql_exclusive to manager_client.py
- Add test_long_query_timeout_erm
No backport needed - minor issue fix.
Closes scylladb/scylladb#23156
* github.com:scylladb/scylladb:
test: add test_long_query_timeout_erm
test: add get_cql_exclusive to manager_client.py
mapreduce: catch local read_failure_exception_with_timeout
transport: storage_proxy: release ERM when waiting for query timeout
transport: remove redundant references in process_request_one
transport: fix the indentation in process_request_one
transport: add futures in CQL server exception handling
This change:
- Add new read_failure_exception_with_timeout exception
- Add throw of read_failure_exception_with_timeout in storage_proxy
- Add abort_source to CQL server, as well as to_stop() method for
the correct abort handling
- Add sleep in CQL server when the new exception is caught
Refs #21831
The references were added and used in previous commits to
limit the number of line changes for a reviewer convenience.
This commit removes the redundant references to make the code
more clear and concise.
Prepare for the next commit that will introduce a
seastar::sleep in handling of selected exception.
This commit:
- Rewrite cql_server::connection::process_request_one to use
seastar::futurize_invoke and try_catch<> instead of
utils::result_try.
- The indentation is intentionally incorrect to reduce the
number of changed lines. Next commits fix it.
The db::config is the top-level configuration of Scylla; we generally try to
avoid using it even in Scylla components: each uses its own config,
initialized by the service creator out of the db::config itself. The
generic_server is no exception, all the more so since it already has its
own config.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes scylladb/scylladb#23705
Adds a new live-updatable config option: uninitialized_connections_semaphore_cpu_concurrency.
It should help to reduce CPU usage by limiting CPU concurrency for new connections. As a last resort, when those connections have been waiting for initial processing for too long (over 1 minute), they are shed.
New connections_shed and connections_blocked metrics are added for tracking.
Testing:
- manually via simple program creating high number of connection and constantly re-connecting
- added benchmark
Following are benchmark results:
Before:
```
> build/release/test/perf/perf_generic_server --smp=1
170101.41 tps ( 13.1 allocs/op, 0.0 logallocs/op, 7.0 tasks/op, 4695 insns/op, 3178 cycles/op, 0 errors)
[...]
throughput: mean=173850.06 standard-deviation=1844.48 median=174509.66 median-absolute-deviation=874.23 maximum=175087.49 minimum=170588.54
instructions_per_op: mean=4725.59 standard-deviation=13.35 median=4729.38 median-absolute-deviation=12.49 maximum=4738.61 minimum=4709.96
cpu_cycles_per_op: mean=3135.08 standard-deviation=32.13 median=3122.68 median-absolute-deviation=22.29 maximum=3179.38 minimum=3103.15
```
After:
```
> build/release/test/perf/perf_generic_server --smp=1
167373.19 tps ( 13.1 allocs/op, 0.0 logallocs/op, 7.0 tasks/op, 4821 insns/op, 3371 cycles/op, 0 errors)
[...]
throughput:
mean= 171199.55 standard-deviation=2484.58
median= 171667.06 median-absolute-deviation=2087.63
maximum=173689.11 minimum=167904.76
instructions_per_op:
mean= 4801.90 standard-deviation=16.54
median= 4796.78 median-absolute-deviation=9.32
maximum=4830.71 minimum=4789.81
cpu_cycles_per_op:
mean= 3245.26 standard-deviation=32.28
median= 3230.44 median-absolute-deviation=16.52
maximum=3297.39 minimum=3215.62
```
The patch adds around 67 insns/op, so its effect on performance should be negligible.
Fixes: https://github.com/scylladb/scylladb/issues/22844
Closes scylladb/scylladb#22828
* github.com:scylladb/scylladb:
transport: move on_connection_close into connection destructor
test: perf: make aggregated_perf_results formatting more human readable
transport: add blocked and shed connection metrics
generic_server: throttle and shed incoming connections according to semaphore limit
generic_server: add data source and sink wrappers bookkeeping network IO
generic_server: coroutinize part of server::do_accepts
test: add benchmark for generic_server
test: perf: add option to count multiple ops per time_parallel iteration
generic_server: add semaphore for limiting new connections concurrency
generic_server: add config to the constructor
generic_server: add on_connection_ready handler