Commit Graph

271 Commits

Author SHA1 Message Date
Avi Kivity
16ff92745f Merge 'perf: add alternator frontend to perf_simple_query' from Piotr Sarna
The perf_simple_query tool is extended with another protocol
aside from CQL - alternator. The alternative (pun intended) benchmark
can be executed by using the `--alternator X` parameter, where X
specifies one of the alternator's mandatory write isolation options:
 - "forbid_rmw" - forbids RMW (read-modify-write) requests
 - "unsafe" - never uses LWT (lightweight transactions), even for RMW
 - "always_use_lwt" - uses LWT even for non-RMW requests
 - "only_rmw_uses_lwt" - that one's rather self-explanatory

Alternator cooperates with existing `--write` and `--delete` parameters.

Aside from being able to check for improvements/regressions
in the alternator module, it's also possible to check how different
isolation levels influence the number of allocations and overall
performance, or to compare alternator against CQL.

Example output showing the difference in isolation levels:

```bash
$ ./build/release/test/perf/perf_simple_query_g --smp 1 \
    --write --alternator only_rmw_uses_lwt --default-log-level error
random-seed=1235000092
Started alternator executor
10873.76 tps (202.9 allocs/op,  12.4 tasks/op,  369921 insns/op)
11096.09 tps (202.7 allocs/op,  12.1 tasks/op,  374792 insns/op)
11100.09 tps (203.0 allocs/op,  12.1 tasks/op,  376469 insns/op)
11068.98 tps (203.1 allocs/op,  12.1 tasks/op,  377132 insns/op)
11081.24 tps (203.2 allocs/op,  12.1 tasks/op,  377290 insns/op)

median 11081.24 tps (203.2 allocs/op,  12.1 tasks/op,  377290 insns/op)
median absolute deviation: 14.85
maximum: 11100.09
minimum: 10873.76

$ ./build/release/test/perf/perf_simple_query_g --smp 1 \
    --random-seed 1235000092 --write --alternator always_use_lwt \
    --default-log-level error
random-seed=1235000092
Started alternator executor
3605.35 tps (877.4 allocs/op, 174.6 tasks/op,  986666 insns/op)
3555.71 tps (890.0 allocs/op, 174.4 tasks/op, 1006945 insns/op)
3530.20 tps (899.7 allocs/op, 174.1 tasks/op, 1021908 insns/op)
3437.65 tps (908.2 allocs/op, 174.6 tasks/op, 1033992 insns/op)
3409.88 tps (913.2 allocs/op, 174.4 tasks/op, 1041240 insns/op)

median 3530.20 tps (899.7 allocs/op, 174.1 tasks/op, 1021908 insns/op)
median absolute deviation: 75.15
maximum: 3605.35
minimum: 3409.88
```

Closes #8656

* github.com:scylladb/scylla:
  perf: add alternator frontend to perf_simple_query
  cdc: make metadata.hh self-sufficient
  test: add minimal alternator_test_env
2021-05-18 16:17:54 +03:00
Piotr Sarna
6c6ccda8a0 perf: add alternator frontend to perf_simple_query
The perf_simple_query tool is extended with another protocol
aside from CQL - alternator. The alternative (pun intended) benchmark
can be executed by using the `--alternator X` parameter, where X
specifies one of the alternator's mandatory write isolation options:
 - "forbid_rmw" - forbids RMW (read-modify-write) requests
 - "unsafe" - never uses LWT (lightweight transactions), even for RMW
 - "always_use_lwt" - uses LWT even for non-RMW requests
 - "only_rmw_uses_lwt" - that one's rather self-explanatory

Alternator cooperates with existing --write and --delete parameters.

Aside from being able to check for improvements/regressions
in the alternator module, it's also possible to check how different
isolation levels influence the number of allocations and overall
performance, or to compare alternator against CQL.

$ ./build/release/test/perf/perf_simple_query_g --smp 1 \
    --write --alternator only_rmw_uses_lwt --default-log-level error
random-seed=1235000092
Started alternator executor
10873.76 tps (202.9 allocs/op,  12.4 tasks/op,  369921 insns/op)
11096.09 tps (202.7 allocs/op,  12.1 tasks/op,  374792 insns/op)
11100.09 tps (203.0 allocs/op,  12.1 tasks/op,  376469 insns/op)
11068.98 tps (203.1 allocs/op,  12.1 tasks/op,  377132 insns/op)
11081.24 tps (203.2 allocs/op,  12.1 tasks/op,  377290 insns/op)

median 11081.24 tps (203.2 allocs/op,  12.1 tasks/op,  377290 insns/op)
median absolute deviation: 14.85
maximum: 11100.09
minimum: 10873.76

$ ./build/release/test/perf/perf_simple_query_g --smp 1 \
    --random-seed 1235000092 --write --alternator always_use_lwt \
    --default-log-level error
random-seed=1235000092
Started alternator executor
3605.35 tps (877.4 allocs/op, 174.6 tasks/op,  986666 insns/op)
3555.71 tps (890.0 allocs/op, 174.4 tasks/op, 1006945 insns/op)
3530.20 tps (899.7 allocs/op, 174.1 tasks/op, 1021908 insns/op)
3437.65 tps (908.2 allocs/op, 174.6 tasks/op, 1033992 insns/op)
3409.88 tps (913.2 allocs/op, 174.4 tasks/op, 1041240 insns/op)

median 3530.20 tps (899.7 allocs/op, 174.1 tasks/op, 1021908 insns/op)
median absolute deviation: 75.15
maximum: 3605.35
minimum: 3409.88
2021-05-18 15:10:31 +02:00
Piotr Sarna
b6d6247a74 test: add minimal alternator_test_env
A minimal alternator test env, a younger cousin of cql_test_env,
is introduced. Note that using this environment
for unit tests is strongly discouraged in favor of the official
test/alternator pytest suite. Still, alternator_test_env has its uses
for microbenchmarks.
2021-05-18 15:10:31 +02:00
Botond Dénes
82bff1bcc6 test: cql_test_env: use proper scheduling groups
Currently `cql_test_env` runs its `func` in the default (main) group and
also leaves all scheduling groups in `dbcfg` default initialized to the
same scheduling group. This results in every part of the system,
normally isolated from each other, running in the same (default)
scheduling group. Not a big problem on its own, as we are talking about
tests, but this creates an artificial difference between the test and
the real environment, which is ever more pronounced since certain query
parameters are selected based on the current scheduling group.
To bring cql test env just that little bit closer to the real thing,
this patch creates all the scheduling groups main does (well almost) and
configures `dbcfg` with them.
Creating and destroying the scheduling groups on each setup-teardown of
cql test env breaks some internal seastar components, which don't like
seeing the same scheduling group with the same name but a different id. So
create the scheduling groups once on first access and keep them around
for as long as the test executable is running.
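
A minimal sketch of that pattern (illustrative names, public seastar API only; not the actual patch):

```cpp
#include <optional>
#include <seastar/core/future.hh>
#include <seastar/core/scheduling.hh>

// Created lazily on first use and intentionally never destroyed, so
// repeated setup-teardown of the test env never re-creates a group
// with the same name but a different id.
seastar::future<seastar::scheduling_group> statement_scheduling_group() {
    static std::optional<seastar::scheduling_group> sg;
    if (sg) {
        return seastar::make_ready_future<seastar::scheduling_group>(*sg);
    }
    return seastar::create_scheduling_group("statement", 1000).then(
            [] (seastar::scheduling_group created) {
        sg = created;
        return created;
    });
}
```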

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20210514141614.128213-2-bdenes@scylladb.com>
2021-05-18 13:44:54 +03:00
Botond Dénes
c98b0d0de8 test: cql_test_env: add trace logs to execute_cql()
In tests executing tons of these, it is useful to be able to enable
trace logging of each statement, to see which one was the last to succeed.
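
A sketch of the kind of logging meant here (logger name and message are illustrative, not the actual patch):

```cpp
#include <seastar/util/log.hh>
#include <string>

static seastar::logger testlog("cql_test_env");

// At trace level every executed statement is printed, so a failing
// run shows which query was the last one to succeed.
void log_execute(const std::string& cql) {
    testlog.trace("executing: {}", cql);
}
```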

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20210514140531.118390-1-bdenes@scylladb.com>
2021-05-18 10:06:22 +03:00
Tomasz Grabiec
a9dd7a295d tests: perf_row_cache_reads: Add scenario for lots of rows covered by a range tombstone
Reproduces #8626.

Output:

    test_scan_with_range_delete_over_rows
    Populating with rows
    Rows: 702710
    Scanning...
    read: 540.007324 [ms], preemption: {count: 2356, 99%: 1.131752 [ms], max: 1.148589 [ms]}, cache: 251/252 [MB]
    read: 651.942688 [ms], preemption: {count: 1176, 99%: 1.131752 [ms], max: 1.009652 [ms]}, cache: 251/252 [MB]
2021-05-12 11:58:36 +02:00
Avi Kivity
61c7f874cc Merge 'Add per-service-level timeouts' from Piotr Sarna
Ref: #7617

This series adds timeout parameters to service levels.

Per-service-level timeouts can be set up in the form of service level parameters, which can in turn be attached to roles. Setting up and modifying role-specific timeouts can be achieved like this:
```cql
CREATE SERVICE LEVEL sl2 WITH read_timeout = 500ms AND write_timeout = 200ms AND cas_timeout = 2s;
ATTACH SERVICE LEVEL sl2 TO cassandra;
ALTER SERVICE LEVEL sl2 WITH write_timeout = null;
```
Per-service-level timeouts take precedence over default timeout values from scylla.yaml, but can still be overridden for a specific query by per-query timeouts (e.g. `SELECT * from t USING TIMEOUT 50ms`).

Closes #7913

* github.com:scylladb/scylla:
  docs: add a paragraph describing service level timeouts
  test: add per-service-level timeout tests
  test: add refreshing client state
  transport: add updating per-service-level params
  client_state: allow updating per service level params
  qos: allow returning combined service level options
  qos: add a way of merging service level options
  cql3: add preserving default values for per-sl timeouts
  qos: make getting service level public
  qos: make finding service level public
  treewide: remove service level controller from query state
  treewide: propagate service level to client state
  sstables: disambiguate boost::find
  cql3: add a timeout column to LIST SERVICE LEVEL statement
  db: add extracting service level info via CQL
  types: add a missing translation for cql_duration
  cql3: allow unsetting service level timeouts
  cql3: add validating service level timeout values
  db: add setting service level params via system_distributed
  cql3: add fetching service level attrs in ALTER and CREATE
  cql3: add timeout to service level params
  qos: add timeout to service level info
  db,sys_dist_ks: add timeout to the service level table
  migration_manager: allow table updates with timestamp
  cql3: allow a null keyword for CQL properties
2021-05-11 18:39:10 +03:00
Piotr Sarna
43f1f9e445 test: add refreshing client state
With a helper client state refresher, some attributes
which are usually only refreshed after a client disconnects
and then reconnects, can be verified in the test suite.
2021-05-10 12:39:41 +02:00
Piotr Sarna
e257ec11c0 treewide: remove service level controller from query state
... since it's accessible through its member, client state.
2021-05-10 11:48:14 +02:00
Piotr Sarna
d1f2e8b469 treewide: propagate service level to client state
... since it's going to be used to set up per-service-level
timeouts.
2021-05-10 11:48:14 +02:00
Pavel Emelyanov
6de8bb663b storage_service: Remove migration notifier dependency
The only reason why storage service keeps a reference on the migration
notifier is that the latter was needed by cdc before the previous patch.
Now cdc gets the notifier directly from main, so storage service is
a bit more off the hook.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2021-04-29 22:47:13 +03:00
Pavel Emelyanov
3a7ca647af cdc: Provide migration notifier right at once
The only way db_context's migration notifier reference is set up
is via the cdc_service -> db_context::builder -> .build() chain of calls.
Since the builder's optional notifier reference is always
disengaged (.with_migration_notifier was removed by the previous
patch), the only possible notifier reference there is from the
storage service, which, in turn, is the same as in main.cc.

That said -- push the notifier reference onto db_context directly.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2021-04-29 22:40:24 +03:00
Avi Kivity
fa43d7680c Merge "Close flat mutation readers" from Benny
"
This patchset adds future-returning close methods to all
flat_mutation_reader-s and makes sure that all readers
are explicitly closed and waited for.

The main motivation for doing so is to provide a path
for cancelling outstanding i/o requests via the input_stream
close (see https://github.com/scylladb/seastar/issues/859)
and to wait until they complete.

This series also introduces a stop
method to reader_concurrency_semaphore to be used when
shutting down the database, instead of calling
clear_inactive_readers in the database destructor.

The series does not change microbenchmark performance in a significant way.
It looks like the results are within the tests' jitter.
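
Schematically, the pattern the series enforces looks like this (a minimal sketch with a hypothetical `with_closeable` helper over a generic `Reader`; not the actual reader API):

```cpp
#include <seastar/core/do_with.hh>
#include <seastar/core/future.hh>

// The reader's close() returns a future that must be waited for before
// the reader is destroyed -- even when `func` fails.
template <typename Reader, typename Func>
seastar::future<> with_closeable(Reader rd, Func func) {
    return seastar::do_with(std::move(rd), [func = std::move(func)] (Reader& rd) mutable {
        return seastar::futurize_invoke(std::move(func), rd).finally([&rd] {
            // close() cancels and waits for outstanding i/o on the
            // underlying input_stream.
            return rd.close();
        }).discard_result();
    });
}
```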

- perf_simple_query: (in transactions per second, more is better)
before: median 184701.83 tps (90 allocs/op, 20 tasks/op)
after:  median 188970.69 tps (90 allocs/op, 20 tasks/op) (+2.3%)

- perf_mutation_readers: (in time per iteration, less is better)
combined.one_row                               65.042ns  -> 57.961ns  (-10.9%)
combined.single_active                         46.634us  -> 46.216us  ( -0.9%)
combined.many_overlapping                      364.752us -> 371.507us ( +1.9%)
combined.disjoint_interleaved                  43.634us  -> 43.448us  ( -0.4%)
combined.disjoint_ranges                       43.011us  -> 42.991us  ( -0.0%)
combined.overlapping_partitions_disjoint_rows  57.609us  -> 58.820us  ( +2.1%)
clustering_combined.ranges_generic             93.464ns  -> 96.236ns  ( +3.0%)
clustering_combined.ranges_specialized         86.537ns  -> 87.645ns  ( +1.3%)
memtable.one_partition_one_row                 903.546ns -> 957.639ns ( +6.0%)
memtable.one_partition_many_rows               6.474us   -> 6.444us   ( -0.5%)
memtable.one_large_partition                   905.593us -> 878.271us ( -3.0%)
memtable.many_partitions_one_row               13.815us  -> 14.718us  ( +6.5%)
memtable.many_partitions_many_rows             161.250us -> 158.590us ( -1.6%)
memtable.many_large_partitions                 24.237ms  -> 23.348ms  ( -3.7%)
average                                        -0.02%

Fixes #1076
Refs #2927

Test: unit(release, debug)
Perf: perf_mutation_readers, perf_simple_query (release)
Dtest: next-gating(release),
       materialized_views_test:TestMaterializedViews.interrupt_build_process_and_resharding_max_to_half_test repair_additional_test:RepairAdditionalTest.repair_disjoint_row_3nodes_diff_shard_count_test(debug)
"

* tag 'flat_mutation_reader-close-v7' of github.com:bhalevy/scylla: (94 commits)
  mutation_reader: shard_reader: get rid of stop
  mutation_reader: multishard_combining_reader: get rid of destructor
  flat_mutation_reader: abort if not closed before destroyed
  flat_mutation_reader: require close
  repair: row_level_repair: run: close repair_meta when done
  repair: repair_reader: close underlying reader on_end_of_stream
  perf: everywhere: close flat_mutation_reader when done
  test: everywhere: close flat_mutation_reader when done
  mutation_partition: counter_write_query: close reader when done
  index: built_indexes_reader: implement close
  mutation_writer: multishard_writer: close readers when done
  mutation_writer: feed_writer: close reader when done
  table: for_all_partitions_slow: close iteration_step reader when done
  view_builder: stop: close all build_step readers
  stream_transfer_task: execute: close send_info reader when done
  view_update_generator: start: close staging_sstable_reader when done
  view: build_progress_virtual_reader: implement close method
  view: generate_view_updates: close builder readers when done
  view_builder: initialize_reader_at_current_token: close reader before reassigning it
  view_builder: do_build_step: close build_step reader when done
  ...
2021-04-25 13:53:11 +03:00
Benny Halevy
aa5289f255 test: everywhere: close flat_mutation_reader when done
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-04-25 11:35:07 +03:00
Benny Halevy
29b2b1f8dd reader_lifecycle_policy: close inactive_read
Make sure to close the unregistered inactive_read
before it's destroyed, if the unregistered reader_opt
is engaged.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-04-25 11:35:07 +03:00
Benny Halevy
2f4134e1cc reader_concurrency_semaphore: broken: make broken_semaphore the default exception
Rather than explicitly generating it in all callers
and then not using the argument at all.

Prepare for providing a different exception_ptr
from a stop() path to be introduced in the next patch.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-04-25 11:35:07 +03:00
Benny Halevy
2c1edb1a94 mutation_reader: reader_lifecycle_policy: return future from destroy_reader
So we can wait on it from the to-be-introduced shard_reader::close().

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-04-25 11:35:07 +03:00
Benny Halevy
7d42a71310 mutation_reader: position_reader_queue: add close method
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-04-25 11:35:07 +03:00
Benny Halevy
593bc9806d memtable: memtable_snapshot_source: make sure to close readers
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-04-25 11:35:07 +03:00
Benny Halevy
5dce9997ff test/lib: mutation_source_test: close readers
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-04-25 11:16:10 +03:00
Benny Halevy
266d060aef test/lib: flat_reader_assertions: close reader in destructor
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-04-25 11:16:10 +03:00
Pavel Emelyanov
b7a4fb0cf0 tests: Have own migration manager instances
No global migration manager usage is left, so all the tests
can be patched to use a local migration manager instance. In fact,
cql_test_env is the only such user.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2021-04-23 17:13:24 +03:00
Pavel Emelyanov
37c91c4c5c tests: Use migration_manager from cql_test_env
All the tests that need a migration manager run inside a
cql_test_env context and can use the migration manager
from the env. For now this is still the global one, but
the next patch will change this.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2021-04-23 17:13:24 +03:00
Pavel Emelyanov
b7fe191e3d storage_service: Keep migration manager on board
The storage service needs the migration manager to sync schema
on lifecycle notifiers and to stop it on drain. So this
patch just pushes the migration manager reference all the
way through the storage service constructor.

A few words about tests. Since the storage service now needs the
migration manager in its constructor, tests have to take it
from somewhere. The cql_test_env already has (and uses) it;
all the others can just provide a not-started sharded one, as
it won't be in use in _those_ tests anyway.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2021-04-23 17:13:24 +03:00
Avi Kivity
daeddda7cc treewide: remove inclusions of storage_proxy.hh from headers
storage_proxy.hh is huge and includes many headers itself, so
remove its inclusions from headers and re-add smaller headers
where needed (and storage_proxy.hh itself in source files that
need it).
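
The pattern, sketched with a hypothetical consumer (`some_service` is illustrative, not from the tree):

```cpp
// some_service.hh -- no #include "service/storage_proxy.hh" needed:
namespace service { class storage_proxy; }  // forward declaration

class some_service {
    service::storage_proxy& _proxy;  // references only need the fwd decl
public:
    explicit some_service(service::storage_proxy& proxy) noexcept : _proxy(proxy) {}
    void do_work();
};

// some_service.cc -- only source files that call into the proxy pull
// in the heavy header:
//   #include "service/storage_proxy.hh"
//   void some_service::do_work() { /* use _proxy here */ }
```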

Ref #1.
2021-04-20 21:23:00 +03:00
Avi Kivity
14a4173f50 treewide: make headers self-sufficient
In preparation for some large header changes, fix up any headers
that aren't self-sufficient by adding needed includes or forward
declarations.
2021-04-20 21:23:00 +03:00
Eliran Sinvani
144fe02c23 unit test: Add unit test for per user sla syntax
This commit adds the infrastructure needed to test per-user sla,
more specifically, a service level accessor that triggers the
update_service_levels_from_distributed_data function upon any
change to the distributed sla data.
A test was added that indirectly consumes this infrastructure by
changing the distributed service level data with cql queries.
Message-Id: <23b2211e409446c4f4e3e57b00f78d9ff75fc978.1609249294.git.sarna@scylladb.com>
2021-04-12 16:31:26 +02:00
Kamil Braun
7ffb0d826b clustering_order_reader_merger: handle empty readers
The merger could return end-of-stream if some (but not all) of the
underlying readers were empty (i.e. not even returning a
`partition_start`). This could happen in places where it was used
(`time_series_sstable_set::create_single_key_sstable_reader`) if we
opened an sstable which did not have the queried partition but passed
all the filters (specifically, the bloom filter returned a false
positive for this sstable).

The commit also extends the random tests for the merger to include empty
readers and adds an explicit test case that catches this bug (in a
limited scope: when we merge a single empty reader).

It also modifies `test_twcs_single_key_reader_filtering` (regression
test for #8432) because the point at which the clustering key filter is
invoked changes (some invocations move from the constructor of the
merger to operator()). I checked manually that it still catches the bug
when I reintroduce it.

Fixes #8445.

Closes #8446
2021-04-12 10:34:52 +03:00
Konstantin Osipov
c83cf1f965 uuid: switch the API to use std::chrono
A follow-up to the patch for #7611. This change was requested
during review and moved out of #7611 to reduce its scope.

The patch switches UUID_gen API from using plain integers to
hold time units to units from std::chrono.

For one, we plan to switch the entire code base to std::chrono units,
to ensure type safety. Secondly, using std::chrono units allows
increasing code reuse with template metaprogramming and removing a few
UUID_gen functions that became redundant as a result.
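
An illustrative sketch of the type-safety gain (hypothetical signatures, not the real UUID_gen API):

```cpp
#include <chrono>
#include <cstdint>

// before: the unit of the timestamp lives only in the documentation:
//   uint64_t make_timeuuid_msb(int64_t ts);
// after: the unit is part of the type, so callers convert explicitly.
std::uint64_t make_timeuuid_msb(std::chrono::milliseconds since_epoch) {
    return static_cast<std::uint64_t>(since_epoch.count());
}

int main() {
    using namespace std::chrono;
    auto now = duration_cast<milliseconds>(
            system_clock::now().time_since_epoch());
    auto msb = make_timeuuid_msb(now);
    // make_timeuuid_msb(12345);           // error: unit is ambiguous
    // make_timeuuid_msb(nanoseconds(1));  // error: lossy implicit conversion
    (void)msb;
}
```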

* switch get_time_UUID(), unix_timestamp(), get_time_UUID_raw(),
  min_time_UUID(), max_time_UUID(), create_time_safe() to
  std::chrono
* remove unused variant of from_unix_timestamp()
* remove unused get_time_UUID_bytes(), create_time_unsafe(),
  redundant get_adjusted_timestamp()
* inline get_raw_UUID_bytes()
* collapse two similar implementations of get_time_UUID()
* switch internal constants to std::chrono
* remove unnecessary unique_ptr from UUID_gen::_instance
Message-Id: <20210406130152.3237914-2-kostja@scylladb.com>
2021-04-06 17:12:54 +03:00
Botond Dénes
0f1a72ba59 test: test_reader_lifecycle_policy: keep semaphores alive until all ops cease
To ensure the semaphores outlive all permits created as part of the
tests.
2021-03-26 14:22:43 +02:00
Botond Dénes
7980140549 test: test_utils: do_check()/do_require(): tone down log to trace
They are way too noisy to be at debug level.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20210318143547.101932-1-bdenes@scylladb.com>
2021-03-18 16:59:59 +02:00
Pavel Emelyanov
1de235f4da query_processor: Keep migration manager onboard
The query processor sits above the migration manager
in the services layering: it's started after and (will be)
stopped before the migration manager.

The migration manager is needed by schema-altering statements,
which are called with a query processor argument. They will
later get the migration manager from the query processor.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2021-03-15 19:00:58 +03:00
Avi Kivity
1287a5e1d0 test: index_reader_assertions: fix misuse of trichotomic comparator in has_monotonic_positions
has_monotonic_positions() wants to check for a greater-than-or-equal-to
relation, but actually tests for not-equal, since it treats a
trichotomic comparator as a less-than comparator. This is clearly seen
in the BOOST_FAIL message just below.

Fix by aligning the test with the intended invariant. Luckily, the tests
still pass.
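
The bug class, reduced to a sketch (illustrative code, not the actual assertion):

```cpp
#include <cassert>

// A trichotomic comparator returns <0, 0 or >0 -- treating its result
// as a boolean only distinguishes "equal" from "not equal".
int tri_cmp(int a, int b) { return a < b ? -1 : a > b ? 1 : 0; }

bool violated_buggy(int prev, int cur) {
    return bool(tri_cmp(prev, cur));  // fires whenever prev != cur
}

bool violated_fixed(int prev, int cur) {
    return tri_cmp(prev, cur) > 0;    // fail only when prev > cur
}

int main() {
    assert(violated_buggy(1, 2));     // false positive on an increase
    assert(!violated_fixed(1, 2));    // positions may increase
    assert(violated_fixed(2, 1));     // real violation still caught
}
```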

Ref #1449.

Closes #8222
2021-03-07 13:44:37 +02:00
Avi Kivity
5f4bf18387 Revert "Merge 'sstables: add versioning to the sstable_set ' from Wojciech Mitros"
This reverts commit 31909515b3, reversing
changes made to ef97adc72a. The reverted series showed many
serious regressions in dtest.

Fixes #8197.
2021-03-02 13:21:22 +02:00
Botond Dénes
af0a23e75c test/cql_test_env: do_with_cql_test_env(): add thread_attributes parameter
To allow conveniently setting the scheduling group `func` is to be run
in.
2021-03-02 07:53:53 +02:00
Avi Kivity
8747c684e0 Merge 'Move timeouts to client state' from Piotr Sarna
This series is extracted from #7913 as it may prove useful to other series as well, and #7913 might take a while until it's merged, given that it also depends on other unmerged pull requests.

The idea of this series is to move timeouts to the client state, which will allow changing them independently for each session - e.g. by setting per-service-level timeouts and initializing the values from attached service levels (see #7867).

Closes #8140

* github.com:scylladb/scylla:
  treewide: remove timeout config from query options
  cql3: use timeout config from client state instead of query options
  cql3: use timeout config from client state instead of query options
  cql3: use timeout config from client state instead of query options
  service: add timeout config to client state
2021-03-01 20:34:35 +02:00
Avi Kivity
31909515b3 Merge 'sstables: add versioning to the sstable_set ' from Wojciech Mitros
Currently, the sstable_set in a table is copied before every change
to allow accessing the unchanged version by existing sstable readers.
This patch changes the sstable_set to a structure that keeps all its
versions that are referenced somewhere and provides a way of getting
a reference to an immutable version of the set.
Each sstable in the set is associated with the versions it is alive in,
and is removed when all such versions don't have references anymore.
To avoid copying, the object holding all sstables in the set version is
changed to a new structure, sstable_list, which was previously an alias
for std::unordered_set<shared_sstable>, and which implements most of the
methods of an unordered_set, but its iterator uses the actual set with
all sstables from all referenced versions and iterates over those
sstables that belong to the captured version.
The methods that modify the sets contents give strong exception guarantee
by trying to insert new sstables to its containers, and erasing them in
the case of an caught exception.
To release shared_sstables as soon as possible (i.e. when all references
to versions that contain them die), each time a version is removed, all
sstables that were referenced exclusively by this version are erased. We
are able to find these sstables efficiently by storing, for each version,
all sstables that were added and erased in it, and, when a version is
removed, merging it with the next one. When a version that adds an sstable
gets merged with a version that removes it, this sstable is erased.
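
A drastically reduced sketch of the snapshot semantics (an illustrative `versioned_set`, not the real code; the actual patch avoids the copy below by tracking per-version add/erase deltas and merging them when versions die):

```cpp
#include <memory>
#include <unordered_set>

template <typename T>
class versioned_set {
    std::shared_ptr<const std::unordered_set<T>> _current =
            std::make_shared<const std::unordered_set<T>>();
public:
    // Readers capture an immutable version; it stays alive and
    // unchanged for as long as they hold the reference.
    std::shared_ptr<const std::unordered_set<T>> version() const {
        return _current;
    }
    // Writers publish a new version; an old version dies -- and its
    // exclusively-referenced elements with it -- when the last reader
    // drops its reference.
    void insert(T value) {
        auto next = std::make_shared<std::unordered_set<T>>(*_current);
        next->insert(std::move(value));
        _current = std::move(next);
    }
};
```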

Fixes #2622

Signed-off-by: Wojciech Mitros <wojciech.mitros@scylladb.com>

Closes #8111

* github.com:scylladb/scylla:
  sstables: add test for checking the latency of updating the sstable_set in a table
  sstables: move column_family_test class from test/boost to test/lib
  sstables: use fast copying of the sstable_set instead of rebuilding it
  sstables: replace the sstable_set with a versioned structure
  sstables: remove potential ub
  sstables: make sstable_set constructor less error-prone
2021-03-01 14:16:36 +02:00
Kamil Braun
e2f03e4aba cdc: move (most of) CDC generation management code to the new service
Currently all management of CDC generations happens in storage_service,
which is a big ball of mud that does many unrelated things.

Previous commits have introduced a new service for managing CDC
generations. This code moves most of the relevant code to this new
service.

However, some part still remains in storage_service: the bootstrap
procedure, which happens inside storage_service, must also do some
initialization regarding CDC generations, for example: on restart it
must retrieve the latest known generation timestamp from disk; on
bootstrap it must create a new generation and announce it to other
nodes. The order of these operations w.r.t the rest of the startup
procedure is important, hence the startup procedure is the only right
place for them.

Still, what remains in storage_service is a small part of the entire
CDC generation management logic; most of it has been moved to the
new service. This includes listening for generation changes and
updating the data structures for performing CDC log writes (cdc::metadata).
Furthermore these functions now return futures (and are internally
coroutines), where previously they required a seastar::async context.
2021-02-26 12:06:12 +01:00
Piotr Sarna
c5214eb096 treewide: remove timeout config from query options
Timeout config is now stored in each connection, so there's no point
in tracking it inside each query as well. This patch removes
timeout_config from query_options and follows up by removing the
now-unnecessary parameters of many functions and constructors.
2021-02-25 17:20:27 +01:00
Piotr Sarna
7ceafda70a service: add timeout config to client state
Future patches will use this per-connection timeout config
to allow setting different timeouts for each session,
based on roles.
2021-02-25 17:20:26 +01:00
Kamil Braun
d4937daaea cdc: introduce cdc::generation_service
This commit introduces a new service crafted to handle CDC generation
management: listening and reacting to generation changes in the cluster.

The implementation is a stub for now: the service reacts to generation
changes by simply logging the event.

The commit plugs the service in, initializing it in main and test code,
passing a reference to storage_service and having storage_service start
the service (using the `after_join` method): the service only starts
doing its job after the node joins the token ring (either on bootstrap
or restart).
2021-02-22 12:45:43 +01:00
Kamil Braun
8e72c33d7c main: move cdc_service initialization just prior to storage_service
initialization

As a preparation for introducing CDC generation management service.

cdc_service will depend on the generation service.
But the generation service needs some other services to work
properly. In particular, it uses the local database, so it should be
initialized after the local database.

The only service that will need the cdc generation service is
storage_service, so we can place the generation service initialization
code right before storage_service initialization code. So the order will
be cdc_generation_service -> cdc_service -> storage_service.
2021-02-22 12:43:10 +01:00
Avi Kivity
78d1afeabd Merge "Use radix tree to store cells on a row" from Pavel E
"
Current storage of cells in a row is a union of vector and set. The
vector holds 5 cell_and_hash entries inline, up to 32 in the external
storage, and then it's switched to std::set. Once switched, the whole
union becomes a waste of space, as its size is

   sizeof(vector head) + 5 * sizeof(cell and hash) = 90+ bytes

and only 3 pointers from it are used (std::set header). Also the
overhead of keeping cell_and_hash as a set entry is more than the size
of the structure itself.

Column ids are 32-bit integers that most likely come sequentially.
For this kind of search key, a radix tree (with some care for
non-sequential cases) can be beneficial.

This set introduces a compact radix tree that uses 7-bit sub-values
from the search key to index into each node and compacts the nodes
themselves for better memory usage. Then row::_storage is replaced
with the new tree.

The most notable result is the memory footprint decrease, up to 2x
for wide rows. The performance of micro-benchmarks is a bit
lower for small rows and (!) higher for longer ones (8+ cells). The
numbers are in patch #12 (spoiler: they are better than for v2).
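
The indexing scheme, sketched (illustrative, not the actual tree implementation):

```cpp
#include <cstdint>

// A compact radix tree over 32-bit column ids consumes the key 7 bits
// at a time: each inner node has at most 2^7 = 128 slots, and a lookup
// takes at most ceil(32 / 7) = 5 node hops.
constexpr unsigned radix_bits = 7;
constexpr unsigned radix_mask = (1u << radix_bits) - 1;                // 0x7f
constexpr unsigned radix_levels = (32 + radix_bits - 1) / radix_bits;  // 5

unsigned slot_at(std::uint32_t key, unsigned depth) {
    // Most-significant chunk first, so sequential column ids cluster
    // in the same leaves (the top chunk carries only 4 bits).
    unsigned shift = radix_bits * (radix_levels - 1 - depth);
    return (key >> shift) & radix_mask;
}
```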

v3:
- trimmed size of radix down to 7 bits
- simplified the nodes layouts, now there are 2 of them (was 4)
- enhanced perf_mutation to test N-cells schema
- added AVX intra-nodes search for medium-sized nodes
- added .clone_from() method that helped to improve perf_mutation
- minor
  - changed functions not to return values via refs-arguments
  - fixed nested classes to properly use language constructors
  - renamed index_to to key_t to distinguish from node_index_t
  - improved recurring variadic templates not to use sentinel argument
  - use standard concepts

v2:
- fixed potential mis-compilation due to strict-aliasing violation
- added oracle test (radix tree is compared with std::map)
- added radix to perf_collection
- cosmetic changes (concepts, comments, names)

A note on item 1 from the v2 changelog. The nodes are no longer packed
perfectly; each has grown by 3 bytes. But it turned out that when used
as the cells container, most of this growth drowned in lsa alignments.

next todo:
- aarch64 version of 16-keys node search

tests: unit(dev), unit(debug for radix*), pref(dev)
"

* 'br-radix-tree-for-cells-3' of https://github.com/xemul/scylla:
  test/memory_footpring: Print radix tree node sizes
  row: Remove old storages
  row: Prepare row::equal for switch
  row: Prepare row::difference for switch
  row: Introduce radix tree storage type
  row-equal: Re-declare the cells_equal lambda
  test: Add tests for radix tree
  utils: Compact radix tree
  array-search: Add helpers to search for a byte in array
  test/perf_collection: Add callback to check the speed of clone
  test/perf_mutation: Add option to run with more than 1 columns
  test/perf_mutation: Prepare to have several regular columns
  test/perf_mutation: Use builder to build schema
2021-02-18 21:19:14 +02:00
Kamil Braun
3d7b990300 system_distributed_keyspace: use mutation API to insert CDC streams
The `storage_proxy::mutate` low-level API is much more powerful than
the CQL API. This power is not needed for this commit but for the next.
2021-02-18 11:44:59 +01:00
Benny Halevy
50ca693a02 main: disable stall detector during startup
We see long reactor stalls from `logalloc::prime_segment_pool`
in debug mode, yet the stall detector's purpose is to detect
reactor stalls during normal operation where they can increase
the latency of other queries running in parallel.

Since this change doesn't actually fix the stalls but rather
hides them, the following annotations will just reference
the respective github issues rather than auto-close them.

Refs #7150
Refs #5192
Refs #5960

Restore blocked_reactor_notify_ms right before
starting storage_proxy.  Once storage_proxy is up, this node
affects cluster latency, and so stalls should be reported so
they can be fixed.

Test: secondary_index_test --blocked-reactor-notify-ms 1 (release)
DTest: CASSANDRA_DIR=../scylla/build/release SCYLLA_EXT_OPTS="--blocked-reactor-notify-ms 2" ./scripts/run_test.sh materialized_views_test:TestMaterializedViews.interrupt_build_process_with_resharding_half_to_max_test

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20210216112052.27672-1-bhalevy@scylladb.com>
2021-02-16 13:28:31 +02:00
Pavel Emelyanov
1bdfa355ea row: Remove old storages
Now that the 3rd storage type (radix tree) is all in, the old
storages can be safely removed. The result is:

1. memory footprint

sizeof(class row):  112 => 16 bytes
sizeof(rows_entry): 126 => 120 bytes

the "in cache" value depends on the number of cells:

num of cells     master       patch
         1       752         656
         2       808         712
         3       864         768
         4       920         824
         5       968         936
         6      1136         992
         ...
         16     1840        1672
         17     1904        1992  (+88)
         18     1976        2048  (+72)
         19     2048        2104  (+56)
         20     2120        2160  (+40)
         21     2184        2208  (+24)
         22     2256        2264  ( +8)
         23     2328        2320
         ...
         32     2960        2808

After 32 cells the storage switches into an rbtree with a
24-byte per-cell overhead, and the radix tree improvement
rocket-launches:

           64     7872        6056
           128   15040        9512
           256   29376       18568

2. perf_mutation test is enhanced by this series and the
   results differ depending on the number of columns used

                    tps value
--column-count    master   patch
          1       59.9k    57.6k  (-3.8%)
          2       59.9k    57.5k
          4       59.8k    57.6k
          8       57.6k    57.7k  <- eq
         16       56.3k    57.6k
         32       53.2k    57.4k  (+7.9%)

A note on this. Last time the 1-column test was ~5% worse, which
was explained by the inline storage of 5 cells that's present in
the current implementation and was absent in the radix tree.

An attempt to make inline storage for small radix trees
resulted in a complete loss of the memory footprint gain, but gave
a fraction of a percent to perf_mutation performance. So this
version doesn't have inline nodes.

The 1.2% improvement over v2 surprisingly came from
tree::clone_from(), which in v2 was worked around by a slow
walk+emplace sequence, while this version has the optimized
API call for cloning.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2021-02-15 20:35:06 +03:00
Benny Halevy
e532585126 test: sstables::test_env: do_with: futurize_invoke func
Otherwise, if `func` throws, test_env isn't stopped, as it should be.
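
The shape of the fix, sketched (hypothetical helper and env names):

```cpp
#include <seastar/core/future.hh>

// With a plain invocation, an exception thrown synchronously by `func`
// propagates past the cleanup; futurize_invoke turns it into a failed
// future, so the finally() below always runs and stops the env.
template <typename Env, typename Func>
seastar::future<> do_with_env_sketch(Env& env, Func func) {
    return seastar::futurize_invoke(std::move(func), env).finally([&env] {
        return env.stop();
    }).discard_result();
}
```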

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20210214190157.211858-1-bhalevy@scylladb.com>
2021-02-15 15:45:49 +02:00
Wojciech Mitros
693b4e0fcd sstables: move column_family_test class from test/boost to test/lib
Column_family_test allows calling private methods on column_family's
sstable_set. It may be useful not only in the boost tests, so it's moved
from test/boost/sstable_test.hh to test/lib/sstable_test_env.hh.
sstable_test.hh includes sstable_test_env.hh, so no includes need to be
changed.

Signed-off-by: Wojciech Mitros <wojciech.mitros@scylladb.com>
2021-02-11 11:02:55 +01:00
Avi Kivity
7f3083739f Merge "sstables: Share partition index pages between readers" from Tomasz
"
Before this patch, each index reader had its own cache of partition
index pages. Now there is a shared cache, owned by the sstable object.
This allows concurrent reads to share partition index pages and thus
reduce the amount of I/O.

It used to be like that a few years ago, but we moved to per-reader
cache to implement incremental promoted index parsing, to avoid OOMs
with large partitions. At that time, the solution involved caching
input streams inside partition index entries, which couldn't be reused
between readers. This could have been solved differently. Instead of
caching input streams, we can cache the information needed to create them
(temporary_buffer<>). This solution takes that approach.

This series is also needed before we can implement promoted index
caching. That's because before the promoted index can be shared by
readers, the partition index entries, which hold the promoted index,
must also be shareable.

The pages live as long as there is at least one index reader
referencing them. So it only helps when there is concurrent access. In
the future we will keep them for longer and evict on memory pressure.

The promoted index cursor is no longer created when the partition index
entry is parsed, but it's created on-demand when the top-level cursor
enters the partition. The promoted index cursor is owned by the
top-level cursor, not by the partition index entry.
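
Why buffers can be shared where streams could not, in a sketch (an illustrative function, not the sstables code):

```cpp
#include <seastar/core/temporary_buffer.hh>

// An input_stream has a read position, so it is inherently per-reader.
// A temporary_buffer is just immutable bytes plus a deleter: share()
// gives each concurrent reader its own cheap view of the same cached
// page, and the page stays alive while any view does.
seastar::temporary_buffer<char> page_view(seastar::temporary_buffer<char>& cached) {
    return cached.share();  // no byte copy; bumps the underlying refcount
}
```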

Below are the results of an experiment performed on my laptop which
demonstrates the improvement in performance.

Load driver command line:

  ./scylla-bench                   \
       -workload uniform           \
       -mode read                  \
       --partition-count=10        \
       -clustering-row-count=1     \
       -concurrency 100

Scylla command line:

  scylla --developer-mode=1 -c1 -m1G --enable-cache=0

The workload is IO-bound.
Before, we needed 2 I/Os per read; now we need 1 (amortized).
The throughput is ~70% higher.

Before:

 time   ops/s  rows/s errors max    99.9th 99th   95th   90th   median mean
   1s    4706    4706      0 35ms   30ms   27ms   25ms   24ms   21ms   21ms
   2s    4646    4646      0 42ms   31ms   31ms   27ms   25ms   21ms   22ms
 3.1s    4670    4670      0 40ms   27ms   26ms   25ms   25ms   21ms   21ms
 4.1s    4581    4581      0 39ms   33ms   33ms   27ms   26ms   21ms   22ms
 5.1s    4345    4345      0 40ms   37ms   35ms   32ms   31ms   21ms   23ms
 6.1s    4328    4328      0 49ms   40ms   34ms   32ms   31ms   22ms   23ms
 7.1s    4198    4198      0 45ms   36ms   35ms   31ms   30ms   22ms   24ms
 8.2s    3913    3913      0 51ms   50ms   50ms   39ms   35ms   24ms   26ms
 9.2s    4524    4524      0 34ms   31ms   30ms   28ms   27ms   21ms   22ms

After:

 time   ops/s  rows/s errors max    99.9th 99th   95th   90th   median mean
   1s    7913    7913      0 25ms   25ms   20ms   15ms   14ms   12ms   13ms
   2s    7913    7913      0 18ms   18ms   18ms   16ms   14ms   12ms   13ms
   3s    8125    8125      0 20ms   20ms   17ms   15ms   14ms   12ms   12ms
   4s    5609    5609      0 41ms   35ms   29ms   28ms   27ms   13ms   18ms
 5.1s    8020    8020      0 18ms   17ms   17ms   15ms   14ms   12ms   13ms
 6.1s    7102    7102      0 27ms   27ms   24ms   19ms   18ms   13ms   14ms
 7.1s    5780    5780      0 26ms   26ms   26ms   23ms   22ms   17ms   18ms
 8.1s    6530    6530      0 37ms   34ms   26ms   22ms   20ms   15ms   15ms
 9.1s    7937    7937      0 19ms   19ms   17ms   17ms   16ms   12ms   13ms

Tests:

  - unit [release]
  - scylla-bench
"

* tag 'share-partition-index-v1' of github.com:tgrabiec/scylla:
  sstables: Share partition index pages between readers
  sstables: index_reader: Drop now unnecessary index_entry::close_pi_stream()
  sstables: index_reader: Do not store cluster index cursor inside partition indexes
2021-02-04 17:27:49 +02:00
Tomasz Grabiec
5ed559c8c6 sstables: index_reader: Do not store cluster index cursor inside partition indexes
Currently, the partition index page parser will create and store
promoted index cursors for each entry. The assumption is that
partition index pages are not shared by readers so each promoted index
cursor will be used by a single index_reader (the top-level cursor).

In order to be able to share partition index entries we must make the
entries immutable and thus move the cursor outside. The promoted index
cursor is now created and owned by each index_reader. There is at most
one such active cursor per index_reader bound (lower/upper).
2021-02-04 15:23:55 +01:00