Commit Graph

132 Commits

Botond Dénes
ac9935b645 multishard_mutation_query: remove now pointless compact_for_result_state typedef
No need to switch on the now defunct emit_only_live_rows.
2022-07-12 08:44:33 +03:00
Botond Dénes
4d2ce5c304 mutation_compactor: remove emit_only_live_rows template parameter
Now that we use emit_only_live_rows::no everywhere, we can remove this
template parameter. Only the template parameter is removed; the
internal logic around it is left in place (it will be removed in a later
patch) by hard-wiring `only_live()`.
2022-07-12 08:43:49 +03:00
Botond Dénes
bedc82e52c tree: use emit_only_live_rows::no
emit_only_live_rows is a convenience so downstream consumers of the
mutation compactors don't have to check the `bool is_live` already
passed to them. This convenience however costs a template parameter and
additional logic in the compactor. As the most prominent of these
consumers (the query result builder) will soon have to switch to
emit_only_live_rows::no for other reasons anyway (it will want to count
tombstones), we take the opportunity to switch everybody to ::no. This
can be done with very little additional complexity in these consumers --
basically an additional if or two.
This prepares the ground for removing this template parameter and the
associated logic from the compactor.
2022-07-12 08:41:51 +03:00
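The extra check the message mentions is small: with ::no the consumer also receives dead rows and branches on the `is_live` flag it already gets. A minimal sketch, assuming hypothetical types (`row` and `result_builder` are illustrative stand-ins, not Scylla's actual types):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical stand-in for a downstream consumer of the compactor. With
// emit_only_live_rows::no the compactor forwards dead rows too, so the
// consumer grows "an additional if or two" on the is_live flag it already
// receives.
struct row {
    std::string key;
    bool is_live;
};

struct result_builder {
    std::vector<std::string> rows;
    std::size_t dead_rows_seen = 0;  // e.g. useful for counting tombstones

    void consume(const row& r) {
        if (!r.is_live) {  // the new branch: skip (but count) dead rows
            ++dead_rows_seen;
            return;
        }
        rows.push_back(r.key);
    }
};
```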
Botond Dénes
742dc10185 querier: querier_cache: de-override insert() methods
Soon, the currently two distinct types of queriers will be merged, as
the template parameter differentiating them will be gone. This will make
using type-based overloading for insert() impossible, as 2 out of the 3
types will be the same. Use different names instead.
2022-07-12 08:41:48 +03:00
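The ambiguity the message describes can be seen in miniature: once the two querier types merge, two insert() overloads would share one signature and no longer compile, so distinct names are used instead. A sketch with illustrative names (`insert_data_querier` and friends are stand-ins, not the actual method names):

```cpp
#include <cassert>

// After the merge, overloads like insert(data_querier) and
// insert(mutation_querier) would collapse into the same signature and
// become a redefinition. Distinctly named methods sidestep that.
struct querier { int id; };  // stand-in for the merged querier type

struct querier_cache {
    int data_inserts = 0;
    int mutation_inserts = 0;
    void insert_data_querier(querier) { ++data_inserts; }
    void insert_mutation_querier(querier) { ++mutation_inserts; }
};
```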
Botond Dénes
fd5f8f2275 query: have replica provide the last position
Use the recently introduced query-result facility to have the replica
set the position where the query should continue from. For now this is
the same as what the implicit position would have been previously (last
row in result), but it opens up the possibility to stop the query at a
dead row.
2022-06-23 13:36:24 +03:00
Botond Dénes
7b6b7a49cd multishard_mutation_query: propagate compaction state to result builder
Not used in this patch, facilitates further patching.
2022-06-23 13:36:24 +03:00
Botond Dénes
738cb99c53 multishard_mutation_query: defer creating result builder until needed
Currently the result builder is created two frames above the method in
which it is actually needed. Push down a factory method instead and create
it where it is actually used. This allows us to pass it arguments that are
present only in the method which uses it.
2022-06-23 13:36:24 +03:00
Botond Dénes
58d53b66c1 querier: rely on compactor for position tracking
For some time now the compactor has tracked its own position. The querier
can make use of this instead of duplicating the effort.
2022-06-23 13:36:24 +03:00
Benny Halevy
5babc609c6 multishard_mutation_query: do_query: coroutinize save_readers lambda
To keep it simple. It is unlikely to throw.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-06-08 09:31:17 +03:00
Benny Halevy
921092955b multishard_mutation_query: do_query: prevent exceptions using coroutine::as_future
Optimize error handling by avoiding exception try/catch, using
coroutine::as_future.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-06-08 09:31:17 +03:00
Benny Halevy
7a76ba4038 multishard_mutation_query: read_page: prevent exceptions using coroutine::as_future
Optimize error handling by avoiding exception try/catch, using
coroutine::as_future to get query::consume_page's result.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-06-08 09:31:15 +03:00
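The pattern in these patches, carrying a failure inside the future instead of letting it be thrown across a co_await, can be modeled without Seastar. A toy sketch (`toy_future`, `consume_page_result`, and `handle` are all illustrative stand-ins, not the Seastar API):

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <string>
#include <variant>

// Toy stand-in for a future inspected via coroutine::as_future: the error
// travels as a stored exception_ptr instead of being thrown to the caller.
template <typename T>
struct toy_future {
    std::variant<T, std::exception_ptr> state;
    bool failed() const { return state.index() == 1; }
    T& get() { return std::get<0>(state); }
    std::exception_ptr get_exception() { return std::get<1>(state); }
};

toy_future<int> consume_page_result(bool fail) {
    if (fail) {
        // make_exception_future-style: store the error, don't throw it
        return {std::make_exception_ptr(std::runtime_error("read failed"))};
    }
    return {42};
}

// The caller checks failed() instead of wrapping the await in try/catch.
std::string handle(toy_future<int> f) {
    if (f.failed()) {
        return "error";  // propagate the failure without rethrowing here
    }
    return std::to_string(f.get());
}
```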
Benny Halevy
817a0f316a multishard_mutation_query: save_readers: fixup indentation
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-06-08 09:23:14 +03:00
Benny Halevy
804d727b8b multishard_mutation_query: coroutinize save_readers
And use smp::invoke_on_all rather than a home-brewed
version of parallel_for_each over all shard ids.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-06-08 09:23:14 +03:00
Benny Halevy
22e5352cc2 multishard_mutation_query: lookup_readers: make noexcept
So it can be co_awaited efficiently using coroutine::as_future;
otherwise, any exceptions will escape `as_future`.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-06-08 09:23:14 +03:00
Benny Halevy
ea3935507e multishard_mutation_query: optimize lookup_readers
No need to call _db.invoke_on inside a parallel_for_each
loop over all shards.  Just use _db.invoke_on_all instead.

Besides that, there's no need for a .then continuation
for assigning the per-shard reader to _readers[shard].
It can be done by the functor running on each db shard.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2022-06-08 09:23:14 +03:00
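The shape of the optimization can be sketched without Seastar: one functor runs once per shard and fills in its own slot directly, replacing a parallel_for_each over shard ids plus a .then continuation. Everything below is an illustrative stand-in (shards are simulated with a plain loop; the real `smp::invoke_on_all` runs the functor on every reactor):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

constexpr unsigned shard_count = 4;

// Toy model of invoke_on_all: runs fn once per "shard", sequentially here.
void invoke_on_all(const std::function<void(unsigned)>& fn) {
    for (unsigned shard = 0; shard < shard_count; ++shard) {
        fn(shard);
    }
}

std::vector<std::string> make_readers() {
    std::vector<std::string> readers(shard_count);
    invoke_on_all([&](unsigned shard) {
        // the per-shard functor assigns its own reader; no continuation needed
        readers[shard] = "reader-" + std::to_string(shard);
    });
    return readers;
}
```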
Benny Halevy
055141fc2e multishard_mutation_query: do_query: stop ctx if lookup_readers fails
lookup_readers might fail after populating some readers,
and those had better be closed before returning the exception.

Fixes #10351

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>

Closes #10425
2022-04-26 11:11:52 +03:00
Botond Dénes
d0ea895671 readers: move multishard reader & friends to reader/multishard.cc
Since the multishard reader family weighs more than 1K SLOC, it gets
its own .cc file.
2022-03-30 15:42:51 +03:00
Botond Dénes
0b5217052d querier: switch to v2 compactor output
The change is mostly mechanical: update all compactor instances to the
_v2 variant and update all call-sites, of which there are not that many.
As a consequence of this patch, queries -- both single-partition and
range-scans -- now do the v2->v1 conversion in the consumers, instead of
in the compactor.
2022-03-11 09:24:05 +02:00
Pavel Emelyanov
063da81ab7 code: Convert nothrow construction assertions into concepts
The small_vector also has an N>0 constraint, which is converted as well.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2022-02-24 19:44:50 +03:00
Botond Dénes
f1e9e3b3b7 compact_mutation: drop support for v1 input 2022-02-21 12:29:24 +02:00
Botond Dénes
f2e2b84038 multishard_mutation_query: migrate to v2
Mostly mechanical transformation. The main difference is in the detached
compaction state, from which we now get the range tombstone change,
instead of the range tombstone list. The code around this is a bit
awkward, will become simpler when compactor drops v1 support.
2022-02-21 12:29:24 +02:00
Avi Kivity
fcb8d040e8 treewide: use Software Package Data Exchange (SPDX) license identifiers
Instead of lengthy blurbs, switch to single-line, machine-readable
standardized (https://spdx.dev) license identifiers. The Linux kernel
switched long ago, so there is strong precedent.

Three cases are handled: AGPL-only, Apache-only, and dual licensed.
For the latter case, I chose (AGPL-3.0-or-later and Apache-2.0),
reasoning that our changes are extensive enough to apply our license.

The changes were applied mechanically with a script, except to
licenses/README.md.

Closes #9937
2022-01-18 12:15:18 +01:00
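After this change a source file carries a single machine-readable license line instead of a blurb; for the dual-licensed case the message describes, a file header would look roughly like this (exact layout illustrative):

```cpp
/*
 * Copyright (C) 2022-present ScyllaDB
 *
 * SPDX-License-Identifier: (AGPL-3.0-or-later and Apache-2.0)
 */
```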
Avi Kivity
134601a15e Merge "Convert input side of mutation compactor to v2" from Botond
"
With this series the mutation compactor can now consume a v2 stream. On
the output side it still uses v1, so it can now act as an online
v2->v1 converter. This allows us to push out v2->v1 conversion to as far
as the compactor, usually the next to last component in a read pipeline,
just before the final consumer. For reads this is as far as we can go,
as the intra-node ABI and hence the result-sets built are v1. For
compaction we could go further and eliminate conversion altogether, but
this requires some further work on both the compactor and the sstable
writer and so it is left to be done later.
To summarize, this patchset enables a v2 input for the compactor and it
updates compaction and single partition reads to use it.
"

* 'mutation-compactor-consume-v2/v1' of https://github.com/denesb/scylla:
  table: add make_reader_v2()
  querier: convert querier_cache and {data,mutation}_querier to v2
  compaction: upgrade compaction::make_interposer_consumer() to v2
  mutation_reader: remove unnecessary stable_flattened_mutations_consumer
  compaction/compaction_strategy: convert make_interposer_consumer() to v2
  mutation_writer: migrate timestamp_based_splitting_writer to v2
  mutation_writer: migrate shard_based_splitting_writer to v2
  mutation_writer: add v2 clone of feed_writer and bucket_writer
  flat_mutation_reader_v2: add reader_consumer_v2 typedef
  mutation_reader: add v2 clone of queue_reader
  compact_mutation: make start_new_page() independent of mutation_fragment version
  compact_mutation: add support for consuming a v2 stream
  compact_mutation: extract range tombstone consumption into own method
  range_tombstone_assembler: add get_range_tombstone_change()
  range_tombstone_assembler: add get_current_tombstone()
2022-01-12 14:37:19 +02:00
Botond Dénes
790e73141f compact_mutation: add support for consuming a v2 stream
Consuming either a v1 or v2 stream is supported now, but compacted
fragments are still emitted in the v1 format, thus the compactor acts as
an online downgrader when consuming a v2 stream. This allows pushing the
downgrade to v1 on the input side all the way into the compactor. This
means that reads, for example, can now use an all-v2 reader pipeline, the
still-mandatory downgrade to v1 happening at the last possible place:
just before creating the result-set. Mandatory because our intra-node
ABI is still v1.
There are consumers that are ready for v2 in principle (e.g. compaction);
they have to wait a little bit more.
2022-01-07 13:42:31 +02:00
Avi Kivity
bbad8f4677 replica: move ::database, ::keyspace, and ::table to replica namespace
Move replica-oriented classes to the replica namespace. The main
classes moved are ::database, ::keyspace, and ::table, but a few
ancillary classes are also moved. There are certainly classes that
should be moved but aren't (like distributed_loader) but we have
to start somewhere.

References are adjusted treewide. In many cases, it is obvious that
a call site should not access the replica (but the data_dictionary
instead), but that is left for separate work.

scylla-gdb.py is adjusted to look for both the new and old names.
2022-01-07 12:04:38 +02:00
Avi Kivity
ae3a360725 database: Move database, keyspace, table classes to replica/ directory
The database, keyspace, and table classes represent the replica-only
part of the objects after which they are named. Reading from a table
doesn't give you the full data, just the replica's view, and it is not
consistent since reconciliation is applied on the coordinator.

As a first step in acknowledging this, move the related files to
a replica/ subdirectory.
2022-01-06 17:07:30 +02:00
Michael Livshin
a1b8ba23d2 reader_concurrency_semaphore: convert to flat_mutation_reader_v2
Signed-off-by: Michael Livshin <michael.livshin@scylladb.com>
2021-12-21 11:26:17 +02:00
Raphael S. Carvalho
c3c23dd1e5 multishard_mutation_query: make multi_range_reader::fill_buffer() work even after EOS
if fill_buffer() is called after EOS, underlying reader will
be fast forwarded to a range pointed to by an invalid iterator,
so producing incorrect results.

fill_buffer() is changed to return early if EOS was found,
meaning that underlying reader already fast forwarded to
all ranges managed by multi_range_reader.

Usually, consume facilities check for EOS, before calling
fill_buffer() but most reader impl check for EOS to avoid
correctness issues. Let's do the same here.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20211208131423.31612-1-raphaelsc@scylladb.com>
2021-12-08 15:39:11 +02:00
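The guard this patch adds can be sketched with a toy reader (all names illustrative; ranges are plain vectors rather than readers): once end-of-stream is reached, fill_buffer() becomes a no-op instead of advancing past the last range.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy multi-range reader illustrating the fix: after end-of-stream,
// fill_buffer() returns early instead of advancing past the last range
// (which, in the real reader, meant fast-forwarding through an invalid
// iterator).
struct toy_multi_range_reader {
    std::vector<std::vector<int>> ranges;
    std::size_t current = 0;
    std::vector<int> buffer;
    bool end_of_stream = false;

    void fill_buffer() {
        if (end_of_stream) {
            return;  // the fix: nothing left, don't touch the range iterator
        }
        buffer = ranges[current];
        if (++current == ranges.size()) {
            end_of_stream = true;
        }
    }
};
```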
Botond Dénes
5380cb0102 multishard_mutation_query: don't drop data during stateful multi-range reads
When multiple ranges are passed to `multishard_{mutation,data}_query()`,
it wraps the multishard reader in a multi-range one. This interferes
with the disassembly of the multishard reader's buffer at the end of the
page, because the multi-range reader becomes the top-level reader,
denying direct access to the multishard reader itself, whose buffer is
then dropped. This confuses the reading logic, causing data corruption
on the next page(s). A further complication is that the multi-range
reader can include data from more than one range in its buffer when
filling it. To solve this, a special-purpose multi-range reader is
introduced and used instead of the generic one, which solves both these
problems by guaranteeing that:
* Upon calling fill_buffer(), the entire content of the underlying
  multishard reader is moved to that of the top-level multi-range
  reader. So calling `detach_buffer()` is guaranteed to remove all
  unconsumed fragments from the top-level reader.
* fill_buffer() will never mix data from more than one range. It will
  always stop on range boundaries and will only cross into the next
  range if the previous one was consumed entirely.

With this, multi-range reads finally work with reader-saving.
2021-12-03 10:45:06 +02:00
Botond Dénes
953603199e multishard_combining_reader: reader_lifecycle_policy: allow saving read range on fast-forward
The reader_lifecycle_policy API was created around the idea of shard
readers (optionally) being saved and reused on the next page. To do
this, the lifecycle policy has to also be able to control the lifecycle
of by-reference parameters of readers: the slice and the range. This was
possible from day 1, as the readers are created through the lifecycle
policy, which can intercept and replace the said parameters with copies
that are created in stable storage. There was one hole in the design
though: fast-forwarding, which can change the range of the read without
the lifecycle policy knowing about it. In practice this results in
fast-forwarded readers being saved together with the wrong range, their
range reference becoming stale. The only lifecycle implementation prone
to this is the one in `multishard_mutation_query.cc`, as it is the only
one actually saving readers. It will fast-forward its reader when the
query happens over multiple ranges. There were no problems related to
this so far because no one passes more than one range to said functions,
but this is incidental.
This patch solves this by adding an `update_read_range()` method to the
lifecycle policy, allowing the shard reader to update the read range
when being fast forwarded. To allow the shard reader to also have
control over the lifecycle of this range, a shared pointer is used. This
control is required because when an `evictable_reader` is the top-level
reader on the shard, it can invoke `create_reader()` with an edited
range after `update_read_range()`, replacing the fast-forwarded-to
range with a new one, yanking it out from under the feet of the
evictable reader itself. By using a shared pointer here, we can ensure
the range stays alive while it is the current one.
2021-12-03 10:27:44 +02:00
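The shared-pointer scheme the message describes can be sketched with illustrative types (`range` and `toy_lifecycle_policy` are stand-ins, not the actual classes): the policy saves the read range behind a shared_ptr, so when a fast-forward publishes a new range via update_read_range(), any reader still holding the previous range keeps it alive until it is done with it.

```cpp
#include <cassert>
#include <memory>

struct range { int start; int end; };

// Toy lifecycle policy: shared ownership means replacing the saved range
// never yanks the old one out from under a reader still using it.
struct toy_lifecycle_policy {
    std::shared_ptr<const range> saved_range;
    void update_read_range(std::shared_ptr<const range> r) {
        saved_range = std::move(r);  // saved reader now pairs with this range
    }
};
```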
Botond Dénes
3210dee4a6 multishard_mutation_query: fix reverse scans
The read itself has to be done with the reversed schema (query schema)
but the result building has to be done with the table schema. For data
queries this doesn't matter, but replicate the distinction for
consistency (and because this might change).
2021-11-23 14:22:01 +02:00
Tomasz Grabiec
cc56a971e8 database, treewide: Introduce partition_slice::is_reversed()
Cleanup, reduces noise.

Message-Id: <20211014093001.81479-1-tgrabiec@scylladb.com>
2021-10-14 12:39:16 +03:00
Botond Dénes
42b677ef6f querier: consume_page(): remove now unused max_size parameter 2021-09-29 12:15:48 +03:00
Botond Dénes
41facb3270 treewide: move reversing to the mutation sources
Push down reversing to the mutation-sources proper, instead of doing it
on the querier level. This will allow us to test reverse reads on the
mutation source level.
The `max_size` parameter of `consume_page()` is now unused but is not
removed in this patch, it will be removed in a follow-up to reduce
churn.
2021-09-29 12:15:45 +03:00
Botond Dénes
22e216563a multishard_mutation_query: set max result size on used permits
08042c1688 added the query max result size
to the permit but only set it for single partition queries. This patch
does the same for range-scans in preparation of `query::consume_page()`
not propagating max size soon.
2021-09-28 17:03:57 +03:00
Botond Dénes
922295dd8e multishard_mutation_query: add tracepoint with compaction stats
Add the content of the compaction stats introduced in the previous patch
to the tracing data. This will help diagnose query performance related
problems caused by tombstones.
2021-09-22 14:00:24 +03:00
Benny Halevy
4476800493 flat_mutation_reader: get rid of timeout parameter
Now that the timeout is taken from the reader_permit.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-08-24 16:30:51 +03:00
Benny Halevy
9b0b13c450 reader_concurrency_semaphore: adjust reactivated reader timeout
Update the reader's timeout where needed
after unregistering inactive_read.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-08-24 16:30:51 +03:00
Benny Halevy
605a1e6943 multishard_mutation_query: create_reader: validate saved reader permit
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-08-24 16:30:51 +03:00
Benny Halevy
fe479aca1d reader_permit: add timeout member
To replace the timeout parameter passed
to flat_mutation_reader methods.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-08-24 14:29:44 +03:00
Piotr Sarna
776ab4bcb1 multishard_mutation_query: pass exceptions without throwing
In order to avoid needless throwing, exceptions are passed
directly wherever possible. Two mechanisms which help with that are:
 1. make_exception_future<> for futures
 2. co_return coroutine::exception(...) for coroutines
    which return future<T> (the mechanism does not work for future<>
    without parameters, unfortunately)
2021-07-26 17:05:14 +02:00
Botond Dénes
7bfa40a2f1 treewide: use make_tracking_only_permit()
For all those reads that don't (won't or can't) pass through admission
currently.
2021-07-14 17:19:02 +03:00
Botond Dénes
426b46c4ed mutation_reader: reader_lifecycle_policy: add obtain_reader_permit()
This method is both a convenience for obtaining the permit, as well
as an abstraction allowing different implementations to get creative.
For example, the main implementation, the one in the multishard mutation
query, returns the permit of the saved reader if lookup was successful.
This ensures that on a multi-paged read the same permit is used across as
many pages as possible. Much more importantly, it ensures the evictable
reader and the actual reader it wraps both use the same permit.
2021-07-14 16:48:43 +03:00
Botond Dénes
7fcf4a63c5 multishard_mutation_query: use the passed-in permit to create new reader
Ensure that when the reader has to be created anew the passed-in permit
is used to create it, instead of the one left over in remote-parts,
which is that of the already evicted reader.
This lays the groundwork to ensure the same permit is used across all
pages of a read, by a future patch which creates the wrapping reader
with the existing permit.
2021-07-14 16:48:43 +03:00
Botond Dénes
28c2b54875 mutation_reader: reader_lifecycle_policy: remove convenience methods
These convenience methods are not used as much anymore, and they are not
even really necessary, as the register/unregister inactive read API got
streamlined a lot, to the point where all of these "convenience methods"
are just one-liners which we can inline into their few callers
without losing readability.
2021-06-16 11:29:37 +03:00
Botond Dénes
8c7447effd mutation_reader: reader_lifecycle_policy::destroy_reader(): require to be called on native shard
Currently shard_reader::close() (its caller) goes to the remote shard,
copies back all fragments left there to the local shard, then calls
`destroy_reader()`, which in the case of the multishard mutation query
copies it all back to the native shard. This was required before because
`shard_reader::stop()` (`close()`'s predecessor) couldn't wait on
`smp::submit_to()`. But close can, so we can get rid of all this
back-and-forth and just call `destroy_reader()` on the shard the reader
lives on, just like we do with `create_reader()`.
2021-06-16 11:29:35 +03:00
Botond Dénes
4ecf061c90 reader_lifecycle_policy implementations: fix indentation
Left broken from the previous patch.
2021-06-16 11:21:38 +03:00
Botond Dénes
a7e59d3e2c mutation_reader: reader_lifecycle_policy::destroy_reader(): de-futurize reader parameter
The shard reader is now able to wait on the stopped reader and pass the
already stopped reader to `destroy_reader()`, so we can de-futurize the
reader parameter of said method. The shard reader was already patched to
pass a ready future so adjusting the call-site is trivial.
The most prominent implementation, the multishard mutation query, can
now also drop its `_dismantling_gate`, which was put in place so it
could wait on the background stopping of readers.

A consequence of this move is that errors that might happen during
the stopping of the reader are now handled in the shard reader, not
in every lifecycle policy implementation.
2021-06-16 11:21:38 +03:00
Botond Dénes
ab8d2a04a5 multishard_mutation_query: destroy remote parts in the foreground
Currently the foreign fields of the reader meta are destroyed in the
background via the foreign pointer's destructor (with one exception).
This makes the already complicated life-cycle of these parts and their
dependencies even harder to reason about, especially in tests, where
even things like semaphores live only within the test.
This patch makes sure to destroy all these remote fields in the
foreground in either `save_reader()` or `stop()`, ensuring that once
`stop()` returns, everything is cleaned up.
2021-06-16 11:21:38 +03:00
Avi Kivity
a55b434a2b treewide: extend copyright statements to present day 2021-06-06 19:18:49 +03:00