Commit Graph

188 Commits

Botond Dénes
a9013030cf multishard_mutation_reader: add a trace message for each shard reader created
So we can see in the trace output which shards actually participated
in the read. There is a single message for each shard reader.

Fixes: #6888
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200803132338.95013-1-bdenes@scylladb.com>
2020-08-03 16:24:46 +03:00
Botond Dénes
5de0afdab7 mutation_reader: expose new_reader_base_cost
So that test code can use it.
2020-07-20 11:23:39 +03:00
Botond Dénes
627054c3d7 mutation_reader: shard_reader: fix fast-forwarding with read-ahead
The current `fast_forward_to(const dht::partition_range&)`
implementation has two problems:
* If the reader was not created yet, but there is an ongoing read-ahead
  (which is going to create it), the function bails out. This will
  result in this shard reader not being fast-forwarded to the new range
  at all.
* If the reader was already created and there is an ongoing read-ahead,
  the function will wait for this to complete, then fast-forward the
  reader, as it should. However, the buffer is cleared *before* the
  read-ahead is waited for. So if the read-ahead brings in new data,
  this will land in the buffer. This data will be outside of the
  fast-forwarded-to range and worse, as we just cleared the buffer, it
  might violate mutation fragment stream monotonicity requirements.

This patch fixes both of these bugs. Targeted reproducer unit tests are
coming in the next patches.
2020-07-01 09:51:02 +03:00
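The corrected ordering can be sketched in plain, synchronous C++ (names and types are illustrative stand-ins, not the actual Scylla API): a pending read-ahead must complete *before* the buffer is cleared, so any fragments it produces are dropped together with the rest of the stale buffer.

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <optional>

// Hypothetical, simplified shard reader: `buffer` stands in for the fragment
// buffer, `read_ahead` for an in-flight read-ahead future.
struct shard_reader_sketch {
    std::deque<int> buffer;
    std::optional<std::function<void()>> read_ahead;
    int range_start = 0;  // stand-in for the current partition range

    void fast_forward_to(int new_range_start) {
        if (read_ahead) {      // wait for the read-ahead first...
            (*read_ahead)();   // (it may push stale fragments into `buffer`)
            read_ahead.reset();
        }
        buffer.clear();        // ...and only then drop the stale fragments
        range_start = new_range_start;
    }
};
```

Clearing after the wait is the whole fix for the second bug: fragments produced by the read-ahead can no longer land in a freshly cleared buffer.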
Botond Dénes
e31f7316c0 mutation_reader: evictable_reader: add assert against pause handle leak
We are currently investigating a segmentation fault, which is suspected
to be caused by a leaked pause handle. Although, according to the latest
theory, the handle leak is not the root cause of the issue, just a
symptom, it's better to catch any bug that causes a handle leak in the
act, rather than later when some side effect causes a segfault.

Refs: #6613
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200625153729.522811-1-bdenes@scylladb.com>
2020-06-28 12:08:25 +03:00
Botond Dénes
542d9c3711 mutation_reader: expose evictable_reader
Expose functions for the outside world to create evictable readers. We
expose two functions, which create an evictable reader with
`auto_pause::yes` and `auto_pause::no` respectively. The function
creating the latter also returns a handle in addition to the reader,
which can be used to pause the reader.
2020-06-23 21:08:21 +03:00
Botond Dénes
1cc31deff9 mutation_reader: evictable_reader: add auto_pause flag
Currently the evictable reader unconditionally pauses the underlying
reader after each use (`fill_buffer()` or `fast_forward_to()` call).
This is fine for current users (the multishard reader), but the future
user we are doing all this refactoring for -- repair -- will want to
control when the underlying reader is paused "manually". Both these
behaviours can easily be supported in a single implementation, so we
add an `auto_pause` flag to allow the creator of the evictable reader
to control this.
2020-06-23 21:08:21 +03:00
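The two behaviours described above can be modeled with a tiny sketch (all names are illustrative, not the real evictable reader interface): with `auto_pause::yes` the reader pauses itself after each use, with `auto_pause::no` the creator pauses it explicitly.

```cpp
#include <cassert>

enum class auto_pause { no, yes };

// Hypothetical, simplified evictable reader: only the pause bookkeeping is shown.
struct evictable_reader_sketch {
    auto_pause _ap;
    bool paused = false;

    explicit evictable_reader_sketch(auto_pause ap) : _ap(ap) {}

    void fill_buffer() {
        paused = false;               // using the reader resumes it
        // ... read fragments from the underlying reader ...
        if (_ap == auto_pause::yes) {
            paused = true;            // pause automatically after each use
        }
    }
    void pause() { paused = true; }   // manual pause, e.g. via a handle
};
```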
Botond Dénes
af9e1c23e1 mutation_reader: make evictable_reader a flat_mutation_reader
The `evictable_reader` class is almost a proper flat mutation reader
already, it roughly offers the same interface. This patch makes this
formal: changing the class to inherit from `flat_mutation_reader::impl`,
and implement all virtual methods. This also entails a departure from
using the lifecycle policy to pause/resume and create readers, instead
using more general building blocks like the reader concurrency semaphore
and a mutation source.
2020-06-23 21:08:21 +03:00
Botond Dénes
4485864ada mutation_reader: s/inactive_shard_read/inactive_evictable_reader/
Rename `inactive_shard_read` to `inactive_evictable_reader` to reflect
the fact that the evictable reader is going to be of general use,
not specific to the multishard reader.
2020-06-23 10:01:38 +03:00
Botond Dénes
b6ed054c08 mutation_reader: move inactive_shard_reader code up
It will be used by the `evictable_reader` code too in the next patches.
2020-06-23 10:01:38 +03:00
Botond Dénes
e3ea1c9080 mutation_reader: fix indentation
Deferred from the previous patch.
2020-06-23 10:01:38 +03:00
Botond Dénes
f9d1916499 mutation_reader: shard_reader: extract remote_reader as evictable_reader
We want to make the evictable reader mechanism used in the multishard
reader pipeline available for general (re)use, as a standalone
flat mutation reader implementation. The first step is extracting
`shard_reader::remote_reader`, the class implementing this logic, into a
top-level class, renamed to `evictable_reader`.
2020-06-23 10:01:38 +03:00
Avi Kivity
a4c44cab88 treewide: update concepts language from the Concepts TS to C++20
Seastar recently lost support for the experimental Concepts Technical
Specification (TS) and gained support for C++20 concepts. Re-enable
concepts in Scylla by updating our use of concepts to the C++20
standard.

This change:
 - peels off uses of the GCC6_CONCEPT macro
 - removes inclusions of <seastar/gcc6-concepts.hh>
 - replaces function-style concepts (no longer supported) with
   equation-style concepts
 - adds and removes semicolons as needed
 - replaces the deprecated std::is_pod with its recommended replacement
 - updates return type constraints to use concepts instead of
   type names (either std::same_as or std::convertible_to, with
   std::same_as chosen when possible)

No attempt is made to improve the concepts; this is a specification
update only.
Message-Id: <20200531110254.2555854-1-avi@scylladb.com>
2020-06-02 09:12:21 +03:00
Botond Dénes
4409579352 mutation_reader: restricted_reader: work in terms of reader_permit
We want to refactor all read resource tracking code to work through the
reader_permit, so refactor the restricted reader to do so as well.
2020-05-28 11:34:35 +03:00
Botond Dénes
06dd3d9077 queue_reader: push(): eliminate unneeded continuation on full buffer case
Currently, push() attaches a continuation to the _not_full future, if
push() is called when the buffer is already full. This is not needed as
we can safely push the fragment even if the buffer is already full.
Furthermore, we can eliminate the possibility of push() being called
when the buffer is full by checking whether it is full *after* pushing
the fragment, not before.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200521055840.376019-1-bdenes@scylladb.com>
2020-05-21 09:34:44 +03:00
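A sketch of the reworked push() (sizes and types are illustrative): the fragment is always pushed immediately, and fullness is checked *after* the push, so push() is never entered with a full buffer and no continuation is needed.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Hypothetical, simplified queue reader. `producer_blocked` stands in for the
// producer waiting on a _not_full future.
struct queue_reader_sketch {
    static constexpr std::size_t max_size = 4;
    std::deque<int> buffer;
    bool producer_blocked = false;

    void push(int fragment) {
        buffer.push_back(fragment);      // safe even if the buffer is "full"
        if (buffer.size() >= max_size) {
            producer_blocked = true;     // block *after* pushing, not before
        }
    }
    int pop() {
        int f = buffer.front();
        buffer.pop_front();
        if (buffer.size() < max_size) {
            producer_blocked = false;    // resolve the _not_full future
        }
        return f;
    }
};
```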
Botond Dénes
54a0d8536e restricting_mutation_reader: include own buffer in buffer size calculation
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200519102902.231042-1-bdenes@scylladb.com>
2020-05-19 18:23:15 +03:00
Glauber Costa
e44b2826ab compaction: avoid abandoned futures when using interposers
When using interposers, cancelling compactions can leave futures
that are not waited for (resharding, twcs)

The reason is when consume_end_of_stream gets called, it tries to
push end_of_stream into the queue_reader_handle. Because cancelling
a compaction is done through an exception, the queue_reader_handle
is terminated already at this time. Trying to push to it generates
another exception and prevents us from returning the future right
below it.

This patch adds a new method is_terminated() and if we detect
that the queue_reader_handle is already terminated by this point,
we don't try to push. We call it is_terminated() because the check
is to see if the queue_reader_handle has a _reader. The reader is
also set to null on successful destruction.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Reviewed-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200430175839.8292-1-glauber@scylladb.com>
2020-05-01 16:30:23 +03:00
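The guard described above can be sketched as follows (a simplified model, not the real queue_reader_handle): the handle holds a pointer to its reader, which is nulled both on abort (e.g. a cancelled compaction) and on successful completion, so the end-of-stream push is skipped in either case instead of raising a second exception.

```cpp
#include <cassert>

struct reader_state { bool got_eos = false; };

// Hypothetical, simplified handle: only the termination bookkeeping is shown.
struct queue_reader_handle_sketch {
    reader_state* _reader = nullptr;

    bool is_terminated() const { return _reader == nullptr; }
    void abort() { _reader = nullptr; }   // e.g. compaction cancelled via exception
    void push_end_of_stream() {
        if (is_terminated()) {
            return;                       // avoid generating another exception
        }
        _reader->got_eos = true;
        _reader = nullptr;                // terminated on success too
    }
};
```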
Botond Dénes
196dd5fa9b treewide: throw std::bad_function_call with backtraces
We typically use `std::bad_function_call` to throw from
mandatory-to-implement virtual functions that cannot have a meaningful
implementation in the derived class. The problem with
`std::bad_function_call` is that it carries absolutely no information
w.r.t. where it was thrown from.

I originally wanted to replace `std::bad_function_call` in our codebase
with a custom exception type that would allow passing in the name of the
function it is thrown from to be included in the exception message.
However, after I ended up also including a backtrace, Benny Halevy
pointed out that I might as well just throw `std::bad_function_call`
with a backtrace instead. So this is what this patch does.

All users are various unimplemented methods of the
`flat_mutation_reader::impl` interface.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200408075801.701416-1-bdenes@scylladb.com>
2020-04-08 13:54:06 +02:00
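The call pattern can be illustrated with a standalone sketch. Note the real code uses Seastar's backtrace-throwing helper; here the backtrace capture is faked with a fixed string purely to show the shape of the idea, and all names below are hypothetical.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// Illustrative stand-in for an exception that carries a captured backtrace.
struct bad_function_call_with_backtrace : std::bad_function_call {
    std::string trace;
    explicit bad_function_call_with_backtrace(std::string t) : trace(std::move(t)) {}
};

// Stand-in for an unimplementable virtual method of flat_mutation_reader::impl.
[[noreturn]] void unimplemented() {
    // the real code captures the actual backtrace at the throw site
    throw bad_function_call_with_backtrace("<captured backtrace>");
}
```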
Piotr Jastrzebski
e72696a8e6 sharding_info: rename the class to sharder
Also rename all variables that were named si or sinfo
to sharder.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2020-03-30 18:42:33 +02:00
Piotr Jastrzebski
031f589dba multishard_combining_reader: use token_for_next_shard from sharding info not partitioner
Previously this function was accessing sharding logic
through the partitioner obtained from the schema.

While converting tests, dummy_partitioner is turned into
dummy_sharding_info.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2020-03-30 18:42:25 +02:00
Rafael Ávila de Espíndola
c5795e8199 everywhere: Replace engine().cpu_id() with this_shard_id()
This is a bit simpler and might allow removing a few includes of
reactor.hh.

Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200326194656.74041-1-espindola@scylladb.com>
2020-03-27 11:40:03 +03:00
Avi Kivity
342c967b6a Merge "Introduce compacting reader" from Botond
"
Allow adding compacting to any reader pipeline. The intended users are
streaming and repair, with the goal to prevent wasting transfer
bandwidth with data that is purgeable.
No current user in the tree.

Tests: unit(dev), mutation_reader_test.compacting_reader_*(debug)
"

* 'compacting-reader/v3' of https://github.com/denesb/scylla:
  test: boost/mutation_reader_test: add unit test for compacting_reader
  test: lib/flat_mutation_reader_assertions: be more lenient about empty mutations
  test: lib/mutation_source_test: make data compaction friendly
  test: random_mutation_generator: add generate_uncompactable mode
  mutation_reader: introduce compacting_reader
2020-03-16 16:41:50 +02:00
Botond Dénes
8286a0b1bd mutation_reader: introduce compacting_reader
The compacting reader compacts the output of another reader on-the-fly,
performing compaction-type compaction (`compact_for_sstables::yes`).
It will be used in streaming and repair to eliminate purgeable data from
the stream, thus preventing wasted transfer bandwidth.
2020-03-16 13:58:13 +02:00
Piotr Jastrzebski
924ed7bb1c make_multishard_combining_reader: stop taking partitioner
The function already takes the schema so there's no need
for it to take the partitioner too. It can be obtained using
schema::get_partitioner.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2020-03-15 10:25:20 +01:00
Botond Dénes
dfc8b2fc45 treewide: replace reader_resource_tracker with reader_permit
The former was never really more than a reader_permit with one
additional method. Currently using it doesn't even save one from any
includes. Now that readers will be using reader_permit we would have to
pass down both to mutation_source. Instead get rid of
reader_resource_tracker and just use reader_permit. Instead of making it
a last and optional parameter that is easy to ignore, make it a
first class parameter, right after schema, to signify that permits are
now a prominent part of the reader API.

This -- mostly mechanical -- patch essentially refactors mutation_source
to ask for the reader_permit instead of the reader_resource_tracker and
updates all usage sites.
2020-01-28 08:13:16 +02:00
Botond Dénes
c0f96db2d9 reader_concurrency_semaphore: mv reader_resources and reader_permit to reader_permit.hh
In the next patches we will replace `reader_resource_tracker` and have
code use the `reader_permit` directly. In subsequent patches, the
`reader_permit` will get even more usages as we attempt to make the
tracking of reader resource more accurate by tracking more parts of it.
So the grand plan is that the current `reader_concurrency_semaphore.hh`
is split into two headers:
* `reader_concurrency_semaphore.hh` - containing the semaphore proper.
* `reader_permit.hh` - a very lightweight header, to be used by
  components which only want to track various parts of the resource
  consumption of reads.
2020-01-28 08:13:16 +02:00
Botond Dénes
2005495857 reader_concurrency_semaphore: reader_permit: make it a value type
Currently `reader_permit` is passed around as
`lw_shared_ptr<reader_permit>`, which is clunky to write and use and is
also an unnecessary leak of details on how permit ownership is managed.
Make `reader_permit` a simple value type, making it a little bit easier
and safer to use.
In the next patches we will get rid of `reader_resource_tracker` and
instead have code use the permit instance directly, so this small
improvement in usability will go a long way towards preventing eyesores.
2020-01-28 08:13:16 +02:00
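The value-type refactor can be sketched like this (member names are illustrative): ownership details, a shared impl object, are hidden inside the class, so callers copy the permit by value instead of passing `lw_shared_ptr<reader_permit>` around.

```cpp
#include <cassert>
#include <memory>

// Hypothetical, simplified permit: copies are cheap and share one impl,
// mirroring how a value type can hide lw_shared_ptr-based ownership.
class reader_permit_sketch {
    struct impl { int consumed = 0; };
    std::shared_ptr<impl> _impl;   // lw_shared_ptr in Seastar
public:
    reader_permit_sketch() : _impl(std::make_shared<impl>()) {}
    void consume(int n) { _impl->consumed += n; }
    int consumed() const { return _impl->consumed; }
};
```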
Piotr Dulikowski
2b4ca0c562 mutation_reader: gallop mode for combined reader
In case when a single reader contributes a stream of fragments
and keeps winning over other readers, mutation_reader_merger will
enter gallop mode, in which it is assumed that the reader will keep
winning over other readers. Currently, a reader needs to contribute
3 fragments to enter that mode.

In gallop mode, fragments returned by the galloping reader will be
compared with the best fragment from _fragment_heap. If it wins, the
fragment is directly returned. Otherwise, gallop mode ends and merging
is performed as in the general case, which involves heap operations.

In the current implementation, when the end of a partition is
encountered while in gallop mode, gallop mode ends unconditionally.

Fixes #3593.
2019-10-30 09:51:18 +01:00
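The gallop-detection bookkeeping can be modeled minimally (a sketch with illustrative names, not the real mutation_reader_merger): after one reader wins 3 times in a row the merger switches to direct comparisons instead of full heap operations, and any loss ends the streak.

```cpp
#include <cassert>

// Hypothetical, simplified merger: tracks consecutive wins per reader.
struct merger_sketch {
    static constexpr int gallop_threshold = 3;  // wins needed to enter gallop mode
    int last_winner = -1;
    int win_streak = 0;
    bool galloping = false;

    void on_winner(int reader_id) {
        if (reader_id == last_winner) {
            if (++win_streak >= gallop_threshold) {
                galloping = true;   // skip heap operations from now on
            }
        } else {
            last_winner = reader_id;
            win_streak = 1;
            galloping = false;      // streak broken: back to the heap
        }
    }
};
```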
Piotr Dulikowski
2a46a09e7c mutation_reader: refactor prepare_next
Move out logic responsible for adding readers at partition boundary
into `maybe_add_readers_at_partition_boundary`, and advancing one reader
into `prepare_one`. This will allow reusing this logic outside
`prepare_next`.
2019-10-30 09:49:12 +01:00
Botond Dénes
6bfe468a17 multishard_combining_reader: remote_reader::recreate_reader(): restore indentation
2019-08-13 09:47:55 +03:00
Botond Dénes
68353acc1c multishard_combining_reader: remote_reader: use next instead of last pos
Currently the remote reader uses the last seen fragment's position to
calculate the position the reader should continue from when the reader
is recreated after having been evicted. Recently it was discovered that
this logic breaks down badly when this last position is a non-full
clustering prefix (a range tombstone start bound). In this case, if only
the last position is available, there is no good way of computing the
starting position. Starting after this position will potentially miss
any rows that fall into the prefix (the current behaviour). Starting
from before it will cause all range tombstones with said prefix to be
re-emitted, causing other problems. A better solution is to exploit the
fact that sometimes we also know what the next fragment is.
These "some" times are the exact times that are problematic with the
current approach -- when the last fragment is a range tombstone.
Exploiting this extra knowledge allows for a much better way for
calculating the starting position: instead of maintaining the last
position, we maintain the next position, which is always safe to start
from. This is not always possible, but in many cases we can know for
sure what the next position is, for example if the last position was a
static row we can be sure the next position is the first clustering
position (or partition end). In the few cases where we cannot calculate
the next position we fall back to the previous logic and start from
*after* the last position. The good news is that in these remaining
cases (the last fragment is a clustering row) it is safe to do so.

This patch also does some refactoring of the remote-reader internals,
all fill-buffer related logic is grouped together in a single
`fill_buffer()` method.
2019-08-13 09:47:55 +03:00
Botond Dénes
3949189918 multishard_combining_reader: remote_reader::do_fill_buffer(): reorganize drop logic
To make it more readable.
2019-08-13 09:47:55 +03:00
Botond Dénes
339be3853d foreign_reader: silence warning about discarded future
And add a comment explaining why this is fine.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20190801062234.69081-1-bdenes@scylladb.com>
2019-08-01 10:11:24 +03:00
Botond Dénes
0f30bc0004 mutation_reader: move away from variadic futures
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20190724102246.20450-1-bdenes@scylladb.com>
2019-07-27 13:21:24 +03:00
Botond Dénes
2ccd8ee47c queue_reader: use the reader's buffer as the queue
The queue reader currently uses two buffers, a `_queue` that the
producer pushes fragments into and its internal `_buffer` where these
fragments eventually end up being served to the consumer from.
This double buffering is not necessary. Change the reader to allow the
producer to push fragments directly into the internal `_buffer`. This
complicates the code a little bit, as the producer logic of
`seastar::queue` has to be folded into the queue reader. On the other
hand this introduces proper memory consumption management, as well as
reduces the amount of consumed memory and eliminates the possibility of
outside code tampering with the queue. Another big advantage of the
change is that there is now an explicit way to communicate the EOS
condition, no need to push a disengaged `mutation_fragment_opt`.

The producer of the queue reader now pushes the fragments into the
reader via an opaque `queue_reader_handle` object, which has the
producer methods of `seastar::queue`.

Existing users of queue readers are refactored to use the new interface.

Since the code is more complex now, unit tests are added as well.
2019-06-04 13:39:26 +03:00
Botond Dénes
a597e46792 Make queue_reader public
Extract it from `multishard_writer.cc` and move it to
`mutation_reader.{hh,cc}` so other code can start using it too.
2019-06-03 12:08:37 +03:00
Botond Dénes
eba310163d multishard_combining_reader: fix handling of non-strictly monotonic positions
The shard readers under a multishard reader are paused after every
operation executed on them. When paused they can be evicted at any time.
When this happens, they will be re-created lazily on the next
operation, with a start position such that they continue reading from
where the evicted reader left off. This start position is determined
from the last fragment seen by the previous reader. When this position
is a clustering position, the reader will be recreated such that it reads
the clustering range (from the half-read partition): (last-ckey, +inf).
This can cause problems if the last fragment seen by the evicted reader
was a range-tombstone. Range tombstones can share the same clustering
position with other range tombstones and potentially one clustering row.
This means that when the reader is recreated, it will start from the
next clustering position, ignoring any unread fragments that share the
same position as the last seen range tombstone.
To fix, ensure that on each fill-buffer call, the buffer contains all
fragments for the last position. To this end, when the last fragment in
the buffer is a range tombstone (with pos x), we continue reading until
we see a fragment with a position y that is greater. This way it is
ensured that we have seen all fragments for pos x and it is safe to
resume the read, starting from after position x.
2019-04-26 11:38:12 +03:00
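The buffer-fill rule above can be sketched over a flat fragment stream (a simplified model with illustrative names): when the fill would end on a range tombstone at position x, keep reading until a fragment with a strictly greater position shows up, so all fragments sharing position x land in the same buffer fill.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct fragment { int pos; bool is_range_tombstone; };

// Returns how many fragments go into one buffer fill (simplified): the
// `target` count is extended while the last buffered fragment is a range
// tombstone that shares its position with the next fragment in the stream.
std::size_t fill_buffer(const std::vector<fragment>& stream, std::size_t target) {
    std::size_t n = std::min(target, stream.size());
    while (n > 0 && n < stream.size() && stream[n - 1].is_range_tombstone
           && stream[n].pos == stream[n - 1].pos) {
        ++n;  // same position: must not split these across an eviction point
    }
    return n;
}
```

After such a fill it is safe to resume an evicted reader from *after* the last position, since every fragment at that position has already been emitted.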
Botond Dénes
a3f79bfe5e mutlishard_combining_reader: reorder shard_reader::remote_reader::do_fill_buffer() code
Reduce the number of indentations - use early return for the short path.
2019-04-24 10:55:16 +03:00
Botond Dénes
bbd3f0acc3 multishard_combining_reader: shard_reader::remote_reader extract fill-buffer logic into do_fill_buffer() 2019-04-24 10:55:16 +03:00
Avi Kivity
88322086cb Merge "Add fuzzer-type unit test for range scans" from Botond
"
This series adds a fuzzer-type unit test for range scans, which
generates a semi-random dataset and executes semi-random range scans
against it, validating the result.
This test aims to cover a wide range of corner cases with the help of
randomness. Data and queries against it are generated in such a way that
various corner cases and their combinations are likely to be covered.

The infrastructure under range scans has undergone massive changes in
the last year, growing in complexity and scope. The correctness of range
scans is critical for the correct functioning of any Scylla cluster, and
while the current unit tests served well in detecting any major problems
(mostly while developing), they are too simplistic and can only be
relied on to check the correctness of the basic functionality. This test
aims to extend coverage drastically, testing cases that the author of
the range-scan code or that of the existing unit tests didn't even
think existed, by relying on some randomness.

Fixes: #3954 (deprecates really)
"

* 'more-extensive-range-scan-unit-tests/v2' of https://github.com/denesb/scylla:
  tests/multishard_mutation_query_test: add fuzzy test
  tests/multishard_mutation_query_test: refactor read_all_partitions_with_paged_scan()
  tests/test_table: add advanced `create_test_table()` overload
  tests/test_table: make `create_test_table()` customizable
  query: add trim_clustering_row_ranges_to()
  tests/test_table: add keyspace and table name params
  tests/test_table: s/create_test_cf/create_test_table/
  tests: move create_test_cf() to tests/test_table.{hh,cc}
  tests/multishard_mutation_query_test: drop many partition test
  tests/multishard_mutation_query_test: drop range tombstone test
2019-02-27 17:26:53 +02:00
Paweł Dziepak
b524f96a74 mutation_reader_merger: drop unneeded readers in small batches
It was observed that destroying readers as soon as they are not needed
negatively affects performance of relatively small reads. We don't want
to keep them alive for too long either, since they may own a lot of
memory, but deferring the destruction slightly and removing them in
batches of 4 seems to solve the problem for the small reads.
2019-02-22 14:43:38 +00:00
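The deferred-destruction scheme can be sketched as follows (names are illustrative; the batch size of 4 is from the commit message): unneeded readers are parked in a small batch and only destroyed once the batch is full, amortizing the cost on small reads.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical batching helper: `pending` stands in for readers awaiting
// destruction; `destroyed` counts how many have actually been torn down.
struct batch_destructor {
    static constexpr std::size_t batch_size = 4;
    std::vector<int> pending;
    int destroyed = 0;

    void retire(int reader) {
        pending.push_back(reader);
        if (pending.size() >= batch_size) {
            destroyed += static_cast<int>(pending.size());  // destroy whole batch
            pending.clear();
        }
    }
};
```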
Paweł Dziepak
435e24f509 mutation_reader_merger: track readers by iterators and not pointers
mutation_reader_merger uses a std::list of mutation_reader to keep them
alive while the rest of the logic operates on non-owning pointers.

This means that when it is time to drop some of the readers that are
no longer needed, the merger needs to scan the list looking for them.
That's not ideal.

The solution is to make the logic use iterators to elements in that
list, which allows for O(1) removal of an unneeded reader. Iterators to
list are just pointers to the node and are not invalidated by unrelated
additions and removals.
2019-02-22 14:33:10 +00:00
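The iterator-based bookkeeping boils down to a standard `std::list` property, shown here in a minimal sketch: list iterators stay valid across unrelated insertions and erasures, so holding the iterator instead of a raw pointer turns "scan the list to find the reader" into an O(1) erase.

```cpp
#include <cassert>
#include <iterator>
#include <list>

// Demonstrates that a remembered std::list iterator survives unrelated
// additions and allows O(1) removal, with no scan of the list.
int erase_via_saved_iterator() {
    std::list<int> readers = {10, 20, 30};
    auto it20 = std::next(readers.begin());  // remember the iterator, not a pointer
    readers.push_back(40);                   // unrelated addition: it20 still valid
    readers.erase(it20);                     // O(1) removal
    return static_cast<int>(readers.size());
}
```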
Botond Dénes
9000626647 shard_reader: auto pause readers after being used
Previously it was the responsibility of the layer above (multishard
combining reader) to pause readers, which happened via an explicit
`pause()` call. This proved to be a very bad design as we kept finding
spots where the multishard reader should have paused the reader to avoid
potential deadlocks (due to starved reader concurrency semaphores), but
didn't.

This commit moves the responsibility of pausing the reader into the
shard reader. The reader is now kept in a paused state, except when it
is actually used (a `fill_buffer()` or `fast_forward_to()` call is
executing). This is fully transparent to the layer above.
As a side note, the shard reader now also hides when the reader is
created. This also used to be the responsibility of the multishard
reader, and although it caused no problems so far, it can be considered
a leak of internal details. The shard reader now automatically creates
the remote reader the first time an attempt is made to use it.

The code has been reorganized, such that there is now a clear separation
of responsibilities. The multishard combining reader handles the
combining of the output of the shard readers, as well as issuing
read-aheads. The shard reader handles read-ahead and creating the
remote reader when needed, as well as transferring the results of remote
reads to the "home" shard. The remote reader
(`shard_reader::remote_reader`, new in this patch) handles
pausing-resuming as well as recreating the reader after it was evicted.
Layers don't access each other's internals (like they used to).

After this commit, the reader passed to `destroy_reader()` will always
be in paused state.
2019-02-12 16:20:51 +02:00
Botond Dénes
37006135dc shard_reader: make reader creation sync
Reader creation happens through the `reader_lifecycle_policy` interface,
which offers a `create_reader()` method. This method accepts a shard
parameter (among others) and returns a future. Its implementation is
expected to go to the specified shard and then return with the created
reader. The method is expected to be called from the shard where the
shard reader (and consequently the multishard reader) lives. This API,
while reasonable enough, has a serious flaw. It doesn't make batching
possible. For example, if the shard reader issues a call to the remote
shard to fill the remote reader's buffer, but finds that it was evicted
while paused, it has to come back to the local shard just to issue the
recreate call. This makes the code both convoluted and slow.
Change the reader creation API to be synchronous, that is, callable from
the shard where the reader has to be created, allowing for simple call
sites and batching.
This change requires that implementations of the lifecycle policy update
any per-reader data-structure they have from the remote shard. This is
not a problem however, as these data-structures are usually partitioned,
such that they can be accessed safely from a remote shard.
Another, very pleasant, consequence of this change is that now all
methods of the lifecycle interface are sync and thus calls to them
cannot overlap anymore.

This patch also removes the
`test_multishard_combining_reader_destroyed_with_pending_create_reader`
unit test, which is not useful anymore.

For now just emulate the old interface inside shard reader. We will
overhaul the shard reader after some further changes to minimize
noise.
2019-02-12 16:20:51 +02:00
Botond Dénes
57d1f6589c shard_reader: use semaphore directly to pause-resume
The shard reader relies on the `reader_lifecycle_policy` for pausing and
resuming the remote reader. The lifecycle policy's API was designed to
be as general as possible, allowing for any implementation of
pause/resume. However, in practice, we have a single implementation of
pause/resume: registering/unregistering the reader with the relevant
`reader_concurrency_semaphore`, and we don't expect any new
implementations to appear in the future.
Thus, the generic API of the lifecycle policy is needlessly abstract,
making its implementations needlessly complex. We can instead make this
very concrete and have the lifecycle policy just return the relevant
semaphore, removing the need for every implementor of the lifecycle
policy interface to have a duplicate implementation of the very same
logic.

For now just emulate the old interface inside shard reader. We will
overhaul the shard reader after some further changes to minimize noise.
2019-02-12 16:20:51 +02:00
Botond Dénes
fae5a2a8c8 shard_reader: recreate_reader(): fix empty range case
If the shard reader is created for a singular range (has a single
partition), and then it is evicted after reaching EOS, when recreated we
would have to create a reader that reads an empty range, since the only
partition the range has was already read. Since it is not possible to
create a reader with an empty range, we just didn't recreate the reader
in this case. This is incorrect however, as the code might still attempt
to read from this reader, if only due to a bug, and would trigger a
crash. The correct fix is to create an empty reader that will
immediately be at EOS.
2019-02-12 16:20:51 +02:00
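The fix can be sketched with a toy reader (illustrative names, not the real API): instead of leaving the reader absent when the singular range was already consumed, recreate an empty reader that is immediately at end-of-stream, so a later read attempt, even a buggy one, gets EOS rather than a crash.

```cpp
#include <cassert>
#include <optional>
#include <vector>

// Hypothetical, simplified reader: a flat sequence of fragments.
struct reader_sketch {
    std::vector<int> fragments;
    std::size_t pos = 0;

    bool at_eos() const { return pos == fragments.size(); }
    std::optional<int> next() {
        if (at_eos()) return std::nullopt;  // EOS, not a crash
        return fragments[pos++];
    }
};

reader_sketch recreate_after_eviction(bool range_already_read) {
    if (range_already_read) {
        return reader_sketch{};             // empty reader, immediately at EOS
    }
    return reader_sketch{{1, 2, 3}, 0};     // illustrative remaining data
}
```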
Botond Dénes
cd807586f6 foreign_reader: rip out the now unused private API
Drop all the glue code, needed in the past so the shard reader can be
implemented on top of foreign reader. As the shard reader moved away
from foreign reader, this glue code is not needed anymore.
2019-02-12 16:20:51 +02:00
Botond Dénes
d80bc3c0a5 shard_reader: move away from foreign_reader
In the past, shard reader wrapped a foreign reader instance, adding
functionality required by the multishard reader on top. This has worked
well to a certain degree, but after the addition of pause-resume of
shard reader, the cooperation with foreign reader became more-and-more a
struggle. It has now gotten to a point where it feels like shard reader
is fighting foreign reader as much as it reuses it. This manifested
itself in the ever growing amount of glue code, and hacks baked into
foreign reader (which is supposed to be of general use), specific to
the usage in the multishard reader.
It is time we don't force this code-reuse anymore and instead implement
all the required functionality in shard reader directly.
2019-02-12 16:20:51 +02:00
Botond Dénes
da0c01c68b multishard_combining_reader: make shard_reader a shared pointer
Some members of shard reader have to be accessed even after it is
destroyed. This is required by background work that might still be
pending when the reader is destroyed. This was solved by creating a
special `state` struct, which contained all the members of the shard
readers that had to be accessed even after it was destroyed. This state
struct was managed through a shared pointer, that each continuation that
was expected to outlive the reader, held a copy of. This however created
a minefield, where each line of the code had to be carefully audited to
access only fields that will be guaranteed to remain valid.
Fix this mess by making the whole class a shared pointer, with
`enable_shared_from_this`. Now each continuation just has to make sure
to keep `this` alive and code can now access all members freely (well,
almost).
2019-02-12 16:20:51 +02:00
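The `enable_shared_from_this` pattern described above can be shown in a minimal sketch (illustrative names): each pending continuation captures a shared pointer to the whole reader, so any member can be touched safely even after the owner has dropped its reference.

```cpp
#include <cassert>
#include <functional>
#include <memory>

// Hypothetical, simplified shard reader: background work keeps `this` alive
// by capturing shared_from_this() instead of a raw `this` pointer.
struct shard_reader_sketch : std::enable_shared_from_this<shard_reader_sketch> {
    int pending_results = 0;

    std::function<void()> start_background_work() {
        auto self = shared_from_this();       // continuation keeps `this` alive
        return [self] { self->pending_results++; };
    }
};

int run_after_owner_released() {
    std::function<void()> work;
    std::weak_ptr<shard_reader_sketch> wp;
    {
        auto r = std::make_shared<shard_reader_sketch>();
        wp = r;
        work = r->start_background_work();
    }   // owner gone, but the continuation still holds the reader
    work();                                   // safe member access
    auto alive = wp.lock();
    return alive ? alive->pending_results : -1;
}
```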
Botond Dénes
f1c3421eb4 multishard_combining_reader: move the shard reader definition out
Shard reader started its life as a very thin layer above foreign reader,
with just some convenience methods added. As usual, by now it has grown
into a hairy monster, its class definition out-growing even that of the
multishard reader itself. It is time shard reader is moved into the
top-level scope, improving the readability of both classes.
2019-02-12 16:20:51 +02:00
Botond Dénes
7114b59309 multishard_combining_reader: disentangle shard_reader
Currently shard reader has a reference to the owning multishard reader
and it freely accesses its members. This resulted in a mess, where it's
not clear what exactly shard reader depends on. Disentangle this mess,
by making the shard reader self-sufficient, passing all it depends on
into its constructor.
2019-02-12 16:20:51 +02:00