It compares only timestamps, but it should use the intrinsic ordering of
the tombstone, which takes deletion time into consideration as well.
If we have two range tombstones with the same timestamp but different
deletion time (odd case, but still), then the one with the higher
deletion time should win. That's what all other parts of the system
use to resolve merges, in particular range_tombstone_list and
compact_mutation_state (the fragment stream compactor).
Not respecting this ordering violates the following equality:
do_compact(do_compact(m1) + m2) == do_compact(m1 + m2)
which may result in some clustered rows being missing in the
right-hand side, but not in the left-hand side, due to differences in
range tombstones.
This impacts only tests currently.
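For illustration, a minimal sketch (simplified stand-in types, not the actual
Scylla ones) of the intended merge rule, where deletion time breaks ties
between equal timestamps:

    #include <cstdint>

    struct tombstone {
        int64_t timestamp;      // write timestamp
        int64_t deletion_time;  // local deletion time
    };

    // Returns the tombstone that should win a merge, matching what
    // range_tombstone_list and the fragment stream compactor do.
    inline const tombstone& merge(const tombstone& a, const tombstone& b) {
        if (a.timestamp != b.timestamp) {
            return a.timestamp > b.timestamp ? a : b;
        }
        // Same timestamp: the higher deletion time wins.
        return a.deletion_time >= b.deletion_time ? a : b;
    }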
Message-Id: <1528705602-7218-1-git-send-email-tgrabiec@scylladb.com>
This patch changes the implementation of atomic_cell and
atomic_cell_or_collection to use the data::cell implementation which is
based on the new in-memory representation infrastructure.
As a preparation for the switch to the new cell representation, this
patch changes the type returned by atomic_cell_view::value() to one that
requires explicit linearisation of the cell value. Even though the value
is still implicitly linearised (and only when managed by the LSA) the
new interface is the same as the target one so that no more changes to
its users will be needed.
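As an illustration only, a rough sketch with hypothetical names of what an
interface requiring explicit linearisation looks like from the caller's side:

    #include <string>
    #include <string_view>
    #include <utility>
    #include <vector>

    // Hypothetical fragmented value view; the real type differs.
    class value_view {
        std::vector<std::string_view> _fragments;
    public:
        explicit value_view(std::vector<std::string_view> fragments)
            : _fragments(std::move(fragments)) {}

        // Callers must linearise explicitly: the value is exposed as a
        // contiguous buffer only inside the callback, and a temporary copy
        // is made only when the value is actually fragmented.
        template <typename Func>
        decltype(auto) with_linearized(Func&& func) const {
            if (_fragments.size() == 1) {
                return func(_fragments.front());
            }
            std::string buf;
            for (auto f : _fragments) {
                buf.append(f.data(), f.size());
            }
            return func(std::string_view(buf));
        }
    };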
This change speeds up merging of partition versions with many rows when
the merged version has many rows which fall between existing rows
in the target version. This is often the case for time-series
workloads, which insert rows at the front. Lookup can be avoided for
all but the first row in the stride because we already have a
reference to the successor in the target tree; we only need to check
that the current entry in the target tree is still the successor.
This change greatly reduces the number of lookups per row during version
merging of large partitions in time-series workloads.
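An illustrative sketch of the idea, using std::map as a stand-in for the
target tree: the successor of the previously applied row is kept as a hint,
and a full lookup happens only when the hint stops being valid (i.e. at the
start of a new stride):

    #include <map>
    #include <string>

    void merge_into(std::map<int, std::string>& target,
                    const std::map<int, std::string>& src) {
        auto hint = target.end();
        for (const auto& [key, value] : src) {
            // A full O(log N) lookup is needed only for the first row of a
            // stride, or when the remembered successor stops bounding the key.
            bool hint_valid = hint != target.end()
                && key < hint->first
                && (hint == target.begin() || std::prev(hint)->first < key);
            if (!hint_valid) {
                hint = target.lower_bound(key);
            }
            // Insertion with a correct hint is amortised O(1).
            hint = target.insert_or_assign(hint, key, value);
            ++hint;   // successor of the row we just applied
        }
    }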
Partitions can get very large. Destroying them all at once can stall
the reactor for a significant amount of time. We want to avoid that by
doing destruction incrementally, deferring in between. A new API is
added for that at various levels:
stop_iteration clear_gently() noexcept;
It returns stop_iteration::yes when the object is fully cleared and
can now be destroyed quickly. So a deferring destruction can look like
this:
return repeat([this] { return clear_gently(); });
The reason why clear_gently() doesn't return a future<> itself is that some
contexts cannot defer, like memory reclamation.
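A minimal sketch, using a hypothetical container, of how clear_gently() can
free a bounded amount of work per call and report progress:

    #include <cstddef>
    #include <deque>
    #include <vector>

    enum class stop_iteration { no, yes };   // stand-in for seastar::stop_iteration

    struct big_container {
        std::deque<std::vector<char>> entries;

        // Frees up to a fixed number of entries per call; callers repeat
        // until stop_iteration::yes, deferring in between when they can.
        stop_iteration clear_gently() noexcept {
            constexpr std::size_t batch = 128;
            for (std::size_t i = 0; i < batch && !entries.empty(); ++i) {
                entries.pop_back();
            }
            return entries.empty() ? stop_iteration::yes : stop_iteration::no;
        }
    };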
row::find_cell() may be called for cells that do not exist in that row.
In such a case nullptr is returned; this patch makes sure that
it is not dereferenced.
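An illustrative, self-contained version of the pattern the fix enforces
(simplified stand-in types, not the actual row implementation):

    #include <map>
    #include <string>

    // Simplified stand-in for a row: find_cell() returns nullptr when the
    // requested column is not present.
    struct row {
        std::map<int, std::string> cells;
        const std::string* find_cell(int id) const {
            auto it = cells.find(id);
            return it == cells.end() ? nullptr : &it->second;
        }
    };

    // Never dereference the result of find_cell() without checking it.
    bool cell_is_empty(const row& r, int id) {
        const auto* cell = r.find_cell(id);
        return cell == nullptr || cell->empty();
    }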
Message-Id: <20180522091726.24396-1-pdziepak@scylladb.com>
Calling fully qualified std::swap() prevents the cell objects from
using their own swap implementations. This patch invokes swap in
the usual ADL-friendly way instead.
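For reference, the usual ADL-friendly pattern being referred to:

    #include <utility>

    template <typename T>
    void swap_values(T& a, T& b) {
        using std::swap;   // fallback when T provides no dedicated swap()
        swap(a, b);        // unqualified call lets ADL pick T's own swap()
    }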
When views contain a primary key column that is not part of the base
table primary key, that column determines whether the row is live or
not. We need to ensure that when that cell, and thus the derived row
marker, is dead, whether by normal deletion or by TTL, the rest of
the row is dead as well.
This patch introduces the idea of a shadowing row marker. We map the
status of the regular base column in the view's PK to the view row's
marker. If this marker is dead, so is that cell in the base table, and
so should the view row become. To enforce that, a view row's dead
marker shadows the whole row if that view includes a base regular
column in its PK.
Fixes #3360
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
When we introduced the CPU scheduler, we also introduced a group
for commitlog - but never used it. There is also doubtful value in
separating reads from writes, since they are often part of the same
workload.
To address that, let's rename the query group to "statement"
(query is not incorrect, just confusing), and move the write path,
currently ungrouped, into it.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Use the querier_cache (represented by the passed-in
querier_cache_context) object to look up saved queriers at the start of
the page and save them at the end of it if it is likely that there will
be more page requests.
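A greatly simplified, self-contained sketch of the per-page flow described
above; querier, querier_cache and serve_page are hypothetical stand-ins for
the real types:

    #include <map>
    #include <optional>
    #include <utility>

    struct querier { int next_row = 0; };
    struct page_result { bool more_rows; };

    struct querier_cache {
        std::map<int, querier> saved;

        std::optional<querier> lookup(int query_id) {
            auto it = saved.find(query_id);
            if (it == saved.end()) {
                return std::nullopt;
            }
            querier q = it->second;
            saved.erase(it);
            return q;
        }
        void insert(int query_id, querier q) {
            saved.emplace(query_id, q);
        }
    };

    page_result serve_page(querier_cache& cache, int query_id, int page_size, int total_rows) {
        // Look up a saved querier at the start of the page, or start fresh.
        querier q = cache.lookup(query_id).value_or(querier{});
        q.next_row += page_size;                   // "read" one page worth of rows
        page_result page{q.next_row < total_rows};
        // Save the querier only if more page requests are likely to follow.
        if (page.more_rows) {
            cache.insert(query_id, q);
        }
        return page;
    }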
Instead of evicting whole partitions, we now evict whole rows.
As part of this, invalidation of partition entries was changed to not
evict from snapshots right away, but unlink them and let them be
evicted by the reclaimer.
For row-level eviction we need to ensure that each version has
complete rows so that eviction from older versions doesn't affect the
value of the row in newer snapshots.
This is achieved by copying the row from an older version before
applying the increment in the new version.
Only evictable entries are affected; memtables are not.
Every evictable version will have a dummy entry at the end so that it can be
tracked in the LRU.
It is also needed to allow old versions to stay around (with
tombstones and static rows) after all rows are evicted. Such versions
must be fully discontinuous, and we need some entry to mark that.
This change is a preparation for introducing row-level eviction, such that entries
can be evicted from older versions without having to touch other versions.
Currently continuity flags on entries are interpreted relative to the
combined view merged from all versions. For example:
v2: <key=2, cont=1>
v1: <key=1, cont=1>
In v2, the flag on entry key=2 marks the range (1, 2) as
continuous. This is problematic because if the old version is evicted, continuity
will change in an incorrect way:
v2: <key=2, cont=1>
Here, the range (-inf, 1) would be marked as continuous, which is not true.
To solve this problem, we change the rules for continuity
interpretation in MVCC. Each version will have its own continuity,
fully specified in that version, independent of continuity of other
versions. Continuity of the snapshot will be a union of continuous
ranges in each version.
It is assumed that continuous intervals in different versions are non-
overlapping, except for points corresponding to complete rows, in
which case a later version may overlap with an older version
(overwrite). We rely on this assumption to simplify calculating the
union of intervals on merging, in particular in
mutation_partition::apply_monotonically().
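A simplified illustration of the new rule, with integer keys standing in for
clustering positions: each version carries its own continuous intervals, and
the snapshot's continuity is their union.

    #include <algorithm>
    #include <vector>

    struct interval { int start; int end; };   // a continuous clustering range

    // Snapshot continuity = union of per-version continuity. With the
    // non-overlapping assumption, merging is sorting plus coalescing of
    // intervals that touch at complete-row points.
    std::vector<interval> snapshot_continuity(const std::vector<std::vector<interval>>& versions) {
        std::vector<interval> all;
        for (const auto& v : versions) {
            all.insert(all.end(), v.begin(), v.end());
        }
        std::sort(all.begin(), all.end(),
                  [](const interval& a, const interval& b) { return a.start < b.start; });
        std::vector<interval> merged;
        for (const auto& i : all) {
            if (!merged.empty() && i.start <= merged.back().end) {
                merged.back().end = std::max(merged.back().end, i.end);
            } else {
                merged.push_back(i);
            }
        }
        return merged;
    }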
MVCC population of incomplete entries already almost maintains the
non-overlapping invariant, because population intervals correspond to
intervals which are incomplete in the old snapshot. The only change
needed is to ensure that both population bounds will have entries in
the latest version. Population from memtables doesn't mark any
intervals as continuous, so it also conforms. The only change needed
there is to not inherit continuity flags from the old snapshot,
effectively making the new version internally discontinuous except for
row points.
The example from the beginning will become:
v2: <key=1, cont=0> <key=2, cont=1>
v1: <key=1, cont=1>
When marking a range as continuous while some of its rows are present
only in older versions, we need to insert entries for those rows in the
latest version, so that the range can be marked as continuous there.
The easiest solution is to
copy the entry from the old version. Another option would be to add
support for incomplete rows and insert such instead. This way we would
avoid duplicating row contents. This optimization is deferred.
This entails computing the cell hash slightly differently: each cell
is hashed individually, and the resulting hash is then added to the
running one.
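A minimal sketch, with a hypothetical hasher, of hashing each cell on its own
and then combining the finished per-cell digest into the running row hash:

    #include <cstdint>
    #include <functional>
    #include <string_view>
    #include <vector>

    // Stand-in for the real per-cell hash; per-cell digests can be cached.
    uint64_t hash_cell(std::string_view cell_bytes) {
        return std::hash<std::string_view>{}(cell_bytes);
    }

    uint64_t hash_row(const std::vector<std::string_view>& cells) {
        uint64_t running = 0;
        for (auto cell : cells) {
            uint64_t cell_hash = hash_cell(cell);
            // Combine the finished per-cell hash into the running hash,
            // rather than feeding raw cell bytes into a single hasher.
            running ^= cell_hash + 0x9e3779b97f4a7c15ULL
                       + (running << 6) + (running >> 2);
        }
        return running;
    }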
Instead of propagating a flag all through the call chain, we detect
whether we are in the new mode based on the employed hash algorithm.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This enables us to only branch once per row on the actual hash
algorithm, instead of once per row data item.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
We add storage to a row to hold the cached hashes of each individual
cell. We don't store the hash in each cell because that would a)
change the cell equality function, and b) require us to change a cell
in a potentially fragmented buffer.
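A hypothetical sketch of the resulting layout: the row keeps a side array of
cached per-cell hashes, parallel to the cells, rather than storing a hash
inside each cell:

    #include <cstdint>
    #include <optional>
    #include <string>
    #include <vector>

    struct cell {
        std::string value;   // possibly fragmented in the real code
        // No hash stored here: it would change cell equality and force
        // rewriting a cell inside a fragmented buffer.
    };

    struct row {
        std::vector<cell> cells;
        // Lazily filled cache of per-cell hashes, parallel to `cells`.
        mutable std::vector<std::optional<uint64_t>> cell_hash_cache;
    };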
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Use the digester class instead of md5_hasher to encapsulate the
decision of which hash algorithm to use.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
"Changes merging in MVCC to apply newer version to older instead of older to
newer.
Before (v0 = oldest):
(((v3 + v2) + v1) + v0)
After:
(v0 + (v1 + (v2 + v3)))
or:
(((v0 + v1) + v2) + v3)
There are several reasons to do this:
1) When continuity merging changes semantics to support eviction
from older versions, it will be easier to implement apply() if we
can assume that we merge newer to older instead of older to
newer, since newer version may have entries falling into a
continuous interval in older, but not the other way around. If we
didn't reverse the order, apply() would have to keep track of the
lower bound of a continuous interval in the right-hand side
argument (older version) as it is applied and update continuity
flags in the left hand side by scanning all entries overlapping
with it. If order is reversed, merging only needs to deal with
the current entry. Also, if we were to keep the old order, we
could not simply move entries from the left hand side as we merge
because we need to keep track of the lower bound of a continuous
interval, and we need to provide monotonic exception
guarantees. So merging would be both more complicated and slower.
2) With large partitions older versions are typically larger than
newer versions, and since merging is O(N_right*(1 + log(N_left))),
it's better to merge newer into older.
This fixes latency spikes seen in perf_cache_eviction.
Fixes #2715."
* tag 'tgrabiec/reverse-order-of-mvcc-version-merging-v1' of github.com:scylladb/seastar-dev:
mvcc: Reverse order of version merging
anchorless_list: Introduce last()
mvcc: Implement partition_entry::upgrade() using squashed()
mvcc: Extract version merging functions
mutation_partition: Add rows_entry::set_dummy()
position_in_partition: Introduce after_key()
Change merging to apply newer version to older instead of older to
newer.
Before:
(((v3 + v2) + v1) + v0)
After:
(v0 + (v1 + (v2 + v3)))
or equivalent:
(((v0 + v1) + v2) + v3)
There are several reasons to do this:
1) When continuity merging changes semantics to support eviction
from older versions, it will be easier to implement apply() if we
can assume that we merge newer to older instead of older to
newer, since newer version may have entries falling into a
continuous interval in older, but not the other way around. If we
didn't reverse the order, apply() would have to keep track of the
lower bound of a continuous interval in the right-hand side
argument (older version) as it is applied and update continuity
flags in the left hand side by scanning all entries overlapping
with it. If order is reversed, merging only needs to deal with
the current entry. Also, if we were to keep the old order, we
could not simply move entries from the left hand side as we merge
because we need to keep track of the lower bound of a continuous
interval, and we need to provide monotonic exception
guarantees. So merging would be both more complicated and slower.
2) With large partitions older versions are typically larger than
newer versions, and since merging is O(N_right*(1 + log(N_left))),
it's better to merge newer into older.
Fixes #2715.
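A greatly simplified, self-contained sketch of the reversed merge direction,
with std::map standing in for a partition version: the newer version is
applied into the older one, so the final state accumulates in the oldest
version.

    #include <list>
    #include <map>
    #include <string>
    #include <utility>

    // Greatly simplified stand-in for a partition version.
    using version = std::map<int, std::string>;

    // Merge `newer` into `older`; on key conflicts the newer value wins.
    void apply(version& older, version&& newer) {
        for (auto& [key, value] : newer) {
            older.insert_or_assign(key, std::move(value));
        }
    }

    // Versions are ordered oldest (front) to newest (back). Each step applies
    // the newest remaining version into its predecessor, i.e.
    // (v0 + (v1 + (v2 + v3))); the merged state ends up in the oldest version.
    void squash(std::list<version>& versions) {
        while (versions.size() > 1) {
            auto newest = std::prev(versions.end());
            auto older = std::prev(newest);
            apply(*older, std::move(*newest));
            versions.erase(newest);
        }
    }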
We pass the timeout that we received from data_query/mutation_query
down to consume, which is responsible for actually reading the data.
To make those timeouts actionable, though, we'll have to patch
fill_buffer(). This will happen in the next patch.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
data_query and mutation_query are patched so that they start accepting a
per-query timeout. We default to no timeout, so no callers
need to be changed yet.
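A hedged sketch, with hypothetical types and signature, of how an optional
per-query timeout with a "no timeout" default can be added without touching
existing callers:

    #include <chrono>

    using timeout_clock = std::chrono::steady_clock;

    // "No timeout" sentinel used as the default, so callers that don't pass
    // a timeout keep their current behaviour.
    constexpr timeout_clock::time_point no_timeout = timeout_clock::time_point::max();

    struct read_command {};
    struct query_result {};

    // Hypothetical shape of the patched entry point: the timeout is accepted
    // here and passed down to the code that actually reads the data
    // (fill_buffer(), in a later patch).
    query_result data_query(const read_command& cmd,
                            timeout_clock::time_point timeout = no_timeout) {
        (void)cmd;
        (void)timeout;
        return {};
    }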
Signed-off-by: Glauber Costa <glauber@scylladb.com>
This fixes the problem of equal_continuity() being prone to false
positives due to redundant information (extra dummy rows) present in
one of the partitions. get_continuity() is minified, so it is not prone
to this.