This patch ensures we correctly serialize range tombstones for dense
non-compound schemas, which until now assumed the bounds were compound
composite. We also fix the reading function, which assumed the same
thing. This affected Apache Cassandra compatibility.
Fixes #2986
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This patch adds support to sstable_writer to be able to control
correct range tombstone serialization.
When range tombstone serialization will be fixed in subsequent
patches, it will only be enabled when the whole cluster supports the
feature to allow for rollbacks.
The feature needs to be enabled for an sstable as a whole, to prevent
problems with it being enabled during an sstable write.
Thus, the sstable writer will pass on this information to the sstable
methods that carry out the actual file writing.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This patch adds a cluster feature to enable correct serialization of
non-compound range tombstones. We thus support rollbacks during an
upgrade, as we will only change range tombstone serialization when the
cluster is fully upgraded and all nodes are capable of reading the new
format.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This patch changes the range tombstone read path to deal with
correctly written non-compound range tombstones, while also
maintaining backward compatibility and reading old Scylla-generated
range tombstones.
The fix for the write path will activate an sstable feature which will
connect with this patch.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
We cannot represent ranged deletions with non-inclusive bounds on our
current storage format for schemas that are non-compound, since the
clustering key won't include the EOC byte.
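To illustrate why the EOC byte matters, here is a minimal sketch of the two encodings (the function names and layout details are illustrative, not Scylla's actual serialization code): a compound composite stores each component with a trailing end-of-component byte that can mark a bound as exclusive, while a non-compound key is just the raw value, with no place to record inclusivity.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Compound composite component: <2-byte big-endian length><value><EOC byte>.
// The EOC byte is where bound inclusivity can be encoded.
static std::vector<uint8_t> serialize_compound(const std::string& v, int8_t eoc) {
    std::vector<uint8_t> out;
    out.push_back(uint8_t(v.size() >> 8));
    out.push_back(uint8_t(v.size() & 0xff));
    out.insert(out.end(), v.begin(), v.end());
    out.push_back(uint8_t(eoc));
    return out;
}

// Non-compound clustering key: the raw value only. There is no EOC byte,
// so a non-inclusive range bound cannot be represented.
static std::vector<uint8_t> serialize_non_compound(const std::string& v) {
    return std::vector<uint8_t>(v.begin(), v.end());
}
```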
Refs #2986
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Promoted indexes generated by Scylla before this patch are considered
incorrect if they belong to a non-compound schema, due to #2993.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This patch adds additional metadata to the scylla sstable component.
Namely, it adds a list of features that the current sstable supports.
The upcoming usages of the feature list are meant for backward
compatibility, but the implementation makes no such assumptions.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This patch refactors writing a promoted index entry to leverage the
column_name_writer. It not only reduces code duplication, but also
solves two important bugs:
1) Column names for schema types other than compound non-dense were
not correctly serialized, as the wrong overload of
write_column_name() was being called, which assumed the specified
composite to be compound.
2) Before, for some schema types we were passing an empty
clustering_key to maybe_flush_pi_block(), which caused it to bypass
appending open range tombstones to the data file, causing wrong
query results to be returned.
Fixes #2979
Fixes #2992
Fixes #2993
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This patch lifts the logic to write a column name depending on the
schema's denseness and compoundness into a function, so that it may
later be reused in other places. We still duplicate the same logic
when writing a clustered row because the index writer requires it for
now.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
A schema can only have static columns if it has at least one
clustering column. A schema with a clustering column is always
compound, unless it is created with compact storage. A schema created
with compact storage cannot have static columns, so we can remove dead
code from the sstable write path.
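The chain of implications above can be sketched as a simple predicate (the struct and names are illustrative, not Scylla's actual schema types): static columns require a clustering column, and compact storage rules static columns out, so the combination the dead code handled can never occur.

```cpp
// Illustrative sketch of the reasoning, not real Scylla types.
struct schema_traits {
    bool has_clustering_columns;
    bool compact_storage;
};

// Static columns are possible only with at least one clustering column
// and without compact storage.
static bool may_have_static_columns(const schema_traits& s) {
    return s.has_clustering_columns && !s.compact_storage;
}
```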
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Encapsulate the decision to write the row_marker and to write a
corresponding entry in the promoted index. We now avoid writing the
index entry if there is no row marker, and just start indexing the row
at the first cell.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
It's hard to make sense of the metric transport.requests_blocked_memory
because it shows a queue size. Especially in production setups scraping
every 15 seconds, that doesn't tell us much.
We solve that the same way as other layers that record blocking: by
providing both a cumulative requests_blocked_memory count and the
instantaneous requests_blocked_memory_current queue size.
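The counter/gauge pair can be sketched as follows (the struct is illustrative; only the two metric names come from the patch): the cumulative total is monotonic, so rate() over scrapes shows how often requests block, while the current value still exposes the instantaneous queue size.

```cpp
#include <cstdint>

// Illustrative sketch of the two metrics, not the actual Seastar code.
struct blocked_on_memory_metrics {
    uint64_t total = 0;    // exported as requests_blocked_memory (monotonic count)
    uint64_t current = 0;  // exported as requests_blocked_memory_current (queue size)

    void on_request_blocked()   { ++total; ++current; }
    void on_request_unblocked() { --current; }
};
```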
Fixes#3010
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20171123033329.32596-1-glauber@scylladb.com>
Prometheus histograms have 3 embedded metrics: count, buckets, and sum.
Currently we fill in count and buckets, but sum is left at 0. This is
particularly bad since, according to the Prometheus documentation, the
best way to calculate histogram averages is to write:
rate(metric_sum[5m]) / rate(metric_count[5m])
One way of keeping track of the sum is to add the sampled value every
time we sample. However, the estimated histogram interface has a
method, add_nano(), that adds a sample while also adjusting the count
for missing samples.
That makes accumulating a sum inaccurate, as we have no values for the
points that were added. To overcome that, when we call add_nano(), we
pretend we are introducing new_count - _count samples, all with the
same value.
Long term, doing away with sampling may help us provide more accurate
results.
After this patch, we are able to correctly calculate latency averages
through the data exported in prometheus.
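The accounting described above can be sketched like this (illustrative, not the exact estimated_histogram interface): when add_nano() moves the total count forward by more than one, every missed sample is pretended to have had the observed value, so the sum stays consistent with the count.

```cpp
#include <cstdint>

// Illustrative sketch of keeping _sum consistent with _count when
// add_nano() accounts for missed samples.
struct histogram_sum_sketch {
    uint64_t _count = 0;
    double _sum = 0;

    void add_nano(double sampled_value, uint64_t new_count) {
        // Pretend the (new_count - _count) missing samples all had
        // the same value as the one we observed.
        _sum += sampled_value * double(new_count - _count);
        _count = new_count;
    }

    double average() const { return _count ? _sum / double(_count) : 0; }
};
```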
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20171122144558.7575-1-glauber@scylladb.com>
* seastar-dev.git haaawk/flat_reader_remove_read_rows:
sstable_mutation_test: use read_rows_flat instead of read_rows
perf_sstable: use read_rows_flat instead of read_rows
Remove sstable::read_rows
Introduce sstable::read_row_flat and sstable::read_range_rows_flat methods
and use them in sstable::as_mutation_source.
* https://github.com/scylladb/seastar-dev/tree/haaawk/flat_reader_sstables_v3:
Introduce conversion from flat_mutation_reader to streamed_mutation
Add sstables::read_rows_flat and sstables::read_range_rows_flat
Turn sstable_mutation_reader into a flat_mutation_reader
sstable: add getter for filter_tracker
Move mp_row_consumer methods implementations to the bottom
Remove unused sstable_mutation_reader constructor
Replace "sm" with "partition" in get_next_sm and on_sm_finished
Move advance_to_upper_bound above sstable_mutation_reader
Store sstable_mutation_reader pointer in mp_row_consumer
Stop using streamed_mutation in consumer and reader
Stop using streamed_mutation in sstable_data_source
Delete sstable_streamed_mutation
Introduce sstable::read_row_flat
Migrate sstable::as_mutation_source to flat_mutation_reader
Remove single_partition_reader_adaptor
Merge data_consume_context::impl into data_consume_context
Create data_consume_context_opt.
Merge on_partition_finished into mark_partition_finished
Check _partition_finished instead of _current_partition_key
Merge sstable_data_source into sstable_mutation_reader
Remove sstable_data_source
Remove get_next_partition and partition_header
to check whether the partition is finished. In the next patch,
_current_partition_key will be merged with sstable_data_source::_key
and won't be cleared any more.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
This will be used in sstable_mutation_reader before the first
fill_buffer is called and a proper data_consume_context is created.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Since we want to support cross building, we shouldn't hardcode the GPG
file path, even though these files are provided by recent versions of
mock.
This fixes a build error on some older build environments such as
CentOS-7.2.
Fixes#3002
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1511277722-22917-1-git-send-email-syuu@scylladb.com>
These patches convert queries (data, mutation and counter) to flat
mutation readers. All of them already use consume_flattened() to
consume a flat stream of data, so the only major missing piece was
adding support for reversed partitions to
flat_mutation_reader::consume().
* pdziepak flat_mutation_reader-queries/v3-rebased:
flat_mutation_reader: keep reference to decorated key valid
flat_mutation_reader: support consuming reversed partitions
tests/flat_mutation_reader: add test for
flat_mutation_reader::consume()
mutation_partition: convert queries to flat_mutation_readers
tests/row_cache_stress_test: do not use consume_flattened()
mutation_reader: drop consume_flattened()
streamed_mutation: drop reverse_streamed_mutation()
Some queries may need the fragments that belong to a partition to be
emitted in reversed order. Current support for that is very limited
(see #1413), but should work reasonably well for small partitions.