"This patch series adds support for the `duration` type in CQL, which
was added to Cassandra in 3.10.
As part of this work, it was also necessary to add support for the
`vint` and `unsigned vint` types to the native protocol implementation,
which are part of v5 of the specification.
To test interactively, it is necessary to use the cqlsh distributed
with Cassandra, as the version we distribute does not yet support the
duration type."
* 'jhk/duration_protocol/v5' of https://github.com/hakuch/scylla:
Support `duration` CQL native type
CQL native protocol: Add support for `vint` serialization
duration_test.cc: Add test for printing zero duration
duration.cc: Remove nop `const` qualifier on return type
Change `const` qualifier declaration order for `duration`
duration.cc: Simplify range checking
Rename `duration` to `cql_duration`
Now that we don't go directly to reconciliation for range queries, the
result isn't required to have the row and partition counts calculated
(we no longer transform a reconciled_result into a query::result).
Furthermore, this line was causing a lot of dtests to fail because they
did not expect an error line in the logs.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170810225351.12610-1-duarte@scylladb.com>
`duration` is a new native type that was introduced in Cassandra 3.10 [1].
Support for parsing and the internal representation of the type was added in
8fa47b74e8.
Important note: The version of cqlsh distributed with Scylla does not yet
support durations (support was added to Cassandra in [2]). To test this
change, you can use the cqlsh distributed with Cassandra.
Duration types are useful when working with time-series tables, because they can
be used to manipulate date-time values in relative terms.
Two interesting applications are:
- Aggregation by time intervals [3]:
`SELECT * FROM my_table GROUP BY floor(time, 3h)`
- Querying on changes in date-times:
`SELECT ... WHERE last_heartbeat_time < now() - 3h`
(Note: neither of these is currently supported, though columns with duration
values are.)
Internally, durations are represented as three signed counters: one for months,
one for days, and one for nanoseconds. Each of these counters is serialized
using a variable-length encoding described in version 5 of the CQL native
protocol specification.
The representation of a duration as three counters means that no semantic
ordering on durations exists: Is `1mo` greater than `30d`? We cannot know,
because some months have more days than others. Durations only have a concrete
absolute value when they are "attached" to an absolute date-time reference, as
in `2015-04-30 at 12:00:00 + 1mo`.
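For illustration only (this helper is hypothetical, not code from the patch), here is a minimal Python sketch of attaching a month counter to an absolute date; the clamping to the target month's length is exactly why a bare month count has no fixed size:

```python
import calendar
import datetime

def add_months(d: datetime.date, months: int) -> datetime.date:
    # Hypothetical helper: attach a month counter to an absolute date.
    # The day is clamped to the target month's length, which is why
    # "+ 1mo" does not correspond to a fixed number of days.
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return datetime.date(year, month, day)
```

Adding one month to 2015-01-15 advances 31 days, while adding one month to 2015-02-15 advances only 28.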
The fact that duration values are not comparable presents some difficulties for
the implementation, because most CQL types are. As in Cassandra's
implementation [2], I adopted a strategy similar to the way restrictions on the
`counter` type are checked. A type "references" a duration if it either is a
duration or contains one (like a `tuple<..., duration, ...>`, or a UDT with a
duration member).
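As an illustration of that strategy (the type descriptors here are hypothetical stand-ins, not the actual C++ types), the recursive check can be sketched in Python:

```python
def references_duration(t) -> bool:
    # Hypothetical type descriptor: a scalar type is a string such as
    # "int" or "duration"; a compound type is a (kind, element_types)
    # pair, e.g. ("tuple", ["int", "duration"]) or ("udt", [...]).
    if t == "duration":
        return True
    if isinstance(t, tuple):
        _kind, elements = t
        return any(references_duration(e) for e in elements)
    return False
```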
The following restrictions apply to durations. Note that some of these contexts
are either experimental features (materialized views) or not currently
supported at run-time (though support exists in the parser and code, so it is
prudent to add the restrictions now):
- Durations cannot appear in any part of a primary key, either for tables or
materialized views.
- Durations cannot be directly used as the element type of a `set`, nor can they
be used as the key type of a `map`. Because the internal ordering of durations
is based on a byte-level comparison, this restriction (inherited from
Cassandra) is intended to avoid user confusion around the ordering of
collection elements.
- Secondary indexes on durations are not supported.
- "Slice" relations (<=, <, >=, >) are not supported on durations with `WHERE`
restrictions (like `SELECT ... WHERE span <= 3d`). Multi-column restrictions
only work with clustering columns, which cannot be `duration` due to the
first rule.
- "Slice" relations are not supported on durations with query conditions (like
`UPDATE my_table ... IF span > 5us`).
Backwards incompatibility note:
As described in the documentation [4], duration literals take one of two forms:
one of the three ISO 8601 formats, or a "standard" format. The ISO 8601 formats
start with "P" (like "P5W"). Therefore, identifiers of this form are no longer
supported.
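To illustrate the shape of the now-reserved identifiers (this regex is a hypothetical sketch, not the parser's actual grammar):

```python
import re

# Rough sketch of the ISO 8601 duration shape: "P" followed by date
# components and/or a "T"-prefixed time part.
ISO_DURATION_RE = re.compile(
    r"^[+-]?P(?=\d|T)(\d+Y)?(\d+M)?(\d+W)?(\d+D)?"
    r"(T(?=\d)(\d+H)?(\d+M)?(\d+S)?)?$",
    re.IGNORECASE,
)
```

An identifier like `P5W` matches this shape, while ordinary identifiers such as `probe` do not.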
Fixes #2240.
[1] https://issues.apache.org/jira/browse/CASSANDRA-11873
[2] bfd57d13b7
[3] https://issues.apache.org/jira/browse/CASSANDRA-11871
[4] http://cassandra.apache.org/doc/latest/cql/types.html#working-with-durations
Version 5 of the native protocol for CQL [1] adds the `vint` and `unsigned vint`
types.
An unsigned integer encoded as a `vint` has a variable size based on the
magnitude of the value. The number of leading set bits in the first byte
indicates how many additional bytes follow.
For signed integers, a "zig-zag" encoding scheme ensures that values of small
magnitude are encoded as short-length `vint`s (0 -> 0, -1 -> 1, 1 -> 2,
-2 -> 3, 2 -> 4, etc.).
[1] https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v5.spec
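The zig-zag mapping itself can be sketched in Python (function names are mine, not from the patch; the leading-bits length prefix is a separate step of the `vint` byte encoding):

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def zigzag_encode(n: int) -> int:
    # Map a signed 64-bit value to an unsigned one so that values of
    # small magnitude get small codes: 0->0, -1->1, 1->2, -2->3, 2->4, ...
    return ((n << 1) ^ (n >> 63)) & MASK64

def zigzag_decode(z: int) -> int:
    # Inverse mapping: recover the signed value from the zig-zag code.
    return (z >> 1) ^ -(z & 1)
```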
"sstables will sometimes have narrow/disjont ranges (e.g. LCS L1+).
This can be exploited when reading from a range of sstables by opening
sstables on-demand thus saving memory, processing and potentially I/O.
To achieve this combined_mutation_reader is refactored such that the
reader selection logic is moved-out into a reader_selector class.
combined_mutation_reader now takes a reader_selector instance in its
constructor and asks it for new readers for the current ring position
on every call to operator()().
At the moment, two specializations of reader_selector are provided:
* list_reader_selector, which implements the current logic, using
a provided mutation_reader list, and
* incremental_reader_selector, which implements the on-demand opening
logic discussed above.
Fixes #1935"
* 'bdenes/optimize_combined_reader-v6' of https://github.com/denesb/scylla:
Add combined_mutation_reader_test unit test
Remove range_sstable_reader
Add incremental_reader_selector
Add reader_selector to combined_mutation_reader
sstable_set::incremental_selector: select() now returns a selection
incremental_reader_selector is a specialization of reader_selector for
the case where sstables have narrow and/or disjoint token ranges. To
exploit this, it creates new readers on demand when their sstable's
token range intersects the current ring position.
combined_mutation_reader now accepts as a constructor argument a
reader_selector instance whose task is to create new readers on
each call to operator()(), if needed and possible.
This makes it possible to control how readers are created through
different specializations of reader_selector.
The previous logic is refactored into list_reader_selector, which
uses a pre-provided mutation_reader list and forwards all of its
readers to combined_mutation_reader at once.
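The interface can be sketched as follows (a hypothetical Python analogue of the C++ classes, not the actual code):

```python
class ReaderSelector:
    # Hypothetical analogue of reader_selector: given the current
    # position, return any new readers that should be added.
    def create_new_readers(self, position):
        raise NotImplementedError

class ListReaderSelector(ReaderSelector):
    # Analogue of list_reader_selector: hands over the whole
    # pre-provided reader list on the first call, nothing afterwards.
    def __init__(self, readers):
        self._readers = list(readers)

    def create_new_readers(self, position):
        readers, self._readers = self._readers, []
        return readers
```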
We are moving to aptly for releasing .deb packages, which requires changes to
the Debian repository structure.
After the change, we will share the 'pool' directory between distributions.
However, our .deb package names for a specific release are exactly the same
across distributions, so the file names conflict.
To avoid this problem, we need to append the distribution name to the package
version.
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1502312935-22348-1-git-send-email-syuu@scylladb.com>
A selection contains - in addition to the list of sstables - a next_token,
which is a hint as to the next best token to call select() with.
This should be the smallest token such that the next call to
select() returns the fewest new sstables, without skipping any.
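A hypothetical Python sketch of this selection logic (tokens simplified to integers; not the actual C++ implementation):

```python
def select(sstables, token):
    # `sstables` is a list of (first_token, last_token, name) tuples.
    # Returns the sstables whose range contains `token`, plus a
    # next_token hint: the smallest token at which a not-yet-reached
    # sstable starts, so the next call returns the fewest new sstables
    # without skipping any.
    selected = [s for s in sstables if s[0] <= token <= s[1]]
    upcoming = [s[0] for s in sstables if s[0] > token]
    next_token = min(upcoming) if upcoming else None
    return selected, next_token
```

With sstables [(0, 10, "a"), (5, 20, "b"), (30, 40, "c")], selecting at token 7 picks "a" and "b" and hints 30 as the next interesting token.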
"With this series, all the following cluster operations:
- bootstrap
- rebuild
- decommission
- removenode
will use the same code to do the streaming.
The range_streamer is now extended to support both fetch from and push
to peer node. Another big change is now the range_streamer will stream
less ranges at a time, so less data, per stream_plan and range_streamer
will remember which ranges are failed to stream and can retry later.
The retry policy is very simple at the moment it retries at most 5 times
and sleep 1 minutes, 1.5^2 minutes, 1.5^3 minutes ....
Later, we can introduce api for user to decide when to stop retrying and
the retry interval.
The benefits:
- All the cluster operation shares the same code to stream
- We can know the operation progress, e.g., we can know total number of
ranges need to be streamed and number of ranges finished in
bootstrap, decommission and etc.
- All the cluster operation can survive peer node down during the
operation which usually takes long time to complete, e.g., when adding
a new node, currently if any of the existing node which streams data to
the new node had issue sending data to the new node, the whole bootstrap
process will fail. After this patch, we can fix the problematic node
and restart it, the joining node will retry streaming from the node
again.
- We can fail streaming early and timeout early and retry less because
all the operations use stream can survive failure of a single
stream_plan. It is not that important for now to have to make a single
stream_plan successful. Note, another user of streaming, repair, is now
using small stream_plan as well and can rerun the repair for the
failed ranges too.
This is one step closer to supporting the resumable add/remove node
opeartions."
* tag 'asias/use_range_streamer_everywhere_v4' of github.com:cloudius-systems/seastar-dev:
storage_service: Use the new range_streamer interface for removenode
storage_service: Use the new range_streamer interface for decommission
storage_service: Use the new range_streamer interface for rebuild
storage_service: Use the new range_streamer interface for bootstrap
dht: Extend range_streamer interface
We experienced that 'Constructing RAID volume...' takes too much time on some
AMIs; this is because the setup script gets stuck at 'yum -y install mdadm
xfsprogs'.
We don't have to install these packages at AMI startup time; we should
preinstall them at AMI creation time.
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1502192796-21040-1-git-send-email-syuu@scylladb.com>
* The cluster_name configuration is commented out in the config file.
* The default value is set to the empty string; if not overridden by the
user, a warning is printed and the value is reset to "ScyllaDB Cluster".
Fixes #2648.
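A hypothetical sketch of the described fallback (not the actual C++ config code):

```python
DEFAULT_CLUSTER_NAME = "ScyllaDB Cluster"

def effective_cluster_name(configured: str) -> str:
    # Hypothetical helper mirroring the behavior above: an empty
    # (i.e. unset) cluster_name produces a warning and the default name.
    if not configured:
        print("warning: cluster_name not set; using %r" % DEFAULT_CLUSTER_NAME)
        return DEFAULT_CLUSTER_NAME
    return configured
```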
Message-Id: <20170808113322.9313-1-daniel@scylladb.com>
ScyllaDB loves Python & Python loves ScyllaDB.
It would benefit the project to start enforcing some code guidelines
and basic QA with a linter, along with PEP 8 compliance, thanks to flake8.
This patch adds a tox config to at least start with an assessment
of the work to be done on all .py files in the code base.
To reduce noise, checks on long lines (> 80 chars) are ignored
for now.
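For illustration, a minimal tox/flake8 config along these lines might look like the following (the env name and exact option spelling are assumptions, not copied from the patch):

```ini
[tox]
envlist = flake8
skipsdist = true

[testenv:flake8]
deps = flake8
commands = flake8 .

[flake8]
# long-line checks (E501) are ignored for now to reduce noise
ignore = E501
```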
Signed-off-by: Ultrabug <ultrabug@gentoo.org>
Message-Id: <20170726134242.8927-1-ultrabug@gentoo.org>
The index file's output stream uses write-behind, but it is not closed
when the sstable write fails, which may lead to a crash.
The same happened before for the data file (for which it is obviously
easier to reproduce) and was fixed by 0977f4fdf8.
Fixes #2673.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170807171146.10243-1-raphaelsc@scylladb.com>
After this patch and the following patches that use the new
range_streamer interface, all the following cluster operations:
- bootstrap
- rebuild
- decommission
- removenode
will use the same code to do the streaming.
The range_streamer is now extended to support both fetching from and
pushing to peer nodes. Another big change is that the range_streamer
will now stream fewer ranges at a time, and thus less data, per
stream_plan, and the range_streamer will remember which ranges failed
to stream so it can retry them later.
The retry policy is very simple at the moment: it retries at most 5
times, sleeping 1 minute, 1.5^2 minutes, 1.5^3 minutes, and so on.
Later, we can introduce an API for the user to decide when to stop
retrying and what the retry interval should be.
The benefits:
- All the cluster operations share the same streaming code.
- We can track the operation's progress, e.g., the total number of
ranges that need to be streamed and the number of ranges finished
during bootstrap, decommission, etc.
- All the cluster operations can survive a peer node going down during
the operation, which usually takes a long time to complete. For
example, when adding a new node, currently if any existing node
streaming data to the new node has an issue sending that data, the
whole bootstrap process fails. After this patch, we can fix the
problematic node and restart it, and the joining node will retry
streaming from that node again.
- We can fail and time out streaming early, and retry less, because
all operations that use streaming can survive the failure of a single
stream_plan. It is no longer important to make every single
stream_plan successful. Note that another user of streaming, repair,
now uses small stream_plans as well and can rerun the repair for the
failed ranges too.
This is one step closer to supporting resumable add/remove-node
operations.
* seastar f14d2a3...7a49ae5 (8):
> sharded: improve support for cooperating sharded<> services
> sharded: support for peer services
> semaphore: add a version of with_semaphore that takes a duration timeout
> scripts: perftune.py: fix the CPU mask generation for more than 64 CPUs
> Revert "future-utils: make when_all() (vector variant) exception safe"
> Revert "future-utils: fix gross compilation errors in when_all()"
> future-utils: fix gross compilation errors in when_all()
> future-utils: make when_all() (vector variant) exception safe
Includes a change to the batchlog_manager constructor to adapt it to
the seastar::sharded::start() change.
This reverts commit 98757069a5. We have the failure detector, which
will detect an unresponsive node and fail the RPC. Adding a timeout
would just introduce false positives.
In commit f38e4ff3f, we separated streaming reads from normal reads
for the purpose of determining the maximum number of reads going on.
However, this left us totally unaware of how many reads are happening
on behalf of streaming, which can be important information when
debugging issues.
This patch adds that metric so we don't fly blind.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <1501909973-32519-1-git-send-email-glauber@scylladb.com>
We have a problem running fstrim with nomerges=2, so we need to change
the parameter to 1 during fstrim execution.
To do this, this fix changes the following things:
- revert dropping scylla_fstrim on Ubuntu 16.04/CentOS
- disable the distribution-provided fstrim script
- enable scylla_fstrim on all distributions
- introduce --set-nomerges on scylla-blocktune
- have scylla_fstrim call scylla-blocktune in the following order:
  - 'scylla-blocktune --set-nomerges 1'
  - 'fstrim' for each device
  - 'scylla-blocktune --set-nomerges 2'
Fixes #2649
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1501531393-21109-1-git-send-email-syuu@scylladb.com>
Streaming reads and normal reads share a semaphore, so if a bunch of
streaming reads use all available slots, no normal reads can proceed.
Fix by assigning streaming reads their own semaphore; they will compete
with normal reads once issued, and the I/O scheduler will determine the
winner.
Fixes #2663.
Message-Id: <20170802153107.939-1-avi@scylladb.com>