The uninitialized session has no peer associated with it yet. There is
no point sending the failed message when aborting the session. Sending the
failed message in this case would send it to a peer with an uninitialized
dst_cpu_id, which would cause the receiver to pass a bogus shard id to
smp::submit_to, causing a segfault.
In addition, to be safe, initialize dst_cpu_id to zero, so that an
uninitialized session will send messages to shard zero instead of to a
random bogus shard id.
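The two parts of the fix can be sketched as follows (the type and function names here are illustrative stand-ins, not Scylla's actual repair code):

```cpp
#include <cstdint>

// Hypothetical stand-in for the repair session state; the real type lives
// in Scylla's repair code and has many more fields.
struct repair_session {
    uint32_t dst_cpu_id = 0;  // default to shard zero instead of garbage
    bool has_peer = false;    // set once a peer is associated
};

// Only send the failed message when a peer is actually associated; an
// uninitialized session has nobody to notify.
bool should_send_failed_message(const repair_session& s) {
    return s.has_peer;
}
```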
Fixes the segfault issue found by
repair_additional_test.py:RepairAdditionalTest.repair_abort_test
Fixes #3115
Message-Id: <9f0f7b44c7d6d8f5c60d6293ab2435dadc3496a9.1515380325.git.asias@scylladb.com>
After 611774b, we're blind again on which sstable caused a compaction
to fail, leaving us with a cryptic message as follows:
compaction_manager - compaction failed: std::runtime_error (compressed
chunk failed checksum)
After this change, now both read failure in compaction or regular read
will report the guilty sstable, see:
compaction_manager - compaction failed: std::runtime_error (SSTable reader
found an exception when reading sstable ./data/.../keyspace1-standard1
ka-1-Data.db : std::runtime_error(compressed chunk failed checksum))
Fixes #3006.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20180102230752.14701-1-raphaelsc@scylladb.com>
"This patchset implements the compaction controller for I/O shares. The
goal is to automatically adjust compaction shares based on a
strategy-specific backlog. A higher backlog will translate into higher
shares.
As compaction progresses, that reduces the backlog. As new data is
flushed, that increases the backlog. The goal of the controller is to
keep the backlog constant at a certain rate, so that we go neither
too fast nor too slow.
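The backlog-to-shares mapping can be sketched as a simple clamped linear controller (a minimal illustration under assumed parameters; the actual controller in this patchset uses the same backlog algorithm as the memtable controller, not this exact function):

```cpp
#include <algorithm>

// Map a backlog estimate to I/O shares: a higher backlog translates into
// higher shares, clamped to the scheduler's valid range.
int shares_for_backlog(double backlog, double max_backlog,
                       int min_shares, int max_shares) {
    double f = std::clamp(backlog / max_backlog, 0.0, 1.0);
    return min_shares + static_cast<int>(f * (max_shares - min_shares));
}
```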
Tracking reads and writes:
==========================
Tracking of reads and writes happen through the read_monitor and the
write_monitor. The write monitor is an existing interface that has the
purpose of releasing the write permit at particular points of the write
process. We enhance it to get a reference to an instance that tracks
the current offset inside the sstables::file_writer. This way the
backlog tracker can always know for sure what's the offset of the
current write.
A similar thing is done for reads. The data_consumer already tracks the
position of the current read, and we isolate that into a structure to
which we can get a reference. A read_monitor allows us to connect the
compaction to that reference.
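A minimal sketch of the reference-based tracking described above (names are hypothetical; the real interfaces live in sstables::file_writer and data_consumer):

```cpp
#include <cstdint>

// The writer owns the tracker; monitors keep only a reference to it, so
// the backlog tracker can always read the current position.
struct offset_tracker {
    uint64_t offset = 0;
};

struct file_writer {
    offset_tracker tracker;
    void write(uint64_t n) { tracker.offset += n; }  // advance the position
};

struct write_monitor {
    const offset_tracker* tracker = nullptr;
    void attach(const offset_tracker& t) { tracker = &t; }
    uint64_t current_offset() const { return tracker ? tracker->offset : 0; }
};
```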
Lifetime management:
====================
In general, tracking objects will be owned by their callers and passed
down as references. The compaction object will own the read monitors and
the compaction write monitors, and the memtable flush write monitor will
be kept alive in a do_with block around the flush itself.
The backlog_{write,read}_progress_manager needs to be kept alive until
the SSTable is no longer in progress. For writes, that means until we
are able to add the SSTable charges in full, and for reads (compaction)
that means until we are able to remove the charges in full.
It is important to do that to avoid spikes in the graph. If we remove
the progress managers in a different operation than updating the SSTable
list we will be left in a temporary state where charges appear or
disappear abruptly, to be fixed when the final
add_sstable/remove_sstable happens. So we want those things to happen
together.
The compaction_backlog_tracker is kept alive until the strategy changes,
for example, through ALTER TABLE. Current charges are transferred to the
new strategy's compaction_backlog_tracker object when we do that. If the
type of strategy changes, the current read charges are forgotten. We can
do that because those running compactions will not really contribute to
decreasing the backlog of the new compaction strategy.
Transfer of Charges
==================
When ALTER TABLE happens, we need to transfer ongoing writes to the new
backlog manager. Ongoing reads will still be tracked by the
backlog_manager that originated them.
The rationale for that is that reads still belong to the current
compaction, with the strategy that generated them. But new SSTables being
written will add to the backlog of the new strategy.
Note that ALTER TABLE operations do not necessarily cause a change of
strategy. We can be using the same strategy but just changing
properties. If that is the case, we expect no discontinuity in the
backlog graph (tested).
Resharding
==========
Resharding compactions are more complex than normal compactions because
the SSTables are created in one shard and later sent to another shard.
It is better, then, to track resharding compactions separately and let
them have their own backlog tracker, which will insert backlog in
proportion to the amount of data to be resharded.
Memtable Flush I/O Controller
=============================
With the current infrastructure it becomes trivial to add a new
controller, for either I/O or CPU. This patchset then adds an I/O
controller for memtable flushes, using the same backlog algorithm that
we already used for CPU."
* 'compaction-controller-io-v5' of github.com:glommer/scylla:
database: add a controller for I/O on memtable flushes.
document the compaction controller
compaction: adjust shares for compactions
backlog_controllers: implement generic I/O controller
factor out some of the controller code
io shares: multiply all shares by 10
compaction_strategy: implement backlog manager for the SizeTiered strategy
infrastructure for backlog estimator for compaction work.
sstables: notify about end of data component write
sstables: add read_monitor_generator
sstables: add read_monitor
sstables: enhance data consumer with a position tracker
sstables: enhance the file_writer with an offset tracker
sstables: pass references instead of pointers for write_monitor
compaction: control destruction of readers
'"The issue is triggered by compaction of sstables of level higher than 0.
The problem happens when interval map of partitioned sstable set stores
intervals such as follow:
[-9223362900961284625 : -3695961740249769322 ]
(-3695961740249769322 : -3695961103022958562 ]
When the selector is called for the first interval above, the exclusive
lower bound of the second interval is returned as the next token, but the
inclusiveness info is not returned.
So reader_selector was returning that there *were* new readers when
the current token was -3695961740249769322 because it was stored in
selector position field as inclusive, but it's actually exclusive.
This false positive was leading to infinite recursion in combined
reader because sstable set's incremental selector itself knew that
there were actually *no* new readers, and therefore *no* progress
could be made.
Fixes #2908."
* 'high_level_compaction_infinite_recursion_fix_v4' of github.com:raphaelsc/scylla:
tests: test for infinite recursion bug when doing high-level compaction
Fix potential infinite recursion when combining mutations for leveled compaction
dht: make it easier to create ring_position_view from token
dht: introduce is_min/max for ring_position
If a node shuts itself down due to I/O error (such as ENOSPC), then
nodetool status will show the cluster status at the time the shutdown
occurred.
In fact the node will be in shutdown status (nodetool gossipinfo shows
the correct status); however, `nodetool status` does not interpret the
shutdown status. Instead it uses the output of:
curl -X GET --header "Accept: application/json"
"http://127.0.0.1:10000/gossiper/endpoint/live"
to decide if a node is in UN status.
To fix, do not include the node itself in the output of get_live_members.
Without this patch, when a node is shutdown due to I/O error:
UN 127.0.0.1 296.2 MB 256 ? 056ff68e-615c-4412-8d35-a4626569b9fd rack1
With this patch, when a node is shutdown due to I/O error:
?N 127.0.0.1 296.2 MB 256 ? 056ff68e-615c-4412-8d35-a4626569b9fd rack1
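The fix can be sketched like this (a simplified stand-in for the gossiper's live-members list, not the actual API):

```cpp
#include <set>
#include <string>

// Exclude the local node from the live-members list, so a node that shut
// itself down does not report itself as alive.
std::set<std::string> get_live_members(std::set<std::string> live,
                                       const std::string& self) {
    live.erase(self);
    return live;
}
```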
Fixes #1629
Message-Id: <039196a478b5b1a8749b3fdaf7e16cfe2eb73a2f.1498528642.git.asias@scylladb.com>
The algorithm and principle of operation is the same as the CPU
controller. It is, however, always enabled and we will operate on
I/O shares.
I/O-bound workloads are expected to hit the maximum once virtual
dirty fills up and stay there while the load is steady.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Compactions can be a heavy disk user and the I/O scheduler can always
guarantee that it uses its fair share of the disk.
Such fair share can, however, be a lot more than what compaction actually
needs. This patch draws on the controllers infrastructure to adjust the
I/O shares that the compaction class will get so that compaction
bandwidth is dynamically adjusted.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
The control algorithm we are using for memtables has proven itself
quite successful. We will very likely use the same for other processes,
like compactions.
Make the code a bit more generic, so that a new controller has to only
set the desired parameters.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
The issue is triggered by compaction of sstables of level higher than 0.
The problem happens when interval map of partitioned sstable set stores
intervals such as follow:
[-9223362900961284625 : -3695961740249769322 ]
(-3695961740249769322 : -3695961103022958562 ]
When the selector is called for the first interval above, the exclusive
lower bound of the second interval is returned as the next token, but the
inclusiveness info is not returned.
So reader_selector was returning that there *were* new readers when
the current token was -3695961740249769322 because it was stored in
selector position field as inclusive, but it's actually exclusive.
This false positive was leading to infinite recursion in combined
reader because sstable set's incremental selector itself knew that
there were actually *no* new readers, and therefore *no* progress
could be made.
Fix is to use ring_position in reader_selector, such that
inclusiveness would be respected.
So reader_selector::has_new_readers() won't return false positive
under the conditions described above.
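The underlying idea - carrying the inclusiveness bit along with the token - can be illustrated with a minimal position type (a sketch, not dht::ring_position itself):

```cpp
#include <tuple>

// A bare token cannot distinguish "at token t" from "just after token t".
// Pairing the token with an after-flag restores that distinction, so an
// exclusive lower bound compares strictly greater than the token itself
// and has_new_readers() no longer sees a false positive.
struct position {
    long long token;
    bool after;  // true: position just after the token (exclusive bound)
    bool operator<(const position& o) const {
        return std::tie(token, after) < std::tie(o.token, o.after);
    }
};
```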
Fixes #2908.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Fixes #3096
The credentials processing for transitional auth was broken in ba6a41d,
"auth: Switch to sharded service", which effectively removed the
"virtualization" of underlying auth in the SASL challenge.
As a quick workaround, add the permissive exception handling to
sasl object as well.
Message-Id: <20180103102724.1083-1-calle@scylladb.com>
Returning token_endpoints when there are many tokens and endpoints can
take a long time.
This patch uses output stream to return the result.
Instead of returning a vector, it uses the streaming functionality in
json layer.
Fixes #2476
Message-Id: <20180103081907.5175-1-amnon@scylladb.com>
Technically all that matters is the proportion among the shares so this
change is functionally a noop. However, the CPU scheduler being proposed
has shares that go all the way up to 1000. In the hopes of being able to
unify I/O and CPU controllers one day, this patch brings the I/O shares
more in line with what Avi is doing for the CPU scheduler.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
The SizeTiered backlog for a single SSTable is defined as:
Bi = Ei * log4(T / Si)
Where:
- Si is the size of this individual SSTable
- T is the sum of sizes for all individual SSTables
- Ei is the effective bytes in this SSTable.
The Effective size of an SSTable is:
- The uncompacted size for an SSTable under compaction
- The partially written size for an SSTable being written
- The SSTable size for an SSTable that is not undergoing
any of those processes.
The Aggregate Backlog for the entire Table is just the sum of
all individual SSTable backlogs, including the SSTables currently
being written.
Care is taken to avoid iterating over all SSTables, by separating
the aggregate backlog into a static component (sstables not changing) and
a component of SSTables that are undergoing change.
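The formula above can be sketched directly in code (illustrative only; the names here are hypothetical and, as noted, the real tracker avoids iterating over all SSTables by splitting the sum into static and changing components):

```cpp
#include <cmath>
#include <vector>

struct sstable_info {
    double size;            // Si: size of this individual SSTable
    double effective_bytes; // Ei: effective bytes, as defined above
};

// Bi = Ei * log4(T / Si)
double sstable_backlog(const sstable_info& s, double total_size) {
    return s.effective_bytes * std::log(total_size / s.size) / std::log(4.0);
}

// Aggregate backlog for the table: the sum of all per-SSTable backlogs.
double table_backlog(const std::vector<sstable_info>& ssts) {
    double total = 0.0;
    for (const auto& s : ssts) {
        total += s.size;
    }
    double backlog = 0.0;
    for (const auto& s : ssts) {
        backlog += sstable_backlog(s, total);
    }
    return backlog;
}
```

A quick sanity check: a single SSTable has T == Si, so log4(1) = 0 and the backlog is zero, which matches the intuition that one SSTable leaves nothing to compact.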
Signed-off-by: Glauber Costa <glauber@scylladb.com>
This patch adds infrastructure in various points in the system to allow
us to determine the amount of work present as backlog from compactions.
What needs to be done can be explained in three major pieces:
1) Add hooks in the points where sstables are added or inserted to a
column family (or more precisely, to a compaction_strategy object).
2) Add hooks in reads and write monitors that allows a compaction
backlog estimator (tracker) to become aware of bytes that are
partially written and compacted away.
3) Add a per-column family class (compaction_backlog_tracker) that
can be used to track work that is done and relevant to compactions
(like the two above), and a compaction manager to provide a
system-wide backlog based on the response of the individual trackers.
The definition of how much backlog one has is strategy-specific. The
Null strategy is easy, as it never really has any backlog, and so is the
major strategy - since what really matters is the backlog of the
underlying compaction strategy.
Although backlogs are strategy-specific, they should be "compatible", in
the sense that if a particular strategy has more work to do, it should
yield a higher number than its counterparts.
All the others are presented in this patch as unimplemented: they will
always advertise a mild backlog that should yield a constant
CPU-utilization if used alone.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
We need to notify the monitor that the offset tracker that we are using is
about to be destroyed and will no longer be valid.
While we could modify the file_writer interface so that we could capture
the offset_tracker and take ownership of it - guaranteeing it is alive
until we reach the existing on_write_completed() - this feels like a
layer violation.
It is also potentially useful in general to provide monitor callers
with the knowledge that writing the data portion is done.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Passing the read monitor down to the sstable readers is tricky. The
points of interest - like compaction - are usually very far from the
interfaces that register the monitor, like read_rows. Between the two,
there is usually a mutation_reader, which is and ought to be totally
unaware of the read monitor: technically, a mutation_reader may not even
know it is backed by sstables.
The solution is to create a read_monitor_generator, that can be passed
from the upper layers, like compaction, to the layers that are actually
making the decision of which sstables to create readers for.
Note that we don't need an equivalent piece of infrastructure for
writes, because writes don't happen through hidden layers and have all
the information they need to initialize their monitors.
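A sketch of the generator idea (the shapes are hypothetical; the real interfaces are in the sstables layer): the generator is passed down through the reader stack and is only invoked by the layer that actually opens an sstable.

```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Tracks the read position of one sstable (see the read_monitor patch).
struct read_monitor {
    std::string sstable;
    uint64_t position = 0;
};

// Produces one monitor per sstable a reader is created for; intermediate
// layers such as mutation_reader never need to know monitors exist.
struct read_monitor_generator {
    std::vector<std::unique_ptr<read_monitor>> monitors;
    read_monitor& operator()(const std::string& sstable) {
        monitors.push_back(std::make_unique<read_monitor>());
        monitors.back()->sstable = sstable;
        return *monitors.back();
    }
};
```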
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Similar to the write_monitor, it will track progress of an sstable
being read. In the current interface, we will notify interested users
of the current position in the data file.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Callers, like compactions, will be able to know at any time the current
progress of a read.
As we do that, the currently unimplemented position() method of
data_consume_context becomes redundant and is removed.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Callers, like the memtable flusher or compactions will be able to find
out the current amount of bytes written at any time.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
This came from Avi's review on the read_monitors. He suggested we
not keep shared pointers, and instead have the caller ensure
lifetime. That makes sense, but having the writer interface
using shared_ptr and the read interface using references would lead to
an inconsistent interface.
For the sake of consistency we will change the write monitor to take
references before we do that. From database.cc's perspective, we could
now keep the monitors in a do_with() block, but we will keep the
shared_ptrs to manage their lifetime in anticipation of upcoming patches
in this series, where we'll have to pass them somewhere else.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Compactions run from a seastar::thread, in run(). They will either fail
or succeed, and from the point of view of ordering of destruction
between the compaction object and its readers:
- if compactions succeed, we have no control over which gets destructed
first, since both objects will be going out of scope.
- if they fail, we will forcibly destruct the compaction object, at
which point the readers are still alive
From the point of view of lifetime management, it would be nice to make
sure that the compaction object outlives whichever other objects it
needs during compaction.
This nice-to-have will become paramount when we start adding
read_monitors to the compaction object, which themselves have to outlive
the readers.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
"When we get two range tombstones with the same lower bound from
different data sources (e.g. two sstables), which need to be combined
into a single stream, they need to be de-overlapped, because each
mutation fragment in the stream must have a different position. If we
have range tombstones [1, 10) and [1, 20), the result of that
de-overlapping will be [1, 10) and [10, 20). The problem is that if
the stream corresponds to a clustering slice with upper bound greater
than 1, but lower than 10, the second range tombstone would appear as
being out of the query range. This is currently violating assumptions
made by some consumers, like cache populator.
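The de-overlapping step described above amounts to splitting the longer tombstone at the shorter one's end. A minimal sketch, with half-open integer ranges standing in for range tombstones:

```cpp
#include <utility>
#include <vector>

using range = std::pair<int, int>;  // half-open [start, end)

// Given two tombstones with the same start, emit fragments with distinct
// positions: [1, 10) and [1, 20) become [1, 10) and [10, 20).
std::vector<range> deoverlap(range a, range b) {
    if (a.second > b.second) {
        std::swap(a, b);  // make a the one that ends first
    }
    if (a.first == b.first && a.second < b.second) {
        return { a, { a.second, b.second } };
    }
    return { a, b };
}
```

Note how the second fragment's start ([10, 20)) can land past a slice's upper bound even though the original tombstone ([1, 20)) started inside it, which is exactly the problem described above.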
One effect of this may be that a reader will miss rows which are in
the range (1, 10) (after the start of the first range tombstone, and
before the start of the second range tombstone), if the second range
tombstone happens to be the last fragment which was read for a
discontinuous range in cache and we stopped reading at that point
because of a full buffer and cache was evicted before we resumed
reading, so we went to reading from the sstable reader again. There
could be more cases in which this violation may resurface.
There is also a related bug in mutation_fragment_merger. If the reader
is in forwarding mode, and the current range is [1, 5], the reader
would still emit range_tombstone([10, 20]). If that reader is later
fast forwarded to another range, say [6, 8], it may produce fragments
with smaller positions which were emitted before, violating
monotonicity of fragment positions in the stream.
A similar bug was also present in partition_snapshot_flat_reader.
Possible solutions:
1) relax the assumption (in cache) that streams contain only relevant
range tombstones, and only require that they contain at least all
relevant tombstones
2) allow subsequent range tombstones in a stream to share the same
starting position (position is weakly monotonic), then we don't need
to de-overlap the tombstones in readers.
3) teach combining readers about query restrictions so that they can drop
fragments which fall outside the range
4) force leaf readers to trim all range tombstones to query restrictions
This patch implements solution no. 2. It simplifies the combining readers,
which don't need to accumulate and trim range tombstones.
I don't like solution 3, because it makes combining readers more
complicated, slower, and harder to properly construct (currently
combining readers don't need to know restrictions of the leaf
streams).
Solution 4 is confined to implementations of leaf readers, but also
has disadvantage of making those more complicated and slower.
There is only one consumer which needs the tombstones with monotonic positions, and
that is the sstable writer.
Fixes #3093."
* tag 'tgrabiec/fix-out-of-range-tombstones-v1' of github.com:scylladb/seastar-dev:
tests: row_cache: Introduce test for concurrent read, population and eviction
tests: sstables: Add test for writing combined stream with range tombstones at same position
tests: memtable: Test that combined mutation source is a mutation source
tests: memtable: Test that memtable with many versions is a mutation source
tests: mutation_source: Add test for stream invariants with overlapping tombstones
tests: mutation_reader: Test fast forwarding of combined reader with overlapping range tombstones
tests: mutation_reader: Test combined reader slicing on random mutations
tests: mutation_source_test: Extract random_mutation_generator::make_partition_keys()
mutation_fragment: Introduce range()
clustering_interval_set: Introduce overlaps()
clustering_interval_set: Extract private make_interval()
mutation_reader: Allow range tombstones with same position in the fragment stream
sstables: Handle consecutive range_tombstone fragments with same position
tests: streamed_mutation_assertions: Merge range_tombstones with the same position in produces_range_tombstone()
streamed_mutation: Introduce peek()
mutation_fragment: Extract mergeable_with()
mutation_reader: Move definition of combining mutation reader to source file
mutation_reader: Use make_combined_reader() to create combined reader
"delayed_tasks has a bug that if the object is destroyed while a timer
callback is queued, the callback will then try to access freed memory.
This series replaces the whole thing with sleep_abortable()."
* 'auth-delayed-tasks/v2' of https://github.com/duarten/scylla:
auth: Replace delayed_tasks with sleep_abortable
utils/exponential_backoff_retry: Add helper to automate retries
utils/exponential_backoff_retry: Add abort_source-based retry
Missed opportunity to feed the shard id to the sstable being written when
working on 67c5c8dc67, so that when the sstable is reopened after being
sealed, its shard doesn't need to be recomputed by the open procedure.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171231024529.13664-1-raphaelsc@scylladb.com>
delayed_tasks has a bug that if the object is destroyed while a timer
callback is queued, the callback will then try to access freed memory.
This could be fixed by providing a stop() function that waits for
pending callbacks, but we can just replace the whole thing by leveraging
the abort_source-enabled exponential_backoff_retry.
This patch adds the do_until_value static member function to
exponential_backoff_retry, which retries the specified function until
it returns an engaged optional.
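The retry helper can be sketched synchronously like this (the real Scylla version is future-based and abortable; the names here mirror the description above, not the exact signature):

```cpp
#include <algorithm>
#include <functional>
#include <optional>

// Retry fn until it returns an engaged optional, doubling the (conceptual)
// delay between attempts up to a maximum.
template <typename T>
T do_until_value(std::function<std::optional<T>()> fn) {
    int delay_ms = 10;
    constexpr int max_delay_ms = 1000;
    for (;;) {
        if (auto v = fn()) {
            return *v;
        }
        // The real implementation sleeps here via an abortable timer.
        delay_ms = std::min(delay_ms * 2, max_delay_ms);
    }
}
```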
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Scylla metadata could be empty due to bugs like the one introduced by
115ff10. Let's make shard computation resilient to empty sharding
metadata by falling back to the approach that uses first and last
keys to compute shards.
Refs #2932.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171223120140.3642-2-raphaelsc@scylladb.com>
An SSTable can generate empty sharding metadata after a bug like
the one introduced in 115ff10, which results in tokens being
generated using the base table for the view table. That leads to
the sstable being deleted on a subsequent boot, because all shards
will agree on its deletion given that it will not belong to anybody,
and also to compaction crashing, because it relies on the resulting
sstable belonging to at least one shard.
I wouldn't like to spend days debugging it again because sstable
write silently generated empty sharding metadata, so let's make
write fail when it happens (see issue #2932 for details).
Refs #2932.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171223120140.3642-1-raphaelsc@scylladb.com>
Class optimized_optional was moved into seastar, and its usage
simplified: move_and_disengage() is replaced with
std::exchange(_, { }).
* seastar adaca37...b0f5591 (9):
> Merge "core: Introduce cancellation mechanism" from Duarte
> Fix Seastar build that no longer builds with --enable-dpdk after the recent commit fd87ea2
> noncopyable_function: support function objects whose move constructors throw
> Adding new hardware options to new config format, using new config format for dpdk device
> Fix check for Boost version during pre-build configuration.
> variant_utils: add variant_visitor constructor for C++17 mode
> Merge "Allows json object to be stream to an" from Amnon
> Merge 'Default to C++17' from Avi
> Add const version of subscript operator to circular_buffer
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171228112126.18142-1-duarte@scylladb.com>
After materialized views have been implemented (although not enabled by
default), unimplemented::cause::VIEWS is no longer used. I think we can
drop it.
By the way, there are other no-longer-used unimplemented reasons; we
should probably drop them too.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20171224131318.4893-1-nyh@scylladb.com>
We provided "boost1.63" package for Debian 8 since we couldn't build
"scylla-boost163" package witch is available on Ubuntu14/16, but I fixed the
problem and now we have it for Debian 8 too, so switch to it.
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1514220163-25985-1-git-send-email-syuu@scylladb.com>