"I.e. implement storage_proxy::mutate_atomically, which in turn means
roughly the same as mutate, with write/remove from the batchlog table
intermixed.
This patch restructures some code in storage_proxy to avoid too much code
duplication, with the assumption (amongst others) that dead nodes will be few
etc."
Our thrift code performs an elaborate dance to convert a result/exception
reported in a future<> to the cob/exn_cob flow required by the thrift
library. However, if the exception is thrown before the first continuation,
no one will catch it and it will be leaked, eventually resulting in a crash.
Fix by replacing the complete() infrastructure, which took a future as a
parameter, with a with_cob() helper that instead takes a function to
execute. This allows it to catch both exceptions thrown directly and
exceptions reported via the future.
Fixes #133.
What we implement is ka, not la. Since the summary is the one element that
actually changed in the 2.2 implementation, it is particularly important that
we get this one right. I have previously missed this.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Currently, each column family creates its own fiber to handle compaction
requests. If there are N column families, N compactions could be running in
parallel, which is definitely horrible.
To solve that problem, this patch introduces a per-database compaction
manager, which services compaction requests from all N column families.
Parallelism is provided by creating more than one fiber to service the
requests; thus, N compaction requests are served by M fibers.
A submitted compaction request goes into a job queue shared between all
fibers, and the fiber with the fewest pending jobs is signalled.
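The submission policy can be sketched like this (a simplified stand-in: the
real code uses seastar fibers and signalling, and all names here are
illustrative, not the actual interfaces):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of "pick the fiber with the fewest pending jobs": each of the M
// worker fibers has a pending-job count, and a new request is assigned to
// the least-loaded one.
struct compaction_manager_sketch {
    std::vector<std::size_t> pending;  // pending jobs per worker fiber

    explicit compaction_manager_sketch(std::size_t nr_fibers)
        : pending(nr_fibers, 0) {}

    // Returns the index of the fiber chosen to service the request.
    std::size_t submit() {
        auto it = std::min_element(pending.begin(), pending.end());
        ++*it;
        return static_cast<std::size_t>(it - pending.begin());
    }

    void job_done(std::size_t fiber) { --pending[fiber]; }
};
```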
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
We need to also catch exceptions in top-level connection::process() so
that they are converted to proper CQL protocol errors.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
storage_service is a singleton, and wants a database for initialization.
On the other hand, database is a proper object that is created and
destroyed for each test. As a result storage_service ends up using
a destroyed object.
Work around this by:
- leaking the database object so that storage_service has something
to play with
- doing the second phase of storage_service initialization only once
(endsize / (1024*1024)) is an integer calculation, so if endsize is
lower than 1024^2, the result would be 0.
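A minimal illustration of the truncation (the function names are made up for
the example; one possible fix is to divide in floating point):

```cpp
#include <cstdint>

// With integer division, any endsize below 1 MB divides to 0.
inline std::uint64_t buggy_mb(std::uint64_t endsize) {
    return endsize / (1024 * 1024);          // truncates: 512 KB -> 0
}

// Converting before dividing keeps the fractional part.
inline double fixed_mb(std::uint64_t endsize) {
    return double(endsize) / (1024 * 1024);  // 512 KB -> 0.5
}
```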
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
I made the mistake of running scylla on a spinning disk. Since a disk
can serve about 100 reads/second, that set the tone for the whole benchmark.
Fix by improving cache preload when flushing a memtable. If we can detect
that a mutation is not part of any sstable (other than the one we just wrote),
we can insert it into the cache.
After this, running a mixed cassandra-stress returns the expected results,
even on a spinning disk.
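The preload condition can be sketched as follows (a hypothetical stand-in:
the real check would consult the sstables' bloom filters, not a key list, and
all names here are illustrative):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Stand-in for an sstable: may_contain() plays the role of a bloom-filter
// probe, which can report false positives but never false negatives.
struct sstable_stub {
    std::vector<std::string> keys;
    bool may_contain(const std::string& key) const {
        return std::find(keys.begin(), keys.end(), key) != keys.end();
    }
};

// A flushed partition may be inserted into the cache only if no sstable
// other than the newly written one can contain its key.
inline bool can_populate_cache(const std::vector<sstable_stub>& other_sstables,
                               const std::string& key) {
    for (const auto& sst : other_sstables) {
        if (sst.may_contain(key)) {
            return false;  // key may exist elsewhere; skip the preload
        }
    }
    return true;
}
```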
Add a FIXME about something I'm unsure about: does repair only need to
repair this node, or should it also make an effort to repair the other nodes
(or, more accurately, their specific token ranges being repaired) if we're
already communicating with them?
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
If a stream fails, print a clear error message that repair failed, instead
of ignoring it and letting Seastar's generic "warning, exception was ignored"
message be the only thing the user sees.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
The previous repair code exchanged data with the other nodes holding one
arbitrary token. This only works correctly when all the nodes
replicate all the data. In a more realistic scenario, the node being
repaired holds copies of several token ranges, and each of these ranges
has a different set of replicas we need to perform the repair with.
So this patch does the right thing - we perform a separate repair_range()
for each of the local ranges, and each of those will find a (possibly)
different set of nodes to communicate with.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
This patch adds a method get_ranges() to replication-strategy.
It returns the list of token ranges held by the given endpoint.
It will be used by the replication code, which needs to know
in particular which token ranges are held by *this* node.
This function is the analogue of Origin's getAddressRanges().get(endpoint).
As in Origin, also here the implementation is not meant to be efficient,
and will not be used in the fast path.
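The idea behind get_ranges() can be sketched in standalone C++ (the types and
the replica lookup below are stand-ins for the actual locator interfaces, not
the real code):

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Stand-in types: a token ring position and a wrapping (start, end] range.
using token = long;
using range = std::pair<token, token>;

// Walk every token range in the ring and keep the ranges whose replica set
// contains the given endpoint. replicas_for() stands in for the replication
// strategy's natural-endpoints lookup; not meant to be efficient.
inline std::vector<range> get_ranges(
        const std::vector<token>& ring_tokens,
        const std::string& endpoint,
        const std::function<std::vector<std::string>(token)>& replicas_for) {
    std::vector<range> result;
    for (std::size_t i = 0; i < ring_tokens.size(); ++i) {
        token start = ring_tokens[i == 0 ? ring_tokens.size() - 1 : i - 1];
        token end = ring_tokens[i];
        for (const auto& ep : replicas_for(end)) {
            if (ep == endpoint) {
                result.emplace_back(start, end);
                break;
            }
        }
    }
    return result;
}
```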
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Now, make_local_reader does not need partition_range to be alive when we
read from the mutation reader, so there is no need to store it in
stream_detail for its lifetime.
Since all the gossip callbacks (e.g., on_change) are executed inside a
seastar::async context, we can wait for operations like updating a system
table to complete.
Run do_before_change_notifications and do_on_change_notifications under
seastar::async.
Now, before_change callbacks are inside a seastar::async context. This makes
it easier to futurize apply_new_states and handle_major_state_change.
Now, on_change, on_join and on_restart callbacks are inside
seastar::async context.