- Call read_property_file() directly from start().
- Immediately return a ready future from start() on non-IO CPUs.
- Remove the unneeded invoke_on_all() invocations.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
If the snitch has been created but then had to fail, we have to stop the
global (distributed) snitch in order to avoid the assert.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
- Fixed a typo.
- snitch_ptr: make operator= return a reference to the parent object.
- i_endpoint_snitch: set the _state in a default start() implementation.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
"Cleanups to the CQL server implementation. The biggest change is moving
the event notifier to a separate source file in an attempt to make
server.cc smaller and more modular."
To free memory, we need to allocate memory. In LSA compaction, we convert
N segments with an average occupancy of (N-1)/N into N-1 new segments.
However, to do that, we need to allocate segments, which we may not be
able to do due to the low-memory condition that caused us to compact in
the first place.
Fix by introducing a segment reserve, which we normally try to ensure is
full. During low memory conditions, we temporarily allow allocating from
the emergency reserve.
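The reserve idea can be sketched as follows. This is a minimal model, not the actual LSA segment pool: the class name, `try_allocate()`, and the emergency flag are all illustrative, assuming a pool that normally refuses to dip below its reserve goal but may do so while compacting.

```cpp
#include <cstddef>

// Hypothetical sketch: a pool keeps `reserve_goal` segments aside.
// Normal allocation refuses to eat into the reserve; during compaction
// (low memory), the emergency flag lets us draw from it, so freeing
// memory never requires memory we do not have.
class segment_pool {
    size_t _free_segments;
    size_t _reserve_goal;
    bool _emergency = false;
public:
    segment_pool(size_t free_segments, size_t reserve_goal)
        : _free_segments(free_segments), _reserve_goal(reserve_goal) {}
    void set_emergency(bool on) { _emergency = on; }
    bool try_allocate() {
        // Outside emergencies, keep at least _reserve_goal segments free.
        size_t floor = _emergency ? 0 : _reserve_goal;
        if (_free_segments <= floor) {
            return false; // would eat into the reserve
        }
        --_free_segments;
        return true;
    }
    void release() { ++_free_segments; }
    size_t free_segments() const { return _free_segments; }
};
```

With 4 free segments and a reserve goal of 2, the third normal allocation fails, but succeeds once the pool is switched into emergency mode.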
Since we do not support shard-to-shard connections at the moment, the IP
address alone should decide whether a connection to a remote node exists.
messaging_service maintains connections to remote nodes using
std::unordered_map<shard_id, shard_info, shard_id::hash> _clients;
With this patch, we may be able to reduce the number of TCP connections
between two nodes.
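The effect of keying by address alone can be sketched like this. The names (`client_registry`, `get_connection`) are illustrative and not the actual messaging_service API; the point is that two lookups for the same remote IP share one connection object.

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical sketch: with shard-to-shard connections gone, the remote
// node's IP address alone identifies a connection, so the client map is
// keyed by address rather than by (address, shard).
struct connection {
    std::string peer;
};

class client_registry {
    std::map<std::string, std::shared_ptr<connection>> _clients;
public:
    std::shared_ptr<connection> get_connection(const std::string& ip) {
        auto it = _clients.find(ip);
        if (it != _clients.end()) {
            return it->second; // reuse: one TCP connection per remote node
        }
        auto conn = std::make_shared<connection>(connection{ip});
        _clients.emplace(ip, conn);
        return conn;
    }
    size_t size() const { return _clients.size(); }
};
```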
Move the connection class to server.hh so that we can move event
notifier implementation to a separate source file.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
Move member function definitions outside of the class definition in
preparation for moving the latter to a header file.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
I'm not sure what happened: we have the same commented-out code in both
the .hh and the .cc, which is very confusing when enabling some of it.
Let's remove the duplicate from the .cc and keep only the copy in the .hh.
"I.e. implement storage_proxy::mutate_atomically, which in turn means
roughly the same as mutate, with write/remove from the batchlog table
intermixed.
This patch restructures some code in storage_proxy to avoid too much code
duplication, with the assumption (amongst others) that dead nodes will be
few, etc."
Our thrift code performs an elaborate dance to convert a result/exception
reported in a future<> to the cob/exn_cob flow required by the thrift
library. However, if the exception is thrown before the first
continuation, no one will catch it and it will be leaked, eventually
resulting in a crash.
Fix by replacing the complete() infrastructure, which took a future as a
parameter, with a with_cob() helper that instead takes a function to
execute. This allows it to catch both exceptions thrown directly and
exceptions reported via the future.
Fixes #133.
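The shape of such a helper can be sketched with std::future standing in for seastar futures. The name `with_cob` follows the commit, but the signature here is illustrative: a single try/catch around both the call and the future's result means a synchronous throw and a future-reported error take the same path to exn_cob.

```cpp
#include <exception>
#include <functional>
#include <future>
#include <stdexcept>

// Hypothetical sketch: instead of taking an already-constructed future
// (whose construction may have thrown outside any handler), take the
// function itself and run it inside the try block.
template <typename Func>
void with_cob(std::function<void()> cob,
              std::function<void(std::exception_ptr)> exn_cob,
              Func&& func) {
    try {
        func().get(); // rethrows an exception stored in the future
        cob();
    } catch (...) {
        // Catches both direct throws and future-reported exceptions.
        exn_cob(std::current_exception());
    }
}
```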
What we implement is ka, not la. Since the summary is the one element that
actually changed in the 2.2 implementation, it is particularly important that
we get this one right. I had previously missed this.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Currently, each column family creates a fiber to handle its compaction
requests in parallel with the rest of the system. If there are N column
families, N compactions could be running in parallel, which is definitely
horrible.
To solve that problem, a per-database compaction manager is introduced
here. The compaction manager services compaction requests from N column
families. Parallelism is made available by creating more than one fiber
to service the requests; that is, N compaction requests will be served by
M fibers.
A submitted compaction request goes into a job queue shared between all
fibers, and the fiber with the fewest pending jobs is signalled.
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
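The dispatch policy described above can be modelled in a few lines. This is only bookkeeping, not the actual seastar-based manager: real code would signal the chosen fiber's condition variable, and the class and method names here are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: M fibers each have a pending-job counter; a
// submitted request is charged to the fiber with the fewest pending
// jobs, bounding concurrency at M regardless of how many column
// families submit work.
class compaction_manager {
    std::vector<size_t> _pending; // pending jobs per fiber
public:
    explicit compaction_manager(size_t nr_fibers) : _pending(nr_fibers, 0) {}
    size_t submit() {
        // Pick the least-loaded fiber (ties go to the lowest index).
        auto it = std::min_element(_pending.begin(), _pending.end());
        ++*it;
        return static_cast<size_t>(it - _pending.begin());
    }
    void complete(size_t fiber) { --_pending[fiber]; }
    size_t pending(size_t fiber) const { return _pending[fiber]; }
};
```

With two fibers, three submissions land on fibers 0, 1, 0: the third request goes back to fiber 0 because both fibers are tied at one pending job and ties resolve to the lowest index.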
We need to also catch exceptions in the top-level connection::process()
so that they are converted to proper CQL protocol errors.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
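The pattern is a plain top-level try/catch that turns an escaping exception into an error response instead of tearing down the connection. A minimal sketch, with `process_one` and the string-based "response" as stand-ins for the actual CQL server types:

```cpp
#include <exception>
#include <functional>
#include <stdexcept>
#include <string>

// Hypothetical sketch: run one request handler; if it throws, convert
// the exception into an error response (a real server would encode a
// CQL protocol ERROR frame here).
std::string process_one(const std::function<std::string()>& handler) {
    try {
        return handler();
    } catch (const std::exception& e) {
        return std::string("ERROR: ") + e.what();
    }
}
```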