The code does:
auto fm = make_lw_shared<std::vector<frozen_mutation>>(schema.begin(), schema.end());
return net::get_local_messaging_service().send_message_oneway(net::messaging_verb::DEFINITIONS_UPDATE,
std::move(id), std::move(fm));
ms.register_handler(net::messaging_verb::DEFINITIONS_UPDATE, [this] (std::vector<frozen_mutation> m) {
...
});
We should not send a lw_shared_ptr, but a plain std::vector<frozen_mutation>.
However, from Gleb:
Well, this is not a bug, this is a really cool (and, to be honest,
unintended) feature of RPC. It is smart enough to detect that it is a
smart pointer to an object and dereference it. But in this particular
case there is no justification to use a shared_ptr in the first place.
So, drop the lw_shared_ptr anyway.
"This series adds the storage proxy metrics API. The definitions are based on
the StorageProxyMetrics definition. This series also adds a stats object to the
storage_proxy, a getter function for it, and an implementation based on it, but
it currently does not add the code to manipulate the counters."
This adds a stats object with counters that will be used by the API. The
stats object instance will be returned by a get_stats() method.
Signed-off-by: Amnon Heiman <amnon@cloudius-systems.com>
In Origin, partitions in range query results are ordered using
decorated_key ordering. This allows the use of token() function for
incremental iterating over results, as mentioned here:
http://www.datastax.com/dev/blog/client-side-improvements-in-cassandra-2-0
We may also need this to implement paging.
The old code didn't preserve ordering, because it didn't merge-sort
data coming from different shards. The fix relies on
query_mutations_locally(), which already preserves the ordering. We're
going to use mutation queries for range queries anyway.
"As previously said, there were some unidentified bugs that prevented update_tokens from
working properly. The first one was sent alongside the series; the second one took me more
time, but it is fixed here."
At this point, users of the interface are futurized already, so we
just need to make sure they call the right function.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Soon that will involve a query. The idiom make_ready_future<>().then()
is a bit unusual, to say the least, but it will soon be replaced by an
actual future.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Most of the time, when we query internally, we are going for the system
keyspace. Defaulting to it just makes things easier, and callers can
still change it with set_keyspace() if needed.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Announce schema mutations in a cluster via the DEFINITIONS_UPDATE verb
and pass them to merge_schema() at endpoints.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
We always operate on the local storage proxy so pass it by reference.
This simplifies DEFINITIONS_UPDATE message handler where all we have is
a "this" pointer to the local storage proxy.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
The forwarding lambda is reused, so we cannot move captures out of it,
and we cannot pass references to them either, since the lambda can be
destroyed before the send completes.
This works only if all replicas (participating in the CL) have the same
live data. It does not detect mismatches in tombstones (no infrastructure
for that yet), and it does not report timeouts yet.
If a local mutation write takes longer than the write timeout, the
mutation will be deleted while it is still being processed by the
database engine. Fix this by storing the mutation in a shared pointer
and holding on to the pointer until the mutation is locally processed.
This reverts commit 52aa0a3f91.
After c9909dd183 this is no longer needed, since the reference to the
handler is not used in the abstract_write_response_handler::wait() continuation.
Conflicts:
service/storage_proxy.cc
Currently mutation clustering uses two timers: one expires when the wait
for CL times out and is canceled when CL is achieved; another expires if
some endpoints do not answer for a long time (CL may already be achieved
at this point, and the first timer will have been canceled). This is too
complicated, especially since both timers can expire simultaneously.
Simplify it by having only one timer and checking in its callback whether
CL was achieved.