"This change makes comparing floating point values behave exactly like
Java's compareTo() function, which is different from the usual
IEEE 754 behaviour."
To implement that, we will resort to a caching mechanism instead of doing the
query every time. This is mainly because we want to avoid over-futurization
of the callers, which are usually just interested in passing simple strings
around.
We will be able to intercept all updates to it, and maintain consistency with our
internal cache. The updates are not done in this patchset.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
In Java, the compareTo() method of the Float and Double types doesn't
strictly follow IEEE 754. Firstly, NaN is equal to NaN and is greater than
any other value. Secondly, positive zero is greater than negative zero.
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
Because the cql types deal with a raw inet address and not the gms container,
we need a method to fetch it.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
"These patches add verification that strings passed using CQL binary protocol
are valid utf8. Exception hierarchy is also adjusted to make sure that
protocol exceptions are properly translated to an appropriate error code.
This makes DTEST cql_tests.py:TestCQL.invalid_string_literals_test pass."
Reviewed-by: Pekka Enberg <penberg@cloudius-systems.com>
This function is called at bootstrap to make sure the system tables exist in
the keyspace list. I honestly don't know why we have to force a delete +
reconstruct. But let's keep consistency with Origin here.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
transport_exception is an interface implemented by both
cassandra_exception and protocol_exception. The logic implemented in
both these subclasses is identical.
This patch removes transport_exception and makes protocol_exception a
subclass of cassandra_exception.
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
From Pekka:
"This series enables DROP TABLE statement in the front-end. Our
back-end is not really prepared to deal with it so throw "not
implemented" exception from migration manager."
This builder is the one used to build the convenient result_set (not on the
fast path). The builder assumed that the whole set of columns was always
queried, which resulted in buffer underflow exceptions during parsing of the
results when this was not the case. Let's also handle queries which have
narrower column sets.
In the streaming code, we need a core-to-core connection (the second
connection from B to A). That is, when node A initiates a stream to node B,
node A may transfer data to node B and vice versa, so we need two
connections. When node A creates a tcp connection (within the
messaging_service) to node B, we have a connection from ip_a:core_a to
ip_b:core_b. When node B creates a connection back to node A, we cannot
guarantee it goes from ip_b:core_b to ip_a:core_a.
The current messaging_service does not support core-to-core connections yet,
although we use shard_id{ip, cpu_id} as the destination of the message.
We can solve the issue in the upper layer by passing the extra cpu_id in a
user message:
Node A sends a stream_init_message with my_cpu_id = current_cpu_id.
Node B receives the stream_init_message; it runs on whatever cpu this
connection goes to, then it sends a response back with node B's
current_cpu_id.
After this, each node knows which cpu_id to send to on the other node.
TODO: we need to handle the case when peer node reboots with different
number of cpus.
This is a bit different from Origin. We always send back a
prepare_message even if the initiator requested no data from the
follower, in order to unify the handling.
Each outgoing_file_message might contain multiple mutations. Send them
one mutation per RPC call (using frozen_mutation), instead of one big
outgoing_file_message per one RPC call.
It is common for some operations, like system table updates, to try to
guarantee a particular ordering of operations.
The way Origin does it is by simply adding one to the current timestamp.
Our calls, however, are being dispatched through our internal query processor, which
has a builtin client_state.
Our client_state has a mechanism to guarantee monotonicity, by adding 1 if
needed to operations that happen in sequence. By using a clock that is not
wired up to this mechanism, we can't really guarantee that ordering if other
operations happen to get in between.
If we expose this mechanism through the query_processor, we will be able to guarantee
that.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Fix release version to be compatible with origin to avoid confusing
clients.
Before:
[penberg@nero apache-cassandra-2.1.7]$ ./bin/cqlsh --no-color 127.0.0.1
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra SeastarDB v0.1 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh>
After:
[penberg@nero apache-cassandra-2.1.7]$ ./bin/cqlsh --no-color 127.0.0.1
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.0 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>