When a partition has no live regular rows but has live data in the
static row, it should appear in the results even though no static
column was selected.
To reproduce:
create table cf (k blob, c blob, v blob, s1 blob static, primary key (k, c));
update cf set s1 = 0x01 where k = 0x01;
update cf set s1 = 0x02 where k = 0x02;
select k from cf;
The "select" statement should return 2 rows, but was returning 0.
The following query worked fine, because static columns were included:
select * from cf;
The data query should contain only live data, so we shouldn't write a
partition entry if it's supposed to be absent from the results. We
can't tell that, though, until we've processed all the data. To solve
this problem, the query result writer uses an optimistic approach: the
partition header is written up front and retracted from the buffer
(cheaply) if it turns out there's no live data in it.
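The optimistic retraction can be sketched roughly as follows (an
illustrative toy, not the actual query result writer; all names here
are made up):

```cpp
#include <string>

// Remember where the partition header starts in the output buffer, and
// shrink the buffer back to that point if the partition turns out to
// contain no live rows and no live static data.
class result_buffer {
    std::string _buf;
    size_t _partition_start = 0;
public:
    void start_partition(const std::string& header) {
        _partition_start = _buf.size(); // remember for possible retraction
        _buf += header;                 // written optimistically
    }
    void add_live_row(const std::string& row) {
        _buf += row;
    }
    void retract_empty_partition() {
        _buf.resize(_partition_start);  // cheap: just truncate the buffer
    }
    const std::string& data() const { return _buf; }
};
```

Retracting is just a resize(), so the common case (partitions with live
data) pays only the cost of remembering one offset.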
"This change makes comparing floating point values behave exactly like
it does in case of Java compareTo() function, which is different than
the usual IEEE 754 behaviour."
In Java, the compareTo() methods of the Float and Double types don't
strictly follow IEEE 754. First, NaN is equal to NaN and greater than
any other value. Second, positive zero is greater than negative zero.
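The Java ordering can be sketched in C++ like this (a hypothetical
helper mirroring Java's Double.compare() semantics, not the actual
patch code):

```cpp
#include <cmath>

// Total order matching Java's Double.compare(): NaN equals NaN and is
// greater than any other value; -0.0 is less than +0.0.
int java_double_compare(double a, double b) {
    if (a < b) return -1;
    if (a > b) return 1;
    // Neither a < b nor a > b: the values are equal, are signed zeros,
    // or at least one is NaN.
    bool na = std::isnan(a), nb = std::isnan(b);
    if (na || nb) {
        return na == nb ? 0 : (na ? 1 : -1); // NaN sorts above everything
    }
    // Distinguish -0.0 from +0.0 by the sign bit.
    if (std::signbit(a) == std::signbit(b)) return 0;
    return std::signbit(a) ? -1 : 1;
}
```

Note that under plain IEEE 754 operators, NaN compares unequal to
everything (including itself) and -0.0 == +0.0, so neither rule falls
out of the built-in comparisons.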
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
"These patches add verification that strings passed using CQL binary protocol
are valid utf8. Exception hierarchy is also adjusted to make sure that
protocol exceptions are properly translated to an appropriate error code.
This makes DTEST cql_tests.py:TestCQL.invalid_string_literals_test pass."
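A well-formedness check like the one described can be sketched as
follows (an illustrative validator, not the code from these patches):

```cpp
#include <cstdint>
#include <string_view>

// Returns true iff s is well-formed UTF-8: valid leading/continuation
// bytes, no truncated sequences, no overlong encodings, no surrogates,
// and no code points above U+10FFFF.
bool is_valid_utf8(std::string_view s) {
    size_t i = 0;
    while (i < s.size()) {
        uint8_t c = s[i];
        size_t len;
        uint32_t cp;
        if (c < 0x80)              { len = 1; cp = c; }
        else if ((c & 0xe0) == 0xc0) { len = 2; cp = c & 0x1f; }
        else if ((c & 0xf0) == 0xe0) { len = 3; cp = c & 0x0f; }
        else if ((c & 0xf8) == 0xf0) { len = 4; cp = c & 0x07; }
        else return false;                 // invalid leading byte
        if (i + len > s.size()) return false; // truncated sequence
        for (size_t j = 1; j < len; ++j) {
            uint8_t cc = s[i + j];
            if ((cc & 0xc0) != 0x80) return false; // bad continuation
            cp = (cp << 6) | (cc & 0x3f);
        }
        static const uint32_t min_cp[] = {0, 0, 0x80, 0x800, 0x10000};
        if (cp < min_cp[len]) return false;          // overlong encoding
        if (cp > 0x10ffff) return false;             // out of range
        if (cp >= 0xd800 && cp <= 0xdfff) return false; // surrogate
        i += len;
    }
    return true;
}
```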
Reviewed-by: Pekka Enberg <penberg@cloudius-systems.com>
This function is called at bootstrap to make sure the system tables
exist in the keyspace list. I honestly don't know why we have to force
a delete + reconstruct, but let's keep consistency with Origin here.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
transport_exception is an interface implemented by both
cassandra_exception and protocol_exception. The logic implemented in
both these subclasses is identical.
This patch removes transport_exception and makes protocol_exception a
subclass of cassandra_exception.
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
From Pekka:
"This series enables DROP TABLE statement in the front-end. Our
back-end is not really prepared to deal with it so throw "not
implemented" exception from migration manager."
This builder is the one used to build the convenient result_set (not
on the fast path). The builder assumed that the whole set of columns
was always queried, which caused buffer underflow exceptions while
parsing the results when that was not the case. Let's also handle
queries with narrower column sets.
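The shape of the fix can be sketched like this (a toy model with
invented names, not the actual builder): the serialized row carries one
cell per *queried* column, so the parser must consume exactly that many
cells rather than one per schema column.

```cpp
#include <optional>
#include <string>
#include <vector>

// schema_columns: total column count in the table schema.
// queried: indexes (into the schema) of the columns this query selected.
// cells: one serialized cell per queried column -- reading
// schema_columns cells here is the buffer underflow the patch fixes.
std::vector<std::optional<std::string>>
build_row(size_t schema_columns,
          const std::vector<size_t>& queried,
          std::vector<std::string> cells) {
    std::vector<std::optional<std::string>> row(schema_columns);
    for (size_t i = 0; i < queried.size(); ++i) {
        row[queried[i]] = std::move(cells[i]); // unqueried slots stay null
    }
    return row;
}
```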
In the streaming code, we need a core-to-core connection (the second
connection, from B to A). When node A initiates a stream to node B,
node A may transfer data to node B and vice versa, so we need two
connections. When node A creates a TCP connection (within the
messaging_service) to node B, we have a connection from ip_a:core_a to
ip_b:core_b. When node B creates a connection back to node A, we
cannot guarantee it goes from ip_b:core_b to ip_a:core_a.
The current messaging_service does not support core-to-core
connections yet, although we use shard_id{ip, cpu_id} as the
destination of a message. We can solve the issue in the upper layer by
passing the extra cpu_id as part of the user message:
Node A sends stream_init_message with my_cpu_id = current_cpu_id
Node B receives stream_init_message on whatever cpu the connection
happens to land on, then sends a response back with node B's
current_cpu_id.
After this, each node knows which cpu_id to send to each other.
TODO: we need to handle the case when peer node reboots with different
number of cpus.
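The exchange above can be sketched as a toy round trip (illustrative
types only; these are not the real messaging_service or streaming
structures):

```cpp
#include <cstdint>

struct stream_init_message  { uint32_t my_cpu_id; };
struct stream_init_response { uint32_t my_cpu_id; };

struct node {
    uint32_t cpu_id;        // shard this handler happens to run on
    uint32_t peer_cpu_id{}; // learned from the handshake

    stream_init_response on_stream_init(const stream_init_message& msg) {
        peer_cpu_id = msg.my_cpu_id; // remember the initiator's shard
        return {cpu_id};             // report which shard we landed on
    }
};

inline void handshake(node& a, node& b) {
    // Node A sends stream_init_message carrying its current cpu_id;
    // node B replies with the cpu_id its handler happens to run on.
    stream_init_response resp = b.on_stream_init({a.cpu_id});
    a.peer_cpu_id = resp.my_cpu_id;
}
```

After the round trip, each side knows which cpu_id to address on the
other, regardless of which shard the connection landed on.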
This is a bit different from Origin: we always send back a
prepare_message, even if the initiator requested no data from the
follower, to unify the handling.
Each outgoing_file_message might contain multiple mutations. Send them
one mutation per RPC call (using frozen_mutation), instead of one big
outgoing_file_message per RPC call.
It is common for some operations, like system table updates, to try to
guarantee a particular ordering of operations.
The way Origin does this is by simply adding one to the current
timestamp.
Our calls, however, are dispatched through our internal query
processor, which has a builtin client_state.
Our client_state has a mechanism to guarantee monotonicity, adding 1
if needed to operations that happen in succession. By using a clock
that is not wired up to this mechanism, we can't really guarantee
monotonicity if other operations happen to get in between.
If we expose this mechanism through the query_processor, we will be able to guarantee
that.
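The monotonicity mechanism amounts to something like this (a minimal
sketch with invented names, not the actual client_state API):

```cpp
#include <cstdint>

// Issue timestamps that never go backwards: if the wall clock has not
// advanced past the previously issued timestamp, bump by one
// microsecond instead of reusing or regressing.
class monotonic_timestamp_source {
    int64_t _last = 0;
public:
    int64_t get_timestamp(int64_t now_us) {
        _last = now_us > _last ? now_us : _last + 1;
        return _last;
    }
};
```

A raw clock read that bypasses this object can return a value equal to
(or behind) one already handed out, which is exactly the ordering
violation described above.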
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>