This builder is the one used to build the convenient result_set (not
on the fast path). The builder assumed that the whole set of columns
was always queried, which resulted in buffer underflow exceptions
during parsing of the results when this was not the case. Let's also
handle queries which have narrower column sets.
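A minimal sketch of the failure mode, with hypothetical names (`parse_row` and the buffer layout are illustrative, not the real wire format): reading a fixed schema-wide column count from a buffer that only holds the queried columns runs past the end of the data, while taking the count from the query's own result metadata does not.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical row buffer: one int32 per serialized column value.
using row_buffer = std::vector<int32_t>;

// Parse one row. `column_count` must come from the query's result
// metadata (the columns actually selected), not from the table schema:
// passing the schema-wide count for a narrower SELECT reads past the
// end of the buffer -- the buffer underflow the commit fixes.
std::optional<std::vector<int32_t>>
parse_row(const row_buffer& buf, size_t column_count) {
    if (column_count > buf.size()) {
        return std::nullopt;  // would read past the end of the buffer
    }
    return std::vector<int32_t>(buf.begin(), buf.begin() + column_count);
}
```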
Fix release version to be compatible with origin to avoid confusing
clients.
Before:
[penberg@nero apache-cassandra-2.1.7]$ ./bin/cqlsh --no-color 127.0.0.1
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra SeastarDB v0.1 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh>
After:
[penberg@nero apache-cassandra-2.1.7]$ ./bin/cqlsh --no-color 127.0.0.1
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.0 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh>
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
"As previously said, there were some unidentified bugs that prevented update_tokens from
working properly. The fix for the first one was sent alongside the series; the second one
took me more time, but it is fixed here."
"(Partial?) implementation of BatchLogManager.
Requires the token function/restriction series.
Functional in that it can create batchlog mutations, and replay
data in this system table.
Since range queries do not yet work, it only handles very small
table contents.
It is not used yet either, but will eventually be needed for batch statements
etc."
"This patch prevents the compression validation code from rejecting statements
with an empty sstable_compression property. The correct behaviour in such cases
is not to use compression at all.
This would have been a 10-minute fix if antlr3 hadn't prepared a surprise of its
own. Empty STRING_LITERALs weren't handled properly, and a workaround for that
problem was introduced."
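A sketch of the validation rule being fixed (hypothetical helper name, option map stands in for the real schema properties): an empty sstable_compression value means "no compression" and must be accepted rather than rejected as an unknown compressor class.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of the rule: an empty sstable_compression value
// means "no compression" and is valid; a non-empty value names a
// compressor class that is validated further.
bool compression_enabled(const std::map<std::string, std::string>& options) {
    auto it = options.find("sstable_compression");
    if (it == options.end() || it->second.empty()) {
        return false;  // empty property: correct behaviour is no compression
    }
    return true;  // non-empty: a compressor class name to validate
}
```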
A somewhat simplified version of the Origin code, since from what I
can see, there is less need for us to do explicit query sends in
the BLM itself; instead we can just go through storage_proxy.
I could be wrong though.
The serialization for "already exists" error is special as explained in
the CQL binary protocol specification:
0x2400 Already_exists: The query attempted to create a keyspace or a
table that was already existing. The rest of the ERROR message
body will be <ks><table> where:
<ks> is a [string] representing either the keyspace that
already exists, or the keyspace in which the table that
already exists is.
<table> is a [string] representing the name of the table that
already exists. If the query was attempting to create a
keyspace, <table> will be present but will be the empty
string.
Fix that to unconfuse "cqlsh" when attempting to create a duplicate
keyspace or table.
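The layout quoted above can be sketched as follows. A CQL [string] is a 16-bit big-endian length followed by the bytes; the sketch serializes only the <ks><table> tail of the Already_exists body (the error code and message [string] that precede it are omitted), and the helper names are hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Append a CQL [string]: a 16-bit big-endian length followed by the bytes.
static void append_string(std::vector<uint8_t>& out, const std::string& s) {
    out.push_back(static_cast<uint8_t>(s.size() >> 8));
    out.push_back(static_cast<uint8_t>(s.size() & 0xff));
    out.insert(out.end(), s.begin(), s.end());
}

// Serialize the <ks><table> tail of an Already_exists (0x2400) ERROR
// body. For a duplicate keyspace, <table> is present but empty, as the
// protocol spec requires.
std::vector<uint8_t> already_exists_tail(const std::string& ks,
                                         const std::string& table) {
    std::vector<uint8_t> out;
    append_string(out, ks);
    append_string(out, table);  // empty string for CREATE KEYSPACE
    return out;
}
```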
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
setText() is meant to override the token value. However, when antlr3 is
asked to return the token value and needs to determine whether setText()
was called it checks if the string set by setText() is empty or not.
This basically means that it is impossible to override token value with
an empty string.
This problem is solved by using a string with a single byte set to -1 to
represent an empty string.
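The idea of the workaround can be sketched like this (hypothetical names, not the actual antlr3 API): a single byte with value -1 (0xff) stands in for the empty string, since that byte can never appear in a valid token value, so "setText() was called with an empty string" becomes distinguishable from "setText() was never called".

```cpp
#include <cassert>
#include <string>

// Sentinel: a single byte set to -1 (0xff) represents the empty string.
static const std::string empty_marker{'\xff'};

// Store an override for a token's text. An empty override is mapped to
// the sentinel, because an empty stored string would be read back as
// "no override was set".
std::string override_token_text(const std::string& text) {
    return text.empty() ? empty_marker : text;
}

// Read the token text back, translating the sentinel to "".
std::string read_token_text(const std::string& stored) {
    return stored == empty_marker ? std::string() : stored;
}
```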
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
"Implementation of:
* Token func
* Token restriction
* Token relation
* Token cql parsing
This series contains some hefty refactoring of the cql3::restrictions
interfaces, to handle the slightly different conditions of the token
relation. (See individual commit comment)"
Needed to reasonably cleanly implement token restrictions.
* Fixed constness for various virtuals.
* primary_key_restrictions now inherits abstract_restriction,
similar to Origin (for better or for worse), to avoid
duplicating attributes etc.
* primary_key_restrictions bounds & values renamed (so as not to
collide with restriction), and some logic pushed downwards
(building bounds), to avoid abstraction breakage in
statement_restrictions
* primary_key_restrictions merging is now potentially replacing
to make dispatching token/multicolumn restrictions simpler
At this point, users of the interface are futurized already, so we
just need to make sure they call the right function.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
The comparator used to sort the set should be the underlying type's comparator,
not the set's comparator.
Using the latter will eventually crash with out-of-bounds reads if we're lucky (I was),
or sort incorrectly if we are unlucky.
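An illustration of the bug (not the actual scylla code): sorting a set's elements must use the element type's comparator. Comparing the big-endian serialized bytes instead (a stand-in for "the set's comparator") orders signed integers incorrectly, because negative values serialize with a high leading byte.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

// Big-endian serialization of a signed 32-bit int.
static std::array<uint8_t, 4> serialize(int32_t v) {
    uint32_t u = static_cast<uint32_t>(v);
    return {static_cast<uint8_t>(u >> 24), static_cast<uint8_t>(u >> 16),
            static_cast<uint8_t>(u >> 8),  static_cast<uint8_t>(u)};
}

// Correct: sort with the element type's comparator.
std::vector<int32_t> sort_by_element_type(std::vector<int32_t> v) {
    std::sort(v.begin(), v.end());
    return v;
}

// Wrong: bytewise comparison of the serialized form puts -1 (0xffffffff)
// after 1 (0x00000001).
std::vector<int32_t> sort_by_serialized_bytes(std::vector<int32_t> v) {
    std::sort(v.begin(), v.end(), [](int32_t a, int32_t b) {
        return serialize(a) < serialize(b);
    });
    return v;
}
```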
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
This function is called at startup and makes sure that the cluster_name field
in system.local exists, and if it exists, that it matches the expected value.
To simplify things, I am leaving the sstable check out. For us, that would be
a map-reduce operation, and if the sstables are indeed corrupted, we would have
caught that already for sure.
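The shape of the check can be sketched as follows (hypothetical names; system.local is modelled as a map): record the name on first boot, and on later boots require that the stored value matches the configured one.

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of the startup check: if system.local has no cluster_name
// yet, record ours; if it has one, it must match the configured value,
// otherwise the node is joining the wrong cluster.
bool check_cluster_name(std::map<std::string, std::string>& system_local,
                        const std::string& expected) {
    auto it = system_local.find("cluster_name");
    if (it == system_local.end()) {
        system_local["cluster_name"] = expected;  // first boot: record it
        return true;
    }
    return it->second == expected;
}
```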
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
This patch provides a function to store the current schema version.
Currently, it is called every time the node boots, with a random schema.
That is incorrect and will be fixed shortly. But for now, cqlsh needs
to see a valid value here, so this will do.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Soon that will involve a query. The idiom make_ready_future<>().then()
is a bit unusual, to say the least, but it will soon be replaced by an
actual future.
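A toy stand-in for the idiom (not seastar's actual types): a ready future already holds its value, so then() just runs the continuation immediately. make_ready_future<>().then(f) is therefore a verbose way of calling f now, but it keeps the call site shaped for the day the value comes from a real asynchronous query.

```cpp
#include <cassert>

// Minimal toy future that is always ready; then() invokes the
// continuation immediately and wraps its result.
template <typename T>
struct ready_future {
    T value;
    template <typename F>
    auto then(F&& f) -> ready_future<decltype(f(value))> {
        return {f(value)};
    }
};

// Stand-in for make_ready_future<int>(v).
ready_future<int> make_ready_future_int(int v) { return {v}; }
```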
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
We will have to flush it from other places as well, so wrap the flushing code
into a method - especially because the current code has issues, and it will be
easier to deal with them if they are in a single place.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>