This adds an API that exposes the token-to-endpoint mapping and the tokens
that are associated with an address.
Both methods will be used by the API to expose the tokens.
Signed-off-by: Amnon Heiman <amnon@cloudius-systems.com>
This adds the following definitions to the storage_service swagger
definition file:
/storage_service/tokens
/storage_service/tokens/{endpoint}
/storage_service/commitlog
/storage_service/tokens_endpoint
Signed-off-by: Amnon Heiman <amnon@cloudius-systems.com>
This adds a helper function that converts a list of objects to a list of
strings. It will be used by the API implementation.
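A minimal sketch of such a helper (the name `container_to_str` and the use of `operator<<` are illustrative assumptions, not the actual API):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: stringify every element of a container using its
// stream-insertion operator, collecting the results into a vector<string>.
template <typename Container>
std::vector<std::string> container_to_str(const Container& items) {
    std::vector<std::string> res;
    res.reserve(items.size());
    for (const auto& item : items) {
        std::ostringstream os;
        os << item;  // works for any streamable element type
        res.push_back(os.str());
    }
    return res;
}
```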
Signed-off-by: Amnon Heiman <amnon@cloudius-systems.com>
Pekka says:
"This series adds comparison operators for query result sets that are
needed by schema merging code. The operators are implemented using a
newly added "data_value" type that encodes the type of the value."
Schema merging code needs to be able to compare two result sets to
determine if a keyspace, for example, has changed. Add comparison
operators for that.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
Add a data_value class that also encodes a value type. This makes it
easier to use than plain boost::any for comparing two values.
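To illustrate the idea (this is not the Scylla implementation, just a `std::variant` analogue): a tagged value knows its own type, so equality can compare type and payload together, whereas `boost::any` provides no comparison operators at all.

```cpp
#include <cstdint>
#include <string>
#include <variant>

// A data_value-like wrapper: the variant's active alternative encodes the
// value's type alongside the payload.
using data_value_sketch = std::variant<int64_t, double, std::string>;

bool same_value(const data_value_sketch& a, const data_value_sketch& b) {
    // variant's operator== first compares the held alternative (the "type"),
    // then the payloads; values of different types never compare equal.
    return a == b;
}
```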
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
There is no guarantee that the vector tmp will still be alive by the time
it's written via the output stream. Let's use do_with here.
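A plain-C++ analogue of the pattern (seastar::do_with moves an object into storage that lives until the returned future resolves; this sketch mirrors only the ownership idea, with illustrative names):

```cpp
#include <functional>
#include <utility>
#include <vector>

// Instead of capturing a reference to a stack-local vector (which would
// dangle by the time the deferred operation runs), move the vector into
// the deferred operation's own state -- the do_with pattern.
std::function<std::vector<char>()> deferred_write(std::vector<char> tmp) {
    // `tmp` is moved into the closure, so it stays alive until the
    // deferred operation actually executes.
    return [tmp = std::move(tmp)]() { return tmp; };
}
```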
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
We are using signed quantities to be compatible with the Java code.
However, the current code will eventually overflow.
To avoid that, let's cast the quantities to unsigned, and then back to signed.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
There is a tricky bug in our current filter implementation: is_present will
return a different value depending on the order in which keys are inserted.
The problem is that _bitmap.size() returns the maximum *currently*
used bit in the set. Therefore, when we hash a given key, the maximum bit it
sets in the bitmap is used as "max" in the expression
results[i] = (base % max);
If the next keys do not set any bit higher than this one, everything works as
expected, because the keys will always hash the same way.
However, if one of the following keys happens to set a bit higher than the
highest bit that was set when a certain key was inserted, that key will hash
using two different values of "max" in the aforementioned expression: one at
insertion, and another at the test.
We should be using max_size() to be sure that we will always have the same
hash results.
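The effect is easy to demonstrate in isolation (toy function, not the filter code): the bit index derived from a hash depends on "max", so if "max" grows between insertion and lookup, the same key maps to different bits.

```cpp
#include <cstddef>

// Toy model of the buggy expression results[i] = (base % max): the
// resulting bit index is only stable if "max" is a fixed capacity
// (max_size()), not the highest currently-set bit (_bitmap.size()),
// which grows as more keys are inserted.
std::size_t bit_for(std::size_t base, std::size_t max) {
    return base % max;
}
```

With a fixed capacity both calls use the same `max` and agree; with a growing `max` the insert-time and test-time indices diverge, which is exactly the order-dependence described above.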
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Noted by Avi while we were searching for a fix for an unrelated bug.
We haven't actually seen this trigger, but the current in-tree behavior
is wrong.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Now that we can write composites directly, we should no longer use bytes_view.
As a matter of fact, write(out, ... bytes_view(x)) is wrong, because our write
function can't handle rvalue references very well. Doing both of those things,
we can fix a tricky bug that showed up recently in our stress tests.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
This way, we don't need to rely on an external bytes_view conversion to write
this element. Note that because we don't have a writer for const byte&, we
cast away the qualifier.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
We create some composites for use in writing the column names, but we have
done nothing to guarantee they will stay alive.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Move the get() logic in fstream.cc into file::dma_read_bulk(),
fixing some issues:
- Fix the funny "alignment" calculation.
- Make sure the length is aligned too.
- Added new functions:
- dma_read(pos, len): returns a temporary_buffer with read data and
doesn't assume/require any alignment from either "pos"
or "len". Unlike dma_read_bulk() this function will
trim the resulting buffer to the requested size.
- dma_read_exactly(pos, len): does exactly what dma_read(pos, len) does but it
will also throw an exception if it fails to read
the required number of bytes (e.g. EOF is reached).
- Changed the names of parameters of dma_read(pos, buf, len) in order to emphasize
that they have to be aligned.
- Added a description to dma_read(pos, buf, len) to make it even clearer.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
From Glauber:
"Here are some fixes for the sstables write path. The code is made
simpler mainly by:
- taking advantage of the fact that we don't have to chain futures
that will do nothing more than write a value to the output stream.
The compiler will do that for us if we use the recursive template
interface,
- moving most of the composite logic to sstable/key.cc.
The last one has the interesting side effect of making the code correct.
A nice bonus."
The column_name can be composed of more than one element. This will
be the case for collections, which will embed part of the collection
data in the column name.
While doing this, we also take advantage of the infrastructure we have
added to composite. We are duplicating this logic unnecessarily, which
is prone to bugs. Those bugs are not just theoretical in this case: the
static prefix is not "followed by a null composite, i.e. three null bytes",
as the code implies.
The static prefix is created by appending one empty element for each element in
the clustering key, and will only be three null bytes in the case where the
clustering key has one element.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
The column_name can be composed of more than one element. This will
be the case for collections, which will embed part of the collection
data in the column name.
While doing this, we also take advantage of the infrastructure we have
added to composite. We are duplicating this logic unnecessarily, which
is prone to bugs. Those bugs are not just theoretical in this case: we
are not writing the final marker for empty composite keys.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>