The logger class constructor registers itself with the logger registry
in order to enable setting log levels dynamically. However, since
thread_local variables may be (and are) initialized at the point of first
use, no loggers are registered when the program starts up.
Fix by making loggers global, not thread_local. This requires that the
registry use locking, to prevent concurrent registrations from different
threads from corrupting the registry.
Note that technically global variables can also be initialized at the
point of first use, and there is no portable way for classes to self-register.
However this is the best we can do.
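A minimal sketch of the scheme (names such as logger_registry and register_logger are illustrative, not the actual API): global loggers register through a mutex-protected registry, which is itself a function-local static so it is constructed before the first registration regardless of global initialization order:

```cpp
#include <algorithm>
#include <cassert>
#include <mutex>
#include <string>
#include <vector>

// Illustrative registry: guarded by a mutex so that logger objects
// constructed on different threads can register concurrently.
class logger_registry {
    std::mutex _mutex;
    std::vector<std::string> _names;
public:
    void register_logger(const std::string& name) {
        std::lock_guard<std::mutex> lock(_mutex);
        _names.push_back(name);
    }
    bool is_registered(const std::string& name) {
        std::lock_guard<std::mutex> lock(_mutex);
        return std::find(_names.begin(), _names.end(), name) != _names.end();
    }
};

// Function-local static: constructed on first use, so it exists before
// the first logger registers, whatever the global initialization order.
inline logger_registry& registry() {
    static logger_registry r;
    return r;
}

struct logger {
    explicit logger(const std::string& name) { registry().register_logger(name); }
};

// A global (not thread_local) logger registers itself at program start-up.
logger net_logger("net");
```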
This class implements an add-only vector that ensures that the elements are
unique.
As long as items are only added, this class behaves essentially the same as
Java's LinkedHashSet.
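A sketch of the idea, with hypothetical member names (the actual sequenced_set interface may differ): duplicates are ignored and insertion order is preserved.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative add-only vector with unique elements, preserving insertion
// order -- like Java's LinkedHashSet as long as items are only added.
template <typename T>
class sequenced_set {
    std::vector<T> _items;
public:
    void push_back(const T& v) {
        // Ignore duplicates; keep the position of the first occurrence.
        if (std::find(_items.begin(), _items.end(), v) == _items.end()) {
            _items.push_back(v);
        }
    }
    const std::vector<T>& contents() const { return _items; }
    std::size_t size() const { return _items.size(); }
};
```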
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
New in v2:
- Moved sequenced_set to its own .hh file.
There was a possibility of initialization order problems between the static
member _classes and its use by another static object.
Defining _classes inside the static method through which it is accessed
ensures proper initialization (aka the "standard trick", quoting Avi ;)).
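The "standard trick" is a function-local static (a Meyers singleton): the object is constructed the first time the accessor runs, which is guaranteed to happen before any use through that accessor. A sketch with illustrative names, not the actual class:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Illustrative version of the trick: _classes lives inside the accessor,
// so it is constructed on the accessor's first call rather than at some
// unspecified point during static initialization.
struct class_registrator {
    static std::unordered_map<std::string, int>& classes() {
        static std::unordered_map<std::string, int> _classes; // first-use init
        return _classes;
    }
    static void register_class(const std::string& name, int id) {
        classes().emplace(name, id);
    }
};
```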
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
New in v2:
- storage_service: add a non-const version of get_token_metadata().
- get_broadcast_address(): check if net::get_messaging_service().local_is_initialized()
before calling net::get_local_messaging_service().listen_address().
- get_broadcast_address(): return an inet_address by value.
- system_keyspace: introduce db::system_keyspace::endpoint_dc_rack
- fb_utilities: use listen_address as broadcast_address for now
Without the check added in this patch, a std::bad_function_call is thrown
if the class doesn't exist, which is not very informative.
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
max_size, as currently used, will return -1, and while that fixes the previous
bug, it uncovers another.
We can go back to using size, as long as we make sure that all sites correctly
pick a size when creating the bitset. Aside from that, for compatibility with
the java code, the total number of bits has to be a power of two.
The best way to achieve those goals is to just set the size ourselves through
resize() in the filter constructor. num_blocks() * bits_per_block is guaranteed
to yield a power of two, as we need, and in case a caller did not explicitly
set a size, it will be set from this moment on.
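A rough sketch of the sizing idea, using an illustrative block-based bitmap rather than the actual filter code: by resizing in the constructor to a whole number of blocks, every caller ends up with the same, well-defined bit count:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative bitmap stored as 64-bit blocks. The constructor resizes to
// num_blocks * bits_per_block, so the bit count is fixed from construction
// onwards even when a caller asks for an "odd" number of bits.
struct filter_bitmap {
    static constexpr std::size_t bits_per_block = 64;
    std::vector<uint64_t> _blocks;
    std::size_t _bits;

    explicit filter_bitmap(std::size_t requested_bits)
        : _blocks((requested_bits + bits_per_block - 1) / bits_per_block)
        , _bits(_blocks.size() * bits_per_block) // rounded up to a block boundary
    { }

    std::size_t size() const { return _bits; }
};
```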
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
We are using signed quantities to be compatible with the code Java uses.
However, the current code will eventually overflow.
To avoid that, let's cast the quantities to unsigned, and then back to signed.
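A sketch of the cast, assuming 64-bit quantities (the helper name is made up): signed overflow is undefined behaviour in C++, while unsigned arithmetic wraps, so the round trip through uint64_t reproduces the wraparound Java's long exhibits:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper: do the arithmetic in uint64_t, where overflow wraps
// around by definition, then cast back to int64_t. This yields the same bit
// pattern as Java's (wrapping) signed long arithmetic, without invoking
// undefined behaviour in C++.
inline int64_t wrapping_add(int64_t a, int64_t b) {
    return static_cast<int64_t>(static_cast<uint64_t>(a) + static_cast<uint64_t>(b));
}
```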
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
There is a tricky bug in our current filter implementation: is_present will
return a different value depending on the order keys are inserted.
The problem here is that _bitmap.size() will return the maximum *currently*
used bit in the set. Therefore, when we hash a given key, the maximum bit it
sets in the bitmap is used as "max" in the expression
results[i] = (base % max);
If the next keys do not set any bit higher than this one, everything works as
expected, because the keys will always hash the same way.
However, if one of the following keys happens to set a bit higher than the
highest bit that was set at the time a certain key was inserted, that key will
hash using two different values of "max" in the aforementioned expression:
one at insertion, and another at lookup.
We should be using max_size() to be sure that we will always have the same
hash results.
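The effect can be seen with a toy version of the expression (illustrative code, not the filter itself): the same base maps to bit 3 while "max" is 7, but to bit 2 once "max" grows to 8:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Toy version of results[i] = (base % max): the bit a key maps to depends
// on "max", so if "max" tracks the currently used size of the bitmap, the
// same key can map to one bit at insertion and a different bit at lookup.
inline std::size_t bucket(uint64_t base, std::size_t max) {
    return base % max;
}
```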
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Java uses long, so we should use int64_t. Using uint64_t causes the wrong
indexes to be calculated, and therefore the filter to respond incorrectly
to a given key.
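A toy comparison (helper names made up): with a negative hash, which Java's signed long can produce, the two element types yield different indexes:

```cpp
#include <cassert>
#include <cstdint>

// Made-up helpers contrasting the two element types. Java's long is signed,
// and C++ int64_t matches its % semantics; reinterpreting a negative hash
// as uint64_t turns it into a huge positive value and the index diverges.
inline int64_t signed_index(int64_t hash, int64_t buckets) {
    return hash % buckets; // same result as Java's % on long
}
inline uint64_t unsigned_index(int64_t hash, uint64_t buckets) {
    return static_cast<uint64_t>(hash) % buckets;
}
```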
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
I actually wrote this code before I learned that we could just return a local
vector by copy without major hassles.
Never too late for a cleanup.
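For context, a sketch of why returning by value is fine (illustrative function, not the actual code): copy elision or the move constructor applies, so no deep copy happens:

```cpp
#include <cassert>
#include <vector>

// Illustrative function: returning a local vector by value triggers copy
// elision (RVO) or the move constructor, so the caller never pays for an
// element-wise deep copy.
std::vector<int> make_tokens() {
    std::vector<int> v;
    v.push_back(1);
    v.push_back(2);
    v.push_back(3);
    return v; // moved or elided, not deep-copied
}
```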
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
This comes from Origin, but the changes I had to do are quite large.
These files also represent many original files, but I found it inconvenient
to keep all the originals, simply because we would end up with far too many
files: one .cc and one .hh per filter, plus an enveloping .hh so users could
include it without knowing which filter to use.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
In Java it is possible to create an object from its class name at
runtime. Replication strategies are created this way (I presume the class
name comes from configuration somehow), so when I translated the code to
urchin I wrote a replication_strategy_registry class to map a class name to
a factory function. Now I see that this pattern is used in other places too
(snitch classes are created the same way), so instead of repeating the same
code for each class hierarchy that Origin creates by name, this patch tries
to introduce an infrastructure to do that easily.
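A hedged sketch of such an infrastructure (all names illustrative; the actual registry in the patch may differ): one generic name-to-factory map per base class, shared by every hierarchy that Origin instantiates by class name:

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <stdexcept>
#include <string>
#include <unordered_map>

// Illustrative generic registry: one name -> factory map per base class,
// so every hierarchy Origin instantiates by class name can share it.
template <typename Base>
class class_registry {
    using factory = std::function<std::unique_ptr<Base>()>;
    static std::unordered_map<std::string, factory>& map() {
        static std::unordered_map<std::string, factory> m; // first-use init
        return m;
    }
public:
    static void register_class(const std::string& name, factory f) {
        map().emplace(name, std::move(f));
    }
    static std::unique_ptr<Base> create(const std::string& name) {
        auto it = map().find(name);
        if (it == map().end()) {
            throw std::runtime_error("unknown class: " + name);
        }
        return it->second();
    }
};

// Example hierarchy (placeholder names, not the real classes).
struct replication_strategy {
    virtual ~replication_strategy() = default;
    virtual int replication_factor() const = 0;
};
struct simple_strategy final : replication_strategy {
    int replication_factor() const override { return 1; }
};
```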
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
This gives about a 30% increase in tps in:
build/release/tests/perf/perf_simple_query -c1 --query-single-key
This patch switches query result format from a structured one to a
serialized one. The problems with structured format are:
- high level of indirection (vector of vectors of vectors of blobs), which
is not CPU cache friendly
- high allocation rate due to fine-grained object structure
On replica side, the query results are probably going to be serialized
in the transport layer anyway, so this change only subtracts
work. There is no processing of the query results on replica other
than concatenation in case of range queries. If query results are
collected in serialized form from different cores, we can concatenate
them without copying by simply appending the fragments into the
packet. This optimization is not implemented yet.
On coordinator side, the query results would have to be parsed from
the transport layer buffers anyway, so this also doesn't add work, but
again saves allocations and copying. The CQL server doesn't need
complex data structures to process the results, it just goes over it
linearly consuming it. This patch provides views, iterators and
visitors for consuming query results in serialized form. Currently the
iterators assume that the buffer is contiguous but we could easily
relax this in future so that we can avoid linearization of data
received from seastar sockets.
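To illustrate the consumption model (this is a toy format, not the actual serialization layout): cells laid out back to back in one contiguous buffer, walked linearly by a view instead of being materialized as nested vectors:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Toy serialized format: each cell is [u32 length][bytes], back to back in
// one contiguous buffer. A view walks it linearly with a visitor; there are
// no nested vectors and no per-cell structure to allocate.
class serialized_cells_view {
    const std::vector<uint8_t>& _buf;
public:
    explicit serialized_cells_view(const std::vector<uint8_t>& buf) : _buf(buf) {}

    template <typename Visitor>
    void for_each_cell(Visitor&& visit) const {
        std::size_t pos = 0;
        while (pos + sizeof(uint32_t) <= _buf.size()) {
            uint32_t len;
            std::memcpy(&len, _buf.data() + pos, sizeof(len));
            pos += sizeof(len);
            visit(std::string(reinterpret_cast<const char*>(_buf.data() + pos), len));
            pos += len;
        }
    }
};

// Producer side: appending a cell is a length prefix plus the raw bytes.
inline void append_cell(std::vector<uint8_t>& buf, const std::string& cell) {
    uint32_t len = static_cast<uint32_t>(cell.size());
    const uint8_t* p = reinterpret_cast<const uint8_t*>(&len);
    buf.insert(buf.end(), p, p + sizeof(len));
    buf.insert(buf.end(), cell.begin(), cell.end());
}
```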
The coordinator side could be optimized even further for CQL queries
which do not need processing (e.g. select * from cf where ...): we
could make the replica send the query results in the format expected
by the CQL binary protocol client. In the typical case the coordinator
would then just pass the data to the client using zero-copy, prepending
a header.
We do need structure for prefetched rows (needed by list
manipulations), and this change adds query result post-processing
which converts serialized query result into a structured one, tailored
particularly for prefetched rows needs.
This change also introduces partition_slice options. In some queries
(maybe even in typical ones), we don't need to send partition or
clustering keys back to the client, because they are already specified
in the query request, and not queried for. The query results now hold
keys as optional elements. Meta-data like cell timestamp and ttl is
also now optional; it is only needed if the query has writetime() or
ttl() functions in it, which it typically won't.
UUID_gen::create_time_safe() does not synchronize across cores. The
comment says that it assumes it runs on a single core. This is no
longer true: we can run urchin on many cores, which easily leads to
UUID conflicts. Fix by adding a per-core unique number to the node
part of the UUID.
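A sketch of the fix (the cpu_id source and the mixing function are assumptions, not the actual UUID_gen code; the 48-bit node field comes from the UUID layout): fold a per-core number into the node field so two cores cannot produce the same node part:

```cpp
#include <cassert>
#include <cstdint>

// The node field of a time-based UUID is 48 bits wide. Folding a per-core
// unique number (a stand-in for the engine's cpu id) into its low bits
// makes node parts generated on different cores distinct.
inline uint64_t make_node(uint64_t base_node, uint64_t cpu_id) {
    return (base_node ^ cpu_id) & 0xffffffffffffULL; // keep 48 bits
}
```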
bytes and sstring are distinct types, since their internal buffers are of
different lengths, but bytes_view is an alias of sstring_view, which makes
it possible for objects of different types to leak across the abstraction
boundary.
Fix this by making bytes a basic_sstring<int8_t, ...> instead of using char.
int8_t is a 'signed char', which is a distinct type from char, so now
bytes_view is a distinct type from sstring_view.
uint8_t would have been an even better choice, but that diverges from Origin
and would have required an audit.
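The type distinction the fix relies on can be checked directly (standard C++, not Scylla-specific): char and signed char are distinct types, and int8_t is required to be a signed integer type, so it is never plain char:

```cpp
#include <cassert>
#include <cstdint>
#include <type_traits>

// char, signed char and unsigned char are three distinct types in C++ even
// when char happens to be signed, and int8_t must be a signed integer type
// (in practice signed char), never plain char. So a string type over int8_t
// cannot be confused with one over char by the type system.
static_assert(!std::is_same<char, signed char>::value,
              "char and signed char are distinct types");
static_assert(!std::is_same<int8_t, char>::value,
              "int8_t is not plain char");
```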
This patch converts (for a very small value of 'converts') some
replication-related classes. Only static topology is supported (it is
created in keyspace::create_replication_strategy()). During mutation
no replication is done, since the messaging service is not ready yet;
only endpoints are calculated.