To avoid spreading the futures all over, we will resort to a cache here,
the same way we did for the dc/rack information.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
"Related to 108
Does not fix the problem (fully at least), but at least:
* Throws exceptions instead of crashing
* Tries to back off slightly (allocate less) if possible
* Logs it
Also recycles segments to keep them from being fragmented by the memory system"
I'm not sure what happened. We have the same commented code in both .hh
and .cc. It is very confusing when enabling some of the code. Let's
remove the duplicated code in .cc and leave the copy in .hh only.
There is another field I missed, index_interval. It is not actually used in
2.1.8, which is why it is easy to miss, but it at least exists.
2.1.8 already has "min_index_interval" and "max_index_interval". If we see a
table that contains index_interval, that will become "min_index_interval".
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
They are multi-cell in Origin. This has nothing to do with 2.2 vs 2.1,
and it is just a plain bug.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
We are currently assigning non-partition keys the index 0. That is not what
happens in Origin:
cqlsh> create table ks.twoclust \
(ks int, cl1 int, cl2 int, r1 text, r2 text, primary key (ks, cl1, cl2));
cqlsh> select columnfamily_name, column_name, component_index \
from system.schema_columns where keyspace_name='ks';
columnfamily_name | column_name | component_index
-------------------+-------------+-----------------
twoclust | cl1 | 0
twoclust | cl2 | 1
twoclust | ks | null
twoclust | r1 | 2
twoclust | r2 | 2
This is happening because we use column.position(), which has no knowledge of
the clustering keys at all. We should instead ask the schema for it, which
will then do the right thing.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
We will invoke the schema builder from schema_tables.cc, and at that point, the
information about compact storage no longer exists anywhere. If we just call it
like this, it will be the same as calling it with compact_storage::no, which
will trigger a (wrong) recomputation for compact_storage::yes CFs.
The best way to solve that is to make the compact_storage parameter mandatory
every time we create a new table, instead of defaulting to no. This will
ensure that the correct dense and compound calculations are always done when
calling the builder with a parameter, and not done at all when we call it
without a parameter.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
This table exists in 2.1.8, and although it is dropped in 2.2, we
should at least list its schema.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
2.1.8 tables have 3 more fields in their system tables than 2.2 does.
Since we aim at 2.1 compatibility, we have to include them.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
They do not exist in 2.2, and don't serve a huge purpose. But we will
need them for compatibility with 2.1.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Let's leave their schema in here, since it's ready and we may need them in the
future. But since they are not present in 2.1.8, we will remove them from the
schema list.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
ASan does not like commit 05c23c7f73
("database: Add create_keyspace_on_all() helper"):
==8112==WARNING: AddressSanitizer failed to allocate 0x7f88b84fc690 bytes
==8112==AddressSanitizer's allocator is terminating the process instead of returning 0
==8112==If you don't like this behavior set allocator_may_return_null=1
==8112==Sanitizer CHECK failed: ../../../../libsanitizer/sanitizer_common/sanitizer_allocator.cc:147 ((0)) != (0) (0, 0)
I was not able to determine the source of the bug. Make ASan happy by
reverting the code movement and using the "cpu zero" trick we use for
table creation.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
The code in merge_tables() is a twisted maze of tricks that is hard to
restructure so that event notification can be done cleanly like with
keyspaces.
The problem there is that we need to run a bunch of database operations
for the merging that really need to happen on all the shards. To fix
the issue, let's cheat a little and simply run the CQL event
notification only on cpu zero.
This seems to fix cluster schema propagation issues in urchin-dtest. I
can now run TestSimpleCluster.simple_create_insert_select_test without
any additional delays inserted into the test code.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
We already have the all_tables() function converted and there's really no
use for compile() unless we switch to using CQL to create the schema
tables.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
There's nothing legacy about it so rename legacy_schema_tables to
schema_tables. The naming comes from a Cassandra 3.x development branch
which is not relevant for us in the near future.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
"This series implements initial support for CQL events. We introduce
migration_listener hook in migration manager as well as event notifier
in the CQL server that's built on top of it to send out the events via
CQL binary protocol. We also wire up create keyspace events to the
system so subscribed clients are notified when a new keyspace is
created.
There's still more work to be done to support all the events. That
requires some work to restructure existing code so it's better to merge
this initial series now and avoid future code conflicts."
Add a create_keyspace_on_all() helper which is needed for sending just
one event notification per created keyspace, not one per shard.
Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
We should pass inet_address.addr().
With this, tokens in system.peers are updated correctly.
cqlsh> SELECT tokens from system.peers;
tokens
------------------------------------------------------------------------
{'-5463187748725106974', '8051017138680641610', '8833112506891013468'}
(1 rows)
I got this error if I pass inet_address to it:
boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::bad_any_cast>
> (boost::bad_any_cast: failed conversion using boost::any_cast)
Several of Scylla's options (in db/config.hh) have the type "string_map"
(unordered_map<sstring, sstring>). The intent was that such options could take
multiple "key=value" settings. However, this never actually worked correctly;
it had two bugs:
1. Any option name with a space in it would fail, for example:
$ scylla --logger-log-level 'BatchLog Manager=info'
error: the argument ('BatchLog Manager=info') for option '--level'
is invalid
2. Trying to set multiple entries in the map did *not* work. For example,
$ scylla --logger-log-level a=info --logger-log-level b=info
error: option '--level' cannot be specified more than once
The problem is that boost::program_options does not actually understand
unordered_map<sstring, sstring>: It doesn't know it is a container (it
only recognizes std::vector) so it doesn't allow multiple options, and
it doesn't know how to convert a string to it, so it falls back to
boost::lexical_cast, which for strings cuts the input at the first space.
The solution is to write a custom "validate()" function overload, which
boost::program_options uses to validate (and consume) options into object
types it doesn't understand by default. Getting this function in the right
place in the code was a difficult exercise, but here it is, a working
implementation :-) And it fixes the above two bugs.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Ability to enable/disable specific sub-modules. These settings do not
affect system tables, which are always persisted, cached and written to
the commitlog:
enable-in-memory-data-store marks whether tables will be written/read to/from
disk
enable-commitlog marks whether tables will be written to the commitlog
enable-cache marks whether tables will be written/read to/from the cache
Please note that in-memory-data-store does not change the read path, so "old"
sstables are still read and the cache may be used to cache their data.
Signed-off-by: Shlomi Livne <shlomi@cloudius-systems.com>