Update the SCYLLA_REPO_URL and VERSION defaults to point to the latest
unstable 4.0 version. This will be used if someone runs "docker build"
locally. For the releases, the release pipelines will pass the stable
version repository URL and a specific release version.
Over the years, Scylla updated the sstable format from the KA format to
the LA format, and most recently to the MC format. On a mixed cluster -
as occurs during a rolling upgrade - we want all the nodes, even new ones,
to write sstables in the format preferred by the old version. The thinking
is that if the upgrade fails, and we want to downgrade all nodes back to
the older version, we don't want to lose data because we already have
too-new sstables.
So the current code starts by selecting the oldest format we ever had - KA -
and only switches this choice to LA or MC after we verify that all the
nodes in the cluster support these newer formats.
But before an agreement is reached on the new format, sstables may already
be created in the antique KA format. This is usually harmless - we can
read this format just fine. However, the KA format has a problem: it is
unable to represent table or keyspace names with the "-" character in them,
because this character is used to separate the keyspace and table names in
the file name. For CQL, a "-" is not allowed in keyspace or table names
anyway; but for Alternator, this character is allowed - and if a KA sstable
happens to be created by accident (before the LA or MC format is chosen),
it cannot be read again during boot, and Scylla cannot reboot.
The solution that this patch takes is to change Scylla's default sstable
format to LA (and, as before, if the entire cluster agrees, the newer MC
format will be used). From now on, new KA tables will never be written.
But we still fully support *reading* the KA format - this is important in
case some very old sstables never underwent compaction.
The old code had, confusingly, two places where the default KA format
was chosen. This patch fixes this so the new default (LA) is specified in
only one place.
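The choice described above can be sketched as follows (a minimal illustration, not Scylla's actual code; `choose_write_format` and `ka_data_filename` are hypothetical names). It also shows why KA writing is retired: the KA file-name layout uses "-" as a separator, so a "-" inside an Alternator table name is ambiguous to parse:

```cpp
#include <cassert>
#include <string>

// The single place where the write format is chosen: LA by default,
// MC once the whole cluster supports it, KA never chosen for writing.
enum class sstable_version { ka, la, mc };

sstable_version choose_write_format(bool cluster_supports_mc) {
    return cluster_supports_mc ? sstable_version::mc : sstable_version::la;
}

// KA file names embed keyspace and table separated by '-', so a '-'
// inside either name cannot be parsed back unambiguously.
std::string ka_data_filename(const std::string& ks, const std::string& cf) {
    return ks + "-" + cf + "-ka-1-Data.db";
}
```

Reading KA remains supported; only the write path drops it.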
Fixes #6071.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200324232607.4215-2-nyh@scylladb.com>
(cherry picked from commit 91aba40114)
Merged pull request https://github.com/scylladb/scylla/pull/6030 from
Piotr Dulikowski:
Adds CDC-related metrics.
The following counters are added, both for total and failed operations:
* Total number of CDC operations that did/did not perform splitting
* Total number of CDC operations that touched a particular mutation part
* Total number of preimage selects
Fixes #6002.
Tests: unit(dev, debug)
* 'cdc-metrics' of github.com:piodul/scylla:
storage_proxy: track CDC operations in LWT flow
storage_proxy: track CDC operations in logged batches
storage_proxy: track CDC operations in standard flow
storage_proxy: add cdc tracker hooks to write response handlers
storage_proxy: move "else if" remainder into "else" block
cdc: create an operation_result_tracker object
cdc: add an object for tracking progress of cdc mutations
cdc: count touched mutation parts in transformer::transform
cdc: track preimage selects in metrics
cdc: register metric counters
cdc: fix non-atomic updates in splitting
Compaction already adds the gc grace period to expiry times, so there is
no need to add it when creating the tombstones. Remove the redundant
additions from the code. The direct impact is really minor as this is
only used in tests, but it might confuse readers who are looking at how
tombstones are created across the codebase.
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200323120948.92104-1-bdenes@scylladb.com>
Adds a field to abstract_write_response_handler that points to the cdc
operation result tracker, and a function for registering the tracker in
the handlers that currently write to a CDC log table.
CDC metrics, apart from tracking "total" metrics for all performed CDC
operations, also track metrics for "failed" operations. Because the
result of the CDC operation depends on whether all CDC mutations were
written successfully by storage_proxy, checking for failure and
incrementing the appropriate counters is deferred until after all write
response handlers finish.
The `cdc::operation_result_tracker` object was created for that purpose.
It contains all the details needed to accurately update the metrics
based on what actually happened in the `augment_mutation_call` function,
and holds a flag which tells whether any of the write response handlers
failed. This object is meant to be referenced by the write response handlers
for the CDC mutations created by the same `augment_mutation_call`. After all
write response handlers are destroyed, the destructor of
`operation_result_tracker` will update appropriate metrics.
Actually creating and attaching this object to write response handlers
will be done in subsequent commits.
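The destructor-driven design can be sketched like this (a simplified toy, not the actual Scylla class; `cdc_counters` and the member names are hypothetical):

```cpp
#include <cassert>
#include <memory>

struct cdc_counters {
    long total = 0;
    long failed = 0;
};

// Shared by every write response handler of one augment_mutation_call;
// the destructor of the last shared reference bumps the metrics exactly once.
class operation_result_tracker {
    cdc_counters& _counters;
    bool _failed = false;
public:
    explicit operation_result_tracker(cdc_counters& c) : _counters(c) {}
    void mark_failed() { _failed = true; }
    ~operation_result_tracker() {
        ++_counters.total;
        if (_failed) {
            ++_counters.failed;
        }
    }
};

using tracker_ptr = std::shared_ptr<operation_result_tracker>;
```

Each handler would hold a `tracker_ptr`; if any handler observes a failure it calls `mark_failed()`, and the counters update only when the last handler releases its reference.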
Modifies transformer::transform so that it also returns a set of
flags indicating which parts of the mutation (e.g. rows, tombstones,
collections, etc.) were processed during transforming.
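A possible shape for such a flag set (hypothetical names; the real flags in the patch may differ):

```cpp
#include <cassert>
#include <cstdint>

// One bit per mutation part that the transform may touch.
enum class mutation_part : uint8_t {
    none             = 0,
    clustering_row   = 1 << 0,
    static_row       = 1 << 1,
    range_tombstone  = 1 << 2,
    partition_delete = 1 << 3,
};

inline mutation_part operator|(mutation_part a, mutation_part b) {
    return mutation_part(uint8_t(a) | uint8_t(b));
}

inline bool touched(mutation_part set, mutation_part p) {
    return uint8_t(set) & uint8_t(p);
}
```

The caller can then bump one counter per set bit.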
This patch defines a CDC metrics object and registers all of its
counters.
storage_proxy is chosen as the owner of the metrics object: in
subsequent commits it will become possible for CDC metrics to be updated
after a write operation ends, and because cdc_service has a shorter
lifetime than storage_proxy, we would risk a use-after-free if we placed
this object inside cdc_service.
This patch fixes a bug in mutation splitting logic of CDC. In the part
that handles updates of non-atomic clustering columns, the column
definition was fetched from a static column of the same id instead of
the actual definition of the clustering column. It could cause the value
to be written to a wrong column.
Tests: unit(dev)
This implements support for triggering major compactions through the REST
API. Please note that "split_output" is not supported and Glauber Costa
confirmed that this is fine:
"We don't support splits, nor do I think we should."
Signed-off-by: Ivan Prisyazhnyy <ivan@scylladb.com>
"
Make sure all headers compile on their own, without requiring any
additional includes externally.
Even though this requirement is not documented in our coding guides, it
is still quasi-enforced and we semi-regularly get and merge patches
adding missing includes to headers.
This patch-set fixes all headers and adds a `{mode}-headers` target that
can be used to verify each header. This target should be built by
promotion to ensure no new non-conforming code sneaks in.
Individual headers can be verified using the
`build/dev/path/to/header.hh.o` target, that is generated for every
header.
The majority of the headers were just missing `seastarx.hh`. I think we
should just include this via a compiler flag to remove the noise from
our code (in a followup).
"
* 'compiling-headers/v2' of https://github.com/denesb/scylla:
configure.py: add {mode}-headers phony target
treewide: add missing headers and/or forward declarations
test/boost/sstable_test.hh: move generic stuff to test/lib/sstable_utils.hh
sstables: size_tiered_backlog_tracker: move methods out-of-line
sstables: date_tiered_compaction_strategy.hh: move methods out-of-line
* seastar 3c498abcab...92c488706c (14):
> dpdk: restore including reactor.hh
> tests: distributed_test: add missing #include <mutex>
> reactor: un-static-ify make_pollfn()
> merge: Reduce inclusions of reactor.hh
A few #includes added to compensate for this
> sharded: delete move constructor
> future: Avoid a move constructor call
> future: Erase types a bit more in then_wrapped
> memory: Drop a never nullopt optional
> semaphore: specify get_units and with_semaphore as noexcept
> spinlock.hh: Add include for <cassert> header
> dpdk: Avoid a variable sized array
> future: Add an explicit promise member to continuation
> net: remove smart pointer wrappers around pollable_fd
> Merge "cleanup reactor file functions" from Benny
This patch fixes a bug in mutation splitting logic of CDC. In the part
that handles updates of non-atomic clustering columns, the schema for
serializing that column was looked up incorrectly in the table schema -
instead of a `regular_column`, a `static_column` was looked up.
Due to how the `column_at` function works, a correct schema was always
returned if the table had no static columns. Therefore, in order for
this bug to manifest, a table with a static column and a regular column
with non-atomic collection was needed.
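A toy model of this lookup behavior (not Scylla's actual `schema` class) makes the masking effect concrete: with a flat, kind-ordered column array, the static and regular base offsets coincide when there are no static columns, so the wrong kind still finds the right column:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Columns of all kinds in one flat vector, ordered by kind;
// column_at(kind, id) computes base-offset(kind) + id.
struct toy_schema {
    size_t n_partition, n_clustering, n_static;
    std::vector<std::string> flat;  // partition ++ clustering ++ static ++ regular

    size_t base(int kind) const {   // 0=partition, 1=clustering, 2=static, 3=regular
        switch (kind) {
        case 0:  return 0;
        case 1:  return n_partition;
        case 2:  return n_partition + n_clustering;
        default: return n_partition + n_clustering + n_static;
        }
    }
    const std::string& column_at(int kind, size_t id) const {
        return flat.at(base(kind) + id);
    }
};
```

With zero static columns, `column_at(static, id)` and `column_at(regular, id)` share a base, hiding the bug; one static column is enough to make them diverge.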
Closes #3295
The error_injection class allows injecting custom handlers into the normal
control flow at pre-determined injection points.
This is especially useful in various testing scenarios:
* Throwing an exception at some rare and extreme corner-cases
* Injecting a delay to test for timeouts to be handled correctly
* More advanced uses with custom lambda as an injection handler
Injection points are defined by `inject` calls.
Enabling and disabling injections is done via the corresponding
`enable` and `disable` calls.
A REST frontend API is provided for convenience.
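A minimal sketch of such a class (a hypothetical simplification; the real utility differs in details such as compile-time gating and per-shard instances):

```cpp
#include <cassert>
#include <functional>
#include <set>
#include <string>

// Injection points fire only when explicitly enabled by name;
// the handler can throw, sleep, or run arbitrary test logic.
class error_injection {
    std::set<std::string> _enabled;
public:
    void enable(const std::string& name)  { _enabled.insert(name); }
    void disable(const std::string& name) { _enabled.erase(name); }
    void disable_all()                    { _enabled.clear(); }

    // Called at the pre-determined points in normal control flow.
    void inject(const std::string& name, std::function<void()> handler) {
        if (_enabled.count(name)) {
            handler();
        }
    }
};
```

Production code pays only a set lookup (or nothing at all when compiled out) at each injection point.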
Branch URL: https://github.com/alecco/scylla/tree/as_error_injection
Tests: unit {{dev}}, unit {{debug}}
* 'as_error_injection' of github.com:alecco/scylla:
api: add error injection to REST API
utils: add error injection
sstable_test.hh started as a collection of utilities shared between the
various `_sstable_test.cc` files. Predictably, other tests started using
it as well, among them some that are not boost unit tests. This poses a
problem: if we add the missing boost/test/unit_test.hpp include to
sstable_test.hh, these tests will suddenly have missing symbols from
boost::test. To avoid linking boost::test into all these users, extract
the more widely used utilities into sstable_utils.hh.
We now have a utils file for SSTables. This is potentially useful for
other tests.
As a matter of fact, this function is currently duplicated in the
resharding test. And to add insult to injury, the version in the
resharding test has the shard and number-of-tokens parameters flipped,
which, although extremely confusing, is the predictable outcome of
such repetition.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
The following error was seen in
materialized_views_test.py:TestMaterializedViews.decommission_node_during_mv_insert_4_nodes_test
INFO  [shard 0] repair - repair id 3 to sync data for keyspace=ks, status=started
repair/repair.cc:662:36: runtime error: member call on null pointer of type 'const struct schema'
Aborting on shard 0.
The problem is that in the test a keyspace was created without creating any
table. Since db19a76b1f (selective_token_range_sharder: stop calling
global_partitioner()), in get_partitioner_for_tables we access a nullptr
when no table is present:
    schema_ptr last_s;
    for (auto t : tables) {
        // set last_s
    }
    last_s->get_partitioner();
To fix:
1) Skip the repair in sync_data_using_repair if there is no table in the keyspace
2) Throw if no schema_ptr is found in get_partitioner_for_tables. Be defensive.
After:
INFO [shard 0] repair - decommission_with_repair: started with keyspace=ks, leaving_node=127.0.0.2, nr_ranges=744
INFO [shard 0] repair - repair id 3 to sync data for keyspace=ks, status=started
WARN [shard 0] repair - repair id 3 to sync data for keyspace=ks, no table in this keyspace
INFO [shard 0] repair - repair id 3 completed successfully
INFO [shard 0] repair - repair id 3 to sync data for keyspace=ks, status=succeeded
Tests: materialized_views_test.py:TestMaterializedViews.decommission_node_during_mv_insert_4_nodes_test
Fixes: #6022
repair: Ignore keyspace that is removed in sync_data_using_repair
When a keyspace is removed during node operations, we should not fail
the whole operation. Ignore the keyspace that is removed.
Fixes #5942
* asias-repair_fix_5942:
repair: Stop the nodes that have run repair_row_level_start
repair: Ignore keyspace that is removed in sync_data_using_repair
It seems that adduser on Red Hat variants and Debian variants is incompatible,
and there is no addgroup on Red Hat variants.
Since the adduser invocation in install.sh was written for Debian variants,
it does not work on Red Hat-compatible systems.
To fix this we need to use 'useradd' / 'groupadd' instead.
Fixes #6018
"
In debug mode some tests take a very long time to finish; those tests
are better started first. This series achieves that by marking such
long tests in their suite.yaml files.
Tests: unit(dev)
"
* 'br-split-unit-tests-sorting-2' of https://github.com/xemul/scylla:
test.py: Mark some tests as "run_first"
test.py: Generate list with short names
test.py: Rename "long" to "skip_in_debug_mode"
Primary key and clustering key columns should not have a corresponding
"cdc$deleted_<name>" column in the cdc log table, because it does not
make sense to delete such a column from a row.
Fixes: #6049
Tests: unit(dev)
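The rule can be sketched as follows (hypothetical helper names, not the patch's actual code):

```cpp
#include <cassert>
#include <string>

enum class column_kind { partition_key, clustering_key, static_column, regular_column };

// Only non-key columns get a "cdc$deleted_<name>" companion in the log table;
// key columns cannot be deleted from a row, so no companion makes sense.
bool has_deleted_companion(column_kind kind) {
    return kind == column_kind::static_column || kind == column_kind::regular_column;
}

std::string deleted_column_name(const std::string& name) {
    return "cdc$deleted_" + name;
}
```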
This reverts commit 0b34d88957. According
to Rafael Avila de Espindola:
"I have bisected the recent failures [in commitlog_test] on next to this
patch."
This reverts commit 458ef4bb06. According
to Glauber Costa:
"It may give us the illusion that fixes something for a particular case
but this fix is wrong.
I am trying to help Raphael figure out why the backlog is wrong but
this patch is not the answer."
A simple REST API for error injection is implemented.
The API allows the following operations:
* injecting an error for a given injection name
* listing injections
* disabling an injection
* disabling all injections
Currently the API enables/disables on all shards.
Closes #3295
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
Error injection class is implemented in order to allow injecting
various errors (exceptions, stalls, etc.) in code for testing
purposes.
Error injection is enabled via compile flag
SCYLLA_ENABLE_ERROR_INJECTION
TODO: manage shard instances
Enable error injection in debug/dev/sanitize modes.
Unit tests for error injection class.
Closes #3295
Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
Running the Alternator tests is easy after you manually run Scylla, but
sometimes it's convenient to have a script which just does everything
automatically: start Scylla in a temporary directory, set it up properly
for the tests (especially the authentication), run all the tests, and remove
the temporary directory. This is what this alternator-tests/run script does.
This script can be run by Jenkins, for example, to check all the Alternator
tests. The script assumes some things (including cqlsh, pytest and the boto3
library) are already installed, and that Scylla has been compiled - by
default it takes the latest built build/*/scylla, but this can be overridden
by a command like
SCYLLA=build/release/scylla alternator-test/run
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200311091918.16170-1-nyh@scylladb.com>
Merged patch series from Piotr Sarna:
This series hooks alternator to admission control, similarly to how
CQL server uses it. The estimated memory consumption is set to 2x
raw JSON request, since that seems to be the upper limit of
how much more memory rapidjson allocates during parsing.
Note, that since Seastar HTTP currently reads the whole contents
upfront, there's no easy way to apply admission control before reading
the request - that would involve some changes to our HTTP API.
Note 2: currently, admission control in CQL does not properly pass
memory consumption information for requests that are bounced
to another shard - that would require either transferring semaphore units
between shards or keeping a foreign pointer to the original units.
As a result, alternator also does not pass correct admission control
info between shards, and all places in code which do that are marked
with clear FIXMEs.
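The units-based admission idea can be sketched like this (a toy model, not Seastar's semaphore; `memory_limiter` and its members are hypothetical). The 2x factor mirrors the rapidjson estimate mentioned above:

```cpp
#include <cstddef>
#include <optional>
#include <string>

// Reserve memory units before executing a request; RAII returns them.
class memory_limiter {
    size_t _available;
public:
    explicit memory_limiter(size_t total) : _available(total) {}

    class units {
        memory_limiter* _l;
        size_t _n;
    public:
        units(memory_limiter* l, size_t n) : _l(l), _n(n) {}
        units(units&& o) : _l(o._l), _n(o._n) { o._l = nullptr; }
        ~units() { if (_l) _l->_available += _n; }  // give units back
    };

    // Estimated cost: 2x the raw JSON, the observed rapidjson upper bound.
    std::optional<units> try_admit(const std::string& raw_json) {
        size_t cost = 2 * raw_json.size();
        if (cost > _available) {
            return std::nullopt;       // bounce or wait for capacity
        }
        _available -= cost;
        return units(this, cost);
    }
    size_t available() const { return _available; }
};
```

A real implementation would queue waiters instead of bouncing, which is exactly what a Seastar semaphore provides.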
Fixes #5029
Piotr Sarna (5):
storage_service: add memory limiter semaphore getter
alternator: add service permit to callbacks
alternator: add memory limiter to alternator server
alternator: add addmission control stats entry
alternator: hook admission control to alternator server
alternator/executor.cc | 113 ++++++++++++++++++++++--------------
alternator/executor.hh | 32 +++++-----
alternator/rmw_operation.hh | 1 +
alternator/server.cc | 83 +++++++++++++++-----------
alternator/server.hh | 8 ++-
alternator/stats.cc | 2 +
alternator/stats.hh | 1 +
main.cc | 3 +-
service/storage_service.hh | 4 ++
9 files changed, 149 insertions(+), 98 deletions(-)
Before this patch, when db/view/view.hh was modified, 89 source files had to
be recompiled. After this patch, this number is down to 5.
Most of the irrelevant source files got view.hh by including database.hh,
which included view.hh just for the definition of statistics. So in this
patch we split the view statistics to a separate header file, view_stats.hh,
and database.hh only includes that. A few source files which included
only database.hh and also needed view.hh (for materialized-view related
functions) now need to include view.hh explicitly.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200319121031.540-1-nyh@scylladb.com>
The `result_callback` was a callback returned by `augment_mutation_call`
that was supposed to be used in the CDC postimage implementation.
Because CDC postimage was implemented without using this callback, and
currently a no-op function is always returned, this callback can safely
be removed.
Those tests take a long time to finish, so it makes sense to start
them earlier than the others.
The provided list of long tests consists of those running more
than 10 minutes in debug mode.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
It returns a future, so converting an exception to an exceptional
future simplifies error handling in the caller.
Without this, code like the one in
standard_role_manager::create_metadata_tables_if_missing has
surprising behavior:
    return when_all_succeed(
        create_metadata_table_if_missing(...),
        create_metadata_table_if_missing(...));
since it might not wait for both futures. We could use the lambda
version of when_all_succeed, but changing
create_metadata_table_if_missing seems a nice API improvement.
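The pitfall can be illustrated with a toy future type (plain C++, not Seastar; all names here are hypothetical):

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <string>

// Minimal lazy future: the work runs only when someone waits on it,
// and an error can be delivered through the future instead of thrown.
struct toy_future {
    std::function<int()> work;
    std::exception_ptr error;

    int get() {
        if (error) std::rethrow_exception(error);
        return work();
    }
};

static bool first_ran = false;

toy_future first() {
    return {[] { first_ran = true; return 1; }, nullptr};
}

// Bad: throws synchronously instead of returning a future.
toy_future second_throws() {
    throw std::runtime_error("boom");
}

// Good: delivers the same error via an exceptional future.
toy_future second_exceptional() {
    return {nullptr, std::make_exception_ptr(std::runtime_error("boom"))};
}

// Stand-in for when_all_succeed: evaluate both operands, wait on both.
std::string wait_both(toy_future (*second)()) {
    try {
        auto a = first();
        auto b = second();   // second_throws() escapes right here
        a.get();             // only reached if second() returned a future
        b.get();
        return "ok";
    } catch (const std::exception& e) {
        return e.what();
    }
}
```

With the throwing variant the first future is abandoned unwaited; with the exceptional-future variant both operands are waited on before the error surfaces, which is the behavior the caller expects.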
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200317002051.117832-4-espindola@scylladb.com>
View updates sent as part of the view building process should never
be ignored, but fd49fd7 introduced a bug which may cause exactly that:
the updates are mistakenly sent to the background, so the view builder
will not receive negative feedback if an update fails and will
therefore not retry it. Consequently, view building may report
that it "finished" building a view while some of the updates were
lost. A simple fix is to restore the previous behaviour - all updates
triggered by view building are now waited for.
Fixes #6038
Tests: unit(dev),
dtest: interrupt_build_process_with_resharding_low_to_half_test