scylladb/tests/perf_row_cache_update.cc
Tomasz Grabiec d22fdf4261 row_cache: Improve safety of cache updates
Cache imposes requirements on how updates to the on-disk mutation source
are made:
  1) each change to the on-disk mutation source must be followed
     by cache synchronization reflecting that change
  2) the two must be serialized with other synchronizations
  3) the combined operation must have strong failure guarantees
     (atomicity)

Because of that, the sstable list update and cache synchronization must
be done under a lock, and cache synchronization must not be allowed to
fail.

Normally cache synchronization achieves the no-failure guarantee by
wiping the cache (which is noexcept) when a failure is detected. There
are, however, some setup steps which cannot be skipped, e.g. taking a
lock followed by switching the cache to use the new snapshot; those
must truly never fail. The lock inside cache synchronizers is
redundant, since the user needs to take it anyway around the combined
operation.
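
To illustrate, before this patch each call site had to arrange all of
the above itself, roughly like this (a minimal sketch; add_sstable(),
add_to_sstable_list() and the exact invalidate() shape are hypothetical
stand-ins, not the actual identifiers):

  // Pre-patch pattern: the caller serializes the combined update and
  // keeps the two sources in sync by hand.
  future<> add_sstable(sstables::shared_sstable sst) {
      return with_semaphore(_cache_update_sem, 1, [this, sst] {
          add_to_sstable_list(sst);    // change the on-disk mutation source...
          return _cache.invalidate();  // ...then synchronize the cache
      });
  }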

In order to make strong exception guarantees easier to ensure, and to
make the cache interface easier to use correctly, this patch moves
control of the combined update into the cache. This is done by having
cache::update() et al. accept a callback (external_updater) which is
supposed to perform the modification of the underlying mutation source
when invoked.
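
With the new interface the call site shrinks to a single call into the
cache (a sketch; add_to_sstable_list() is again a hypothetical helper,
while the update(external_updater, memtable&) shape matches the call
in the test below):

  // Post-patch pattern: the cache invokes the external updater itself,
  // under its own internal serialization, with strong failure guarantees.
  future<> add_sstable(sstables::shared_sstable sst, memtable& mt) {
      return _cache.update([this, sst] {
          add_to_sstable_list(sst); // modify the underlying mutation source
      }, mt);
  }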

This is in line with the layering. Cache is layered on top of the
on-disk mutation source (it wraps it) and reading has to go through
cache. After the patch, modification also goes through cache. This way
more of cache's requirements can be confined to its implementation.

The failure semantics of update() and the other synchronizers needed
to change to provide strong exception guarantees. Now if a call fails,
it means the update was not performed at all, neither on the cache nor
on the underlying mutation source.
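
In other words, a failure is now safe to propagate (sketch):

  try {
      cache.update(updater, *mt).get();
  } catch (...) {
      // Neither the cache nor the underlying mutation source was
      // modified; the whole combined update may simply be retried.
  }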

The database::_cache_update_sem goes away; serialization is now done
internally by the cache.

The external_updater needs to have strong exception guarantees. This
requirement is not new; it is, however, currently violated in some
places. This patch marks those callbacks as noexcept and leaves a
FIXME. Those places should be fixed, but that is outside the scope of
this patch. Aborting is still better than corrupting the state.
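
The marking amounts to declaring the callback noexcept, so a throw
terminates the process instead of leaving the two sources out of sync
(sketch; the body stands in for one of the offending callbacks):

  cache.update([this] () noexcept {
      // FIXME: needs strong exception guarantees; may currently throw,
      // which under noexcept aborts; still better than corruption.
      rebuild_sstable_list();
  }, *mt);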

Fixes #2754.

Also fixes the following test failure:

  tests/row_cache_test.cc(949): fatal error: in "test_update_failure": critical check it->second.equal(*s, mopt->partition()) has failed

which started to trigger after commit 318423d50b. Thread stack
allocation may fail, in which case we did not do the necessary
invalidation.
2017-09-04 10:04:29 +02:00


/*
* Copyright (C) 2015 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include <core/distributed.hh>
#include <core/app-template.hh>
#include <core/sstring.hh>
#include <core/thread.hh>
#include "utils/managed_bytes.hh"
#include "utils/logalloc.hh"
#include "row_cache.hh"
#include "log.hh"
#include "schema_builder.hh"
#include "memtable.hh"
#include "tests/perf/perf.hh"
#include "disk-error-handler.hh"
thread_local disk_error_signal_type commit_error;
thread_local disk_error_signal_type general_disk_error;
static
dht::decorated_key new_key(schema_ptr s) {
    static thread_local int next = 0;
    return dht::global_partitioner().decorate_key(*s,
        partition_key::from_single_value(*s, to_bytes(sprint("key%d", next++))));
}

static
clustering_key new_ckey(schema_ptr s) {
    static thread_local int next = 0;
    return clustering_key::from_single_value(*s, to_bytes(sprint("ckey%d", next++)));
}
int main(int argc, char** argv) {
    namespace bpo = boost::program_options;
    app_template app;
    app.add_options()
        ("debug", "enable debug logging")
        ("partitions", bpo::value<unsigned>()->default_value(128), "number of partitions per memtable")
        ("cell-size", bpo::value<unsigned>()->default_value(1024), "cell size in bytes")
        ("rows", bpo::value<unsigned>()->default_value(128), "row count per partition")
        ("no-cache", "do not update cache");

    return app.run(argc, argv, [&app] {
        if (app.configuration().count("debug")) {
            logging::logger_registry().set_all_loggers_level(logging::log_level::debug);
        }

        return seastar::async([&] {
            auto s = schema_builder("ks", "cf")
                .with_column("pk", bytes_type, column_kind::partition_key)
                .with_column("ck", bytes_type, column_kind::clustering_key)
                .with_column("v", bytes_type, column_kind::regular_column)
                .build();

            cache_tracker tracker;
            row_cache cache(s, make_empty_snapshot_source(), tracker);

            size_t partitions = app.configuration()["partitions"].as<unsigned>();
            size_t cell_size = app.configuration()["cell-size"].as<unsigned>();
            size_t row_count = app.configuration()["rows"].as<unsigned>();
            bool update_cache = !app.configuration().count("no-cache");

            // Pre-generate the mutations so that only memtable population
            // and the cache update are measured below.
            std::vector<mutation> mutations;
            for (unsigned i = 0; i < partitions; ++i) {
                mutation m(new_key(s), s);
                for (size_t j = 0; j < row_count; j++) {
                    m.set_clustered_cell(new_ckey(s), "v", data_value(bytes(bytes::initialized_later(), cell_size)), 2);
                }
                mutations.emplace_back(std::move(m));
            }

            time_it([&] {
                auto mt = make_lw_shared<memtable>(s);
                for (auto&& m : mutations) {
                    mt->apply(m);
                }
                if (update_cache) {
                    // Merge the memtable into the cache; the no-op lambda is
                    // the external_updater (no underlying source to modify).
                    cache.update([] {}, *mt).get();
                }
            }, 5, 1);
        });
    });
}