Compare commits

..

12 Commits

Author SHA1 Message Date
Pekka Enberg
e10588e37a streaming/stream_session: Don't stop stream manager
We cannot stop the stream manager because it's accessible via the API
server during shutdown, for example, which can cause a SIGSEGV.

Spotted by ASan.
Message-Id: <1453130811-22540-1-git-send-email-penberg@scylladb.com>
2016-01-20 10:29:34 +02:00
Pekka Enberg
bb0aeb9bb2 api/messaging_service: Fix heap-buffer-overflows in set_messaging_service()
Fix various issues in set_messaging_service() that caused
heap-buffer-overflows when JMX proxy connects to Scylla API:

  - Off-by-one error in 'num_verb' definition

  - Call to initializer list std::vector constructor variant that caused
    the vector to be two elements long.

  - Missing verb definitions from the Swagger definition that caused
    response vector to be too small.

Spotted by ASan.
Message-Id: <1453125439-16703-1-git-send-email-penberg@scylladb.com>
2016-01-20 10:29:27 +02:00
Takuya ASADA
d2c97d9620 dist: use our own CentOS7 Base image
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453241256-23338-4-git-send-email-syuu@scylladb.com>
2016-01-20 09:41:39 +02:00
Takuya ASADA
ddbe20f65c dist: stop ntpd before running ntpdate
The new CentOS Base Image runs ntpd by default, so shut it down before running ntpdate.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453241256-23338-3-git-send-email-syuu@scylladb.com>
2016-01-20 09:41:33 +02:00
Takuya ASADA
88bf12aa0b dist: disable SELinux only when it enabled
The new CentOS7 Base Image disables SELinux by default, and running 'setenforce 0' on the image causes an error, so we wouldn't be able to build the AMI.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453241256-23338-2-git-send-email-syuu@scylladb.com>
2016-01-20 09:41:29 +02:00
Takuya ASADA
87fdf2ee0d dist: extend coredump size limit
16GB is not enough for some larger machines, so extend it.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453115792-21989-2-git-send-email-syuu@scylladb.com>
2016-01-18 13:38:56 +02:00
Takuya ASADA
d6992189ed dist: preserve environment variable when running scylla_prepare on sudo
sysconfig parameters are passed via environment variables, but sudo resets them by default.
We need to preserve them across sudo.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453115792-21989-1-git-send-email-syuu@scylladb.com>
2016-01-18 13:23:55 +02:00
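Two common ways to carry environment variables across sudo are invoking it with `-E` (preserve environment) or whitelisting variables in sudoers; a hedged config sketch, where the variable names are illustrative rather than the ones scylla_prepare actually uses:

```
# /etc/sudoers.d/scylla (illustrative): sudo scrubs the caller's
# environment by default (env_reset); env_keep whitelists variables
# that should survive into the command run under sudo.
Defaults env_keep += "SCYLLA_ARGS NETWORK_MODE"
```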
Tomasz Grabiec
5bf1afa059 config: Set default logging level to info
Commit d7b403db1f changed the default in
logging::logger. It affected tests but not the scylla binary, where the
level is overridden in main.cc.
Message-Id: <1452777008-21708-1-git-send-email-tgrabiec@scylladb.com>
2016-01-14 15:12:28 +02:00
Tomasz Grabiec
b013ed6357 cql3: Disable ALTER TABLE unless experimental features are on 2016-01-14 14:32:15 +02:00
Tomasz Grabiec
d4d0dd9cda tests: cql_test_env: Enable experimental features 2016-01-14 14:32:10 +02:00
Tomasz Grabiec
5865a43400 config: Add 'experimental' switch 2016-01-14 14:32:05 +02:00
Pekka Enberg
b81292d5d2 release: prepare for 0.16 2016-01-14 13:21:50 +02:00
214 changed files with 4098 additions and 5145 deletions

IDL.md

@@ -1,103 +0,0 @@
#IDL definition
The schema syntax we use is similar to C++.
Use a class or struct that mirrors the object you need the serializer for.
Use namespaces when applicable.
##keywords
* class/struct - a class or a struct, as in C++
a class/struct can carry a final or stub marker
* namespace - has the same meaning as in C++
* enum class - has the same meaning as in C++
* final modifier for class - when a class is marked as final it will not contain a size parameter. Note that a final class cannot be extended by future versions, so use with care
* stub class - when a class is marked as stub, no code will be generated for it and it is only there as documentation.
* version attributes - marking a field with [[version id]] indicates that the field is available from a specific version
* template - a template class definition, as in C++
##Syntax
###Namespace
```
namespace ns_name { namespace-body }
```
* ns_name: either a previously unused identifier, in which case this is an original-namespace-definition, or the name of an existing namespace, in which case this is an extension-namespace-definition
* namespace-body: possibly empty sequence of declarations of any kind (including class and struct definitions as well as nested namespaces)
###class/struct
```
class-key class-name final(optional) stub(optional) { member-specification } ;(optional)
```
* class-key: one of class or struct.
* class-name: the name of the class being defined, optionally followed by the keyword final or the keyword stub.
* final: when a class is marked as final, it cannot be extended and there is no need to serialize its size; use with care.
* stub: when a class is marked as stub, no code will be generated for it; it is added for documentation only.
* member-specification: a list of member declarations, as public members or getter functions; see class member below.
* For compatibility with C++, a class definition may be followed by a semicolon.
###enum
`enum-key identifier enum-base { enumerator-list(optional) }`
* enum-key: only enum class is supported
* identifier: the name of the enumeration that's being declared.
* enum-base: colon (:), followed by a type-specifier-seq that names an integral type (see the C++ standard for the full list of all possible integral types).
* enumerator-list: comma-separated list of enumerator definitions, each of which is either simply an identifier, which becomes the name of the enumerator, or an identifier with an initializer: identifier = integral value.
Note that although C++ allows constexpr as an initializer value, it makes the documentation less readable and is therefore not permitted.
###class member
`type member-access attributes(optional) default-value(optional);`
* type: any valid C++ type, following C++ notation. Note that there must be a serializer for the type, but declaration order is not mandatory.
* member-access: the way the member is accessed. If the member is public, this can be the name itself; if not, it should be a getter function, followed by parentheses. Note that getters can (and probably should) be const methods.
* attributes: attributes are defined with double square brackets. Currently they are used to mark the version in which a specific member was added: [[version version-number]] marks that the member was added in the given version.
###template
`template < parameter-list > class-declaration`
* parameter-list - a non-empty comma-separated list of the template parameters.
* class-declaration - (see the class/struct section) the declared class name becomes a template name.
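For instance, a hypothetical template declaration in this notation (the names are illustrative, not taken from the codebase):

```
// A template class wrapping a single value of type T
template <typename T>
class wrapper final {
    T value;
}
```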
##IDL example
Comments start with two forward slashes (//) and are ignored until the end of the line.
```
namespace utils {
// An example of a stub class
class UUID stub {
int64_t most_sig_bits;
int64_t least_sig_bits;
}
}
namespace gms {
//an enum example
enum class application_state:int {STATUS = 0,
LOAD,
SCHEMA,
DC};
// example of final class
class versioned_value final {
// getter and setter as public member
int version;
sstring value;
}
class heart_beat_state {
//getter as function
int32_t get_generation();
//default value example
int32_t get_heart_beat_version() = 1;
}
class endpoint_state {
heart_beat_state get_heart_beat_state();
std::map<application_state, versioned_value> get_application_state_map();
}
class gossip_digest {
inet_address get_endpoint();
int32_t get_generation();
//mark that a field was added on a specific version
int32_t get_max_version() [ [version 0.14.2] ];
}
class gossip_digest_ack {
std::vector<gossip_digest> digests();
std::map<inet_address, gms::endpoint_state> get_endpoint_state_map();
}
}
```


@@ -15,7 +15,7 @@ git submodule update --recursive
* Installing required packages:
```
sudo yum install yaml-cpp-devel lz4-devel zlib-devel snappy-devel jsoncpp-devel thrift-devel antlr3-tool antlr3-C++-devel libasan libubsan gcc-c++ gnutls-devel ninja-build ragel libaio-devel cryptopp-devel xfsprogs-devel numactl-devel hwloc-devel libpciaccess-devel libxml2-devel python3-pyparsing
sudo yum install yaml-cpp-devel lz4-devel zlib-devel snappy-devel jsoncpp-devel thrift-devel antlr3-tool antlr3-C++-devel libasan libubsan gcc-c++ gnutls-devel ninja-build ragel libaio-devel cryptopp-devel xfsprogs-devel
```
* Build Scylla


@@ -1,6 +1,6 @@
#!/bin/sh
VERSION=0.17
VERSION=0.16
if test -f version
then


@@ -106,7 +106,7 @@
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"query"
"paramType":"string"
}
]
}


@@ -196,10 +196,6 @@
"value": {
"type": "string",
"description": "The version value"
},
"version": {
"type": "int",
"description": "The application state version"
}
}
}


@@ -234,12 +234,12 @@
"type":"string",
"enum":[
"CLIENT_ID",
"ECHO",
"MUTATION",
"MUTATION_DONE",
"READ_DATA",
"READ_MUTATION_DATA",
"READ_DIGEST",
"GOSSIP_ECHO",
"GOSSIP_DIGEST_SYN",
"GOSSIP_DIGEST_ACK2",
"GOSSIP_SHUTDOWN",
@@ -247,6 +247,7 @@
"TRUNCATE",
"REPLICATION_FINISHED",
"MIGRATION_REQUEST",
"STREAM_INIT_MESSAGE",
"PREPARE_MESSAGE",
"PREPARE_DONE_MESSAGE",
"STREAM_MUTATION",


@@ -1,5 +1,5 @@
/*
* Copyright 2015 ScyllaDB
* Copyright 2015 Cloudius Systems
*/
/*
@@ -52,98 +52,67 @@ static std::unique_ptr<reply> exception_reply(std::exception_ptr eptr) {
return std::make_unique<reply>();
}
future<> set_server_init(http_context& ctx) {
future<> set_server(http_context& ctx) {
auto rb = std::make_shared < api_registry_builder > (ctx.api_doc);
return ctx.http_server.set_routes([rb, &ctx](routes& r) {
r.register_exeption_handler(exception_reply);
httpd::directory_handler* dir = new httpd::directory_handler(ctx.api_dir,
new content_replace("html"));
r.put(GET, "/ui", new httpd::file_handler(ctx.api_dir + "/index.html",
new content_replace("html")));
r.add(GET, url("/ui").remainder("path"), new httpd::directory_handler(ctx.api_dir,
new content_replace("html")));
rb->register_function(r, "system",
"The system related API");
set_system(ctx, r);
r.add(GET, url("/ui").remainder("path"), dir);
rb->set_api_doc(r);
});
}
rb->register_function(r, "storage_service",
"The storage service API");
set_storage_service(ctx,r);
rb->register_function(r, "commitlog",
"The commit log API");
set_commitlog(ctx,r);
rb->register_function(r, "gossiper",
"The gossiper API");
set_gossiper(ctx,r);
rb->register_function(r, "column_family",
"The column family API");
set_column_family(ctx, r);
static future<> register_api(http_context& ctx, const sstring& api_name,
const sstring api_desc,
std::function<void(http_context& ctx, routes& r)> f) {
auto rb = std::make_shared < api_registry_builder > (ctx.api_doc);
return ctx.http_server.set_routes([rb, &ctx, api_name, api_desc, f](routes& r) {
rb->register_function(r, api_name, api_desc);
f(ctx,r);
});
}
future<> set_server_storage_service(http_context& ctx) {
return register_api(ctx, "storage_service", "The storage service API", set_storage_service);
}
future<> set_server_gossip(http_context& ctx) {
return register_api(ctx, "gossiper",
"The gossiper API", set_gossiper);
}
future<> set_server_load_sstable(http_context& ctx) {
return register_api(ctx, "column_family",
"The column family API", set_column_family);
}
future<> set_server_messaging_service(http_context& ctx) {
return register_api(ctx, "messaging_service",
"The messaging service API", set_messaging_service);
}
future<> set_server_storage_proxy(http_context& ctx) {
return register_api(ctx, "storage_proxy",
"The storage proxy API", set_storage_proxy);
}
future<> set_server_stream_manager(http_context& ctx) {
return register_api(ctx, "stream_manager",
"The stream manager API", set_stream_manager);
}
future<> set_server_gossip_settle(http_context& ctx) {
auto rb = std::make_shared < api_registry_builder > (ctx.api_doc);
return ctx.http_server.set_routes([rb, &ctx](routes& r) {
rb->register_function(r, "failure_detector",
"The failure detector API");
set_failure_detector(ctx,r);
rb->register_function(r, "cache_service",
"The cache service API");
set_cache_service(ctx,r);
rb->register_function(r, "endpoint_snitch_info",
"The endpoint snitch info API");
set_endpoint_snitch(ctx, r);
});
}
future<> set_server_done(http_context& ctx) {
auto rb = std::make_shared < api_registry_builder > (ctx.api_doc);
return ctx.http_server.set_routes([rb, &ctx](routes& r) {
rb->register_function(r, "compaction_manager",
"The Compaction manager API");
set_compaction_manager(ctx, r);
rb->register_function(r, "lsa", "Log-structured allocator API");
set_lsa(ctx, r);
rb->register_function(r, "commitlog",
"The commit log API");
set_commitlog(ctx,r);
rb->register_function(r, "hinted_handoff",
"The hinted handoff API");
set_hinted_handoff(ctx, r);
rb->register_function(r, "failure_detector",
"The failure detector API");
set_failure_detector(ctx,r);
rb->register_function(r, "messaging_service",
"The messaging service API");
set_messaging_service(ctx, r);
rb->register_function(r, "storage_proxy",
"The storage proxy API");
set_storage_proxy(ctx, r);
rb->register_function(r, "cache_service",
"The cache service API");
set_cache_service(ctx,r);
rb->register_function(r, "collectd",
"The collectd API");
set_collectd(ctx, r);
rb->register_function(r, "endpoint_snitch_info",
"The endpoint snitch info API");
set_endpoint_snitch(ctx, r);
rb->register_function(r, "compaction_manager",
"The Compaction manager API");
set_compaction_manager(ctx, r);
rb->register_function(r, "hinted_handoff",
"The hinted handoff API");
set_hinted_handoff(ctx, r);
rb->register_function(r, "stream_manager",
"The stream manager API");
set_stream_manager(ctx, r);
rb->register_function(r, "system",
"The system related API");
set_system(ctx, r);
});
}


@@ -1,5 +1,5 @@
/*
* Copyright 2015 ScyllaDB
* Copyright 2015 Cloudius Systems
*/
/*
@@ -21,17 +21,31 @@
#pragma once
#include "http/httpd.hh"
#include "json/json_elements.hh"
#include "database.hh"
#include "service/storage_proxy.hh"
#include <boost/lexical_cast.hpp>
#include <boost/algorithm/string/split.hpp>
#include <boost/algorithm/string/classification.hpp>
#include "api/api-doc/utils.json.hh"
#include "utils/histogram.hh"
#include "http/exception.hh"
#include "api_init.hh"
namespace api {
struct http_context {
sstring api_dir;
sstring api_doc;
httpd::http_server_control http_server;
distributed<database>& db;
distributed<service::storage_proxy>& sp;
http_context(distributed<database>& _db, distributed<service::storage_proxy>&
_sp) : db(_db), sp(_sp) {}
};
future<> set_server(http_context& ctx);
template<class T>
std::vector<sstring> container_to_vec(const T& container) {
std::vector<sstring> res;


@@ -1,51 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "database.hh"
#include "service/storage_proxy.hh"
#include "http/httpd.hh"
namespace api {
struct http_context {
sstring api_dir;
sstring api_doc;
httpd::http_server_control http_server;
distributed<database>& db;
distributed<service::storage_proxy>& sp;
http_context(distributed<database>& _db,
distributed<service::storage_proxy>& _sp)
: db(_db), sp(_sp) {
}
};
future<> set_server_init(http_context& ctx);
future<> set_server_storage_service(http_context& ctx);
future<> set_server_gossip(http_context& ctx);
future<> set_server_load_sstable(http_context& ctx);
future<> set_server_messaging_service(http_context& ctx);
future<> set_server_storage_proxy(http_context& ctx);
future<> set_server_stream_manager(http_context& ctx);
future<> set_server_gossip_settle(http_context& ctx);
future<> set_server_done(http_context& ctx);
}


@@ -49,7 +49,7 @@ void set_compaction_manager(http_context& ctx, routes& r) {
s.ks = c->ks;
s.cf = c->cf;
s.unit = "keys";
s.task_type = sstables::compaction_name(c->type);
s.task_type = "compaction";
s.completed = c->total_keys_written;
s.total = c->total_partitions;
summaries.push_back(std::move(s));
@@ -67,14 +67,11 @@ void set_compaction_manager(http_context& ctx, routes& r) {
return make_ready_future<json::json_return_type>(json_void());
});
cm::stop_compaction.set(r, [&ctx] (std::unique_ptr<request> req) {
auto type = req->get_query_param("type");
return ctx.db.invoke_on_all([type] (database& db) {
auto& cm = db.get_compaction_manager();
cm.stop_compaction(type);
}).then([] {
return make_ready_future<json::json_return_type>(json_void());
});
cm::stop_compaction.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
warn(unimplemented::cause::API);
return make_ready_future<json::json_return_type>("");
});
cm::get_pending_tasks.set(r, [&ctx] (std::unique_ptr<request> req) {


@@ -44,7 +44,6 @@ void set_failure_detector(http_context& ctx, routes& r) {
// method that the state index are static but the name can be changed.
version_val.application_state = static_cast<std::underlying_type<gms::application_state>::type>(a.first);
version_val.value = a.second.value;
version_val.version = a.second.version;
val.application_state.push(version_val);
}
res.push_back(val);


@@ -30,7 +30,6 @@
#include "repair/repair.hh"
#include "locator/snitch_base.hh"
#include "column_family.hh"
#include "log.hh"
namespace api {
@@ -272,21 +271,15 @@ void set_storage_service(http_context& ctx, routes& r) {
});
ss::force_keyspace_cleanup.set(r, [&ctx](std::unique_ptr<request> req) {
//TBD
// FIXME
// the nodetool clean up is used in many tests
// this workaround will let it work until
// a cleanup is implemented
warn(unimplemented::cause::API);
auto keyspace = validate_keyspace(ctx, req->param);
auto column_families = split_cf(req->get_query_param("cf"));
if (column_families.empty()) {
column_families = map_keys(ctx.db.local().find_keyspace(keyspace).metadata().get()->cf_meta_data());
}
return ctx.db.invoke_on_all([keyspace, column_families] (database& db) {
std::vector<column_family*> column_families_vec;
auto& cm = db.get_compaction_manager();
for (auto entry : column_families) {
column_family* cf = &db.find_column_family(keyspace, entry);
cm.submit_cleanup_job(cf);
}
}).then([]{
return make_ready_future<json::json_return_type>(0);
});
auto column_family = req->get_query_param("cf");
return make_ready_future<json::json_return_type>(0);
});
ss::scrub.set(r, [&ctx](std::unique_ptr<request> req) {
@@ -405,13 +398,9 @@ void set_storage_service(http_context& ctx, routes& r) {
});
ss::get_logging_levels.set(r, [](std::unique_ptr<request> req) {
//TBD
unimplemented();
std::vector<ss::mapper> res;
for (auto i : logging::logger_registry().get_all_logger_names()) {
ss::mapper log;
log.key = i;
log.value = logging::level_name(logging::logger_registry().get_logger_level(i));
res.push_back(log);
}
return make_ready_future<json::json_return_type>(res);
});


@@ -47,7 +47,7 @@ static hs::progress_info get_progress_info(const streaming::progress_info& info)
res.direction = info.dir;
res.file_name = info.file_name;
res.peer = boost::lexical_cast<std::string>(info.peer);
res.session_index = 0;
res.session_index = info.session_index;
res.total_bytes = info.total_bytes;
return res;
}
@@ -70,7 +70,7 @@ static hs::stream_state get_state(
for (auto info : result_future.get_coordinator().get()->get_all_session_info()) {
hs::stream_info si;
si.peer = boost::lexical_cast<std::string>(info.peer);
si.session_index = 0;
si.session_index = info.session_index;
si.state = info.state;
si.connecting = si.peer;
set_summaries(info.receiving_summaries, si.receiving_summaries);
@@ -109,16 +109,14 @@ void set_stream_manager(http_context& ctx, routes& r) {
});
hs::get_total_incoming_bytes.set(r, [](std::unique_ptr<request> req) {
gms::inet_address peer(req->param["peer"]);
return streaming::get_stream_manager().map_reduce0([peer](streaming::stream_manager& sm) {
gms::inet_address ep(req->param["peer"]);
utils::UUID plan_id = gms::get_local_gossiper().get_host_id(ep);
return streaming::get_stream_manager().map_reduce0([plan_id](streaming::stream_manager& stream) {
int64_t res = 0;
for (auto sr : sm.get_all_streams()) {
if (sr) {
for (auto session : sr->get_coordinator()->get_all_stream_sessions()) {
if (session->peer == peer) {
res += session->get_bytes_received();
}
}
streaming::stream_result_future* s = stream.get_receiving_stream(plan_id).get();
if (s != nullptr) {
for (auto si: s->get_coordinator()->get_all_session_info()) {
res += si.get_total_size_received();
}
}
return res;
@@ -128,12 +126,12 @@ void set_stream_manager(http_context& ctx, routes& r) {
});
hs::get_all_total_incoming_bytes.set(r, [](std::unique_ptr<request> req) {
return streaming::get_stream_manager().map_reduce0([](streaming::stream_manager& sm) {
return streaming::get_stream_manager().map_reduce0([](streaming::stream_manager& stream) {
int64_t res = 0;
for (auto sr : sm.get_all_streams()) {
if (sr) {
for (auto session : sr->get_coordinator()->get_all_stream_sessions()) {
res += session->get_bytes_received();
for (auto s : stream.get_receiving_streams()) {
if (s.second.get() != nullptr) {
for (auto si: s.second.get()->get_coordinator()->get_all_session_info()) {
res += si.get_total_size_received();
}
}
}
@@ -144,16 +142,14 @@ void set_stream_manager(http_context& ctx, routes& r) {
});
hs::get_total_outgoing_bytes.set(r, [](std::unique_ptr<request> req) {
gms::inet_address peer(req->param["peer"]);
return streaming::get_stream_manager().map_reduce0([peer](streaming::stream_manager& sm) {
gms::inet_address ep(req->param["peer"]);
utils::UUID plan_id = gms::get_local_gossiper().get_host_id(ep);
return streaming::get_stream_manager().map_reduce0([plan_id](streaming::stream_manager& stream) {
int64_t res = 0;
for (auto sr : sm.get_all_streams()) {
if (sr) {
for (auto session : sr->get_coordinator()->get_all_stream_sessions()) {
if (session->peer == peer) {
res += session->get_bytes_sent();
}
}
streaming::stream_result_future* s = stream.get_sending_stream(plan_id).get();
if (s != nullptr) {
for (auto si: s->get_coordinator()->get_all_session_info()) {
res += si.get_total_size_received();
}
}
return res;
@@ -163,12 +159,12 @@ void set_stream_manager(http_context& ctx, routes& r) {
});
hs::get_all_total_outgoing_bytes.set(r, [](std::unique_ptr<request> req) {
return streaming::get_stream_manager().map_reduce0([](streaming::stream_manager& sm) {
return streaming::get_stream_manager().map_reduce0([](streaming::stream_manager& stream) {
int64_t res = 0;
for (auto sr : sm.get_all_streams()) {
if (sr) {
for (auto session : sr->get_coordinator()->get_all_stream_sessions()) {
res += session->get_bytes_sent();
for (auto s : stream.get_initiated_streams()) {
if (s.second.get() != nullptr) {
for (auto si: s.second.get()->get_coordinator()->get_all_session_info()) {
res += si.get_total_size_received();
}
}
}


@@ -91,62 +91,6 @@ class auth_migration_listener : public service::migration_listener {
static auth_migration_listener auth_migration;
/**
* Poor mans job schedule. For maximum 2 jobs. Sic.
* Still does nothing more clever than waiting 10 seconds
* like origin, then runs the submitted tasks.
*
* Only difference compared to sleep (from which this
* borrows _heavily_) is that if tasks have not run by the time
* we exit (and do static clean up) we delete the promise + cont
*
* Should be abstracted to some sort of global server function
* probably.
*/
void auth::auth::schedule_when_up(scheduled_func f) {
struct waiter {
promise<> done;
timer<> tmr;
waiter() : tmr([this] {done.set_value();})
{
tmr.arm(SUPERUSER_SETUP_DELAY);
}
~waiter() {
if (tmr.armed()) {
tmr.cancel();
done.set_exception(std::runtime_error("shutting down"));
}
logger.trace("Deleting scheduled task");
}
void kill() {
}
};
typedef std::unique_ptr<waiter> waiter_ptr;
static thread_local std::vector<waiter_ptr> waiters;
logger.trace("Adding scheduled task");
waiters.emplace_back(std::make_unique<waiter>());
auto* w = waiters.back().get();
w->done.get_future().finally([w] {
auto i = std::find_if(waiters.begin(), waiters.end(), [w](const waiter_ptr& p) {
return p.get() == w;
});
if (i != waiters.end()) {
waiters.erase(i);
}
}).then([f = std::move(f)] {
logger.trace("Running scheduled task");
return f();
}).handle_exception([](auto ep) {
return make_ready_future();
});
}
bool auth::auth::is_class_type(const sstring& type, const sstring& classname) {
if (type == classname) {
return true;
@@ -184,7 +128,7 @@ future<> auth::auth::setup() {
}).then([] {
service::get_local_migration_manager().register_listener(&auth_migration); // again, only one shard...
// instead of once-timer, just schedule this later
schedule_when_up([] {
sleep(SUPERUSER_SETUP_DELAY).then([] {
// setup default super user
return has_existing_users(USERS_CF, DEFAULT_SUPERUSER_NAME, USER_NAME).then([](bool exists) {
if (!exists) {


@@ -112,9 +112,5 @@ public:
static future<> setup_table(const sstring& name, const sstring& cql);
static future<bool> has_existing_users(const sstring& cfname, const sstring& def_user_name, const sstring& name_column_name);
// For internal use. Run function "when system is up".
typedef std::function<future<>()> scheduled_func;
static void schedule_when_up(scheduled_func);
};
}


@@ -160,8 +160,8 @@ future<> auth::password_authenticator::init() {
return auth::setup_table(CREDENTIALS_CF, create_table).then([this] {
// instead of once-timer, just schedule this later
auth::schedule_when_up([] {
return auth::has_existing_users(CREDENTIALS_CF, DEFAULT_USER_NAME, USER_NAME).then([](bool exists) {
sleep(auth::SUPERUSER_SETUP_DELAY).then([] {
auth::has_existing_users(CREDENTIALS_CF, DEFAULT_USER_NAME, USER_NAME).then([](bool exists) {
if (!exists) {
cql3::get_local_query_processor().process(sprint("INSERT INTO %s.%s (%s, %s) VALUES (?, ?) USING TIMESTAMP 0",
auth::AUTH_KS,


@@ -206,10 +206,6 @@ public:
}
}
void write(const char* ptr, size_t size) {
write(bytes_view(reinterpret_cast<const signed char*>(ptr), size));
}
// Writes given sequence of bytes with a preceding length component encoded in big-endian format
inline void write_blob(bytes_view v) {
assert((size_type)v.size() == v.size());


@@ -53,11 +53,6 @@ canonical_mutation::canonical_mutation(const mutation& m)
}())
{ }
utils::UUID canonical_mutation::column_family_id() const {
data_input in(_data);
return db::serializer<utils::UUID>::read(in);
}
mutation canonical_mutation::to_mutation(schema_ptr s) const {
data_input in(_data);


@@ -49,10 +49,16 @@ public:
// is not intended, user should sync the schema first.
mutation to_mutation(schema_ptr) const;
utils::UUID column_family_id() const;
friend class db::serializer<canonical_mutation>;
};
//
//template<>
//struct hash<canonical_mutation> {
// template<typename Hasher>
// void operator()(Hasher& h, const canonical_mutation& m) const {
// m.feed_hash(h);
// }
//};
namespace db {


@@ -34,8 +34,6 @@ enum class compaction_strategy_type {
};
class compaction_strategy_impl;
class sstable;
struct compaction_descriptor;
class compaction_strategy {
::shared_ptr<compaction_strategy_impl> _compaction_strategy_impl;
@@ -48,9 +46,7 @@ public:
compaction_strategy(compaction_strategy&&);
compaction_strategy& operator=(compaction_strategy&&);
// Return a list of sstables to be compacted after applying the strategy.
compaction_descriptor get_sstables_for_compaction(column_family& cfs, std::vector<lw_shared_ptr<sstable>> candidates);
future<> compact(column_family& cfs);
static sstring name(compaction_strategy_type type) {
switch (type) {
case compaction_strategy_type::null:


@@ -269,7 +269,6 @@ scylla_core = (['database.cc',
'sstables/partition.cc',
'sstables/filter.cc',
'sstables/compaction.cc',
'sstables/compaction_manager.cc',
'log.cc',
'transport/event.cc',
'transport/event_notifier.cc',
@@ -317,7 +316,6 @@ scylla_core = (['database.cc',
'utils/big_decimal.cc',
'types.cc',
'validation.cc',
'service/priority_manager.cc',
'service/migration_manager.cc',
'service/storage_proxy.cc',
'cql3/operator.cc',
@@ -355,6 +353,7 @@ scylla_core = (['database.cc',
'utils/bloom_filter.cc',
'utils/bloom_calculations.cc',
'utils/rate_limiter.cc',
'utils/compaction_manager.cc',
'utils/file_lock.cc',
'utils/dynamic_bitset.cc',
'gms/version_generator.cc',
@@ -395,6 +394,7 @@ scylla_core = (['database.cc',
'service/load_broadcaster.cc',
'service/pager/paging_state.cc',
'service/pager/query_pagers.cc',
'streaming/streaming.cc',
'streaming/stream_task.cc',
'streaming/stream_session.cc',
'streaming/stream_request.cc',
@@ -407,6 +407,13 @@ scylla_core = (['database.cc',
'streaming/stream_coordinator.cc',
'streaming/stream_manager.cc',
'streaming/stream_result_future.cc',
'streaming/messages/stream_init_message.cc',
'streaming/messages/retry_message.cc',
'streaming/messages/received_message.cc',
'streaming/messages/prepare_message.cc',
'streaming/messages/file_message_header.cc',
'streaming/messages/outgoing_file_message.cc',
'streaming/messages/incoming_file_message.cc',
'streaming/stream_session_state.cc',
'gc_clock.cc',
'partition_slice_builder.cc',
@@ -459,21 +466,7 @@ api = ['api/api.cc',
'api/system.cc'
]
idls = ['idl/gossip_digest.idl.hh',
'idl/uuid.idl.hh',
'idl/range.idl.hh',
'idl/keys.idl.hh',
'idl/read_command.idl.hh',
'idl/token.idl.hh',
'idl/ring_position.idl.hh',
'idl/result.idl.hh',
'idl/frozen_mutation.idl.hh',
'idl/reconcilable_result.idl.hh',
'idl/streaming.idl.hh',
'idl/paging_state.idl.hh',
]
scylla_tests_dependencies = scylla_core + api + idls + [
scylla_tests_dependencies = scylla_core + [
'tests/cql_test_env.cc',
'tests/cql_assertions.cc',
'tests/result_set_assertions.cc',
@@ -486,10 +479,11 @@ scylla_tests_seastar_deps = [
]
deps = {
'scylla': idls + ['main.cc'] + scylla_core + api,
'scylla': ['main.cc'] + scylla_core + api,
}
tests_not_using_seastar_test_framework = set([
'tests/types_test',
'tests/keys_test',
'tests/partitioner_test',
'tests/map_difference_test',
@@ -556,31 +550,16 @@ else:
args.pie = ''
args.fpie = ''
# a list element means a list of alternative packages to consider
# the first element becomes the HAVE_pkg define
# a string element is a package name with no alternatives
optional_packages = [['libsystemd', 'libsystemd-daemon']]
optional_packages = ['libsystemd']
pkgs = []
def setup_first_pkg_of_list(pkglist):
# The HAVE_pkg symbol is taken from the first alternative
upkg = pkglist[0].upper().replace('-', '_')
for pkg in pkglist:
if have_pkg(pkg):
pkgs.append(pkg)
defines.append('HAVE_{}=1'.format(upkg))
return True
return False
for pkglist in optional_packages:
if isinstance(pkglist, str):
pkglist = [pkglist]
if not setup_first_pkg_of_list(pkglist):
if len(pkglist) == 1:
print('Missing optional package {pkglist[0]}'.format(**locals()))
else:
alternatives = ':'.join(pkglist[1:])
print('Missing optional package {pkglist[0]} (or alternatives {alternatives})'.format(**locals()))
for pkg in optional_packages:
if have_pkg(pkg):
pkgs.append(pkg)
upkg = pkg.upper().replace('-', '_')
defines.append('HAVE_{}=1'.format(upkg))
else:
print('Missing optional package {pkg}'.format(**locals()))
defines = ' '.join(['-D' + d for d in defines])
@@ -678,9 +657,6 @@ with open(buildfile, 'w') as f:
rule swagger
command = seastar/json/json2code.py -f $in -o $out
description = SWAGGER $out
rule serializer
command = ./idl-compiler.py --ns ser -f $in -o $out
description = IDL compiler $out
rule ninja
command = {ninja} -C $subdir $target
restat = 1
@@ -717,7 +693,6 @@ with open(buildfile, 'w') as f:
compiles = {}
ragels = {}
swaggers = {}
serializers = {}
thrifts = set()
antlr3_grammars = set()
for binary in build_artifacts:
@@ -771,9 +746,6 @@ with open(buildfile, 'w') as f:
elif src.endswith('.rl'):
hh = '$builddir/' + mode + '/gen/' + src.replace('.rl', '.hh')
ragels[hh] = src
elif src.endswith('.idl.hh'):
hh = '$builddir/' + mode + '/gen/' + src.replace('.idl.hh', '.dist.hh')
serializers[hh] = src
elif src.endswith('.json'):
hh = '$builddir/' + mode + '/gen/' + src + '.hh'
swaggers[hh] = src
@@ -792,7 +764,6 @@ with open(buildfile, 'w') as f:
for g in antlr3_grammars:
gen_headers += g.headers('$builddir/{}/gen'.format(mode))
gen_headers += list(swaggers.keys())
gen_headers += list(serializers.keys())
f.write('build {}: cxx.{} {} || {} \n'.format(obj, mode, src, ' '.join(gen_headers)))
if src in extra_cxxflags:
f.write(' cxxflags = {seastar_cflags} $cxxflags $cxxflags_{mode} {extra_cxxflags}\n'.format(mode = mode, extra_cxxflags = extra_cxxflags[src], **modeval))
@@ -802,9 +773,6 @@ with open(buildfile, 'w') as f:
for hh in swaggers:
src = swaggers[hh]
f.write('build {}: swagger {}\n'.format(hh,src))
for hh in serializers:
src = serializers[hh]
f.write('build {}: serializer {} | idl-compiler.py\n'.format(hh,src))
for thrift in thrifts:
outs = ' '.join(thrift.generated('$builddir/{}/gen'.format(mode)))
f.write('build {}: thrift.{} {}\n'.format(outs, mode, thrift.source))


@@ -137,12 +137,6 @@ future<bool> alter_table_statement::announce_migration(distributed<service::stor
if (schema->is_super()) {
throw exceptions::invalid_request_exception("Cannot use non-frozen collections with super column families");
}
auto it = schema->collections().find(column_name->name());
if (it != schema->collections().end() && !type->is_compatible_with(*it->second)) {
throw exceptions::invalid_request_exception(sprint("Cannot add a collection with the name %s "
"because a collection with the same name and a different type has already been used in the past", column_name));
}
}
cfm.with_column(column_name->name(), type, _is_static ? column_kind::static_column : column_kind::regular_column);


@@ -88,7 +88,7 @@ void batch_statement::verify_batch_size(const std::vector<mutation>& mutations)
auto size = v.size / 1024;
if (size > warn_threshold) {
if (v.size > warn_threshold) {
std::unordered_set<sstring> ks_cf_pairs;
for (auto&& m : mutations) {
ks_cf_pairs.insert(m.schema()->ks_name() + "." + m.schema()->cf_name());


@@ -226,16 +226,15 @@ select_statement::execute(distributed<service::storage_proxy>& proxy, service::q
// An aggregation query will never be paged for the user, but we always page it internally to avoid OOM.
// If we user provided a page_size we'll use that to page internally (because why not), otherwise we use our default
// Note that if there are some nodes in the cluster with a version less than 2.0, we can't use paging (CASSANDRA-6707).
auto aggregate = _selection->is_aggregate();
if (aggregate && page_size <= 0) {
if (_selection->is_aggregate() && page_size <= 0) {
page_size = DEFAULT_COUNT_PAGE_SIZE;
}
auto key_ranges = _restrictions->get_partition_key_ranges(options);
if (!aggregate && (page_size <= 0
if (page_size <= 0
|| !service::pager::query_pagers::may_need_paging(page_size,
*command, key_ranges))) {
*command, key_ranges)) {
return execute(proxy, command, std::move(key_ranges), state, options,
now);
}
@@ -243,7 +242,7 @@ select_statement::execute(distributed<service::storage_proxy>& proxy, service::q
auto p = service::pager::query_pagers::pager(_schema, _selection,
state, options, command, std::move(key_ranges));
if (aggregate) {
if (_selection->is_aggregate()) {
return do_with(
cql3::selection::result_set_builder(*_selection, now,
options.get_serialization_format()),
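The comment at the top of this hunk explains the paging decision: aggregation queries are always paged internally to avoid OOM (falling back to a default page size when the user asked for none), while non-aggregates skip the pager when paging isn't needed. A sketch of that decision logic, with hypothetical names and an illustrative constant:

```python
DEFAULT_COUNT_PAGE_SIZE = 10000  # illustrative, not Scylla's actual value

def plan_query(is_aggregate, page_size, may_need_paging):
    if is_aggregate and page_size <= 0:
        # User requested no paging, but aggregates are still paged
        # internally to bound memory use.
        page_size = DEFAULT_COUNT_PAGE_SIZE
    if not is_aggregate and (page_size <= 0 or not may_need_paging):
        return ('unpaged', page_size)  # direct execute() path
    return ('paged', page_size)        # go through query_pagers::pager
```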


@@ -59,7 +59,7 @@ bool update_statement::require_full_clustering_key() const {
void update_statement::add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) {
if (s->is_dense()) {
if (!prefix || (prefix.size() == 1 && prefix.components().front().empty())) {
throw exceptions::invalid_request_exception(sprint("Missing PRIMARY KEY part %s", s->clustering_key_columns().begin()->name_as_text()));
throw exceptions::invalid_request_exception(sprint("Missing PRIMARY KEY part %s", *s->clustering_key_columns().begin()));
}
// An empty name for the compact value is what we use to recognize the case where there is no column


@@ -23,7 +23,6 @@
#include "database.hh"
#include "unimplemented.hh"
#include "core/future-util.hh"
#include "db/commitlog/commitlog_entry.hh"
#include "db/system_keyspace.hh"
#include "db/consistency_level.hh"
#include "db/serializer.hh"
@@ -59,7 +58,6 @@
#include "utils/latency.hh"
#include "utils/flush_queue.hh"
#include "schema_registry.hh"
#include "service/priority_manager.hh"
using namespace std::chrono_literals;
@@ -129,9 +127,9 @@ column_family::make_partition_presence_checker(lw_shared_ptr<sstable_list> old_s
mutation_source
column_family::sstables_as_mutation_source() {
return mutation_source([this] (schema_ptr s, const query::partition_range& r, const io_priority_class& pc) {
return make_sstable_reader(std::move(s), r, pc);
});
return [this] (schema_ptr s, const query::partition_range& r) {
return make_sstable_reader(std::move(s), r);
};
}
// define in .cc, since sstable is forward-declared in .hh
@@ -156,14 +154,10 @@ class range_sstable_reader final : public mutation_reader::impl {
const query::partition_range& _pr;
lw_shared_ptr<sstable_list> _sstables;
mutation_reader _reader;
// Use a pointer instead of copying, so we don't need to regenerate the reader if
// the priority changes.
const io_priority_class* _pc;
public:
range_sstable_reader(schema_ptr s, lw_shared_ptr<sstable_list> sstables, const query::partition_range& pr, const io_priority_class& pc)
range_sstable_reader(schema_ptr s, lw_shared_ptr<sstable_list> sstables, const query::partition_range& pr)
: _pr(pr)
, _sstables(std::move(sstables))
, _pc(&pc)
{
std::vector<mutation_reader> readers;
for (const lw_shared_ptr<sstables::sstable>& sst : *_sstables | boost::adaptors::map_values) {
@@ -190,15 +184,11 @@ class single_key_sstable_reader final : public mutation_reader::impl {
mutation_opt _m;
bool _done = false;
lw_shared_ptr<sstable_list> _sstables;
// Use a pointer instead of copying, so we don't need to regenerate the reader if
// the priority changes.
const io_priority_class* _pc;
public:
single_key_sstable_reader(schema_ptr schema, lw_shared_ptr<sstable_list> sstables, const partition_key& key, const io_priority_class& pc)
single_key_sstable_reader(schema_ptr schema, lw_shared_ptr<sstable_list> sstables, const partition_key& key)
: _schema(std::move(schema))
, _key(sstables::key::from_partition_key(*_schema, key))
, _sstables(std::move(sstables))
, _pc(&pc)
{ }
virtual future<mutation_opt> operator()() override {
@@ -217,26 +207,26 @@ public:
};
mutation_reader
column_family::make_sstable_reader(schema_ptr s, const query::partition_range& pr, const io_priority_class& pc) const {
column_family::make_sstable_reader(schema_ptr s, const query::partition_range& pr) const {
if (pr.is_singular() && pr.start()->value().has_key()) {
const dht::ring_position& pos = pr.start()->value();
if (dht::shard_of(pos.token()) != engine().cpu_id()) {
return make_empty_reader(); // range doesn't belong to this shard
}
return make_mutation_reader<single_key_sstable_reader>(std::move(s), _sstables, *pos.key(), pc);
return make_mutation_reader<single_key_sstable_reader>(std::move(s), _sstables, *pos.key());
} else {
// range_sstable_reader is not movable so we need to wrap it
return make_mutation_reader<range_sstable_reader>(std::move(s), _sstables, pr, pc);
return make_mutation_reader<range_sstable_reader>(std::move(s), _sstables, pr);
}
}
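The singular-range branch above routes a single-key read to the one shard that owns the key's token; every other shard gets an empty reader. A sketch of the routing idea (the modulo mapping is a stand-in, not Scylla's actual `dht::shard_of`):

```python
SMP_COUNT = 4  # assumption: four shards

def shard_of(token):
    # Stand-in shard mapping; the real one hashes the token.
    return token % SMP_COUNT

def make_singular_reader(token, this_cpu):
    if shard_of(token) != this_cpu:
        return []                 # make_empty_reader(): not our shard
    return ['row@%d' % token]     # single_key_sstable_reader equivalent
```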
key_source column_family::sstables_as_key_source() const {
return key_source([this] (const query::partition_range& range, const io_priority_class& pc) {
return [this] (const query::partition_range& range) {
std::vector<key_reader> readers;
readers.reserve(_sstables->size());
std::transform(_sstables->begin(), _sstables->end(), std::back_inserter(readers), [&] (auto&& entry) {
auto& sst = entry.second;
auto rd = sstables::make_key_reader(_schema, sst, range, pc);
auto rd = sstables::make_key_reader(_schema, sst, range);
if (sst->is_shared()) {
rd = make_filtering_reader(std::move(rd), [] (const dht::decorated_key& dk) {
return dht::shard_of(dk.token()) == engine().cpu_id();
@@ -245,7 +235,7 @@ key_source column_family::sstables_as_key_source() const {
return rd;
});
return make_combined_reader(_schema, std::move(readers));
});
};
}
// Exposed for testing, not performance critical.
@@ -285,7 +275,7 @@ column_family::find_row(schema_ptr s, const dht::decorated_key& partition_key, c
}
mutation_reader
column_family::make_reader(schema_ptr s, const query::partition_range& range, const io_priority_class& pc) const {
column_family::make_reader(schema_ptr s, const query::partition_range& range) const {
if (query::is_wrap_around(range, *s)) {
// make_combined_reader() can't handle streams that wrap around yet.
fail(unimplemented::cause::WRAP_AROUND);
@@ -319,15 +309,14 @@ column_family::make_reader(schema_ptr s, const query::partition_range& range, co
}
if (_config.enable_cache) {
readers.emplace_back(_cache.make_reader(s, range, pc));
readers.emplace_back(_cache.make_reader(s, range));
} else {
readers.emplace_back(make_sstable_reader(s, range, pc));
readers.emplace_back(make_sstable_reader(s, range));
}
return make_combined_reader(std::move(readers));
}
// Not performance critical. Currently used for testing only.
template <typename Func>
future<bool>
column_family::for_all_partitions(schema_ptr s, Func&& func) const {
@@ -474,15 +463,7 @@ future<sstables::entry_descriptor> column_family::probe_file(sstring sstdir, sst
}
update_sstables_known_generation(comps.generation);
{
auto i = _sstables->find(comps.generation);
if (i != _sstables->end()) {
auto new_toc = sstdir + "/" + fname;
throw std::runtime_error(sprint("Attempted to add sstable generation %d twice: new=%s existing=%s",
comps.generation, new_toc, i->second->toc_filename()));
}
}
assert(_sstables->count(comps.generation) == 0);
auto fut = sstable::get_sstable_key_range(*_schema, _schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
return std::move(fut).then([this, sstdir = std::move(sstdir), comps] (range<partition_key> r) {
@@ -604,20 +585,27 @@ column_family::try_flush_memtable_to_sstable(lw_shared_ptr<memtable> old) {
_config.cf_stats->pending_memtables_flushes_bytes += memtable_size;
newtab->set_unshared();
dblog.debug("Flushing to {}", newtab->get_filename());
// Note that due to our sharded architecture, it is possible that
// in the face of a value change some shards will backup sstables
// while others won't.
//
// This is, in theory, possible to mitigate through a rwlock.
// However, this doesn't differ from the situation where all tables
// are coming from a single shard and the toggle happens in the
// middle of them.
//
// The code as is guarantees that we'll never partially backup a
// single sstable, so that is enough of a guarantee.
auto&& priority = service::get_local_memtable_flush_priority();
return newtab->write_components(*old, incremental_backups_enabled(), priority).then([this, newtab, old] {
return newtab->open_data();
return newtab->write_components(*old).then([this, newtab, old] {
return newtab->open_data().then([this, newtab] {
// Note that due to our sharded architecture, it is possible that
// in the face of a value change some shards will backup sstables
// while others won't.
//
// This is, in theory, possible to mitigate through a rwlock.
// However, this doesn't differ from the situation where all tables
// are coming from a single shard and the toggle happens in the
// middle of them.
//
// The code as is guarantees that we'll never partially backup a
// single sstable, so that is enough of a guarantee.
if (!incremental_backups_enabled()) {
return make_ready_future<>();
}
auto dir = newtab->get_dir() + "/backups/";
return touch_directory(dir).then([dir, newtab] {
return newtab->create_links(dir);
});
});
}).then_wrapped([this, old, newtab, memtable_size] (future<> ret) {
_config.cf_stats->pending_memtables_flushes_count--;
_config.cf_stats->pending_memtables_flushes_bytes -= memtable_size;
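The backup comment in this hunk guarantees that a flushed sstable is never partially backed up: after the flush completes, every component is hard-linked into a `backups/` subdirectory in one step. A runnable sketch of that step (hypothetical helper name; `touch_directory`/`create_links` are the Seastar/Scylla calls it mirrors):

```python
import os
import tempfile

def backup_sstable(sst_dir, components, backups_enabled):
    if not backups_enabled:
        return []
    backup_dir = os.path.join(sst_dir, 'backups')
    os.makedirs(backup_dir, exist_ok=True)        # touch_directory(dir)
    created = []
    for comp in components:                       # newtab->create_links(dir)
        dst = os.path.join(backup_dir, comp)
        os.link(os.path.join(sst_dir, comp), dst) # hard link: no data copy
        created.append(dst)
    return created
```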
@@ -722,119 +710,68 @@ column_family::reshuffle_sstables(int64_t start) {
});
}
void
column_family::rebuild_sstable_list(const std::vector<sstables::shared_sstable>& new_sstables,
const std::vector<sstables::shared_sstable>& sstables_to_remove) {
// Build a new list of _sstables: We remove from the existing list the
// tables we compacted (by now, there might be more sstables flushed
// later), and we add the new tables generated by the compaction.
// We create a new list rather than modifying it in-place, so that
// on-going reads can continue to use the old list.
auto current_sstables = _sstables;
_sstables = make_lw_shared<sstable_list>();
// zeroing live_disk_space_used and live_sstable_count because the
// sstable list is re-created below.
_stats.live_disk_space_used = 0;
_stats.live_sstable_count = 0;
std::unordered_set<sstables::shared_sstable> s(
sstables_to_remove.begin(), sstables_to_remove.end());
for (const auto& oldtab : *current_sstables) {
// Checks if oldtab is a sstable not being compacted.
if (!s.count(oldtab.second)) {
update_stats_for_new_sstable(oldtab.second->data_size());
_sstables->emplace(oldtab.first, oldtab.second);
}
}
for (const auto& newtab : new_sstables) {
// FIXME: rename the new sstable(s). Verify a rename doesn't cause
// problems for the sstable object.
update_stats_for_new_sstable(newtab->data_size());
_sstables->emplace(newtab->generation(), newtab);
}
for (const auto& oldtab : sstables_to_remove) {
oldtab->mark_for_deletion();
}
}
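The comment inside `rebuild_sstable_list()` describes a copy-on-write scheme: a brand-new list is built (the old list minus the compacted tables, plus the freshly written ones) so ongoing reads can keep using the old list untouched. A minimal sketch, modeling the sstable list as a generation-to-sstable dict:

```python
def rebuild_sstable_list(current, new_sstables, sstables_to_remove):
    removed = set(sstables_to_remove)
    # Keep every existing sstable that was not compacted away.
    rebuilt = {gen: sst for gen, sst in current.items() if sst not in removed}
    # Add the tables generated by the compaction.
    rebuilt.update(new_sstables)
    # Caller swaps the live pointer to `rebuilt`; `current` is never mutated,
    # so readers still holding it are unaffected.
    return rebuilt
```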
future<>
column_family::compact_sstables(sstables::compaction_descriptor descriptor, bool cleanup) {
column_family::compact_sstables(sstables::compaction_descriptor descriptor) {
if (!descriptor.sstables.size()) {
// if there is nothing to compact, just return.
return make_ready_future<>();
}
return with_lock(_sstables_lock.for_read(), [this, descriptor = std::move(descriptor), cleanup] {
return with_lock(_sstables_lock.for_read(), [this, descriptor = std::move(descriptor)] {
auto sstables_to_compact = make_lw_shared<std::vector<sstables::shared_sstable>>(std::move(descriptor.sstables));
auto new_tables = make_lw_shared<std::vector<sstables::shared_sstable>>();
auto new_tables = make_lw_shared<std::vector<
std::pair<unsigned, sstables::shared_sstable>>>();
auto create_sstable = [this, new_tables] {
auto gen = this->calculate_generation_for_new_table();
// FIXME: this generation calculation should be in a function.
auto gen = _sstable_generation++ * smp::count + engine().cpu_id();
// FIXME: use "tmp" marker in names of incomplete sstable
auto sst = make_lw_shared<sstables::sstable>(_schema->ks_name(), _schema->cf_name(), _config.datadir, gen,
sstables::sstable::version_types::ka,
sstables::sstable::format_types::big);
sst->set_unshared();
new_tables->emplace_back(sst);
new_tables->emplace_back(gen, sst);
return sst;
};
return sstables::compact_sstables(*sstables_to_compact, *this,
create_sstable, descriptor.max_sstable_bytes, descriptor.level, cleanup).then([this, new_tables, sstables_to_compact] {
this->rebuild_sstable_list(*new_tables, *sstables_to_compact);
}).then_wrapped([this, new_tables] (future<> f) {
try {
f.get();
} catch (...) {
// Delete either partially or fully written sstables of a compaction that
// was either stopped abruptly (e.g. out of disk space) or deliberately
// (e.g. nodetool stop COMPACTION).
for (auto& sst : *new_tables) {
dblog.debug("Deleting sstable {} of interrupted compaction for {}/{}", sst->get_filename(), _schema->ks_name(), _schema->cf_name());
sst->mark_for_deletion();
create_sstable, descriptor.max_sstable_bytes, descriptor.level).then([this, new_tables, sstables_to_compact] {
// Build a new list of _sstables: We remove from the existing list the
// tables we compacted (by now, there might be more sstables flushed
// later), and we add the new tables generated by the compaction.
// We create a new list rather than modifying it in-place, so that
// on-going reads can continue to use the old list.
auto current_sstables = _sstables;
_sstables = make_lw_shared<sstable_list>();
// zeroing live_disk_space_used and live_sstable_count because the
// sstable list is re-created below.
_stats.live_disk_space_used = 0;
_stats.live_sstable_count = 0;
std::unordered_set<sstables::shared_sstable> s(
sstables_to_compact->begin(), sstables_to_compact->end());
for (const auto& oldtab : *current_sstables) {
// Checks if oldtab is a sstable not being compacted.
if (!s.count(oldtab.second)) {
update_stats_for_new_sstable(oldtab.second->data_size());
_sstables->emplace(oldtab.first, oldtab.second);
}
throw;
}
for (const auto& newtab : *new_tables) {
// FIXME: rename the new sstable(s). Verify a rename doesn't cause
// problems for the sstable object.
update_stats_for_new_sstable(newtab.second->data_size());
_sstables->emplace(newtab.first, newtab.second);
}
for (const auto& oldtab : *sstables_to_compact) {
oldtab->mark_for_deletion();
}
});
});
}
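The generation formula used by `create_sstable` above, `gen = counter++ * smp::count + cpu_id`, gives each shard a disjoint arithmetic progression, so two shards can never pick the same sstable generation without any cross-shard coordination. A sketch:

```python
def generation_stream(smp_count, cpu_id):
    counter = 0
    def next_gen():
        nonlocal counter
        # gen = _sstable_generation++ * smp::count + engine().cpu_id()
        gen = counter * smp_count + cpu_id
        counter += 1
        return gen
    return next_gen
```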
static bool needs_cleanup(const lw_shared_ptr<sstables::sstable>& sst,
const lw_shared_ptr<std::vector<range<dht::token>>>& owned_ranges,
schema_ptr s) {
auto first = sst->get_first_partition_key(*s);
auto last = sst->get_last_partition_key(*s);
auto first_token = dht::global_partitioner().get_token(*s, first);
auto last_token = dht::global_partitioner().get_token(*s, last);
range<dht::token> sst_token_range = range<dht::token>::make(first_token, last_token);
// return true iff sst partition range isn't fully contained in any of the owned ranges.
for (auto& r : *owned_ranges) {
if (r.contains(sst_token_range, dht::token_comparator())) {
return false;
}
}
return true;
}
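`needs_cleanup()` above reduces to a containment test: an sstable needs cleanup iff the token span it covers is not fully contained in any range the node owns. A sketch with token ranges as inclusive `(lo, hi)` pairs:

```python
def needs_cleanup(first_token, last_token, owned_ranges):
    # True iff the sstable's token span is not fully contained
    # in any of the owned ranges.
    for lo, hi in owned_ranges:
        if lo <= first_token and last_token <= hi:
            return False  # fully owned: nothing to discard
    return True
```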
future<> column_family::cleanup_sstables(sstables::compaction_descriptor descriptor) {
std::vector<range<dht::token>> r = service::get_local_storage_service().get_local_ranges(_schema->ks_name());
auto owned_ranges = make_lw_shared<std::vector<range<dht::token>>>(std::move(r));
auto sstables_to_cleanup = make_lw_shared<std::vector<sstables::shared_sstable>>(std::move(descriptor.sstables));
return parallel_for_each(*sstables_to_cleanup, [this, owned_ranges = std::move(owned_ranges), sstables_to_cleanup] (auto& sst) {
if (!owned_ranges->empty() && !needs_cleanup(sst, owned_ranges, _schema)) {
return make_ready_future<>();
}
std::vector<sstables::shared_sstable> sstable_to_compact({ sst });
return this->compact_sstables(sstables::compaction_descriptor(std::move(sstable_to_compact)), true);
});
}
future<>
column_family::load_new_sstables(std::vector<sstables::entry_descriptor> new_tables) {
return parallel_for_each(new_tables, [this] (auto comps) {
@@ -880,9 +817,12 @@ void column_family::trigger_compaction() {
}
}
future<> column_family::run_compaction(sstables::compaction_descriptor descriptor) {
return compact_sstables(std::move(descriptor)).then([this] {
_stats.pending_compactions--;
future<> column_family::run_compaction() {
sstables::compaction_strategy strategy = _compaction_strategy;
return do_with(std::move(strategy), [this] (sstables::compaction_strategy& cs) {
return cs.compact(*this).then([this] {
_stats.pending_compactions--;
});
});
}
@@ -1566,7 +1506,7 @@ column_family::query(schema_ptr s, const query::read_command& cmd, const std::ve
return do_with(query_state(std::move(s), cmd, partition_ranges), [this] (query_state& qs) {
return do_until(std::bind(&query_state::done, &qs), [this, &qs] {
auto&& range = *qs.current_partition_range++;
qs.reader = make_reader(qs.schema, range, service::get_local_sstable_query_read_priority());
qs.reader = make_reader(qs.schema, range);
qs.range_empty = false;
return do_until([&qs] { return !qs.limit || qs.range_empty; }, [&qs] {
return qs.reader().then([&qs](mutation_opt mo) {
@@ -1595,9 +1535,9 @@ column_family::query(schema_ptr s, const query::read_command& cmd, const std::ve
mutation_source
column_family::as_mutation_source() const {
return mutation_source([this] (schema_ptr s, const query::partition_range& range, const io_priority_class& pc) {
return this->make_reader(std::move(s), range, pc);
});
return [this] (schema_ptr s, const query::partition_range& range) {
return this->make_reader(std::move(s), range);
};
}
future<lw_shared_ptr<query::result>>
@@ -1654,8 +1594,7 @@ std::ostream& operator<<(std::ostream& out, const atomic_cell_or_collection& c)
}
std::ostream& operator<<(std::ostream& os, const mutation& m) {
const ::schema& s = *m.schema();
fprint(os, "{%s.%s key %s data ", s.ks_name(), s.cf_name(), m.decorated_key());
fprint(os, "{mutation: schema %p key %s data ", m.schema().get(), m.decorated_key());
os << m.partition() << "}";
return os;
}
@@ -1674,49 +1613,6 @@ std::ostream& operator<<(std::ostream& out, const database& db) {
return out;
}
void
column_family::apply(const mutation& m, const db::replay_position& rp) {
utils::latency_counter lc;
_stats.writes.set_latency(lc);
active_memtable().apply(m, rp);
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}
void
column_family::apply(const frozen_mutation& m, const schema_ptr& m_schema, const db::replay_position& rp) {
utils::latency_counter lc;
_stats.writes.set_latency(lc);
check_valid_rp(rp);
active_memtable().apply(m, m_schema, rp);
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}
void
column_family::seal_on_overflow() {
++_mutation_count;
if (active_memtable().occupancy().total_space() >= _config.max_memtable_size) {
// FIXME: if sparse, do some in-memory compaction first
// FIXME: maybe merge with other in-memory memtables
_mutation_count = 0;
seal_active_memtable();
}
}
void
column_family::check_valid_rp(const db::replay_position& rp) const {
if (rp < _highest_flushed_rp) {
throw replay_position_reordered_exception();
}
}
future<> database::apply_in_memory(const frozen_mutation& m, const schema_ptr& m_schema, const db::replay_position& rp) {
try {
auto& cf = find_column_family(m.column_family_id());
@@ -1738,8 +1634,9 @@ future<> database::do_apply(schema_ptr s, const frozen_mutation& m) {
s->ks_name(), s->cf_name(), s->version()));
}
if (cf.commitlog() != nullptr) {
commitlog_entry_writer cew(s, m);
return cf.commitlog()->add_entry(uuid, cew).then([&m, this, s](auto rp) {
bytes_view repr = m.representation();
auto write_repr = [repr] (data_output& out) { out.write(repr.begin(), repr.end()); };
return cf.commitlog()->add_mutation(uuid, repr.size(), write_repr).then([&m, this, s](auto rp) {
try {
return this->apply_in_memory(m, s, rp);
} catch (replay_position_reordered_exception&) {
@@ -1788,9 +1685,6 @@ void database::unthrottle() {
}
future<> database::apply(schema_ptr s, const frozen_mutation& m) {
if (dblog.is_enabled(logging::log_level::trace)) {
dblog.trace("apply {}", m.pretty_printer(s));
}
return throttle().then([this, &m, s = std::move(s)] {
return do_apply(std::move(s), m);
});


@@ -64,7 +64,7 @@
#include "mutation_reader.hh"
#include "row_cache.hh"
#include "compaction_strategy.hh"
#include "sstables/compaction_manager.hh"
#include "utils/compaction_manager.hh"
#include "utils/exponential_backoff_retry.hh"
#include "utils/histogram.hh"
#include "sstables/estimated_histogram.hh"
@@ -172,9 +172,6 @@ private:
int _compaction_disabled = 0;
class memtable_flush_queue;
std::unique_ptr<memtable_flush_queue> _flush_queue;
// Store generation of sstables being compacted at the moment. That's needed to prevent a
// sstable from being compacted twice.
std::unordered_set<unsigned long> _compacting_generations;
private:
void update_stats_for_new_sstable(uint64_t new_sstable_data_size);
void add_sstable(sstables::sstable&& sstable);
@@ -188,20 +185,12 @@ private:
void update_sstables_known_generation(unsigned generation) {
_sstable_generation = std::max<uint64_t>(_sstable_generation, generation / smp::count + 1);
}
uint64_t calculate_generation_for_new_table() {
return _sstable_generation++ * smp::count + engine().cpu_id();
}
// Rebuild existing _sstables with new_sstables added to it and sstables_to_remove removed from it.
void rebuild_sstable_list(const std::vector<sstables::shared_sstable>& new_sstables,
const std::vector<sstables::shared_sstable>& sstables_to_remove);
private:
// Creates a mutation reader which covers sstables.
// Caller needs to ensure that column_family remains live (FIXME: relax this).
// The 'range' parameter must be live as long as the reader is used.
// Mutations returned by the reader will all have given schema.
mutation_reader make_sstable_reader(schema_ptr schema, const query::partition_range& range, const io_priority_class& pc) const;
mutation_reader make_sstable_reader(schema_ptr schema, const query::partition_range& range) const;
mutation_source sstables_as_mutation_source();
key_source sstables_as_key_source() const;
@@ -213,11 +202,7 @@ public:
// Note: for data queries use query() instead.
// The 'range' parameter must be live as long as the reader is used.
// Mutations returned by the reader will all have given schema.
// If I/O needs to be issued to read anything in the specified range, the operations
// will be scheduled under the priority class given by pc.
mutation_reader make_reader(schema_ptr schema,
const query::partition_range& range = query::full_partition_range,
const io_priority_class& pc = default_priority_class()) const;
mutation_reader make_reader(schema_ptr schema, const query::partition_range& range = query::full_partition_range) const;
mutation_source as_mutation_source() const;
@@ -305,15 +290,7 @@ public:
// not a real compaction policy.
future<> compact_all_sstables();
// Compact all sstables provided in the vector.
// If cleanup is set to true, compaction_sstables will run on behalf of a cleanup job,
// meaning that irrelevant keys will be discarded.
future<> compact_sstables(sstables::compaction_descriptor descriptor, bool cleanup = false);
// Performs a cleanup on each sstable of this column family, excluding
// those ones that are irrelevant to this node or being compacted.
// Cleanup is about discarding keys that are no longer relevant for a
// given sstable, e.g. after node loses part of its token range because
// of a newly added node.
future<> cleanup_sstables(sstables::compaction_descriptor descriptor);
future<> compact_sstables(sstables::compaction_descriptor descriptor);
future<bool> snapshot_exists(sstring name);
@@ -336,7 +313,7 @@ public:
void start_compaction();
void trigger_compaction();
future<> run_compaction(sstables::compaction_descriptor descriptor);
future<> run_compaction();
void set_compaction_strategy(sstables::compaction_strategy_type strategy);
const sstables::compaction_strategy& get_compaction_strategy() const {
return _compaction_strategy;
@@ -367,10 +344,6 @@ public:
}
});
}
std::unordered_set<unsigned long>& compacting_generations() {
return _compacting_generations;
}
private:
// One does not need to wait on this future if all we are interested in, is
// initiating the write. The writes initiated here will eventually
@@ -702,6 +675,53 @@ public:
// FIXME: stub
class secondary_index_manager {};
inline
void
column_family::apply(const mutation& m, const db::replay_position& rp) {
utils::latency_counter lc;
_stats.writes.set_latency(lc);
active_memtable().apply(m, rp);
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}
inline
void
column_family::seal_on_overflow() {
++_mutation_count;
if (active_memtable().occupancy().total_space() >= _config.max_memtable_size) {
// FIXME: if sparse, do some in-memory compaction first
// FIXME: maybe merge with other in-memory memtables
_mutation_count = 0;
seal_active_memtable();
}
}
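`seal_on_overflow()` above counts mutations and, once the active memtable's occupancy reaches the configured limit, seals it (queuing it for flush) and starts fresh. A sketch of that bookkeeping (class and field names are illustrative):

```python
class memtable_sealer:
    def __init__(self, max_memtable_size):
        self.max_memtable_size = max_memtable_size
        self.occupancy = 0        # active_memtable().occupancy().total_space()
        self.mutation_count = 0
        self.seals = 0

    def apply(self, mutation_size):
        self.occupancy += mutation_size
        self.mutation_count += 1
        if self.occupancy >= self.max_memtable_size:
            self.mutation_count = 0
            self.seals += 1       # seal_active_memtable(): queue flush,
            self.occupancy = 0    # swap in a fresh empty memtable
```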
inline
void
column_family::check_valid_rp(const db::replay_position& rp) const {
if (rp < _highest_flushed_rp) {
throw replay_position_reordered_exception();
}
}
inline
void
column_family::apply(const frozen_mutation& m, const schema_ptr& m_schema, const db::replay_position& rp) {
utils::latency_counter lc;
_stats.writes.set_latency(lc);
check_valid_rp(rp);
active_memtable().apply(m, m_schema, rp);
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}
future<> update_schema_version_and_announce(distributed<service::storage_proxy>& proxy);
#endif /* DATABASE_HH_ */


@@ -45,7 +45,6 @@
#include <boost/range/adaptor/sliced.hpp>
#include "batchlog_manager.hh"
#include "canonical_mutation.hh"
#include "service/storage_service.hh"
#include "service/storage_proxy.hh"
#include "system_keyspace.hh"
@@ -118,14 +117,14 @@ mutation db::batchlog_manager::get_batch_log_mutation_for(const std::vector<muta
auto key = partition_key::from_singular(*schema, id);
auto timestamp = api::new_timestamp();
auto data = [this, &mutations] {
std::vector<canonical_mutation> fm(mutations.begin(), mutations.end());
std::vector<frozen_mutation> fm(mutations.begin(), mutations.end());
const auto size = std::accumulate(fm.begin(), fm.end(), size_t(0), [](size_t s, auto& m) {
return s + serializer<canonical_mutation>{m}.size();
return s + serializer<frozen_mutation>{m}.size();
});
bytes buf(bytes::initialized_later(), size);
data_output out(buf);
for (auto& m : fm) {
serializer<canonical_mutation>{m}(out);
serializer<frozen_mutation>{m}(out);
}
return buf;
}();
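The `data` lambda in this hunk builds the batch-log blob in two passes: sum the serialized sizes first, allocate a buffer of exactly that size, then serialize each mutation into it; replay later reads mutations back until the input is exhausted. A round-trip sketch, using a simple length prefix as the stand-in serializer:

```python
import struct

def serialize_mutations(mutations):
    blobs = [m.encode() for m in mutations]
    # std::accumulate over serializer sizes, then size the buffer up front.
    size = sum(4 + len(b) for b in blobs)
    buf = bytearray()
    for b in blobs:                               # serializer<mutation>{m}(out)
        buf += struct.pack('>I', len(b)) + b
    assert len(buf) == size
    return bytes(buf)

def deserialize_mutations(data):
    out, i = [], 0
    while i < len(data):                          # while (in.has_next())
        (n,) = struct.unpack_from('>I', data, i)
        out.append(data[i + 4:i + 4 + n].decode())
        i += 4 + n
    return out
```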
@@ -153,24 +152,23 @@ future<> db::batchlog_manager::replay_all_failed_batches() {
auto batch = [this, limiter](const cql3::untyped_result_set::row& row) {
auto written_at = row.get_as<db_clock::time_point>("written_at");
auto id = row.get_as<utils::UUID>("id");
// enough time for the actual write + batchlog entry mutation delivery (two separate requests).
auto timeout = get_batch_log_timeout();
if (db_clock::now() < written_at + timeout) {
logger.debug("Skipping replay of {}, too fresh", id);
return make_ready_future<>();
}
// not used currently. ever?
//auto version = row.has("version") ? row.get_as<uint32_t>("version") : /*MessagingService.VERSION_12*/6u;
auto id = row.get_as<utils::UUID>("id");
auto data = row.get_blob("data");
logger.debug("Replaying batch {}", id);
auto fms = make_lw_shared<std::deque<canonical_mutation>>();
auto fms = make_lw_shared<std::deque<frozen_mutation>>();
data_input in(data);
while (in.has_next()) {
fms->emplace_back(serializer<canonical_mutation>::read(in));
fms->emplace_back(serializer<frozen_mutation>::read(in));
}
auto mutations = make_lw_shared<std::vector<mutation>>();
@@ -182,10 +180,11 @@ future<> db::batchlog_manager::replay_all_failed_batches() {
}
auto& fm = fms->front();
auto mid = fm.column_family_id();
return system_keyspace::get_truncated_at(mid).then([this, mid, &fm, written_at, mutations](db_clock::time_point t) {
schema_ptr s = _qp.db().local().find_schema(mid);
return system_keyspace::get_truncated_at(mid).then([this, &fm, written_at, mutations](db_clock::time_point t) {
warn(unimplemented::cause::SCHEMA_CHANGE);
auto schema = local_schema_registry().get(fm.schema_version());
if (written_at > t) {
mutations->emplace_back(fm.to_mutation(s));
mutations->emplace_back(fm.unfreeze(schema));
}
}).then([fms] {
fms->pop_front();


@@ -64,8 +64,6 @@
#include "utils/crc.hh"
#include "utils/runtime.hh"
#include "log.hh"
#include "commitlog_entry.hh"
#include "service/priority_manager.hh"
static logging::logger logger("commitlog");
@@ -157,9 +155,6 @@ public:
bool _shutdown = false;
semaphore _new_segment_semaphore;
semaphore _write_semaphore;
semaphore _flush_semaphore;
scollectd::registrations _regs;
// TODO: verify that we're ok with not-so-great granularity
@@ -175,11 +170,7 @@ public:
uint64_t bytes_slack = 0;
uint64_t segments_created = 0;
uint64_t segments_destroyed = 0;
uint64_t pending_writes = 0;
uint64_t pending_flushes = 0;
uint64_t pending_allocations = 0;
uint64_t write_limit_exceeded = 0;
uint64_t flush_limit_exceeded = 0;
uint64_t pending_operations = 0;
uint64_t total_size = 0;
uint64_t buffer_list_bytes = 0;
uint64_t total_size_on_disk = 0;
@@ -187,73 +178,33 @@ public:
stats totals;
future<> begin_write() {
void begin_op() {
_gate.enter();
++totals.pending_writes; // redundant, given semaphore. but easier to read
if (totals.pending_writes >= cfg.max_active_writes) {
++totals.write_limit_exceeded;
logger.trace("Write ops overflow: {}. Will block.", totals.pending_writes);
}
return _write_semaphore.wait();
++totals.pending_operations;
}
void end_write() {
_write_semaphore.signal();
--totals.pending_writes;
void end_op() {
--totals.pending_operations;
_gate.leave();
}
future<> begin_flush() {
_gate.enter();
++totals.pending_flushes;
if (totals.pending_flushes >= cfg.max_active_flushes) {
++totals.flush_limit_exceeded;
logger.trace("Flush ops overflow: {}. Will block.", totals.pending_flushes);
}
return _flush_semaphore.wait();
}
void end_flush() {
_flush_semaphore.signal();
--totals.pending_flushes;
_gate.leave();
}
bool should_wait_for_write() const {
return _write_semaphore.waiters() > 0 || _flush_semaphore.waiters() > 0;
}
segment_manager(config c)
: cfg([&c] {
config cfg(c);
if (cfg.commit_log_location.empty()) {
cfg.commit_log_location = "/var/lib/scylla/commitlog";
}
if (cfg.max_active_writes == 0) {
cfg.max_active_writes = // TODO: call someone to get an idea...
25 * smp::count;
}
cfg.max_active_writes = std::max(uint64_t(1), cfg.max_active_writes / smp::count);
if (cfg.max_active_flushes == 0) {
cfg.max_active_flushes = // TODO: call someone to get an idea...
5 * smp::count;
}
cfg.max_active_flushes = std::max(uint64_t(1), cfg.max_active_flushes / smp::count);
return cfg;
}())
, max_size(std::min<size_t>(std::numeric_limits<position_type>::max(), std::max<size_t>(cfg.commitlog_segment_size_in_mb, 1) * 1024 * 1024))
, max_mutation_size(max_size >> 1)
, max_disk_size(size_t(std::ceil(cfg.commitlog_total_space_in_mb / double(smp::count))) * 1024 * 1024)
, _write_semaphore(cfg.max_active_writes)
, _flush_semaphore(cfg.max_active_flushes)
: cfg(c), max_size(
std::min<size_t>(std::numeric_limits<position_type>::max(),
std::max<size_t>(cfg.commitlog_segment_size_in_mb,
1) * 1024 * 1024)), max_mutation_size(
max_size >> 1), max_disk_size(
size_t(
std::ceil(
cfg.commitlog_total_space_in_mb
/ double(smp::count))) * 1024 * 1024)
{
assert(max_size > 0);
if (cfg.commit_log_location.empty()) {
cfg.commit_log_location = "/var/lib/scylla/commitlog";
}
logger.trace("Commitlog {} maximum disk size: {} MB / cpu ({} cpus)",
cfg.commit_log_location, max_disk_size / (1024 * 1024),
smp::count);
_regs = create_counters();
}
~segment_manager() {
@@ -287,8 +238,6 @@ public:
}
std::vector<sstring> get_active_names() const;
uint64_t get_num_dirty_segments() const;
uint64_t get_num_active_segments() const;
using buffer_type = temporary_buffer<char>;
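The `segment_manager` constructor above derives three limits from the config: a segment size clamped to what a `position_type` offset can address, a maximum mutation size of half a segment, and a total disk budget split evenly across shards. A self-contained sketch of that arithmetic, assuming `position_type` is a 32-bit offset and with `shard_count` standing in for `smp::count`:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <limits>

using position_type = uint32_t; // assumed width of a segment offset

struct sizes {
    size_t max_size;          // per-segment byte limit
    size_t max_mutation_size; // a mutation may use at most half a segment
    size_t max_disk_size;     // per-shard share of the total budget
};

sizes compute_sizes(size_t segment_size_in_mb, size_t total_space_in_mb,
                    unsigned shard_count) {
    sizes s;
    s.max_size = std::min<size_t>(
            std::numeric_limits<position_type>::max(),
            std::max<size_t>(segment_size_in_mb, 1) * 1024 * 1024);
    s.max_mutation_size = s.max_size >> 1;
    s.max_disk_size =
            size_t(std::ceil(total_space_in_mb / double(shard_count))) * 1024 * 1024;
    return s;
}
```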
@@ -392,39 +341,9 @@ class db::commitlog::segment: public enable_lw_shared_from_this<segment> {
std::unordered_map<cf_id_type, position_type> _cf_dirty;
time_point _sync_time;
seastar::gate _gate;
uint64_t _write_waiters = 0;
semaphore _queue;
std::unordered_set<table_schema_version> _known_schema_versions;
friend std::ostream& operator<<(std::ostream&, const segment&);
friend class segment_manager;
future<> begin_flush() {
// This is maintaining the semantics of only using the write-lock
// as a gate for flushing, i.e. once we've begun a flush for position X
// we are ok with writes to positions > X
return _dwrite.write_lock().then(std::bind(&segment_manager::begin_flush, _segment_manager)).finally([this] {
_dwrite.write_unlock();
});
}
void end_flush() {
_segment_manager->end_flush();
}
future<> begin_write() {
// This is maintaining the semantics of only using the write-lock
// as a gate for flushing, i.e. once we've begun a flush for position X
// we are ok with writes to positions > X
return _dwrite.read_lock().then(std::bind(&segment_manager::begin_write, _segment_manager));
}
void end_write() {
_segment_manager->end_write();
_dwrite.read_unlock();
}
public:
struct cf_mark {
const segment& s;
@@ -446,7 +365,7 @@ public:
segment(segment_manager* m, const descriptor& d, file && f, bool active)
: _segment_manager(m), _desc(std::move(d)), _file(std::move(f)), _sync_time(
clock_type::now()), _queue(0)
clock_type::now())
{
++_segment_manager->totals.segments_created;
logger.debug("Created new {} segment {}", active ? "active" : "reserve", *this);
@@ -464,19 +383,9 @@ public:
}
}
bool is_schema_version_known(schema_ptr s) {
return _known_schema_versions.count(s->version());
}
void add_schema_version(schema_ptr s) {
_known_schema_versions.emplace(s->version());
}
void forget_schema_versions() {
_known_schema_versions.clear();
}
bool must_sync() {
if (_segment_manager->cfg.mode == sync_mode::BATCH) {
return false;
return true;
}
auto now = clock_type::now();
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
@@ -492,9 +401,8 @@ public:
*/
future<sseg_ptr> finish_and_get_new() {
_closed = true;
return maybe_wait_for_write(sync()).then([](sseg_ptr s) {
return s->_segment_manager->active_segment();
});
sync();
return _segment_manager->active_segment();
}
void reset_sync_time() {
_sync_time = clock_type::now();
@@ -509,7 +417,7 @@ public:
logger.trace("Sync not needed {}: ({} / {})", *this, position(), _flush_pos);
return make_ready_future<sseg_ptr>(shared_from_this());
}
return cycle().then([](sseg_ptr seg) {
return cycle().then([](auto seg) {
return seg->flush();
});
}
@@ -532,14 +440,16 @@ public:
// This is not 100% necessary, we really only need the ones below our flush pos,
// but since we pretty much assume that task ordering will make this the case anyway...
return begin_flush().then(
return _dwrite.write_lock().then(
[this, me, pos]() mutable {
_dwrite.write_unlock(); // release it already.
pos = std::max(pos, _file_pos);
if (pos <= _flush_pos) {
logger.trace("{} already synced! ({} < {})", *this, pos, _flush_pos);
return make_ready_future<sseg_ptr>(std::move(me));
}
return _file.flush().then_wrapped([this, pos, me](future<> f) {
_segment_manager->begin_op();
return _file.flush().then_wrapped([this, pos, me](auto f) {
try {
f.get();
// TODO: retry/ignore/fail/stop - optional behaviour in origin.
@@ -552,50 +462,16 @@ public:
logger.error("Failed to flush commits to disk: {}", std::current_exception());
throw;
}
}).finally([this, me] {
_segment_manager->end_op();
});
}).finally([this] {
end_flush();
});
});
}
/**
* Allocate a new buffer
*/
void new_buffer(size_t s) {
assert(_buffer.empty());
auto overhead = segment_overhead_size;
if (_file_pos == 0) {
overhead += descriptor_header_size;
}
auto a = align_up(s + overhead, alignment);
auto k = std::max(a, default_size);
for (;;) {
try {
_buffer = _segment_manager->acquire_buffer(k);
break;
} catch (std::bad_alloc&) {
logger.warn("Could not allocate {} k bytes output buffer ({} k required)", k / 1024, a / 1024);
if (k > a) {
k = std::max(a, k / 2);
logger.debug("Trying reduced size: {} k", k / 1024);
continue;
}
throw;
}
}
_buf_pos = overhead;
auto * p = reinterpret_cast<uint32_t *>(_buffer.get_write());
std::fill(p, p + overhead, 0);
_segment_manager->totals.total_size += k;
}
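`new_buffer()` above retries allocation with progressively smaller sizes on `std::bad_alloc`, halving the preferred size but never going below what the entry actually requires. A sketch of that fallback loop in isolation, with the allocator injected (in the diff it is `segment_manager::acquire_buffer`) so the pattern can be exercised on its own:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <new>

// Try the preferred size k first; on std::bad_alloc retry with k/2,
// clamped to the required size a. Returns the size that succeeded.
size_t alloc_with_fallback(size_t required, size_t preferred,
                           const std::function<void*(size_t)>& try_alloc,
                           void*& out) {
    size_t a = required;
    size_t k = std::max(a, preferred);
    for (;;) {
        try {
            out = try_alloc(k);
            return k;
        } catch (const std::bad_alloc&) {
            if (k > a) {
                k = std::max(a, k / 2); // shrink and retry
                continue;
            }
            throw; // even the minimum size failed
        }
    }
}
```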
/**
* Send any buffer contents to disk and get a new tmp buffer
*/
// See class comment for info
future<sseg_ptr> cycle() {
future<sseg_ptr> cycle(size_t s = 0) {
auto size = clear_buffer_slack();
auto buf = std::move(_buffer);
auto off = _file_pos;
@@ -603,6 +479,36 @@ public:
_file_pos += size;
_buf_pos = 0;
// if we need new buffer, get one.
// TODO: keep a queue of available buffers?
if (s > 0) {
auto overhead = segment_overhead_size;
if (_file_pos == 0) {
overhead += descriptor_header_size;
}
auto a = align_up(s + overhead, alignment);
auto k = std::max(a, default_size);
for (;;) {
try {
_buffer = _segment_manager->acquire_buffer(k);
break;
} catch (std::bad_alloc&) {
logger.warn("Could not allocate {} k bytes output buffer ({} k required)", k / 1024, a / 1024);
if (k > a) {
k = std::max(a, k / 2);
logger.debug("Trying reduced size: {} k", k / 1024);
continue;
}
throw;
}
}
_buf_pos = overhead;
auto * p = reinterpret_cast<uint32_t *>(_buffer.get_write());
std::fill(p, p + overhead, 0);
_segment_manager->totals.total_size += k;
}
auto me = shared_from_this();
assert(!me.owned());
@@ -639,15 +545,13 @@ public:
out.write(uint32_t(_file_pos));
out.write(crc.checksum());
forget_schema_versions();
// acquire read lock
return begin_write().then([this, size, off, buf = std::move(buf), me]() mutable {
return _dwrite.read_lock().then([this, size, off, buf = std::move(buf), me]() mutable {
auto written = make_lw_shared<size_t>(0);
auto p = buf.get();
_segment_manager->begin_op();
return repeat([this, size, off, written, p]() mutable {
auto&& priority_class = service::get_local_commitlog_priority();
return _file.dma_write(off + *written, p + *written, size - *written, priority_class).then_wrapped([this, size, written](future<size_t>&& f) {
return _file.dma_write(off + *written, p + *written, size - *written).then_wrapped([this, size, written](auto&& f) {
try {
auto bytes = std::get<0>(f.get());
*written += bytes;
@@ -671,59 +575,20 @@ public:
});
}).finally([this, buf = std::move(buf)]() mutable {
_segment_manager->release_buffer(std::move(buf));
_segment_manager->end_op();
});
}).then([me] {
return make_ready_future<sseg_ptr>(std::move(me));
}).finally([me, this]() {
end_write(); // release
});
}
future<sseg_ptr> maybe_wait_for_write(future<sseg_ptr> f) {
if (_segment_manager->should_wait_for_write()) {
++_write_waiters;
logger.trace("Too many pending writes. Must wait.");
return f.finally([this] {
if (--_write_waiters == 0) {
_queue.signal(_queue.waiters());
}
});
}
return make_ready_future<sseg_ptr>(shared_from_this());
}
/**
* If an allocation causes a write, and the write causes a block,
* any allocations after that need to wait for this to finish,
* otherwise we will just continue building up more write queue
* eventually (+ lose more ordering)
*
* Some caution here, since maybe_wait_for_write actually
* releases _all_ queued up ops when finishing, we could get
* "bursts" of alloc->write, causing build-ups anyway.
* This should be measured properly. For now I am hoping this
* will work out as these should "block as a group". However,
* buffer memory usage might grow...
*/
bool must_wait_for_alloc() {
return _write_waiters > 0;
}
future<sseg_ptr> wait_for_alloc() {
auto me = shared_from_this();
++_segment_manager->totals.pending_allocations;
logger.trace("Previous allocation is blocking. Must wait.");
return _queue.wait().then([me] { // TODO: do we need a finally?
--me->_segment_manager->totals.pending_allocations;
return make_ready_future<sseg_ptr>(me);
_dwrite.read_unlock(); // release
});
}
/**
* Add a "mutation" to the segment.
*/
future<replay_position> allocate(const cf_id_type& id, shared_ptr<entry_writer> writer) {
const auto size = writer->size(*this);
future<replay_position> allocate(const cf_id_type& id, size_t size,
serializer_func func) {
const auto s = size + entry_overhead_size; // total size
if (s > _segment_manager->max_mutation_size) {
return make_exception_future<replay_position>(
@@ -732,26 +597,23 @@ public:
+ " bytes is too large for the maxiumum size of "
+ std::to_string(_segment_manager->max_mutation_size)));
}
std::experimental::optional<future<sseg_ptr>> op;
if (must_sync()) {
op = sync();
} else if (must_wait_for_alloc()) {
op = wait_for_alloc();
} else if (!is_still_allocating() || position() + s > _segment_manager->max_size) { // would we make the file too big?
// do this in next segment instead.
op = finish_and_get_new();
} else if (_buffer.empty()) {
new_buffer(s);
} else if (s > (_buffer.size() - _buf_pos)) { // enough data?
op = maybe_wait_for_write(cycle());
}
if (op) {
return op->then([id, writer = std::move(writer)] (sseg_ptr new_seg) mutable {
return new_seg->allocate(id, std::move(writer));
});
// would we make the file too big?
for (;;) {
if (position() + s > _segment_manager->max_size) {
// do this in next segment instead.
return finish_and_get_new().then(
[id, size, func = std::move(func)](auto new_seg) {
return new_seg->allocate(id, size, func);
});
}
// enough data?
if (s > (_buffer.size() - _buf_pos)) {
// TODO: iff we have to many writes running, maybe we should
// wait for this?
cycle(s);
continue; // re-check file size overflow
}
break;
}
_gate.enter(); // this might throw. I guess we accept this?
@@ -772,7 +634,7 @@ public:
out.write(crc.checksum());
// actual data
writer->write(*this, out);
func(out);
crc.process_bytes(p + 2 * sizeof(uint32_t), size);
@@ -783,8 +645,9 @@ public:
_gate.leave();
if (_segment_manager->cfg.mode == sync_mode::BATCH) {
return sync().then([rp](sseg_ptr) {
// finally, check if we're required to sync.
if (must_sync()) {
return sync().then([rp](auto seg) {
return make_ready_future<replay_position>(rp);
});
}
@@ -873,7 +736,7 @@ db::commitlog::segment_manager::list_descriptors(sstring dirname) {
}
return make_ready_future<std::experimental::optional<directory_entry_type>>(de.type);
};
return entry_type(de).then([this, de](std::experimental::optional<directory_entry_type> type) {
return entry_type(de).then([this, de](auto type) {
if (type == directory_entry_type::regular && de.name[0] != '.') {
try {
_result.emplace_back(de.name);
@@ -890,7 +753,7 @@ db::commitlog::segment_manager::list_descriptors(sstring dirname) {
}
};
return engine().open_directory(dirname).then([this, dirname](file dir) {
return engine().open_directory(dirname).then([this, dirname](auto dir) {
auto h = make_lw_shared<helper>(std::move(dirname), std::move(dir));
return h->done().then([h]() {
return make_ready_future<std::vector<db::commitlog::descriptor>>(std::move(h->_result));
@@ -899,7 +762,7 @@ db::commitlog::segment_manager::list_descriptors(sstring dirname) {
}
future<> db::commitlog::segment_manager::init() {
return list_descriptors(cfg.commit_log_location).then([this](std::vector<descriptor> descs) {
return list_descriptors(cfg.commit_log_location).then([this](auto descs) {
segment_id_type id = std::chrono::duration_cast<std::chrono::milliseconds>(runtime::get_boot_time().time_since_epoch()).count() + 1;
for (auto& d : descs) {
id = std::max(id, replay_position(d.id).base_id());
@@ -969,23 +832,9 @@ scollectd::registrations db::commitlog::segment_manager::create_counters() {
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "queue_length", "pending_writes")
, make_typed(data_type::GAUGE, totals.pending_writes)
, per_cpu_plugin_instance, "queue_length", "pending_operations")
, make_typed(data_type::GAUGE, totals.pending_operations)
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "queue_length", "pending_flushes")
, make_typed(data_type::GAUGE, totals.pending_flushes)
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "total_operations", "write_limit_exceeded")
, make_typed(data_type::DERIVE, totals.write_limit_exceeded)
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "total_operations", "flush_limit_exceeded")
, make_typed(data_type::DERIVE, totals.flush_limit_exceeded)
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "memory", "total_size")
, make_typed(data_type::GAUGE, totals.total_size)
@@ -1114,7 +963,7 @@ std::ostream& db::operator<<(std::ostream& out, const db::replay_position& p) {
void db::commitlog::segment_manager::discard_unused_segments() {
logger.trace("Checking for unused segments ({} active)", _segments.size());
auto i = std::remove_if(_segments.begin(), _segments.end(), [=](sseg_ptr s) {
auto i = std::remove_if(_segments.begin(), _segments.end(), [=](auto s) {
if (s->can_delete()) {
logger.debug("Segment {} is unused", *s);
return true;
@@ -1208,7 +1057,7 @@ void db::commitlog::segment_manager::on_timer() {
return this->allocate_segment(false).then([this](sseg_ptr s) {
if (!_shutdown) {
// insertion sort.
auto i = std::upper_bound(_reserve_segments.begin(), _reserve_segments.end(), s, [](sseg_ptr s1, sseg_ptr s2) {
auto i = std::upper_bound(_reserve_segments.begin(), _reserve_segments.end(), s, [](auto s1, auto s2) {
const descriptor& d1 = s1->_desc;
const descriptor& d2 = s2->_desc;
return d1.id < d2.id;
@@ -1220,7 +1069,7 @@ void db::commitlog::segment_manager::on_timer() {
--_reserve_allocating;
});
});
}).handle_exception([](std::exception_ptr ep) {
}).handle_exception([](auto ep) {
logger.warn("Exception in segment reservation: {}", ep);
});
arm();
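The reserve-segment hunk above keeps `_reserve_segments` ordered by descriptor id via an insertion sort: `std::upper_bound` finds the insert position for each new segment. The same pattern in miniature, with a plain `int` standing in for `sseg_ptr`/`descriptor`:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Insert id into an already-sorted vector, keeping it sorted.
// upper_bound places duplicates after existing equal elements.
void insert_sorted(std::vector<int>& reserve, int id) {
    auto i = std::upper_bound(reserve.begin(), reserve.end(), id);
    reserve.insert(i, id);
}
```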
@@ -1237,19 +1086,6 @@ std::vector<sstring> db::commitlog::segment_manager::get_active_names() const {
return res;
}
uint64_t db::commitlog::segment_manager::get_num_dirty_segments() const {
return std::count_if(_segments.begin(), _segments.end(), [](sseg_ptr s) {
return !s->is_still_allocating() && !s->is_clean();
});
}
uint64_t db::commitlog::segment_manager::get_num_active_segments() const {
return std::count_if(_segments.begin(), _segments.end(), [](sseg_ptr s) {
return s->is_still_allocating();
});
}
db::commitlog::segment_manager::buffer_type db::commitlog::segment_manager::acquire_buffer(size_t s) {
auto i = _temp_buffers.begin();
auto e = _temp_buffers.end();
@@ -1292,44 +1128,8 @@ void db::commitlog::segment_manager::release_buffer(buffer_type&& b) {
*/
future<db::replay_position> db::commitlog::add(const cf_id_type& id,
size_t size, serializer_func func) {
class serializer_func_entry_writer final : public entry_writer {
serializer_func _func;
size_t _size;
public:
serializer_func_entry_writer(size_t sz, serializer_func func)
: _func(std::move(func)), _size(sz)
{ }
virtual size_t size(segment&) override { return _size; }
virtual void write(segment&, output& out) override {
_func(out);
}
};
auto writer = ::make_shared<serializer_func_entry_writer>(size, std::move(func));
return _segment_manager->active_segment().then([id, writer] (auto s) {
return s->allocate(id, writer);
});
}
future<db::replay_position> db::commitlog::add_entry(const cf_id_type& id, const commitlog_entry_writer& cew)
{
class cl_entry_writer final : public entry_writer {
commitlog_entry_writer _writer;
public:
cl_entry_writer(const commitlog_entry_writer& wr) : _writer(wr) { }
virtual size_t size(segment& seg) override {
_writer.set_with_schema(!seg.is_schema_version_known(_writer.schema()));
return _writer.size();
}
virtual void write(segment& seg, output& out) override {
if (_writer.with_schema()) {
seg.add_schema_version(_writer.schema());
}
_writer.write(out);
}
};
auto writer = ::make_shared<cl_entry_writer>(cew);
return _segment_manager->active_segment().then([id, writer] (auto s) {
return s->allocate(id, writer);
return _segment_manager->active_segment().then([=](auto s) {
return s->allocate(id, size, std::move(func));
});
}
@@ -1400,18 +1200,11 @@ future<> db::commitlog::shutdown() {
return _segment_manager->shutdown();
}
size_t db::commitlog::max_record_size() const {
return _segment_manager->max_mutation_size - segment::entry_overhead_size;
}
uint64_t db::commitlog::max_active_writes() const {
return _segment_manager->cfg.max_active_writes;
}
uint64_t db::commitlog::max_active_flushes() const {
return _segment_manager->cfg.max_active_flushes;
}
future<> db::commitlog::clear() {
return _segment_manager->clear();
}
@@ -1593,6 +1386,10 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
return skip(slack);
}
if (start_off > pos) {
return skip(size - entry_header_size);
}
return fin.read_exactly(size - entry_header_size).then([this, size, crc = std::move(crc), rp](temporary_buffer<char> buf) mutable {
advance(buf);
@@ -1662,28 +1459,7 @@ uint64_t db::commitlog::get_flush_count() const {
}
uint64_t db::commitlog::get_pending_tasks() const {
return _segment_manager->totals.pending_writes
+ _segment_manager->totals.pending_flushes;
}
uint64_t db::commitlog::get_pending_writes() const {
return _segment_manager->totals.pending_writes;
}
uint64_t db::commitlog::get_pending_flushes() const {
return _segment_manager->totals.pending_flushes;
}
uint64_t db::commitlog::get_pending_allocations() const {
return _segment_manager->totals.pending_allocations;
}
uint64_t db::commitlog::get_write_limit_exceeded_count() const {
return _segment_manager->totals.write_limit_exceeded;
}
uint64_t db::commitlog::get_flush_limit_exceeded_count() const {
return _segment_manager->totals.flush_limit_exceeded;
return _segment_manager->totals.pending_operations;
}
uint64_t db::commitlog::get_num_segments_created() const {
@@ -1694,14 +1470,6 @@ uint64_t db::commitlog::get_num_segments_destroyed() const {
return _segment_manager->totals.segments_destroyed;
}
uint64_t db::commitlog::get_num_dirty_segments() const {
return _segment_manager->get_num_dirty_segments();
}
uint64_t db::commitlog::get_num_active_segments() const {
return _segment_manager->get_num_active_segments();
}
future<std::vector<db::commitlog::descriptor>> db::commitlog::list_existing_descriptors() const {
return list_existing_descriptors(active_config().commit_log_location);
}


@@ -48,7 +48,6 @@
#include "core/stream.hh"
#include "utils/UUID.hh"
#include "replay_position.hh"
#include "commitlog_entry.hh"
class file;
@@ -115,10 +114,6 @@ public:
// Max number of segments to keep in pre-alloc reserve.
// Not (yet) configurable from scylla.conf.
uint64_t max_reserve_segments = 12;
// Max active writes/flushes. Default value
// zero means try to figure it out ourselves
uint64_t max_active_writes = 0;
uint64_t max_active_flushes = 0;
sync_mode mode = sync_mode::PERIODIC;
};
@@ -186,13 +181,6 @@ public:
});
}
/**
* Add an entry to the commit log.
*
* @param entry_writer a writer responsible for writing the entry
*/
future<replay_position> add_entry(const cf_id_type& id, const commitlog_entry_writer& entry_writer);
/**
* Modifies the per-CF dirty cursors of any commit log segments for the column family according to the position
* given. Discards any commit log segments that are no longer used.
@@ -245,37 +233,14 @@ public:
uint64_t get_completed_tasks() const;
uint64_t get_flush_count() const;
uint64_t get_pending_tasks() const;
uint64_t get_pending_writes() const;
uint64_t get_pending_flushes() const;
uint64_t get_pending_allocations() const;
uint64_t get_write_limit_exceeded_count() const;
uint64_t get_flush_limit_exceeded_count() const;
uint64_t get_num_segments_created() const;
uint64_t get_num_segments_destroyed() const;
/**
* Get number of inactive (finished), segments lingering
* due to still being dirty
*/
uint64_t get_num_dirty_segments() const;
/**
* Get number of active segments, i.e. still being allocated to
*/
uint64_t get_num_active_segments() const;
/**
* Returns the largest amount of data that can be written in a single "mutation".
*/
size_t max_record_size() const;
/**
* Return max allowed pending writes (per this shard)
*/
uint64_t max_active_writes() const;
/**
* Return max allowed pending flushes (per this shard)
*/
uint64_t max_active_flushes() const;
future<> clear();
const config& active_config() const;
@@ -318,11 +283,6 @@ public:
const sstring&, commit_load_reader_func, position_type = 0);
private:
commitlog(config);
struct entry_writer {
virtual size_t size(segment&) = 0;
virtual void write(segment&, output&) = 0;
};
};
}


@@ -1,88 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <experimental/optional>
#include "frozen_mutation.hh"
#include "schema.hh"
namespace stdx = std::experimental;
class commitlog_entry_writer {
schema_ptr _schema;
db::serializer<column_mapping> _column_mapping_serializer;
const frozen_mutation& _mutation;
bool _with_schema = true;
public:
commitlog_entry_writer(schema_ptr s, const frozen_mutation& fm)
: _schema(std::move(s)), _column_mapping_serializer(_schema->get_column_mapping()), _mutation(fm)
{ }
void set_with_schema(bool value) {
_with_schema = value;
}
bool with_schema() {
return _with_schema;
}
schema_ptr schema() const {
return _schema;
}
size_t size() const {
size_t size = data_output::serialized_size<bool>();
if (_with_schema) {
size += _column_mapping_serializer.size();
}
size += _mutation.representation().size();
return size;
}
void write(data_output& out) const {
out.write(_with_schema);
if (_with_schema) {
_column_mapping_serializer.write(out);
}
auto bv = _mutation.representation();
out.write(bv.begin(), bv.end());
}
};
class commitlog_entry_reader {
frozen_mutation _mutation;
stdx::optional<column_mapping> _column_mapping;
public:
commitlog_entry_reader(const temporary_buffer<char>& buffer)
: _mutation(bytes())
{
data_input in(buffer);
bool has_column_mapping = in.read<bool>();
if (has_column_mapping) {
_column_mapping = db::serializer<::column_mapping>::read(in);
}
auto bv = in.read_view(in.avail());
_mutation = frozen_mutation(bytes(bv.begin(), bv.end()));
}
const stdx::optional<column_mapping>& get_column_mapping() const { return _column_mapping; }
const frozen_mutation& mutation() const { return _mutation; }
};
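The `commitlog_entry_writer`/`commitlog_entry_reader` pair above frames each entry as a bool flag, an optional column-mapping blob when the flag is set, then the mutation bytes. A loose roundtrip sketch of that layout, with `std::string` standing in for both the serialized column mapping and the frozen mutation, and a hypothetical one-byte length prefix replacing the real serializer's framing:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Flag byte, optional length-prefixed schema blob, then the payload.
std::vector<uint8_t> write_entry(const std::optional<std::string>& schema,
                                 const std::string& mutation) {
    std::vector<uint8_t> out;
    out.push_back(schema.has_value() ? 1 : 0);
    if (schema) {
        out.push_back(uint8_t(schema->size())); // hypothetical 1-byte length
        out.insert(out.end(), schema->begin(), schema->end());
    }
    out.insert(out.end(), mutation.begin(), mutation.end());
    return out;
}

std::pair<std::optional<std::string>, std::string>
read_entry(const std::vector<uint8_t>& buf) {
    size_t pos = 0;
    std::optional<std::string> schema;
    if (buf[pos++]) {
        size_t n = buf[pos++];
        schema = std::string(buf.begin() + pos, buf.begin() + pos + n);
        pos += n;
    }
    return {schema, std::string(buf.begin() + pos, buf.end())};
}
```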


@@ -56,14 +56,10 @@
#include "db/serializer.hh"
#include "cql3/query_processor.hh"
#include "log.hh"
#include "converting_mutation_partition_applier.hh"
#include "schema_registry.hh"
#include "commitlog_entry.hh"
static logging::logger logger("commitlog_replayer");
class db::commitlog_replayer::impl {
std::unordered_map<table_schema_version, column_mapping> _column_mappings;
public:
impl(seastar::sharded<cql3::query_processor>& db);
@@ -74,19 +70,6 @@ public:
uint64_t skipped_mutations = 0;
uint64_t applied_mutations = 0;
uint64_t corrupt_bytes = 0;
stats& operator+=(const stats& s) {
invalid_mutations += s.invalid_mutations;
skipped_mutations += s.skipped_mutations;
applied_mutations += s.applied_mutations;
corrupt_bytes += s.corrupt_bytes;
return *this;
}
stats operator+(const stats& s) const {
stats tmp = *this;
tmp += s;
return tmp;
}
};
future<> process(stats*, temporary_buffer<char> buf, replay_position rp);
@@ -165,6 +148,8 @@ future<> db::commitlog_replayer::impl::init() {
future<db::commitlog_replayer::impl::stats>
db::commitlog_replayer::impl::recover(sstring file) {
logger.info("Replaying {}", file);
replay_position rp{commitlog::descriptor(file)};
auto gp = _min_pos[rp.shard_id()];
@@ -197,29 +182,19 @@ db::commitlog_replayer::impl::recover(sstring file) {
}
future<> db::commitlog_replayer::impl::process(stats* s, temporary_buffer<char> buf, replay_position rp) {
auto shard = rp.shard_id();
if (rp < _min_pos[shard]) {
logger.trace("entry {} is less than global min position. skipping", rp);
s->skipped_mutations++;
return make_ready_future<>();
}
try {
commitlog_entry_reader cer(buf);
auto& fm = cer.mutation();
auto cm_it = _column_mappings.find(fm.schema_version());
if (cm_it == _column_mappings.end()) {
if (!cer.get_column_mapping()) {
throw std::runtime_error(sprint("unknown schema version {}", fm.schema_version()));
}
logger.debug("new schema version {} in entry {}", fm.schema_version(), rp);
cm_it = _column_mappings.emplace(fm.schema_version(), *cer.get_column_mapping()).first;
}
auto shard_id = rp.shard_id();
if (rp < _min_pos[shard_id]) {
logger.trace("entry {} is less than global min position. skipping", rp);
s->skipped_mutations++;
return make_ready_future<>();
}
frozen_mutation fm(bytes(reinterpret_cast<const int8_t *>(buf.get()), buf.size()));
auto uuid = fm.column_family_id();
auto& map = _rpm[shard_id];
auto& map = _rpm[shard];
auto i = map.find(uuid);
if (i != map.end() && rp <= i->second) {
logger.trace("entry {} at {} is younger than recorded replay position {}. skipping", fm.column_family_id(), rp, i->second);
@@ -228,8 +203,7 @@ future<> db::commitlog_replayer::impl::process(stats* s, temporary_buffer<char>
}
auto shard = _qp.local().db().local().shard_of(fm);
return _qp.local().db().invoke_on(shard, [this, cer = std::move(cer), cm_it, rp, shard, s] (database& db) -> future<> {
auto& fm = cer.mutation();
return _qp.local().db().invoke_on(shard, [fm = std::move(fm), rp, shard, s] (database& db) -> future<> {
// TODO: might need better verification that the deserialized mutation
// is schema compatible. My guess is that just applying the mutation
// will not do this.
@@ -245,11 +219,8 @@ future<> db::commitlog_replayer::impl::process(stats* s, temporary_buffer<char>
// their "replay_position" attribute will be empty, which is
// lower than anything the new session will produce.
if (cf.schema()->version() != fm.schema_version()) {
const column_mapping& cm = cm_it->second;
mutation m(fm.decorated_key(*cf.schema()), cf.schema());
converting_mutation_partition_applier v(cm, *cf.schema(), m.partition());
fm.partition().accept(cm, v);
cf.apply(std::move(m));
// TODO: Convert fm to current schema
fail(unimplemented::cause::SCHEMA_CHANGE);
} else {
cf.apply(fm, cf.schema());
}
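The `process()` hunk above applies two skip rules before replaying an entry: the position must not be below the global minimum recorded for its shard, and it must be strictly above any replay position already recorded for its column family. A sketch of just those rules, with plain `int` standing in for `replay_position`:

```cpp
#include <cassert>
#include <optional>

bool should_skip(int rp, int shard_min, std::optional<int> cf_recorded) {
    if (rp < shard_min) {
        return true; // below the global minimum for this shard
    }
    if (cf_recorded && rp <= *cf_recorded) {
        return true; // already covered by the recorded replay position
    }
    return false;
}
```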
@@ -292,41 +263,32 @@ future<db::commitlog_replayer> db::commitlog_replayer::create_replayer(seastar::
}
future<> db::commitlog_replayer::recover(std::vector<sstring> files) {
logger.info("Replaying {}", join(", ", files));
return map_reduce(files, [this](auto f) {
logger.debug("Replaying {}", f);
return _impl->recover(f).then([f](impl::stats stats) {
if (stats.corrupt_bytes != 0) {
logger.warn("Corrupted file: {}. {} bytes skipped.", f, stats.corrupt_bytes);
}
logger.debug("Log replay of {} complete, {} replayed mutations ({} invalid, {} skipped)"
, f
, stats.applied_mutations
, stats.invalid_mutations
, stats.skipped_mutations
);
return make_ready_future<impl::stats>(stats);
}).handle_exception([f](auto ep) -> future<impl::stats> {
logger.error("Error recovering {}: {}", f, ep);
try {
std::rethrow_exception(ep);
} catch (std::invalid_argument&) {
logger.error("Scylla cannot process {}. Make sure to fully flush all Cassandra commit log files to sstable before migrating.", f);
throw;
} catch (...) {
throw;
}
});
}, impl::stats(), std::plus<impl::stats>()).then([](impl::stats totals) {
logger.info("Log replay complete, {} replayed mutations ({} invalid, {} skipped)"
, totals.applied_mutations
, totals.invalid_mutations
, totals.skipped_mutations
);
return parallel_for_each(files, [this](auto f) {
return this->recover(f);
});
}
future<> db::commitlog_replayer::recover(sstring f) {
return recover(std::vector<sstring>{ f });
return _impl->recover(f).then([f](impl::stats stats) {
if (stats.corrupt_bytes != 0) {
logger.warn("Corrupted file: {}. {} bytes skipped.", f, stats.corrupt_bytes);
}
logger.info("Log replay of {} complete, {} replayed mutations ({} invalid, {} skipped)"
, f
, stats.applied_mutations
, stats.invalid_mutations
, stats.skipped_mutations
);
}).handle_exception([f](auto ep) {
logger.error("Error recovering {}: {}", f, ep);
try {
std::rethrow_exception(ep);
} catch (std::invalid_argument&) {
logger.error("Scylla cannot process {}. Make sure to fully flush all Cassandra commit log files to sstable before migrating.");
throw;
} catch (...) {
throw;
}
});
}


@@ -268,7 +268,7 @@ public:
"Counter writes read the current values before incrementing and writing them back. The recommended value is (16 × number_of_drives) ." \
) \
/* Common automatic backup settings */ \
val(incremental_backups, bool, false, Used, \
val(incremental_backups, bool, false, Unused, \
"Backs up data updated since the last snapshot was taken. When enabled, Cassandra creates a hard link to each SSTable flushed or streamed locally in a backups/ subdirectory of the keyspace data. Removing these links is the operator's responsibility.\n" \
"Related information: Enabling incremental backups" \
) \
@@ -718,7 +718,6 @@ public:
val(replace_address_first_boot, sstring, "", Used, "Like replace_address option, but if the node has been bootstrapped successfully it will be ignored. Same as -Dcassandra.replace_address_first_boot.") \
val(override_decommission, bool, false, Used, "Set true to force a decommissioned node to join the cluster") \
val(ring_delay_ms, uint32_t, 30 * 1000, Used, "Time a node waits to hear from other nodes before joining the ring in milliseconds. Same as -Dcassandra.ring_delay_ms in cassandra.") \
val(shutdown_announce_in_ms, uint32_t, 2 * 1000, Used, "Time a node waits after sending gossip shutdown message in milliseconds. Same as -Dcassandra.shutdown_announce_in_ms in cassandra.") \
val(developer_mode, bool, false, Used, "Relax environment checks. Setting to true can reduce performance and reliability significantly.") \
val(skip_wait_for_gossip_to_settle, int32_t, -1, Used, "An integer to configure the wait for gossip to settle. -1: wait normally, 0: do not wait at all, n: wait for at most n polls. Same as -Dcassandra.skip_wait_for_gossip_to_settle in cassandra.") \
val(experimental, bool, false, Used, "Set to true to unlock experimental features.") \

View File

@@ -50,20 +50,16 @@ namespace db {
namespace marshal {
type_parser::type_parser(sstring_view str, size_t idx)
: _str{str.begin(), str.end()}
type_parser::type_parser(const sstring& str, size_t idx)
: _str{str}
, _idx{idx}
{ }
type_parser::type_parser(sstring_view str)
type_parser::type_parser(const sstring& str)
: type_parser{str, 0}
{ }
data_type type_parser::parse(const sstring& str) {
return type_parser(sstring_view(str)).parse();
}
data_type type_parser::parse(sstring_view str) {
return type_parser(str).parse();
}

View File

@@ -62,15 +62,14 @@ class type_parser {
public static final TypeParser EMPTY_PARSER = new TypeParser("", 0);
#endif
type_parser(sstring_view str, size_t idx);
type_parser(const sstring& str, size_t idx);
public:
explicit type_parser(sstring_view str);
explicit type_parser(const sstring& str);
/**
* Parse a string containing a type definition.
*/
static data_type parse(const sstring& str);
static data_type parse(sstring_view str);
#if 0
public static AbstractType<?> parse(CharSequence compareWith) throws SyntaxException, ConfigurationException

View File

@@ -1327,10 +1327,8 @@ schema_ptr create_table_from_mutations(schema_mutations sm, std::experimental::o
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
auto comparator = table_row.get_nonnull<sstring>("comparator");
bool is_compound = cell_comparator::check_compound(comparator);
bool is_compound = cell_comparator::check_compound(table_row.get_nonnull<sstring>("comparator"));
builder.set_is_compound(is_compound);
cell_comparator::read_collections(builder, comparator);
#if 0
CellNameType comparator = CellNames.fromAbstractType(fullRawComparator, isDense);

View File

@@ -251,6 +251,39 @@ std::ostream& operator<<(std::ostream& out, const ring_position& pos) {
return out << "}";
}
size_t ring_position::serialized_size() const {
size_t size = serialize_int32_size; /* _key length */
if (_key) {
size += _key.value().representation().size();
} else {
size += sizeof(int8_t); /* _token_bound */
}
return size + _token.serialized_size();
}
void ring_position::serialize(bytes::iterator& out) const {
_token.serialize(out);
if (_key) {
auto v = _key.value().representation();
serialize_int32(out, v.size());
out = std::copy(v.begin(), v.end(), out);
} else {
serialize_int32(out, 0);
serialize_int8(out, static_cast<int8_t>(_token_bound));
}
}
ring_position ring_position::deserialize(bytes_view& in) {
auto token = token::deserialize(in);
auto size = read_simple<uint32_t>(in);
if (size == 0) {
auto bound = dht::ring_position::token_bound(read_simple<int8_t>(in));
return ring_position(std::move(token), bound);
} else {
return ring_position(std::move(token), partition_key::from_bytes(to_bytes(read_simple_bytes(in, size))));
}
}
unsigned shard_of(const token& t) {
return global_partitioner().shard_of(t);
}

View File

@@ -338,12 +338,6 @@ public:
, _key(std::experimental::make_optional(std::move(key)))
{ }
ring_position(dht::token token, token_bound bound, std::experimental::optional<partition_key> key)
: _token(std::move(token))
, _token_bound(bound)
, _key(std::move(key))
{ }
ring_position(const dht::decorated_key& dk)
: _token(dk._token)
, _key(std::experimental::make_optional(dk._key))
@@ -385,6 +379,10 @@ public:
// "less" comparator corresponding to tri_compare()
bool less_compare(const schema&, const ring_position&) const;
size_t serialized_size() const;
void serialize(bytes::iterator& out) const;
static ring_position deserialize(bytes_view& in);
friend std::ostream& operator<<(std::ostream&, const ring_position&);
};

View File

@@ -107,7 +107,7 @@ public:
, _tokens(std::move(tokens))
, _address(address)
, _description(std::move(description))
, _stream_plan(_description) {
, _stream_plan(_description, true) {
}
range_streamer(distributed<database>& db, token_metadata& tm, inet_address address, sstring description)

View File

@@ -38,9 +38,9 @@ if [ ! -d packer ]; then
fi
if [ $LOCALRPM = 0 ]; then
echo "sudo yum remove -y abrt; sudo sh -x -e /home/centos/scylla_install_pkg; sudo sh -x -e /usr/lib/scylla/scylla_setup -a" > scylla_deploy.sh
echo "sudo sh -x -e /home/centos/scylla_install_pkg; sudo sh -x -e /usr/lib/scylla/scylla_setup -a" > scylla_deploy.sh
else
echo "sudo yum remove -y abrt; sudo sh -x -e /home/centos/scylla_install_pkg -l /home/centos; sudo sh -x -e /usr/lib/scylla/scylla_setup -a" > scylla_deploy.sh
echo "sudo sh -x -e /home/centos/scylla_install_pkg -l /home/centos; sudo sh -x -e /usr/lib/scylla/scylla_setup -a" > scylla_deploy.sh
fi

View File

@@ -4,6 +4,7 @@
. /etc/os-release
. /etc/sysconfig/scylla-server
if [ ! -f /etc/default/grub ]; then
echo "Unsupported bootloader"
exit 1
@@ -17,7 +18,7 @@ fi
sed -e "s#^GRUB_CMDLINE_LINUX=\"#GRUB_CMDLINE_LINUX=\"hugepagesz=2M hugepages=$NR_HUGEPAGES #" /etc/default/grub > /tmp/grub
mv /tmp/grub /etc/default/grub
if [ "$ID" = "ubuntu" ]; then
grub-mkconfig -o /boot/grub/grub.cfg
grub2-mkconfig -o /boot/grub/grub.cfg
else
grub2-mkconfig -o /boot/grub2/grub.cfg
fi

View File

@@ -29,7 +29,7 @@ if [ "$NAME" = "Ubuntu" ]; then
else
yum install -y ntp ntpdate || true
if [ $AMI -eq 1 ]; then
sed -e s#centos.pool.ntp.org#amazon.pool.ntp.org# /etc/ntp.conf > /tmp/ntp.conf
sed -e s#fedora.pool.ntp.org#amazon.pool.ntp.org# /etc/ntp.conf > /tmp/ntp.conf
mv /tmp/ntp.conf /etc/ntp.conf
fi
if [ "`systemctl is-active ntpd`" = "active" ]; then

View File

@@ -43,13 +43,6 @@ if [ "`mount|grep /var/lib/scylla`" != "" ]; then
echo "/var/lib/scylla is already mounted"
exit 1
fi
. /etc/os-release
if [ "$NAME" = "Ubuntu" ]; then
apt-get -y install mdadm xfsprogs
else
yum -y install mdadm xfsprogs
fi
mdadm --create --verbose --force --run $RAID --level=0 -c256 --raid-devices=$NR_DISK $DISKS
blockdev --setra 65536 $RAID
mkfs.xfs $RAID -f

View File

@@ -34,11 +34,8 @@ SCYLLA_HOME=/var/lib/scylla
# scylla config dir
SCYLLA_CONF=/etc/scylla
# scylla arguments (for posix mode)
SCYLLA_ARGS="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info --collectd-address=127.0.0.1:25826 --collectd=1 --collectd-poll-period 3000 --network-stack posix"
## scylla arguments (for dpdk mode)
#SCYLLA_ARGS="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info --collectd-address=127.0.0.1:25826 --collectd=1 --collectd-poll-period 3000 --network-stack native --dpdk-pmd"
# additional arguments
SCYLLA_ARGS=""
# setup as AMI instance
AMI=no

View File

@@ -43,7 +43,7 @@ if [ "$ID" = "centos" ]; then
if [ $REBUILD = 1 ]; then
./dist/redhat/centos_dep/build_dependency.sh
else
sudo curl https://s3.amazonaws.com/downloads.scylladb.com/rpm/unstable/centos/master/latest/scylla.repo -o /etc/yum.repos.d/scylla.repo
sudo curl https://s3.amazonaws.com/downloads.scylladb.com/rpm/centos/scylla.repo -o /etc/yum.repos.d/scylla.repo
fi
fi
VERSION=$(./SCYLLA-VERSION-GEN)

View File

@@ -1,5 +1,5 @@
--- binutils.spec.orig 2015-09-30 14:48:25.000000000 +0000
+++ binutils.spec 2016-01-20 14:42:17.856037134 +0000
--- binutils.spec 2015-10-19 05:45:55.106745163 +0000
+++ binutils.spec.1 2015-10-19 05:45:55.807742899 +0000
@@ -17,7 +17,7 @@
%define enable_deterministic_archives 1
@@ -7,7 +7,7 @@
-Name: %{?cross}binutils%{?_with_debug:-debug}
+Name: scylla-%{?cross}binutils%{?_with_debug:-debug}
Version: 2.25
Release: 15%{?dist}
Release: 5%{?dist}
License: GPLv3+
@@ -29,6 +29,7 @@
# instead.
@@ -17,7 +17,7 @@
Source2: binutils-2.19.50.0.1-output-format.sed
Patch01: binutils-2.20.51.0.2-libtool-lib64.patch
@@ -89,6 +90,9 @@
@@ -82,6 +83,9 @@
BuildRequires: texinfo >= 4.0, gettext, flex, bison, zlib-devel
# BZ 920545: We need pod2man in order to build the manual pages.
BuildRequires: /usr/bin/pod2man
@@ -27,7 +27,7 @@
# Required for: ld-bootstrap/bootstrap.exp bootstrap with --static
# It should not be required for: ld-elf/elf.exp static {preinit,init,fini} array
%if %{run_testsuite}
@@ -112,8 +116,8 @@
@@ -105,8 +109,8 @@
%if "%{build_gold}" == "both"
Requires(post): coreutils
@@ -38,7 +38,7 @@
%endif
# On ARM EABI systems, we do want -gnueabi to be part of the
@@ -138,11 +142,12 @@
@@ -131,11 +135,12 @@
%package devel
Summary: BFD and opcodes static and dynamic libraries and header files
Group: System Environment/Libraries
@@ -50,10 +50,10 @@
Requires: zlib-devel
-Requires: binutils = %{version}-%{release}
+Requires: scylla-binutils = %{version}-%{release}
# BZ 1215242: We need touch...
Requires: coreutils
@@ -426,11 +431,11 @@
%description devel
This package contains BFD and opcodes static and dynamic libraries.
@@ -411,11 +416,11 @@
%post
%if "%{build_gold}" == "both"
%__rm -f %{_bindir}/%{?cross}ld
@@ -68,7 +68,7 @@
%endif
%if %{isnative}
/sbin/ldconfig
@@ -448,8 +453,8 @@
@@ -433,8 +438,8 @@
%preun
%if "%{build_gold}" == "both"
if [ $1 = 0 ]; then

View File

@@ -1,5 +1,5 @@
--- boost.spec.orig 2016-01-15 18:41:47.000000000 +0000
+++ boost.spec 2016-01-20 14:46:47.397663246 +0000
--- boost.spec 2015-05-03 17:32:13.000000000 +0000
+++ boost.spec.1 2015-10-19 06:03:12.670534256 +0000
@@ -6,6 +6,11 @@
# We should be able to install directly.
%define boost_docdir __tmp_docdir
@@ -20,9 +20,9 @@
+Name: scylla-boost
+%define orig_name boost
Summary: The free peer-reviewed portable C++ source libraries
Version: 1.58.0
%define version_enc 1_58_0
Release: 11%{?dist}
Version: 1.57.0
%define version_enc 1_57_0
Release: 6%{?dist}
License: Boost and MIT and Python
-%define toplev_dirname %{name}_%{version_enc}
@@ -93,8 +93,8 @@
+Requires: scylla-boost-wave%{?_isa} = %{version}-%{release}
BuildRequires: m4
BuildRequires: libstdc++-devel
@@ -156,6 +164,7 @@
BuildRequires: libstdc++-devel%{?_isa}
@@ -151,6 +159,7 @@
%package atomic
Summary: Run-Time component of boost atomic library
Group: System Environment/Libraries
@@ -102,7 +102,7 @@
%description atomic
@@ -167,7 +176,8 @@
@@ -162,7 +171,8 @@
%package chrono
Summary: Run-Time component of boost chrono library
Group: System Environment/Libraries
@@ -112,7 +112,7 @@
%description chrono
@@ -176,6 +186,7 @@
@@ -171,6 +181,7 @@
%package container
Summary: Run-Time component of boost container library
Group: System Environment/Libraries
@@ -120,7 +120,7 @@
%description container
@@ -188,6 +199,7 @@
@@ -183,6 +194,7 @@
%package context
Summary: Run-Time component of boost context switching library
Group: System Environment/Libraries
@@ -128,7 +128,7 @@
%description context
@@ -197,6 +209,7 @@
@@ -192,6 +204,7 @@
%package coroutine
Summary: Run-Time component of boost coroutine library
Group: System Environment/Libraries
@@ -136,7 +136,7 @@
%description coroutine
Run-Time support for Boost.Coroutine, a library that provides
@@ -208,6 +221,7 @@
@@ -203,6 +216,7 @@
%package date-time
Summary: Run-Time component of boost date-time library
Group: System Environment/Libraries
@@ -144,7 +144,7 @@
%description date-time
@@ -217,7 +231,8 @@
@@ -212,7 +226,8 @@
%package filesystem
Summary: Run-Time component of boost filesystem library
Group: System Environment/Libraries
@@ -154,7 +154,7 @@
%description filesystem
@@ -228,7 +243,8 @@
@@ -223,7 +238,8 @@
%package graph
Summary: Run-Time component of boost graph library
Group: System Environment/Libraries
@@ -164,7 +164,7 @@
%description graph
@@ -248,9 +264,10 @@
@@ -243,9 +259,10 @@
%package locale
Summary: Run-Time component of boost locale library
Group: System Environment/Libraries
@@ -178,7 +178,7 @@
%description locale
@@ -260,6 +277,7 @@
@@ -255,6 +272,7 @@
%package log
Summary: Run-Time component of boost logging library
Group: System Environment/Libraries
@@ -186,7 +186,7 @@
%description log
@@ -270,6 +288,7 @@
@@ -265,6 +283,7 @@
%package math
Summary: Math functions for boost TR1 library
Group: System Environment/Libraries
@@ -194,7 +194,7 @@
%description math
@@ -279,6 +298,7 @@
@@ -274,6 +293,7 @@
%package program-options
Summary: Run-Time component of boost program_options library
Group: System Environment/Libraries
@@ -202,7 +202,7 @@
%description program-options
@@ -289,6 +309,7 @@
@@ -284,6 +304,7 @@
%package python
Summary: Run-Time component of boost python library
Group: System Environment/Libraries
@@ -210,7 +210,7 @@
%description python
@@ -303,6 +324,7 @@
@@ -298,6 +319,7 @@
%package python3
Summary: Run-Time component of boost python library for Python 3
Group: System Environment/Libraries
@@ -218,7 +218,7 @@
%description python3
@@ -315,8 +337,9 @@
@@ -310,8 +332,9 @@
%package python3-devel
Summary: Shared object symbolic links for Boost.Python 3
Group: System Environment/Libraries
@@ -230,7 +230,7 @@
%description python3-devel
@@ -327,6 +350,7 @@
@@ -322,6 +345,7 @@
%package random
Summary: Run-Time component of boost random library
Group: System Environment/Libraries
@@ -238,7 +238,7 @@
%description random
@@ -335,6 +359,7 @@
@@ -330,6 +354,7 @@
%package regex
Summary: Run-Time component of boost regular expression library
Group: System Environment/Libraries
@@ -246,7 +246,7 @@
%description regex
@@ -343,6 +368,7 @@
@@ -338,6 +363,7 @@
%package serialization
Summary: Run-Time component of boost serialization library
Group: System Environment/Libraries
@@ -254,7 +254,7 @@
%description serialization
@@ -351,6 +377,7 @@
@@ -346,6 +372,7 @@
%package signals
Summary: Run-Time component of boost signals and slots library
Group: System Environment/Libraries
@@ -262,7 +262,7 @@
%description signals
@@ -359,6 +386,7 @@
@@ -354,6 +381,7 @@
%package system
Summary: Run-Time component of boost system support library
Group: System Environment/Libraries
@@ -270,7 +270,7 @@
%description system
@@ -369,6 +397,7 @@
@@ -364,6 +392,7 @@
%package test
Summary: Run-Time component of boost test library
Group: System Environment/Libraries
@@ -278,7 +278,7 @@
%description test
@@ -378,7 +407,8 @@
@@ -373,7 +402,8 @@
%package thread
Summary: Run-Time component of boost thread library
Group: System Environment/Libraries
@@ -288,7 +288,7 @@
%description thread
@@ -390,8 +420,9 @@
@@ -385,8 +415,9 @@
%package timer
Summary: Run-Time component of boost timer library
Group: System Environment/Libraries
@@ -300,7 +300,7 @@
%description timer
@@ -402,11 +433,12 @@
@@ -397,11 +428,12 @@
%package wave
Summary: Run-Time component of boost C99/C++ pre-processing library
Group: System Environment/Libraries
@@ -318,7 +318,7 @@
%description wave
@@ -417,27 +449,20 @@
@@ -412,27 +444,20 @@
%package devel
Summary: The Boost C++ headers and shared development libraries
Group: Development/Libraries
@@ -352,7 +352,7 @@
%description static
Static Boost C++ libraries.
@@ -448,11 +473,7 @@
@@ -443,11 +468,7 @@
%if 0%{?rhel} >= 6
BuildArch: noarch
%endif
@@ -365,7 +365,7 @@
%description doc
This package contains the documentation in the HTML format of the Boost C++
@@ -465,7 +486,7 @@
@@ -460,7 +481,7 @@
%if 0%{?rhel} >= 6
BuildArch: noarch
%endif
@@ -374,18 +374,19 @@
%description examples
This package contains example source files distributed with boost.
@@ -476,8 +497,9 @@
@@ -471,9 +492,10 @@
%package openmpi
Summary: Run-Time component of Boost.MPI library
Group: System Environment/Libraries
+Requires: scylla-env
Requires: openmpi%{?_isa}
BuildRequires: openmpi-devel
-Requires: boost-serialization%{?_isa} = %{version}-%{release}
+Requires: scylla-boost-serialization%{?_isa} = %{version}-%{release}
%description openmpi
@@ -487,10 +509,11 @@
@@ -483,10 +505,11 @@
%package openmpi-devel
Summary: Shared library symbolic links for Boost.MPI
Group: System Environment/Libraries
@@ -401,7 +402,7 @@
%description openmpi-devel
@@ -500,9 +523,10 @@
@@ -496,9 +519,10 @@
%package openmpi-python
Summary: Python run-time component of Boost.MPI library
Group: System Environment/Libraries
@@ -415,7 +416,7 @@
%description openmpi-python
@@ -512,8 +536,9 @@
@@ -508,8 +532,9 @@
%package graph-openmpi
Summary: Run-Time component of parallel boost graph library
Group: System Environment/Libraries
@@ -427,11 +428,12 @@
%description graph-openmpi
@@ -530,10 +555,10 @@
@@ -526,11 +551,11 @@
%package mpich
Summary: Run-Time component of Boost.MPI library
Group: System Environment/Libraries
+Requires: scylla-env
Requires: mpich%{?_isa}
BuildRequires: mpich-devel
-Requires: boost-serialization%{?_isa} = %{version}-%{release}
-Provides: boost-mpich2 = %{version}-%{release}
@@ -441,7 +443,7 @@
%description mpich
@@ -543,12 +568,12 @@
@@ -540,12 +565,12 @@
%package mpich-devel
Summary: Shared library symbolic links for Boost.MPI
Group: System Environment/Libraries
@@ -460,7 +462,7 @@
%description mpich-devel
@@ -558,11 +583,11 @@
@@ -555,11 +580,11 @@
%package mpich-python
Summary: Python run-time component of Boost.MPI library
Group: System Environment/Libraries
@@ -477,7 +479,7 @@
%description mpich-python
@@ -572,10 +597,10 @@
@@ -569,10 +594,10 @@
%package graph-mpich
Summary: Run-Time component of parallel boost graph library
Group: System Environment/Libraries
@@ -492,7 +494,7 @@
%description graph-mpich
@@ -589,7 +614,8 @@
@@ -586,7 +611,8 @@
%package build
Summary: Cross platform build system for C++ projects
Group: Development/Tools
@@ -502,7 +504,7 @@
BuildArch: noarch
%description build
@@ -613,6 +639,7 @@
@@ -600,6 +626,7 @@
%package jam
Summary: A low-level build tool
Group: Development/Tools
@@ -510,7 +512,7 @@
%description jam
Boost.Jam (BJam) is the low-level build engine tool for Boost.Build.
@@ -1186,7 +1213,7 @@
@@ -1134,7 +1161,7 @@
%files devel
%defattr(-, root, root, -)
%doc LICENSE_1_0.txt

View File

@@ -12,36 +12,28 @@ sudo yum install -y wget yum-utils rpm-build rpmdevtools gcc gcc-c++ make patch
mkdir -p build/srpms
cd build/srpms
if [ ! -f binutils-2.25-15.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/binutils/2.25/15.fc23/src/binutils-2.25-15.fc23.src.rpm
if [ ! -f binutils-2.25-5.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/b/binutils-2.25-5.fc22.src.rpm
fi
if [ ! -f isl-0.14-4.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/isl/0.14/4.fc23/src/isl-0.14-4.fc23.src.rpm
if [ ! -f isl-0.14-3.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/i/isl-0.14-3.fc22.src.rpm
fi
if [ ! -f gcc-5.3.1-2.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/gcc/5.3.1/2.fc23/src/gcc-5.3.1-2.fc23.src.rpm
if [ ! -f gcc-5.1.1-4.fc22.src.rpm ]; then
wget https://s3.amazonaws.com/scylla-centos-dep/gcc-5.1.1-4.fc22.src.rpm
fi
if [ ! -f boost-1.58.0-11.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/boost/1.58.0/11.fc23/src/boost-1.58.0-11.fc23.src.rpm
if [ ! -f boost-1.57.0-6.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/b/boost-1.57.0-6.fc22.src.rpm
fi
if [ ! -f ninja-build-1.6.0-2.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/ninja-build/1.6.0/2.fc23/src/ninja-build-1.6.0-2.fc23.src.rpm
if [ ! -f ninja-build-1.5.3-2.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/n/ninja-build-1.5.3-2.fc22.src.rpm
fi
if [ ! -f ragel-6.8-5.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/ragel/6.8/5.fc23/src/ragel-6.8-5.fc23.src.rpm
fi
if [ ! -f gdb-7.10.1-30.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/gdb/7.10.1/30.fc23/src/gdb-7.10.1-30.fc23.src.rpm
fi
if [ ! -f pyparsing-2.0.3-2.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/pyparsing/2.0.3/2.fc23/src/pyparsing-2.0.3-2.fc23.src.rpm
if [ ! -f ragel-6.8-3.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/r/ragel-6.8-3.fc22.src.rpm
fi
cd -
@@ -54,8 +46,6 @@ sudo yum install -y flex bison dejagnu zlib-static glibc-static sharutils bc lib
sudo yum install -y gcc-objc
sudo yum install -y asciidoc
sudo yum install -y gettext
sudo yum install -y rpm-devel python34-devel guile-devel readline-devel ncurses-devel expat-devel texlive-collection-latexrecommended xz-devel libselinux-devel
sudo yum install -y dos2unix
if [ ! -f $RPMBUILD/RPMS/noarch/scylla-env-1.0-1.el7.centos.noarch.rpm ]; then
cd dist/redhat/centos_dep
@@ -65,62 +55,48 @@ if [ ! -f $RPMBUILD/RPMS/noarch/scylla-env-1.0-1.el7.centos.noarch.rpm ]; then
fi
do_install scylla-env-1.0-1.el7.centos.noarch.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-binutils-2.25-15.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/binutils-2.25-15.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-binutils-2.25-5.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/binutils-2.25-5.fc22.src.rpm
patch $RPMBUILD/SPECS/binutils.spec < dist/redhat/centos_dep/binutils.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/binutils.spec
fi
do_install scylla-binutils-2.25-15.el7.centos.x86_64.rpm
do_install scylla-binutils-2.25-5.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-isl-0.14-4.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/isl-0.14-4.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-isl-0.14-3.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/isl-0.14-3.fc22.src.rpm
patch $RPMBUILD/SPECS/isl.spec < dist/redhat/centos_dep/isl.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/isl.spec
fi
do_install scylla-isl-0.14-4.el7.centos.x86_64.rpm
do_install scylla-isl-devel-0.14-4.el7.centos.x86_64.rpm
do_install scylla-isl-0.14-3.el7.centos.x86_64.rpm
do_install scylla-isl-devel-0.14-3.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-gcc-5.3.1-2.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/gcc-5.3.1-2.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-gcc-5.1.1-4.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/gcc-5.1.1-4.fc22.src.rpm
patch $RPMBUILD/SPECS/gcc.spec < dist/redhat/centos_dep/gcc.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/gcc.spec
fi
do_install scylla-*5.3.1-2*
do_install scylla-*5.1.1-4*
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-boost-1.58.0-11.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/boost-1.58.0-11.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-boost-1.57.0-6.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/boost-1.57.0-6.fc22.src.rpm
patch $RPMBUILD/SPECS/boost.spec < dist/redhat/centos_dep/boost.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/boost.spec
fi
do_install scylla-boost*
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-ninja-build-1.6.0-2.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/ninja-build-1.6.0-2.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-ninja-build-1.5.3-2.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/ninja-build-1.5.3-2.fc22.src.rpm
patch $RPMBUILD/SPECS/ninja-build.spec < dist/redhat/centos_dep/ninja-build.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/ninja-build.spec
fi
do_install scylla-ninja-build-1.6.0-2.el7.centos.x86_64.rpm
do_install scylla-ninja-build-1.5.3-2.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-ragel-6.8-5.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/ragel-6.8-5.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-ragel-6.8-3.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/ragel-6.8-3.fc22.src.rpm
patch $RPMBUILD/SPECS/ragel.spec < dist/redhat/centos_dep/ragel.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/ragel.spec
fi
do_install scylla-ragel-6.8-5.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-gdb-7.10.1-30.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/gdb-7.10.1-30.fc23.src.rpm
patch $RPMBUILD/SPECS/gdb.spec < dist/redhat/centos_dep/gdb.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/gdb.spec
fi
do_install scylla-gdb-7.10.1-30.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/noarch/python34-pyparsing-2.0.3-2.el7.centos.noarch.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/pyparsing-2.0.3-2.fc23.src.rpm
patch $RPMBUILD/SPECS/pyparsing.spec < dist/redhat/centos_dep/pyparsing.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/pyparsing.spec
fi
do_install python34-pyparsing-2.0.3-2.el7.centos.noarch.rpm
do_install scylla-ragel-6.8-3.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/noarch/scylla-antlr3-tool-3.5.2-1.el7.centos.noarch.rpm ]; then
mkdir build/scylla-antlr3-tool-3.5.2

View File

@@ -1,14 +1,30 @@
--- gcc.spec.orig 2015-12-08 16:03:46.000000000 +0000
+++ gcc.spec 2016-01-21 08:47:49.160667342 +0000
@@ -1,6 +1,7 @@
%global DATE 20151207
%global SVNREV 231358
%global gcc_version 5.3.1
--- gcc.spec 2015-10-19 06:31:44.889189647 +0000
+++ gcc.spec.1 2015-10-19 07:56:17.445991665 +0000
@@ -1,22 +1,15 @@
%global DATE 20150618
%global SVNREV 224595
%global gcc_version 5.1.1
+%define _prefix /opt/scylladb
# Note, gcc_release must be integer, if you want to add suffixes to
# %{release}, append them after %{gcc_release} on Release: line.
%global gcc_release 2
@@ -84,7 +85,8 @@
%global gcc_release 4
%global _unpackaged_files_terminate_build 0
%global _performance_build 1
%global multilib_64_archs sparc64 ppc64 ppc64p7 s390x x86_64
-%ifarch %{ix86} x86_64 ia64 ppc ppc64 ppc64p7 alpha %{arm} aarch64
-%global build_ada 1
-%else
%global build_ada 0
-%endif
-%ifarch %{ix86} x86_64 ppc ppc64 ppc64le ppc64p7 s390 s390x %{arm} aarch64
-%global build_go 1
-%else
%global build_go 0
-%endif
%ifarch %{ix86} x86_64 ia64
%global build_libquadmath 1
%else
@@ -82,7 +75,8 @@
%global multilib_32_arch i686
%endif
Summary: Various compilers (C, C++, Objective-C, Java, ...)
@@ -18,7 +34,7 @@
Version: %{gcc_version}
Release: %{gcc_release}%{?dist}
# libgcc, libgfortran, libgomp, libstdc++ and crtstuff have
@@ -99,6 +101,7 @@
@@ -97,6 +91,7 @@
%global isl_version 0.14
URL: http://gcc.gnu.org
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
@@ -26,7 +42,7 @@
# Need binutils with -pie support >= 2.14.90.0.4-4
# Need binutils which can omit dot symbols and overlap .opd on ppc64 >= 2.15.91.0.2-4
# Need binutils which handle -msecure-plt on ppc >= 2.16.91.0.2-2
@@ -110,7 +113,7 @@
@@ -108,7 +103,7 @@
# Need binutils which support .cfi_sections >= 2.19.51.0.14-33
# Need binutils which support --no-add-needed >= 2.20.51.0.2-12
# Need binutils which support -plugin
@@ -35,7 +51,7 @@
# While gcc doesn't include statically linked binaries, during testing
# -static is used several times.
BuildRequires: glibc-static
@@ -145,15 +148,15 @@
@@ -143,15 +138,15 @@
BuildRequires: libunwind >= 0.98
%endif
%if %{build_isl}
@@ -55,7 +71,7 @@
# Need .eh_frame ld optimizations
# Need proper visibility support
# Need -pie support
@@ -168,7 +171,7 @@
@@ -166,7 +161,7 @@
# Need binutils that support .cfi_sections
# Need binutils that support --no-add-needed
# Need binutils that support -plugin
@@ -64,7 +80,7 @@
# Make sure gdb will understand DW_FORM_strp
Conflicts: gdb < 5.1-2
Requires: glibc-devel >= 2.2.90-12
@@ -176,17 +179,15 @@
@@ -174,17 +169,15 @@
# Make sure glibc supports TFmode long double
Requires: glibc >= 2.3.90-35
%endif
@@ -86,7 +102,7 @@
Requires(post): /sbin/install-info
Requires(preun): /sbin/install-info
AutoReq: true
@@ -228,12 +229,12 @@
@@ -226,12 +219,12 @@
The gcc package contains the GNU Compiler Collection version 5.
You'll need this package in order to compile C code.
@@ -101,7 +117,7 @@
%endif
Obsoletes: libmudflap
Obsoletes: libmudflap-devel
@@ -241,17 +242,19 @@
@@ -239,17 +232,19 @@
Obsoletes: libgcj < %{version}-%{release}
Obsoletes: libgcj-devel < %{version}-%{release}
Obsoletes: libgcj-src < %{version}-%{release}
@@ -125,7 +141,7 @@
Autoreq: true
%description c++
@@ -259,50 +262,55 @@
@@ -257,50 +252,55 @@
It includes support for most of the current C++ specification,
including templates and exception handling.
@@ -193,7 +209,7 @@
Autoreq: true
%description objc
@@ -313,29 +321,32 @@
@@ -311,29 +311,32 @@
%package objc++
Summary: Objective-C++ support for GCC
Group: Development/Languages
@@ -233,7 +249,7 @@
%endif
Requires(post): /sbin/install-info
Requires(preun): /sbin/install-info
@@ -345,260 +356,286 @@
@@ -343,260 +346,286 @@
The gcc-gfortran package provides support for compiling Fortran
programs with the GNU Compiler Collection.
@@ -592,7 +608,7 @@
Cpp is the GNU C-Compatible Compiler Preprocessor.
Cpp is a macro processor which is used automatically
by the C compiler to transform your program before actual
@@ -623,8 +660,9 @@
@@ -621,8 +650,9 @@
%package gnat
Summary: Ada 83, 95, 2005 and 2012 support for GCC
Group: Development/Languages
@@ -604,7 +620,7 @@
Requires(post): /sbin/install-info
Requires(preun): /sbin/install-info
Autoreq: true
@@ -633,82 +671,90 @@
@@ -631,40 +661,44 @@
GNAT is a GNU Ada 83, 95, 2005 and 2012 front-end to GCC. This package includes
development tools, the documents and Ada compiler.
@@ -658,13 +674,8 @@
+Requires: scylla-libgo-devel = %{version}-%{release}
Requires(post): /sbin/install-info
Requires(preun): /sbin/install-info
-Requires(post): %{_sbindir}/update-alternatives
-Requires(postun): %{_sbindir}/update-alternatives
+Requires(post): /sbin/update-alternatives
+Requires(postun): /sbin/update-alternatives
Autoreq: true
%description go
Requires(post): %{_sbindir}/update-alternatives
@@ -675,38 +709,42 @@
The gcc-go package provides support for compiling Go programs
with the GNU Compiler Collection.
@@ -717,7 +728,7 @@
Requires: gmp-devel >= 4.1.2-8, mpfr-devel >= 2.2.1, libmpc-devel >= 0.8.1
%description plugin-devel
@@ -728,7 +774,8 @@
@@ -726,7 +764,8 @@
Summary: Debug information for package %{name}
Group: Development/Debug
AutoReqProv: 0
@@ -727,21 +738,21 @@
%description debuginfo
This package provides debug information for package %{name}.
@@ -958,11 +1005,11 @@
@@ -961,11 +1000,10 @@
--enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu \
--enable-plugin --enable-initfini-array \
--disable-libgcj \
-%if 0%{fedora} >= 21 && 0%{fedora} <= 22
--with-default-libstdcxx-abi=gcc4-compatible \
--with-default-libstdcxx-abi=c++98 \
-%endif
%if %{build_isl}
--with-isl \
- --with-isl \
+ --with-isl-include=/opt/scylladb/include/ \
+ --with-isl-lib=/opt/scylladb/lib64/ \
%else
--without-isl \
%endif
@@ -971,11 +1018,9 @@
@@ -974,11 +1012,9 @@
%else
--disable-libmpx \
%endif
@@ -753,7 +764,7 @@
%ifarch %{arm}
--disable-sjlj-exceptions \
%endif
@@ -1006,9 +1051,6 @@
@@ -1009,9 +1045,6 @@
%if 0%{?rhel} >= 7
--with-cpu-32=power8 --with-tune-32=power8 --with-cpu-64=power8 --with-tune-64=power8 \
%endif
@@ -763,7 +774,7 @@
%endif
%ifarch ppc
--build=%{gcc_target_platform} --target=%{gcc_target_platform} --with-cpu=default32
@@ -1270,16 +1312,15 @@
@@ -1273,16 +1306,15 @@
mv %{buildroot}%{_prefix}/%{_lib}/libmpx.spec $FULLPATH/
%endif
@@ -786,7 +797,7 @@
%endif
%ifarch ppc
rm -f $FULLPATH/libgcc_s.so
@@ -1819,7 +1860,7 @@
@@ -1816,7 +1848,7 @@
chmod 755 %{buildroot}%{_prefix}/bin/c?9
cd ..
@@ -795,7 +806,7 @@
%find_lang cpplib
# Remove binaries we will not be including, so that they don't end up in
@@ -1869,11 +1910,7 @@
@@ -1866,11 +1898,7 @@
# run the tests.
make %{?_smp_mflags} -k check ALT_CC_UNDER_TEST=gcc ALT_CXX_UNDER_TEST=g++ \
@@ -807,7 +818,7 @@
echo ====================TESTING=========================
( LC_ALL=C ../contrib/test_summary || : ) 2>&1 | sed -n '/^cat.*EOF/,/^EOF/{/^cat.*EOF/d;/^EOF/d;/^LAST_UPDATED:/d;p;}'
echo ====================TESTING END=====================
@@ -1900,13 +1937,13 @@
@@ -1897,13 +1925,13 @@
--info-dir=%{_infodir} %{_infodir}/gcc.info.gz || :
fi
@@ -823,21 +834,7 @@
if [ $1 = 0 -a -f %{_infodir}/cpp.info.gz ]; then
/sbin/install-info --delete \
--info-dir=%{_infodir} %{_infodir}/cpp.info.gz || :
@@ -1945,19 +1982,19 @@
fi
%post go
-%{_sbindir}/update-alternatives --install \
+/sbin/update-alternatives --install \
%{_prefix}/bin/go go %{_prefix}/bin/go.gcc 92 \
--slave %{_prefix}/bin/gofmt gofmt %{_prefix}/bin/gofmt.gcc
%preun go
if [ $1 = 0 ]; then
- %{_sbindir}/update-alternatives --remove go %{_prefix}/bin/go.gcc
+ /sbin/update-alternatives --remove go %{_prefix}/bin/go.gcc
fi
@@ -1954,7 +1982,7 @@
# Because glibc Prereq's libgcc and /sbin/ldconfig
# comes from glibc, it might not exist yet when
# libgcc is installed
@@ -846,7 +843,7 @@
if posix.access ("/sbin/ldconfig", "x") then
local pid = posix.fork ()
if pid == 0 then
@@ -1967,7 +2004,7 @@
@@ -1964,7 +1992,7 @@
end
end
@@ -855,7 +852,7 @@
if posix.access ("/sbin/ldconfig", "x") then
local pid = posix.fork ()
if pid == 0 then
@@ -1977,120 +2014,120 @@
@@ -1974,120 +2002,120 @@
end
end
@@ -1014,7 +1011,7 @@
%defattr(-,root,root,-)
%{_prefix}/bin/cc
%{_prefix}/bin/c89
@@ -2414,7 +2451,7 @@
@@ -2409,7 +2437,7 @@
%{!?_licensedir:%global license %%doc}
%license gcc/COPYING* COPYING.RUNTIME
@@ -1023,7 +1020,7 @@
%defattr(-,root,root,-)
%{_prefix}/lib/cpp
%{_prefix}/bin/cpp
@@ -2425,10 +2462,10 @@
@@ -2420,10 +2448,10 @@
%dir %{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}
%{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}/cc1
@@ -1037,7 +1034,7 @@
%{!?_licensedir:%global license %%doc}
%license gcc/COPYING* COPYING.RUNTIME
@@ -2469,7 +2506,7 @@
@@ -2461,7 +2489,7 @@
%endif
%doc rpm.doc/changelogs/gcc/cp/ChangeLog*
@@ -1046,7 +1043,7 @@
%defattr(-,root,root,-)
%{_prefix}/%{_lib}/libstdc++.so.6*
%dir %{_datadir}/gdb
@@ -2481,7 +2518,7 @@
@@ -2473,7 +2501,7 @@
%dir %{_prefix}/share/gcc-%{gcc_version}/python
%{_prefix}/share/gcc-%{gcc_version}/python/libstdcxx
@@ -1055,7 +1052,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/include/c++
%dir %{_prefix}/include/c++/%{gcc_version}
@@ -2507,7 +2544,7 @@
@@ -2488,7 +2516,7 @@
%endif
%doc rpm.doc/changelogs/libstdc++-v3/ChangeLog* libstdc++-v3/README*
@@ -1064,7 +1061,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2528,7 +2565,7 @@
@@ -2509,7 +2537,7 @@
%endif
%if %{build_libstdcxx_docs}
@@ -1073,7 +1070,7 @@
%defattr(-,root,root)
%{_mandir}/man3/*
%doc rpm.doc/libstdc++-v3/html
@@ -2567,7 +2604,7 @@
@@ -2548,7 +2576,7 @@
%dir %{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}
%{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}/cc1objplus
@@ -1082,7 +1079,7 @@
%defattr(-,root,root,-)
%{_prefix}/%{_lib}/libobjc.so.4*
@@ -2621,11 +2658,11 @@
@@ -2602,11 +2630,11 @@
%endif
%doc rpm.doc/gfortran/*
@@ -1096,7 +1093,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2671,12 +2708,12 @@
@@ -2652,12 +2680,12 @@
%{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}/gnat1
%doc rpm.doc/changelogs/gcc/ada/ChangeLog*
@@ -1111,7 +1108,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2702,7 +2739,7 @@
@@ -2683,7 +2711,7 @@
%exclude %{_prefix}/lib/gcc/%{gcc_target_platform}/%{gcc_version}/adalib/libgnarl.a
%endif
@@ -1120,7 +1117,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2726,7 +2763,7 @@
@@ -2707,7 +2735,7 @@
%endif
%endif
@@ -1129,7 +1126,7 @@
%defattr(-,root,root,-)
%{_prefix}/%{_lib}/libgomp.so.1*
%{_prefix}/%{_lib}/libgomp-plugin-host_nonshm.so.1*
@@ -2734,14 +2771,14 @@
@@ -2715,14 +2743,14 @@
%doc rpm.doc/changelogs/libgomp/ChangeLog*
%if %{build_libquadmath}
@@ -1146,7 +1143,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2754,7 +2791,7 @@
@@ -2735,7 +2763,7 @@
%endif
%doc rpm.doc/libquadmath/ChangeLog*
@@ -1155,7 +1152,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2773,12 +2810,12 @@
@@ -2754,12 +2782,12 @@
%endif
%if %{build_libitm}
@@ -1170,7 +1167,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2791,7 +2828,7 @@
@@ -2772,7 +2800,7 @@
%endif
%doc rpm.doc/libitm/ChangeLog*
@@ -1179,7 +1176,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2810,11 +2847,11 @@
@@ -2791,11 +2819,11 @@
%endif
%if %{build_libatomic}
@@ -1193,7 +1190,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2834,11 +2871,11 @@
@@ -2815,11 +2843,11 @@
%endif
%if %{build_libasan}
@@ -1207,7 +1204,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2860,11 +2897,11 @@
@@ -2841,11 +2869,11 @@
%endif
%if %{build_libubsan}
@@ -1221,7 +1218,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2886,11 +2923,11 @@
@@ -2867,11 +2895,11 @@
%endif
%if %{build_libtsan}
@@ -1235,7 +1232,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2902,11 +2939,11 @@
@@ -2883,11 +2911,11 @@
%endif
%if %{build_liblsan}
@@ -1249,7 +1246,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2918,11 +2955,11 @@
@@ -2899,11 +2927,11 @@
%endif
%if %{build_libcilkrts}
@@ -1263,7 +1260,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2942,12 +2979,12 @@
@@ -2923,12 +2951,12 @@
%endif
%if %{build_libmpx}
@@ -1278,7 +1275,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -3009,12 +3046,12 @@
@@ -2990,12 +3018,12 @@
%endif
%doc rpm.doc/go/*
@@ -1293,7 +1290,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -3042,7 +3079,7 @@
@@ -3023,7 +3051,7 @@
%{_prefix}/lib/gcc/%{gcc_target_platform}/%{gcc_version}/libgo.so
%endif
@@ -1302,7 +1299,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -3060,12 +3097,12 @@
@@ -3041,12 +3069,12 @@
%endif
%endif


@@ -1,29 +0,0 @@
--- gdb.spec.orig 2015-12-06 04:10:30.000000000 +0000
+++ gdb.spec 2016-01-20 14:49:12.745843903 +0000
@@ -16,7 +16,10 @@
}
Summary: A GNU source-level debugger for C, C++, Fortran, Go and other languages
-Name: %{?scl_prefix}gdb
+Name: %{?scl_prefix}scylla-gdb
+%define orig_name gdb
+Requires: scylla-env
+%define _prefix /opt/scylladb
# Freeze it when GDB gets branched
%global snapsrc 20150706
@@ -572,12 +575,8 @@
BuildRequires: rpm-devel%{buildisa}
BuildRequires: zlib-devel%{buildisa} libselinux-devel%{buildisa}
%if 0%{!?_without_python:1}
-%if 0%{?rhel:1} && 0%{?rhel} <= 7
-BuildRequires: python-devel%{buildisa}
-%else
-%global __python %{__python3}
-BuildRequires: python3-devel%{buildisa}
-%endif
+BuildRequires: python34-devel%{?_isa}
+%global __python /usr/bin/python3.4
%if 0%{?rhel:1} && 0%{?rhel} <= 7
# Temporarily before python files get moved to libstdc++.rpm
# libstdc++%{bits_other} is not present in Koji, the .spec script generating


@@ -1,5 +1,5 @@
--- isl.spec.orig 2016-01-20 14:41:16.891802146 +0000
+++ isl.spec 2016-01-20 14:43:13.838336396 +0000
--- isl.spec 2015-01-06 16:24:49.000000000 +0000
+++ isl.spec.1 2015-10-18 12:12:38.000000000 +0000
@@ -1,5 +1,5 @@
Summary: Integer point manipulation library
-Name: isl


@@ -1,56 +1,34 @@
--- ninja-build.spec.orig 2016-01-20 14:41:16.892802134 +0000
+++ ninja-build.spec 2016-01-20 14:44:42.453227192 +0000
@@ -1,19 +1,18 @@
-Name: ninja-build
+Name: scylla-ninja-build
Version: 1.6.0
Release: 2%{?dist}
Summary: A small build system with a focus on speed
License: ASL 2.0
URL: http://martine.github.com/ninja/
Source0: https://github.com/martine/ninja/archive/v%{version}.tar.gz#/ninja-%{version}.tar.gz
-Source1: ninja.vim
# Rename mentions of the executable name to be ninja-build.
Patch1000: ninja-1.6.0-binary-rename.patch
+Requires: scylla-env
BuildRequires: asciidoc
BuildRequires: gtest-devel
BuildRequires: python2-devel
-BuildRequires: re2c >= 0.11.3
-Requires: emacs-filesystem
-Requires: vim-filesystem
+#BuildRequires: scylla-re2c >= 0.11.3
+%define _prefix /opt/scylladb
%description
Ninja is a small build system with a focus on speed. It differs from other
@@ -32,15 +31,8 @@
./ninja -v ninja_test
%install
-# TODO: Install ninja_syntax.py?
-mkdir -p %{buildroot}/{%{_bindir},%{_datadir}/bash-completion/completions,%{_datadir}/emacs/site-lisp,%{_datadir}/vim/vimfiles/syntax,%{_datadir}/vim/vimfiles/ftdetect,%{_datadir}/zsh/site-functions}
-
+mkdir -p %{buildroot}/opt/scylladb/bin
install -pm755 ninja %{buildroot}%{_bindir}/ninja-build
-install -pm644 misc/bash-completion %{buildroot}%{_datadir}/bash-completion/completions/ninja-bash-completion
-install -pm644 misc/ninja-mode.el %{buildroot}%{_datadir}/emacs/site-lisp/ninja-mode.el
-install -pm644 misc/ninja.vim %{buildroot}%{_datadir}/vim/vimfiles/syntax/ninja.vim
-install -pm644 %{SOURCE1} %{buildroot}%{_datadir}/vim/vimfiles/ftdetect/ninja.vim
-install -pm644 misc/zsh-completion %{buildroot}%{_datadir}/zsh/site-functions/_ninja
%check
# workaround possible too low default limits
@@ -50,12 +42,6 @@
%files
%doc COPYING HACKING.md README doc/manual.html
%{_bindir}/ninja-build
-%{_datadir}/bash-completion/completions/ninja-bash-completion
-%{_datadir}/emacs/site-lisp/ninja-mode.el
-%{_datadir}/vim/vimfiles/syntax/ninja.vim
-%{_datadir}/vim/vimfiles/ftdetect/ninja.vim
-# zsh does not have a -filesystem package
-%{_datadir}/zsh/
%changelog
* Mon Nov 16 2015 Ben Boeckel <mathstuf@gmail.com> - 1.6.0-2
1c1
< Name: ninja-build
---
> Name: scylla-ninja-build
8d7
< Source1: ninja.vim
10a10
> Requires: scylla-env
14,16c14,15
< BuildRequires: re2c >= 0.11.3
< Requires: emacs-filesystem
< Requires: vim-filesystem
---
> #BuildRequires: scylla-re2c >= 0.11.3
> %define _prefix /opt/scylladb
35,37c34
< # TODO: Install ninja_syntax.py?
< mkdir -p %{buildroot}/{%{_bindir},%{_datadir}/bash-completion/completions,%{_datadir}/emacs/site-lisp,%{_datadir}/vim/vimfiles/syntax,%{_datadir}/vim/vimfiles/ftdetect,%{_datadir}/zsh/site-functions}
<
---
> mkdir -p %{buildroot}/opt/scylladb/bin
39,43d35
< install -pm644 misc/bash-completion %{buildroot}%{_datadir}/bash-completion/completions/ninja-bash-completion
< install -pm644 misc/ninja-mode.el %{buildroot}%{_datadir}/emacs/site-lisp/ninja-mode.el
< install -pm644 misc/ninja.vim %{buildroot}%{_datadir}/vim/vimfiles/syntax/ninja.vim
< install -pm644 %{SOURCE1} %{buildroot}%{_datadir}/vim/vimfiles/ftdetect/ninja.vim
< install -pm644 misc/zsh-completion %{buildroot}%{_datadir}/zsh/site-functions/_ninja
53,58d44
< %{_datadir}/bash-completion/completions/ninja-bash-completion
< %{_datadir}/emacs/site-lisp/ninja-mode.el
< %{_datadir}/vim/vimfiles/syntax/ninja.vim
< %{_datadir}/vim/vimfiles/ftdetect/ninja.vim
< # zsh does not have a -filesystem package
< %{_datadir}/zsh/


@@ -1,40 +0,0 @@
--- pyparsing.spec.orig 2016-01-25 19:11:14.663651658 +0900
+++ pyparsing.spec 2016-01-25 19:12:49.853875369 +0900
@@ -1,4 +1,4 @@
-%if 0%{?fedora}
+%if 0%{?centos}
%global with_python3 1
%endif
@@ -15,7 +15,7 @@
BuildRequires: dos2unix
BuildRequires: glibc-common
%if 0%{?with_python3}
-BuildRequires: python3-devel
+BuildRequires: python34-devel
%endif # if with_python3
%description
@@ -30,11 +30,11 @@
The package contains documentation for pyparsing.
%if 0%{?with_python3}
-%package -n python3-pyparsing
+%package -n python34-pyparsing
Summary: An object-oriented approach to text processing (Python 3 version)
Group: Development/Libraries
-%description -n python3-pyparsing
+%description -n python34-pyparsing
pyparsing is a module that can be used to easily and directly configure syntax
definitions for any number of text parsing applications.
@@ -90,7 +90,7 @@
%{python_sitelib}/pyparsing.py*
%if 0%{?with_python3}
-%files -n python3-pyparsing
+%files -n python34-pyparsing
%doc CHANGES README LICENSE
%{python3_sitelib}/pyparsing*egg-info
%{python3_sitelib}/pyparsing.py*


@@ -1,11 +1,11 @@
--- ragel.spec.orig 2015-06-18 22:12:28.000000000 +0000
+++ ragel.spec 2016-01-20 14:49:53.980327766 +0000
--- ragel.spec 2014-08-18 11:55:49.000000000 +0000
+++ ragel.spec.1 2015-10-18 12:18:23.000000000 +0000
@@ -1,17 +1,20 @@
-Name: ragel
+Name: scylla-ragel
+%define orig_name ragel
Version: 6.8
Release: 5%{?dist}
Release: 3%{?dist}
Summary: Finite state machine compiler
Group: Development/Tools

dist/redhat/scripts/scylla_run (new executable file)

@@ -0,0 +1,14 @@
#!/bin/sh -e
args="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info $SCYLLA_ARGS"
if [ "$NETWORK_MODE" = "posix" ]; then
args="$args --network-stack posix"
elif [ "$NETWORK_MODE" = "virtio" ]; then
args="$args --network-stack native"
elif [ "$NETWORK_MODE" = "dpdk" ]; then
args="$args --network-stack native --dpdk-pmd"
fi
export HOME=/var/lib/scylla
exec /usr/bin/scylla $args
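The NETWORK_MODE dispatch in the script above can be sketched as a standalone function (a minimal illustration only; `mode_to_args` is a hypothetical helper, not part of the actual script, which builds `$args` inline and execs scylla):

```shell
#!/bin/sh
# Sketch of the NETWORK_MODE -> seastar network-stack flag mapping
# used by scylla_run. mode_to_args is a hypothetical helper for
# illustration; it only prints the flags, it does not exec scylla.
mode_to_args() {
    case "$1" in
        posix)  echo "--network-stack posix" ;;
        virtio) echo "--network-stack native" ;;
        dpdk)   echo "--network-stack native --dpdk-pmd" ;;
        *)      echo "" ;;   # unknown mode: add no extra flags
    esac
}

mode_to_args posix    # -> --network-stack posix
mode_to_args dpdk     # -> --network-stack native --dpdk-pmd
```

Both virtio and dpdk map to the native stack; dpdk additionally enables the poll-mode driver.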


@@ -9,10 +9,9 @@ URL: http://www.scylladb.com/
Source0: %{name}-@@VERSION@@-@@RELEASE@@.tar
BuildRequires: libaio-devel libstdc++-devel cryptopp-devel hwloc-devel numactl-devel libpciaccess-devel libxml2-devel zlib-devel thrift-devel yaml-cpp-devel lz4-devel snappy-devel jsoncpp-devel systemd-devel xz-devel openssl-devel libcap-devel libselinux-devel libgcrypt-devel libgpg-error-devel elfutils-devel krb5-devel libcom_err-devel libattr-devel pcre-devel elfutils-libelf-devel bzip2-devel keyutils-libs-devel xfsprogs-devel make gnutls-devel systemd-devel
%{?fedora:BuildRequires: boost-devel ninja-build ragel antlr3-tool antlr3-C++-devel python3 gcc-c++ libasan libubsan python3-pyparsing}
%{?rhel:BuildRequires: scylla-libstdc++-static scylla-boost-devel scylla-ninja-build scylla-ragel scylla-antlr3-tool scylla-antlr3-C++-devel python34 scylla-gcc-c++ >= 5.1.1, python34-pyparsing}
Requires: systemd-libs hwloc
Conflicts: abrt
%{?fedora:BuildRequires: boost-devel ninja-build ragel antlr3-tool antlr3-C++-devel python3 gcc-c++ libasan libubsan}
%{?rhel:BuildRequires: scylla-libstdc++-static scylla-boost-devel scylla-ninja-build scylla-ragel scylla-antlr3-tool scylla-antlr3-C++-devel python34 scylla-gcc-c++ >= 5.1.1}
Requires: systemd-libs xfsprogs mdadm hwloc
%description
@@ -52,6 +51,7 @@ install -m644 conf/scylla.yaml $RPM_BUILD_ROOT%{_sysconfdir}/scylla/
install -m644 conf/cassandra-rackdc.properties $RPM_BUILD_ROOT%{_sysconfdir}/scylla/
install -m644 dist/redhat/systemd/scylla-server.service $RPM_BUILD_ROOT%{_unitdir}/
install -m755 dist/common/scripts/* $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 dist/redhat/scripts/* $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 seastar/scripts/posix_net_conf.sh $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 seastar/dpdk/tools/dpdk_nic_bind.py $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 build/release/scylla $RPM_BUILD_ROOT%{_bindir}
@@ -140,6 +140,7 @@ rm -rf $RPM_BUILD_ROOT
%{_unitdir}/scylla-server.service
%{_bindir}/scylla
%{_prefix}/lib/scylla/scylla_prepare
%{_prefix}/lib/scylla/scylla_run
%{_prefix}/lib/scylla/scylla_stop
%{_prefix}/lib/scylla/scylla_setup
%{_prefix}/lib/scylla/scylla_coredump_setup


@@ -1,6 +1,6 @@
[Unit]
Description=Scylla Server
After=network.target
After=network.target libvirtd.service
[Service]
Type=notify
@@ -8,11 +8,9 @@ LimitMEMLOCK=infinity
LimitNOFILE=200000
LimitAS=infinity
LimitNPROC=8096
WorkingDirectory=/var/lib/scylla
Environment="HOME=/var/lib/scylla"
EnvironmentFile=/etc/sysconfig/scylla-server
ExecStartPre=/usr/bin/sudo -E /usr/lib/scylla/scylla_prepare
ExecStart=/usr/bin/scylla $SCYLLA_ARGS
ExecStart=/usr/lib/scylla/scylla_run
ExecStopPost=/usr/bin/sudo -E /usr/lib/scylla/scylla_stop
TimeoutStartSec=900
KillMode=process


@@ -9,19 +9,6 @@ if [ -e debian ] || [ -e build/release ]; then
rm -rf debian build
mkdir build
fi
sudo apt-get -y update
if [ ! -f /usr/bin/git ]; then
sudo apt-get -y install git
fi
if [ ! -f /usr/bin/mk-build-deps ]; then
sudo apt-get -y install devscripts
fi
if [ ! -f /usr/bin/equivs-build ]; then
sudo apt-get -y install equivs
fi
if [ ! -f /usr/bin/add-apt-repository ]; then
sudo apt-get -y install software-properties-common
fi
RELEASE=`lsb_release -r|awk '{print $2}'`
CODENAME=`lsb_release -c|awk '{print $2}'`
@@ -44,13 +31,27 @@ sed -i -e "s/@@VERSION@@/$SCYLLA_VERSION/g" debian/changelog
sed -i -e "s/@@RELEASE@@/$SCYLLA_RELEASE/g" debian/changelog
sed -i -e "s/@@CODENAME@@/$CODENAME/g" debian/changelog
sudo apt-get -y update
./dist/ubuntu/dep/build_dependency.sh
DEP="libyaml-cpp-dev liblz4-dev libsnappy-dev libcrypto++-dev libjsoncpp-dev libaio-dev ragel ninja-build git liblz4-1 libaio1 hugepages software-properties-common libgnutls28-dev libhwloc-dev libnuma-dev libpciaccess-dev"
if [ "$RELEASE" = "14.04" ]; then
DEP="$DEP libboost1.55-dev libboost-program-options1.55.0 libboost-program-options1.55-dev libboost-system1.55.0 libboost-system1.55-dev libboost-thread1.55.0 libboost-thread1.55-dev libboost-test1.55.0 libboost-test1.55-dev libboost-filesystem1.55-dev libboost-filesystem1.55.0 libsnappy1"
else
DEP="$DEP libboost-dev libboost-program-options-dev libboost-system-dev libboost-thread-dev libboost-test-dev libboost-filesystem-dev libboost-filesystem-dev libsnappy1v5"
fi
if [ "$RELEASE" = "15.10" ]; then
DEP="$DEP libjsoncpp0v5 libcrypto++9v5 libyaml-cpp0.5v5 antlr3"
else
DEP="$DEP libjsoncpp0 libcrypto++9 libyaml-cpp0.5"
fi
sudo apt-get -y install $DEP
if [ "$RELEASE" != "15.10" ]; then
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get -y update
fi
sudo apt-get -y install g++-4.9
echo Y | sudo mk-build-deps -i -r
debuild -r fakeroot -us -uc


@@ -4,11 +4,11 @@ Homepage: http://scylladb.com
Section: database
Priority: optional
Standards-Version: 3.9.5
Build-Depends: debhelper (>= 9), libyaml-cpp-dev, liblz4-dev, libsnappy-dev, libcrypto++-dev, libjsoncpp-dev, libaio-dev, libthrift-dev, thrift-compiler, antlr3, antlr3-c++-dev, ragel, g++-4.9, ninja-build, git, libboost-program-options1.55-dev | libboost-program-options-dev, libboost-filesystem1.55-dev | libboost-filesystem-dev, libboost-system1.55-dev | libboost-system-dev, libboost-thread1.55-dev | libboost-thread-dev, libboost-test1.55-dev | libboost-test-dev, libgnutls28-dev, libhwloc-dev, libnuma-dev, libpciaccess-dev, xfslibs-dev, python3-pyparsing
Build-Depends: debhelper (>= 9), libyaml-cpp-dev, liblz4-dev, libsnappy-dev, libcrypto++-dev, libjsoncpp-dev, libaio-dev, libthrift-dev, thrift-compiler, antlr3, antlr3-c++-dev, ragel, g++-4.9, ninja-build, git, libboost-program-options1.55-dev | libboost-program-options-dev, libboost-filesystem1.55-dev | libboost-filesystem-dev, libboost-system1.55-dev | libboost-system-dev, libboost-thread1.55-dev | libboost-thread-dev, libboost-test1.55-dev | libboost-test-dev, libgnutls28-dev, libhwloc-dev, libnuma-dev, libpciaccess-dev
Package: scylla-server
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, hugepages, adduser, hwloc-nox
Depends: ${shlibs:Depends}, ${misc:Depends}, hugepages, adduser, mdadm, xfsprogs, hwloc-nox
Description: Scylla database server binaries
Scylla is a highly scalable, eventually consistent, distributed,
partitioned row DB.


@@ -5,7 +5,6 @@ SCRIPTS = $(CURDIR)/debian/scylla-server/usr/lib/scylla
SWAGGER = $(SCRIPTS)/swagger-ui
API = $(SCRIPTS)/api
SYSCTL = $(CURDIR)/debian/scylla-server/etc/sysctl.d
SUDOERS = $(CURDIR)/debian/scylla-server/etc/sudoers.d
LIMITS= $(CURDIR)/debian/scylla-server/etc/security/limits.d
LIBS = $(CURDIR)/debian/scylla-server/usr/lib
CONF = $(CURDIR)/debian/scylla-server/etc/scylla
@@ -26,9 +25,6 @@ override_dh_auto_install:
mkdir -p $(SYSCTL) && \
cp $(CURDIR)/dist/ubuntu/sysctl.d/99-scylla.conf $(SYSCTL)
mkdir -p $(SUDOERS) && \
cp $(CURDIR)/dist/common/sudoers.d/scylla $(SUDOERS)
mkdir -p $(CONF) && \
cp $(CURDIR)/conf/scylla.yaml $(CONF)
cp $(CURDIR)/conf/cassandra-rackdc.properties $(CONF)


@@ -11,31 +11,23 @@ umask 022
console log
expect stop
setuid scylla
setgid scylla
limit core unlimited unlimited
limit memlock unlimited unlimited
limit nofile 200000 200000
limit as unlimited unlimited
limit nproc 8096 8096
chdir /var/lib/scylla
env HOME=/var/lib/scylla
pre-start script
cd /var/lib/scylla
. /etc/default/scylla-server
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS
sudo /usr/lib/scylla/scylla_prepare
/usr/lib/scylla/scylla_prepare
end script
script
cd /var/lib/scylla
. /etc/default/scylla-server
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS
exec /usr/bin/scylla $SCYLLA_ARGS
exec /usr/lib/scylla/scylla_run
end script
post-stop script
cd /var/lib/scylla
. /etc/default/scylla-server
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS
sudo /usr/lib/scylla/scylla_stop
/usr/lib/scylla/scylla_stop
end script


@@ -1,8 +1,15 @@
#!/bin/sh -e
RELEASE=`lsb_release -r|awk '{print $2}'`
DEP="build-essential debhelper openjdk-7-jre-headless build-essential autoconf automake pkg-config libtool bison flex libevent-dev libglib2.0-dev libqt4-dev python-dev python-dbg php5-dev devscripts python-support xfslibs-dev"
if [ "$RELEASE" = "14.04" ]; then
DEP="$DEP libboost1.55-dev libboost-test1.55-dev"
else
DEP="$DEP libboost-dev libboost-test-dev"
fi
sudo apt-get -y install $DEP
sudo apt-get install -y gdebi-core
if [ "$RELEASE" = "14.04" ]; then
if [ ! -f build/antlr3_3.5.2-1_all.deb ]; then
rm -rf build/antlr3-3.5.2
@@ -10,7 +17,6 @@ if [ "$RELEASE" = "14.04" ]; then
cp -a dist/ubuntu/dep/antlr3-3.5.2/* build/antlr3-3.5.2
cd build/antlr3-3.5.2
wget http://www.antlr3.org/download/antlr-3.5.2-complete-no-st3.jar
echo Y | sudo mk-build-deps -i -r
debuild -r fakeroot --no-tgz-check -us -uc
cd -
fi
@@ -27,7 +33,6 @@ if [ ! -f build/antlr3-c++-dev_3.5.2-1_all.deb ]; then
cd -
cp -a dist/ubuntu/dep/antlr3-c++-dev-3.5.2/debian build/antlr3-c++-dev-3.5.2
cd build/antlr3-c++-dev-3.5.2
echo Y | sudo mk-build-deps -i -r
debuild -r fakeroot --no-tgz-check -us -uc
cd -
fi
@@ -41,13 +46,8 @@ if [ ! -f build/libthrift0_1.0.0-dev_amd64.deb ]; then
tar xpf thrift-0.9.1.tar.gz
cd thrift-0.9.1
patch -p0 < ../../dist/ubuntu/dep/thrift.diff
echo Y | sudo mk-build-deps -i -r
debuild -r fakeroot --no-tgz-check -us -uc
cd ../..
fi
sudo gdebi -n build/antlr3_*.deb
sudo gdebi -n build/antlr3-c++-dev_*.deb
sudo gdebi -n build/libthrift0_*.deb
sudo gdebi -n build/libthrift-dev_*.deb
sudo gdebi -n build/thrift-compiler_*.deb
sudo dpkg -i build/*.deb


@@ -1,5 +1,6 @@
--- debian/changelog 2013-08-15 23:04:29.000000000 +0900
+++ ../thrift-0.9.1-ubuntu/debian/changelog 2016-01-15 23:22:11.189982999 +0900
diff -Nur ./debian/changelog ../thrift-0.9.1/debian/changelog
--- ./debian/changelog 2013-08-15 23:04:29.000000000 +0900
+++ ../thrift-0.9.1/debian/changelog 2015-10-29 23:03:25.797937232 +0900
@@ -1,65 +1,4 @@
-thrift (1.0.0-dev) stable; urgency=low
- * update version
@@ -69,8 +70,9 @@
-
- -- Esteve Fernandez <esteve@fluidinfo.com> Thu, 15 Jan 2009 11:34:24 +0100
+ -- Takuya ASADA <syuu@scylladb.com> Wed, 28 Oct 2015 05:11:38 +0900
--- debian/control 2013-08-18 23:58:22.000000000 +0900
+++ ../thrift-0.9.1-ubuntu/debian/control 2016-01-15 23:32:47.373982999 +0900
diff -Nur ./debian/control ../thrift-0.9.1/debian/control
--- ./debian/control 2013-08-18 23:58:22.000000000 +0900
+++ ../thrift-0.9.1/debian/control 2015-10-28 00:54:05.950464999 +0900
@@ -1,12 +1,10 @@
Source: thrift
Section: devel
@@ -84,7 +86,7 @@
+Build-Depends: debhelper (>= 5), build-essential, autoconf,
+ automake, pkg-config, libtool, bison, flex, libboost-dev | libboost1.55-dev,
+ libboost-test-dev | libboost-test1.55-dev, libevent-dev,
+ libglib2.0-dev, libqt4-dev, libssl-dev, python-support
+ libglib2.0-dev, libqt4-dev
Maintainer: Thrift Developer's <dev@thrift.apache.org>
Homepage: http://thrift.apache.org/
Vcs-Git: https://git-wip-us.apache.org/repos/asf/thrift.git
@@ -203,8 +205,9 @@
- build services that work efficiently and seamlessly.
- .
- This package contains the PHP bindings for Thrift.
--- debian/rules 2013-08-15 23:04:29.000000000 +0900
+++ ../thrift-0.9.1-ubuntu/debian/rules 2016-01-15 23:22:11.189982999 +0900
diff -Nur ./debian/rules ../thrift-0.9.1/debian/rules
--- ./debian/rules 2013-08-15 23:04:29.000000000 +0900
+++ ../thrift-0.9.1/debian/rules 2015-10-28 00:54:05.950464999 +0900
@@ -45,18 +45,6 @@
# Compile C (glib) library
$(MAKE) -C $(CURDIR)/lib/c_glib

dist/ubuntu/scripts/scylla_run (new executable file)

@@ -0,0 +1,19 @@
#!/bin/bash -e
args="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info $SCYLLA_ARGS"
if [ "$NETWORK_MODE" = "posix" ]; then
args="$args --network-stack posix"
elif [ "$NETWORK_MODE" = "virtio" ]; then
args="$args --network-stack native"
elif [ "$NETWORK_MODE" = "dpdk" ]; then
args="$args --network-stack native --dpdk-pmd"
fi
export HOME=/var/lib/scylla
ulimit -c unlimited
ulimit -l unlimited
ulimit -n 200000
ulimit -m unlimited
ulimit -u 8096
exec sudo -E -u $USER /usr/bin/scylla $args


@@ -118,3 +118,26 @@ std::ostream& operator<<(std::ostream& out, const frozen_mutation::printer& pr)
frozen_mutation::printer frozen_mutation::pretty_printer(schema_ptr s) const {
return { *this, std::move(s) };
}
template class db::serializer<frozen_mutation>;
template<>
db::serializer<frozen_mutation>::serializer(const frozen_mutation& mutation)
: _item(mutation), _size(sizeof(uint32_t) /* size */ + mutation.representation().size()) {
}
template<>
void db::serializer<frozen_mutation>::write(output& out, const frozen_mutation& mutation) {
bytes_view v = mutation.representation();
out.write(v);
}
template<>
void db::serializer<frozen_mutation>::read(frozen_mutation& m, input& in) {
m = read(in);
}
template<>
frozen_mutation db::serializer<frozen_mutation>::read(input& in) {
return frozen_mutation(bytes_serializer::read(in));
}


@@ -67,3 +67,14 @@ public:
};
frozen_mutation freeze(const mutation& m);
namespace db {
typedef serializer<frozen_mutation> frozen_mutation_serializer;
template<> serializer<frozen_mutation>::serializer(const frozen_mutation &);
template<> void serializer<frozen_mutation>::write(output&, const type&);
template<> void serializer<frozen_mutation>::read(frozen_mutation&, input&);
template<> frozen_mutation serializer<frozen_mutation>::read(input&);
}


@@ -55,10 +55,21 @@ static const std::map<application_state, sstring> application_state_names = {
{application_state::REMOVAL_COORDINATOR, "REMOVAL_COORDINATOR"},
{application_state::INTERNAL_IP, "INTERNAL_IP"},
{application_state::RPC_ADDRESS, "RPC_ADDRESS"},
{application_state::X_11_PADDING, "X_11_PADDING"},
{application_state::SEVERITY, "SEVERITY"},
{application_state::NET_VERSION, "NET_VERSION"},
{application_state::HOST_ID, "HOST_ID"},
{application_state::TOKENS, "TOKENS"},
{application_state::X1, "X1"},
{application_state::X2, "X2"},
{application_state::X3, "X3"},
{application_state::X4, "X4"},
{application_state::X5, "X5"},
{application_state::X6, "X6"},
{application_state::X7, "X7"},
{application_state::X8, "X8"},
{application_state::X9, "X9"},
{application_state::X10, "X10"},
};
std::ostream& operator<<(std::ostream& os, const application_state& m) {


@@ -61,4 +61,42 @@ std::ostream& operator<<(std::ostream& os, const endpoint_state& x) {
return os;
}
void endpoint_state::serialize(bytes::iterator& out) const {
/* serialize the HeartBeatState */
_heart_beat_state.serialize(out);
/* serialize the map of ApplicationState objects */
int32_t app_state_size = _application_state.size();
serialize_int32(out, app_state_size);
for (auto& entry : _application_state) {
const application_state& state = entry.first;
const versioned_value& value = entry.second;
serialize_int32(out, int32_t(state));
value.serialize(out);
}
}
endpoint_state endpoint_state::deserialize(bytes_view& v) {
heart_beat_state hbs = heart_beat_state::deserialize(v);
endpoint_state es = endpoint_state(hbs);
int32_t app_state_size = read_simple<int32_t>(v);
for (int32_t i = 0; i < app_state_size; ++i) {
auto state = static_cast<application_state>(read_simple<int32_t>(v));
auto value = versioned_value::deserialize(v);
es.add_application_state(state, value);
}
return es;
}
size_t endpoint_state::serialized_size() const {
long size = _heart_beat_state.serialized_size();
size += serialize_int32_size;
for (auto& entry : _application_state) {
const versioned_value& value = entry.second;
size += serialize_int32_size;
size += value.serialized_size();
}
return size;
}
}


@@ -81,14 +81,6 @@ public:
, _is_alive(true) {
}
endpoint_state(heart_beat_state&& initial_hb_state,
const std::map<application_state, versioned_value>& application_state)
: _heart_beat_state(std::move(initial_hb_state))
,_application_state(application_state)
, _update_timestamp(clk::now())
, _is_alive(true) {
}
heart_beat_state& get_heart_beat_state() {
return _heart_beat_state;
}
@@ -149,6 +141,13 @@ public:
}
friend std::ostream& operator<<(std::ostream& os, const endpoint_state& x);
// The following replaces EndpointStateSerializer from the Java code
void serialize(bytes::iterator& out) const;
static endpoint_state deserialize(bytes_view& v);
size_t serialized_size() const;
};
} // gms


@@ -97,6 +97,50 @@ public:
friend inline std::ostream& operator<<(std::ostream& os, const gossip_digest& d) {
return os << d._endpoint << ":" << d._generation << ":" << d._max_version;
}
// The following replaces GossipDigestSerializer from the Java code
void serialize(bytes::iterator& out) const {
_endpoint.serialize(out);
serialize_int32(out, _generation);
serialize_int32(out, _max_version);
}
static gossip_digest deserialize(bytes_view& v) {
auto endpoint = inet_address::deserialize(v);
auto generation = read_simple<int32_t>(v);
auto max_version = read_simple<int32_t>(v);
return gossip_digest(endpoint, generation, max_version);
}
size_t serialized_size() const {
return _endpoint.serialized_size() + serialize_int32_size + serialize_int32_size;
}
}; // class gossip_digest
// serialization helper for std::vector<gossip_digest>
class gossip_digest_serialization_helper {
public:
static void serialize(bytes::iterator& out, const std::vector<gossip_digest>& digests) {
serialize_int32(out, int32_t(digests.size()));
for (auto& digest : digests) {
digest.serialize(out);
}
}
static std::vector<gossip_digest> deserialize(bytes_view& v) {
int32_t size = read_simple<int32_t>(v);
std::vector<gossip_digest> digests;
for (int32_t i = 0; i < size; ++i)
digests.push_back(gossip_digest::deserialize(v));
return digests;
}
static size_t serialized_size(const std::vector<gossip_digest>& digests) {
size_t size = serialize_int32_size;
for (auto& digest : digests)
size += digest.serialized_size();
return size;
}
};
} // namespace gms
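The helper above frames a `std::vector<gossip_digest>` as a 32-bit element count followed by the elements back to back, and `serialized_size` adds up the same layout. A minimal standalone sketch of that length-prefixed framing, using plain tuples and big-endian integers as stand-ins (the real byte order and types are defined by Scylla's `serialize_int32`/`read_simple` helpers):

```python
import struct

def serialize_digest(buf, endpoint, generation, max_version):
    # endpoint as raw 4-byte IPv4, then two fixed-width int32 fields
    buf += endpoint
    buf += struct.pack(">ii", generation, max_version)

def serialize_digests(digests):
    # 32-bit element count, then each digest back to back
    buf = bytearray(struct.pack(">i", len(digests)))
    for endpoint, generation, max_version in digests:
        serialize_digest(buf, endpoint, generation, max_version)
    return bytes(buf)

def deserialize_digests(data):
    count = struct.unpack_from(">i", data, 0)[0]
    off, out = 4, []
    for _ in range(count):
        endpoint = bytes(data[off:off + 4])
        generation, max_version = struct.unpack_from(">ii", data, off + 4)
        out.append((endpoint, generation, max_version))
        off += 12  # mirrors serialized_size(): 4 + 4 + 4 per digest
    return out
```

A round trip through this framing reproduces the input, and the buffer length matches the size computed up front, which is the invariant the C++ `serialized_size` methods rely on.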

View File

@@ -54,4 +54,44 @@ std::ostream& operator<<(std::ostream& os, const gossip_digest_ack& ack) {
return os << "}";
}
void gossip_digest_ack::serialize(bytes::iterator& out) const {
// 1) Digest
gossip_digest_serialization_helper::serialize(out, _digests);
// 2) Map size
serialize_int32(out, int32_t(_map.size()));
// 3) Map contents
for (auto& entry : _map) {
const inet_address& ep = entry.first;
const endpoint_state& st = entry.second;
ep.serialize(out);
st.serialize(out);
}
}
gossip_digest_ack gossip_digest_ack::deserialize(bytes_view& v) {
// 1) Digest
std::vector<gossip_digest> _digests = gossip_digest_serialization_helper::deserialize(v);
// 2) Map size
int32_t map_size = read_simple<int32_t>(v);
// 3) Map contents
std::map<inet_address, endpoint_state> _map;
for (int32_t i = 0; i < map_size; ++i) {
inet_address ep = inet_address::deserialize(v);
endpoint_state st = endpoint_state::deserialize(v);
_map.emplace(std::move(ep), std::move(st));
}
return gossip_digest_ack(std::move(_digests), std::move(_map));
}
size_t gossip_digest_ack::serialized_size() const {
size_t size = gossip_digest_serialization_helper::serialized_size(_digests);
size += serialize_int32_size;
for (auto& entry : _map) {
const inet_address& ep = entry.first;
const endpoint_state& st = entry.second;
size += ep.serialized_size() + st.serialized_size();
}
return size;
}
} // namespace gms
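`gossip_digest_ack` serializes its `std::map` the same way: a 32-bit entry count, then each key/value pair in iteration order (sorted by key, since it is a `std::map`). A stand-in sketch with integer keys and values in place of `inet_address`/`endpoint_state`:

```python
import struct

def serialize_state_map(m):
    # 32-bit entry count, then key/value pairs in sorted key order,
    # matching std::map iteration order
    buf = bytearray(struct.pack(">i", len(m)))
    for key in sorted(m):
        buf += struct.pack(">iq", key, m[key])
    return bytes(buf)

def deserialize_state_map(data):
    count = struct.unpack_from(">i", data, 0)[0]
    off, m = 4, {}
    for _ in range(count):
        key, value = struct.unpack_from(">iq", data, off)
        m[key] = value
        off += 12  # 4-byte key stand-in + 8-byte value stand-in
    return m
```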

View File

@@ -72,6 +72,13 @@ public:
return _map;
}
// The following replaces GossipDigestAckSerializer from the Java code
void serialize(bytes::iterator& out) const;
static gossip_digest_ack deserialize(bytes_view& v);
size_t serialized_size() const;
friend std::ostream& operator<<(std::ostream& os, const gossip_digest_ack& ack);
};

View File

@@ -49,4 +49,39 @@ std::ostream& operator<<(std::ostream& os, const gossip_digest_ack2& ack2) {
return os << "}";
}
void gossip_digest_ack2::serialize(bytes::iterator& out) const {
// 1) Map size
serialize_int32(out, int32_t(_map.size()));
// 2) Map contents
for (auto& entry : _map) {
const inet_address& ep = entry.first;
const endpoint_state& st = entry.second;
ep.serialize(out);
st.serialize(out);
}
}
gossip_digest_ack2 gossip_digest_ack2::deserialize(bytes_view& v) {
// 1) Map size
int32_t map_size = read_simple<int32_t>(v);
// 2) Map contents
std::map<inet_address, endpoint_state> _map;
for (int32_t i = 0; i < map_size; ++i) {
inet_address ep = inet_address::deserialize(v);
endpoint_state st = endpoint_state::deserialize(v);
_map.emplace(std::move(ep), std::move(st));
}
return gossip_digest_ack2(std::move(_map));
}
size_t gossip_digest_ack2::serialized_size() const {
size_t size = serialize_int32_size;
for (auto& entry : _map) {
const inet_address& ep = entry.first;
const endpoint_state& st = entry.second;
size += ep.serialized_size() + st.serialized_size();
}
return size;
}
} // namespace gms

View File

@@ -69,6 +69,13 @@ public:
return _map;
}
// The following replaces GossipDigestAck2Serializer from the Java code
void serialize(bytes::iterator& out) const;
static gossip_digest_ack2 deserialize(bytes_view& v);
size_t serialized_size() const;
friend std::ostream& operator<<(std::ostream& os, const gossip_digest_ack2& ack2);
};

View File

@@ -50,4 +50,22 @@ std::ostream& operator<<(std::ostream& os, const gossip_digest_syn& syn) {
return os << "}";
}
void gossip_digest_syn::serialize(bytes::iterator& out) const {
serialize_string(out, _cluster_id);
serialize_string(out, _partioner);
gossip_digest_serialization_helper::serialize(out, _digests);
}
gossip_digest_syn gossip_digest_syn::deserialize(bytes_view& v) {
sstring cluster_id = read_simple_short_string(v);
sstring partioner = read_simple_short_string(v);
std::vector<gossip_digest> digests = gossip_digest_serialization_helper::deserialize(v);
return gossip_digest_syn(cluster_id, partioner, std::move(digests));
}
size_t gossip_digest_syn::serialized_size() const {
return serialize_string_size(_cluster_id) + serialize_string_size(_partioner) +
gossip_digest_serialization_helper::serialized_size(_digests);
}
} // namespace gms
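`gossip_digest_syn::serialized_size` must return exactly the number of bytes `serialize` writes, because the caller pre-sizes the buffer and hands `serialize` a raw iterator into it. A sketch of that size/write consistency for the syn layout (two length-prefixed strings plus the digest vector); the 16-bit string length prefix here is an assumption suggested by `read_simple_short_string`, not confirmed by this diff:

```python
import struct

def serialize_string(buf, s):
    # length-prefixed string; a 16-bit prefix is assumed here
    raw = s.encode("utf-8")
    buf += struct.pack(">H", len(raw)) + raw

def serialize_string_size(s):
    return 2 + len(s.encode("utf-8"))

def serialize_syn(cluster_id, partitioner, digests_blob):
    buf = bytearray()
    serialize_string(buf, cluster_id)
    serialize_string(buf, partitioner)
    buf += digests_blob  # already-serialized digest vector
    return bytes(buf)

def syn_serialized_size(cluster_id, partitioner, digests_blob):
    # mirrors gossip_digest_syn::serialized_size()
    return (serialize_string_size(cluster_id)
            + serialize_string_size(partitioner)
            + len(digests_blob))
```

If the two functions ever disagree, `serialize` either overruns the buffer or leaves trailing garbage, so keeping them in lockstep is the key correctness property of this hand-written scheme.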

View File

@@ -72,18 +72,17 @@ public:
return _partioner;
}
sstring get_cluster_id() const {
return cluster_id();
}
sstring get_partioner() const {
return partioner();
}
std::vector<gossip_digest> get_gossip_digests() const {
return _digests;
}
// The following replaces GossipDigestSynSerializer from the Java code
void serialize(bytes::iterator& out) const;
static gossip_digest_syn deserialize(bytes_view& v);
size_t serialized_size() const;
friend std::ostream& operator<<(std::ostream& os, const gossip_digest_syn& syn);
};

View File

@@ -233,7 +233,7 @@ future<> gossiper::handle_ack_msg(msg_addr id, gossip_digest_ack ack_msg) {
}
void gossiper::init_messaging_service_handler() {
ms().register_gossip_echo([] {
ms().register_echo([] {
return smp::submit_to(0, [] {
auto& gossiper = gms::get_local_gossiper();
gossiper.set_last_processed_message_at();
@@ -279,7 +279,7 @@ void gossiper::init_messaging_service_handler() {
void gossiper::uninit_messaging_service_handler() {
auto& ms = net::get_local_messaging_service();
ms.unregister_gossip_echo();
ms.unregister_echo();
ms.unregister_gossip_shutdown();
ms.unregister_gossip_digest_syn();
ms.unregister_gossip_digest_ack2();
@@ -478,7 +478,7 @@ void gossiper::do_status_check() {
}
void gossiper::run() {
_callback_running = seastar::async([this, g = this->shared_from_this()] {
seastar::async([this, g = this->shared_from_this()] {
logger.trace("=== Gossip round START");
//wait on messaging service to start listening
@@ -588,10 +588,7 @@ void gossiper::run() {
logger.trace("ep={}, eps={}", x.first, x.second);
}
}
if (_enabled) {
_scheduled_gossip_task.arm(INTERVAL);
}
return make_ready_future<>();
_scheduled_gossip_task.arm(INTERVAL);
});
}
@@ -665,7 +662,8 @@ void gossiper::convict(inet_address endpoint, double phi) {
return;
}
auto& state = it->second;
logger.debug("Convicting {} with status {} - alive {}", endpoint, get_gossip_status(state), state.is_alive());
// FIXME: Add getGossipStatus
// logger.debug("Convicting {} with status {} - alive {}", endpoint, getGossipStatus(epState), state.is_alive());
if (!state.is_alive()) {
return;
}
@@ -1051,7 +1049,7 @@ void gossiper::mark_alive(inet_address addr, endpoint_state& local_state) {
msg_addr id = get_msg_addr(addr);
logger.trace("Sending a EchoMessage to {}", id);
auto ok = make_shared<bool>(false);
ms().send_gossip_echo(id).then_wrapped([this, id, ok] (auto&& f) mutable {
ms().send_echo(id).then_wrapped([this, id, ok] (auto&& f) mutable {
try {
f.get();
logger.trace("Got EchoMessage Reply");
@@ -1115,7 +1113,7 @@ void gossiper::handle_major_state_change(inet_address ep, const endpoint_state&
logger.info("Node {} is now part of the cluster", ep);
}
}
logger.trace("Adding endpoint state for {}, status = {}", ep, get_gossip_status(eps));
logger.trace("Adding endpoint state for {}", ep);
endpoint_state_map[ep] = eps;
auto& ep_state = endpoint_state_map.at(ep);
@@ -1432,13 +1430,12 @@ future<> gossiper::do_stop_gossiping() {
return make_ready_future<>();
}).get();
}
auto& cfg = service::get_local_storage_service().db().local().get_config();
sleep(std::chrono::milliseconds(cfg.shutdown_announce_in_ms())).get();
// FIXME: Integer.getInteger("cassandra.shutdown_announce_in_ms", 2000)
sleep(INTERVAL * 2).get();
} else {
logger.warn("No local state or state is in silent shutdown, not announcing shutdown");
}
_scheduled_gossip_task.cancel();
_callback_running.get();
get_gossiper().invoke_on_all([] (gossiper& g) {
if (engine().cpu_id() == 0) {
get_local_failure_detector().unregister_failure_detection_event_listener(&g);

View File

@@ -99,7 +99,6 @@ private:
bool _enabled = false;
std::set<inet_address> _seeds_from_config;
sstring _cluster_name;
future<> _callback_running = make_ready_future<>();
public:
sstring get_cluster_name();
sstring get_partitioner_name();

View File

@@ -90,6 +90,22 @@ public:
friend inline std::ostream& operator<<(std::ostream& os, const heart_beat_state& h) {
return os << "{ generation = " << h._generation << ", version = " << h._version << " }";
}
// The following replaces HeartBeatStateSerializer from the Java code
void serialize(bytes::iterator& out) const {
serialize_int32(out, _generation);
serialize_int32(out, _version);
}
static heart_beat_state deserialize(bytes_view& v) {
auto generation = read_simple<int32_t>(v);
auto version = read_simple<int32_t>(v);
return heart_beat_state(generation, version);
}
size_t serialized_size() const {
return serialize_int32_size + serialize_int32_size;
}
};
} // gms

View File

@@ -37,9 +37,6 @@ public:
inet_address(int32_t ip)
: _addr(uint32_t(ip)) {
}
explicit inet_address(uint32_t ip)
: _addr(ip) {
}
inet_address(net::ipv4_address&& addr) : _addr(std::move(addr)) {}
const net::ipv4_address& addr() const {

View File

@@ -36,7 +36,6 @@
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "gms/versioned_value.hh"
#include "message/messaging_service.hh"
namespace gms {
@@ -53,8 +52,19 @@ constexpr const char* versioned_value::HIBERNATE;
constexpr const char* versioned_value::SHUTDOWN;
constexpr const char* versioned_value::REMOVAL_COORDINATOR;
versioned_value versioned_value::factory::network_version() {
return versioned_value(sprint("%s",net::messaging_service::current_version));
void versioned_value::serialize(bytes::iterator& out) const {
serialize_string(out, value);
serialize_int32(out, version);
}
versioned_value versioned_value::deserialize(bytes_view& v) {
auto value = read_simple_short_string(v);
auto version = read_simple<int32_t>(v);
return versioned_value(std::move(value), version);
}
size_t versioned_value::serialized_size() const {
return serialize_string_size(value) + serialize_int32_size;
}
}

View File

@@ -46,6 +46,7 @@
#include "gms/inet_address.hh"
#include "dht/i_partitioner.hh"
#include "to_string.hh"
#include "message/messaging_service.hh"
#include "version.hh"
#include <unordered_set>
#include <vector>
@@ -95,7 +96,7 @@ public:
value == other.value;
}
public:
private:
versioned_value(const sstring& value, int version = version_generator::get_next_version())
: version(version), value(value) {
#if 0
@@ -111,10 +112,8 @@ public:
: version(version), value(std::move(value)) {
}
versioned_value()
: version(-1) {
}
public:
int compare_to(const versioned_value &value) {
return version - value.version;
}
@@ -229,7 +228,9 @@ public:
return versioned_value(version::release());
}
versioned_value network_version();
versioned_value network_version() {
return versioned_value(sprint("%s",net::messaging_service::current_version));
}
versioned_value internal_ip(const sstring &private_ip) {
return versioned_value(private_ip);
@@ -239,6 +240,14 @@ public:
return versioned_value(to_sstring(value));
}
};
// The following replaces VersionedValueSerializer from the Java code
public:
void serialize(bytes::iterator& out) const;
static versioned_value deserialize(bytes_view& v);
size_t serialized_size() const;
}; // class versioned_value
} // namespace gms

View File

@@ -1,329 +0,0 @@
#!/usr/bin/python3
#
# Copyright 2016 ScyllaDB
#
#
# This file is part of Scylla.
#
# Scylla is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Scylla is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Scylla. If not, see <http://www.gnu.org/licenses/>.
import json
import sys
import re
import glob
import argparse
import os
from string import Template
import pyparsing as pp
EXTENSION = '.idl.hh'
READ_BUFF = 'input_buffer'
WRITE_BUFF = 'output_buffer'
SERIALIZER = 'serialize'
DESERIALIZER = 'deserialize'
SETSIZE = 'set_size'
SIZETYPE = 'size_type'
parser = argparse.ArgumentParser(description="""Generate serializer helper function""")
parser.add_argument('-o', help='Output file', default='')
parser.add_argument('-f', help='input file', default='')
parser.add_argument('--ns', help="""namespace, when set function will be created
under the given namespace""", default='')
parser.add_argument('file', nargs='*', help="combine one or more file names for the general include files")
config = parser.parse_args()
def fprint(f, *args):
for arg in args:
f.write(arg)
def fprintln(f, *args):
for arg in args:
f.write(arg)
f.write('\n')
def print_cw(f):
fprintln(f, """
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* This is auto-generated code, do not modify directly.
*/
#pragma once
""")
def parse_file(file_name):
first = pp.Word(pp.alphas + "_", exact=1)
rest = pp.Word(pp.alphanums + "_")
number = pp.Word(pp.nums)
identifier = pp.Combine(first + pp.Optional(rest))
lbrace = pp.Literal('{').suppress()
rbrace = pp.Literal('}').suppress()
cls = pp.Literal('class')
colon = pp.Literal(":")
semi = pp.Literal(";").suppress()
langle = pp.Literal("<")
rangle = pp.Literal(">")
equals = pp.Literal("=")
comma = pp.Literal(",")
lparen = pp.Literal("(")
rparen = pp.Literal(")")
lbrack = pp.Literal("[")
rbrack = pp.Literal("]")
mins = pp.Literal("-")
struct = pp.Literal('struct')
template = pp.Literal('template')
final = pp.Literal('final').setResultsName("final")
stub = pp.Literal('stub').setResultsName("stub")
with_colon = pp.Word(pp.alphanums + "_" + ":")
btype = with_colon
type = pp.Forward()
nestedParens = pp.nestedExpr('<', '>')
tmpl = pp.Group(btype + langle.suppress() + pp.Group(pp.delimitedList(type)) + rangle.suppress())
type << (tmpl | btype)
enum_lit = pp.Literal('enum')
enum_class = pp.Group(enum_lit + cls)
ns = pp.Literal("namespace")
enum_init = equals.suppress() + pp.Optional(mins) + number
enum_value = pp.Group(identifier + pp.Optional(enum_init))
enum_values = pp.Group(lbrace + pp.delimitedList(enum_value) + pp.Optional(comma) + rbrace)
content = pp.Forward()
member_name = pp.Combine(pp.Group(identifier + pp.Optional(lparen + rparen)))
attrib = pp.Group(lbrack.suppress() + lbrack.suppress() + pp.SkipTo(']') + rbrack.suppress() + rbrack.suppress())
namespace = pp.Group(ns.setResultsName("type") + identifier.setResultsName("name") + lbrace + pp.Group(pp.OneOrMore(content)).setResultsName("content") + rbrace)
enum = pp.Group(enum_class.setResultsName("type") + identifier.setResultsName("name") + colon.suppress() + identifier.setResultsName("underline_type") + enum_values.setResultsName("enum_values") + pp.Optional(semi).suppress())
default_value = equals.suppress() + pp.SkipTo(';')
class_member = pp.Group(type.setResultsName("type") + member_name.setResultsName("name") + pp.Optional(attrib).setResultsName("attribute") + pp.Optional(default_value).setResultsName("default") + semi.suppress()).setResultsName("member")
template_param = pp.Group(identifier.setResultsName("type") + identifier.setResultsName("name"))
template_def = pp.Group(template + langle + pp.Group(pp.delimitedList(template_param)).setResultsName("params") + rangle)
class_content = pp.Forward()
class_def = pp.Group(pp.Optional(template_def).setResultsName("template") + (cls | struct).setResultsName("type") + with_colon.setResultsName("name") + pp.Optional(final) + pp.Optional(stub) + lbrace + pp.Group(pp.OneOrMore(class_content)).setResultsName("members") + rbrace + pp.Optional(semi))
content << (enum | class_def | namespace)
class_content << (enum | class_def | class_member)
rt = pp.OneOrMore(content)
singleLineComment = "//" + pp.restOfLine
rt.ignore(singleLineComment)
rt.ignore(pp.cStyleComment)
return rt.parseFile(file_name, parseAll=True)
def combine_ns(namespaces):
return "::".join(namespaces)
def open_namespaces(namespaces):
return "".join(map(lambda a: "namespace " + a + " { ", namespaces))
def close_namespaces(namespaces):
return "".join(map(lambda a: "}", namespaces))
def set_namespace(namespaces):
ns = combine_ns(namespaces)
ns_open = open_namespaces(namespaces)
ns_close = close_namespaces(namespaces)
return [ns, ns_open, ns_close]
def declare_class(hout, name, ns_open, ns_close):
clas_def = ns_open + name + ";" + ns_close
fprintln(hout, "\n", clas_def)
def declear_methods(hout, name, template_param = ""):
if config.ns != '':
fprintln(hout, "namespace ", config.ns, " {")
fprintln(hout, Template("""
template <typename Output$tmp_param>
void $ser_func(Output& buf, const $name& v);
template <typename Input$tmp_param>
$name $deser_func(Input& buf, boost::type<$name>);""").substitute({'ser_func': SERIALIZER, 'deser_func' : DESERIALIZER, 'name' : name, 'sizetype' : SIZETYPE, 'tmp_param' : template_param }))
if config.ns != '':
fprintln(hout, "}")
def handle_enum(enum, hout, cout, namespaces , parent_template_param = []):
[ns, ns_open, ns_close] = set_namespace(namespaces)
temp_def = ',' + ", ".join(map(lambda a: a[0] + " " + a[1], parent_template_param)) if parent_template_param else ""
name = enum["name"] if ns == "" else ns + "::" + enum["name"]
declear_methods(hout, name, temp_def)
fprintln(cout, Template("""
template<typename Output$temp_def>
void $ser_func(Output& buf, const $name& v) {
serialize(buf, static_cast<$type>(v));
}
template<typename Input$temp_def>
$name $deser_func(Input& buf, boost::type<$name>) {
return static_cast<$name>(deserialize(buf, boost::type<$type>()));
}""").substitute({'ser_func': SERIALIZER, 'deser_func' : DESERIALIZER, 'name' : name, 'size_type' : SIZETYPE, 'type': enum['underline_type'], 'temp_def' : temp_def}))
def join_template(lst):
return "<" + ", ".join([param_type(l) for l in lst]) + ">"
def param_type(lst):
if isinstance(lst, str):
return lst
if len(lst) == 1:
return lst[0]
return lst[0] + join_template(lst[1])
def is_class(obj):
return obj["type"] == "class" or obj["type"] == "struct"
def is_enum(obj):
try:
return not isinstance(obj["type"], str) and "".join(obj["type"]) == 'enumclass'
except:
return False
def handle_class(cls, hout, cout, namespaces=[], parent_template_param = []):
if "stub" in cls:
return
[ns, ns_open, ns_close] = set_namespace(namespaces)
tpl = "template" in cls
template_param_list = (cls["template"][0]["params"].asList() if tpl else [])
template_param = ", ".join(map(lambda a: a[0] + " " + a[1], template_param_list + parent_template_param)) if (template_param_list + parent_template_param) else ""
template = "template <"+ template_param +">\n" if tpl else ""
template_class_param = "<" + ",".join(map(lambda a: a[1], template_param_list)) + ">" if tpl else ""
temp_def = ',' + template_param if template_param != "" else ""
if ns == "":
name = cls["name"]
else:
name = ns + "::" + cls["name"]
full_name = name + template_class_param
for param in cls["members"]:
if is_class(param):
handle_class(param, hout, cout, namespaces + [cls["name"] + template_class_param], parent_template_param + template_param_list)
elif is_enum(param):
handle_enum(param, hout, cout, namespaces + [cls["name"] + template_class_param], parent_template_param + template_param_list)
declear_methods(hout, name + template_class_param, temp_def)
modifier = "final" in cls
fprintln(cout, Template("""
template<typename Output$temp_def>
void $func(Output& buf, const $name& obj) {""").substitute({'func' : SERIALIZER, 'name' : full_name, 'temp_def': temp_def}))
if not modifier:
fprintln(cout, Template(""" $set_size(buf, obj);""").substitute({'func' : SERIALIZER, 'set_size' : SETSIZE, 'name' : name, 'sizetype' : SIZETYPE}))
for param in cls["members"]:
if is_class(param) or is_enum(param):
continue
fprintln(cout, Template(""" $func(buf, obj.$var);""").substitute({'func' : SERIALIZER, 'var' : param["name"]}))
fprintln(cout, "}")
fprintln(cout, Template("""
template<typename Input$temp_def>
$name$temp_param $func(Input& buf, boost::type<$name$temp_param>) {""").substitute({'func' : DESERIALIZER, 'name' : name, 'temp_def': temp_def, 'temp_param' : template_class_param}))
if not modifier:
fprintln(cout, Template(""" $size_type size = $func(buf, boost::type<$size_type>());
Input in = buf.read_substream(size - sizeof($size_type));""").substitute({'func' : DESERIALIZER, 'size_type' : SIZETYPE}))
else:
fprintln(cout, """ Input& in = buf;""")
params = []
for index, param in enumerate(cls["members"]):
if is_class(param) or is_enum(param):
continue
local_param = "__local_" + str(index)
if "attribute" in param:
deflt = param["default"][0] if "default" in param else param["type"] + "()"
fprintln(cout, Template(""" $typ $local = (in.size()>0) ?
$func(in, boost::type<$typ>()) : $default;""").substitute({'func' : DESERIALIZER, 'typ': param_type(param["type"]), 'local' : local_param, 'default': deflt}))
else:
fprintln(cout, Template(""" $typ $local = $func(in, boost::type<$typ>());""").substitute({'func' : DESERIALIZER, 'typ': param_type(param["type"]), 'local' : local_param}))
params.append("std::move(" + local_param + ")")
fprintln(cout, Template("""
$name$temp_param res {$params};
return res;
}""").substitute({'name' : name, 'params': ", ".join(params), 'temp_param' : template_class_param}))
def handle_objects(tree, hout, cout, namespaces=[]):
for obj in tree:
if is_class(obj):
handle_class(obj, hout, cout, namespaces)
elif is_enum(obj):
handle_enum(obj, hout, cout, namespaces)
elif obj["type"] == "namespace":
handle_objects(obj["content"], hout, cout, namespaces + [obj["name"]])
else:
print("unknown type ", obj, obj["type"])
def load_file(name):
if config.o:
cout = open(config.o.replace('.hh', '.impl.hh'), "w+")
hout = open(config.o, "w+")
else:
cout = open(name.replace(EXTENSION, '.dist.impl.hh'), "w+")
hout = open(name.replace(EXTENSION, '.dist.hh'), "w+")
print_cw(hout)
fprintln(hout, """
/*
 * The generated code should be included in a header file after
 * the object definition.
*/
""")
print_cw(cout)
if config.ns != '':
fprintln(cout, "namespace ", config.ns, " {")
data = parse_file(name)
if data:
handle_objects(data, hout, cout)
if config.ns != '':
fprintln(cout, "}")
cout.close()
hout.close()
def general_include(files):
name = config.o if config.o else "serializer.dist.hh"
cout = open(name.replace('.hh', '.impl.hh'), "w+")
hout = open(name, "w+")
print_cw(cout)
print_cw(hout)
for n in files:
fprintln(hout, '#include "' + n +'"')
fprintln(cout, '#include "' + n.replace(".dist.hh", '.dist.impl.hh') +'"')
cout.close()
hout.close()
if config.file:
general_include(config.file)
elif config.f != '':
load_file(config.f)
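The namespace helpers near the top of the script are pure string functions, so their behavior is easy to check in isolation. The same logic, reproduced standalone:

```python
def combine_ns(namespaces):
    # "a", "b" -> "a::b", the fully qualified C++ name prefix
    return "::".join(namespaces)

def open_namespaces(namespaces):
    # emits the nested "namespace a { namespace b { " openers
    return "".join("namespace " + a + " { " for a in namespaces)

def close_namespaces(namespaces):
    # one closing brace per opened namespace
    return "".join("}" for _ in namespaces)
```

`set_namespace` simply bundles these three results together, which is why generated serializers can be emitted either at global scope or wrapped in the `--ns` namespace.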

View File

@@ -1,78 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
namespace gms {
enum class application_state:int {STATUS = 0,
LOAD,
SCHEMA,
DC,
RACK,
RELEASE_VERSION,
REMOVAL_COORDINATOR,
INTERNAL_IP,
RPC_ADDRESS,
SEVERITY,
NET_VERSION,
HOST_ID,
TOKENS
};
class inet_address final {
uint32_t raw_addr();
};
class versioned_value {
sstring value;
int version;
};
class heart_beat_state {
int32_t get_generation();
int32_t get_heart_beat_version();
};
class endpoint_state {
gms::heart_beat_state get_heart_beat_state();
std::map<gms::application_state, gms::versioned_value> get_application_state_map();
};
class gossip_digest {
gms::inet_address get_endpoint();
int32_t get_generation();
int32_t get_max_version();
};
class gossip_digest_syn {
sstring get_cluster_id();
sstring get_partioner();
std::vector<gms::gossip_digest> get_gossip_digests();
};
class gossip_digest_ack {
std::vector<gms::gossip_digest> get_gossip_digest_list();
std::map<gms::inet_address, gms::endpoint_state> get_endpoint_state_map();
};
class gossip_digest_ack2 {
std::map<gms::inet_address, gms::endpoint_state> get_endpoint_state_map();
};
}

View File

@@ -1,9 +0,0 @@
namespace service {
namespace pager {
class paging_state {
partition_key get_partition_key();
std::experimental::optional<clustering_key> get_clustering_key();
uint32_t get_remaining();
};
}
}

View File

@@ -1,33 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
template<typename T>
class range_bound {
T value();
bool is_inclusive();
};
template<typename T>
class range {
std::experimental::optional<range_bound<T>> start();
std::experimental::optional<range_bound<T>> end();
bool is_singular();
};

View File

@@ -1,43 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
namespace query {
class specific_ranges {
partition_key pk();
std::vector<range<clustering_key_prefix>> ranges();
};
class partition_slice {
std::vector<range<clustering_key_prefix>> default_row_ranges();
std::vector<uint32_t> static_columns;
std::vector<uint32_t> regular_columns;
query::partition_slice::option_set options;
std::unique_ptr<query::specific_ranges> get_specific_ranges();
};
class read_command {
utils::UUID cf_id;
utils::UUID schema_version;
query::partition_slice slice;
uint32_t row_limit;
std::chrono::time_point<gc_clock, gc_clock::duration> timestamp;
};
}

View File

@@ -1,30 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
class partition {
uint32_t row_count();
frozen_mutation mut();
};
class reconcilable_result {
uint32_t row_count();
std::vector<partition> partitions();
};

View File

@@ -1,26 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
namespace query {
class result final {
bytes_ostream buf();
};
}

View File

@@ -1,29 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
namespace dht {
class ring_position {
enum class token_bound:int8_t {start = -1, end = 1};
dht::token token();
dht::ring_position::token_bound bound();
std::experimental::optional<partition_key> key();
};
}

View File

@@ -1,43 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
namespace streaming {
class stream_request {
sstring keyspace;
std::vector<query::range<dht::token>> ranges;
std::vector<sstring> column_families;
};
class stream_summary {
utils::UUID cf_id;
int files;
long total_size;
};
class prepare_message {
std::vector<streaming::stream_request> requests;
std::vector<streaming::stream_summary> summaries;
uint32_t dst_cpu_id;
};
}

View File

@@ -1,11 +0,0 @@
namespace dht {
class token {
enum class kind : int {
before_all_keys,
key,
after_all_keys,
};
dht::token::kind _kind;
bytes _data;
};
}

View File

@@ -1,27 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
namespace utils {
class UUID final {
int64_t get_most_significant_bits();
int64_t get_least_significant_bits();
};
}


@@ -82,18 +82,4 @@ key_reader make_filtering_reader(key_reader&& reader, Filter&& filter) {
return make_key_reader<filtering_key_reader<Filter>>(std::move(reader), std::forward<Filter>(filter));
}
class key_source {
std::function<key_reader(const query::partition_range& range, const io_priority_class& pc)> _fn;
public:
key_source(std::function<key_reader(const query::partition_range& range, const io_priority_class& pc)> fn) : _fn(std::move(fn)) {}
key_source(std::function<key_reader(const query::partition_range& range)> fn)
: _fn([fn = std::move(fn)](const query::partition_range& range, const io_priority_class& pc) {
return fn(range);
}) {}
key_reader operator()(const query::partition_range& range, const io_priority_class& pc) {
return _fn(range, pc);
}
key_reader operator()(const query::partition_range& range) {
return _fn(range, default_priority_class());
}
};
using key_source = std::function<key_reader(const query::partition_range& range)>;


@@ -538,9 +538,11 @@ public:
class partition_key : public compound_wrapper<partition_key, partition_key_view> {
public:
using c_type = compound_type<allow_prefixes::no>;
explicit partition_key(bytes&& b)
private:
partition_key(bytes&& b)
: compound_wrapper<partition_key, partition_key_view>(std::move(b))
{ }
public:
partition_key(const partition_key_view& key)
: partition_key(bytes(key.representation().begin(), key.representation().end()))
{ }
@@ -611,10 +613,10 @@ public:
};
class clustering_key_prefix : public prefix_compound_wrapper<clustering_key_prefix, clustering_key_prefix_view, clustering_key> {
public:
explicit clustering_key_prefix(bytes&& b)
clustering_key_prefix(bytes&& b)
: prefix_compound_wrapper<clustering_key_prefix, clustering_key_prefix_view, clustering_key>(std::move(b))
{ }
public:
clustering_key_prefix(clustering_key_prefix_view v)
: clustering_key_prefix(bytes(v.representation().begin(), v.representation().end()))
{ }


@@ -123,15 +123,6 @@ abstract_replication_strategy::get_ranges(inet_address ep) const {
}
prev_tok = tok;
}
if (!ret.empty()) {
// Make ret contain no wrap-around range by unwrapping the first element.
auto& r = ret.front();
if (r.is_wrap_around(dht::token_comparator())) {
auto split_ranges = r.unwrap();
r = split_ranges.first;
ret.push_back(split_ranges.second);
}
}
return ret;
}


@@ -101,7 +101,6 @@ public:
replication_strategy_type get_type() const { return _my_type; }
// get_ranges() returns the list of ranges held by the given endpoint.
// The list is sorted, and its elements are non overlapping and non wrap-around.
// It is the analogue of Origin's getAddressRanges().get(endpoint).
// This function is not efficient, and not meant for the fast path.
std::vector<range<token>> get_ranges(inet_address ep) const;


@@ -47,9 +47,6 @@ public:
virtual future<> gossiper_starting() override;
virtual future<> start() override;
virtual void set_local_private_addr(const sstring& addr_str) override;
virtual sstring get_name() const override {
return "org.apache.cassandra.locator.Ec2MultiRegionSnitch";
}
private:
static constexpr const char* PUBLIC_IP_QUERY_REQ = "/latest/meta-data/public-ipv4";
static constexpr const char* PRIVATE_IP_QUERY_REQ = "/latest/meta-data/local-ipv4";


@@ -32,7 +32,7 @@ public:
ec2_snitch(const sstring& fname = "", unsigned io_cpu_id = 0);
virtual future<> start() override;
virtual sstring get_name() const override {
return "org.apache.cassandra.locator.Ec2Snitch";
return "org.apache.cassandra.locator.EC2Snitch";
}
protected:
future<> load_config();