Compare commits

..

12 Commits

Author SHA1 Message Date
Pekka Enberg
e10588e37a streaming/stream_session: Don't stop stream manager
We cannot stop the stream manager because it is still accessible via the
API server during shutdown, for example, which can cause a SIGSEGV.

Spotted by ASan.
Message-Id: <1453130811-22540-1-git-send-email-penberg@scylladb.com>
2016-01-20 10:29:34 +02:00
Pekka Enberg
bb0aeb9bb2 api/messaging_service: Fix heap-buffer-overflows in set_messaging_service()
Fix various issues in set_messaging_service() that caused
heap-buffer-overflows when the JMX proxy connects to the Scylla API:

  - Off-by-one error in the 'num_verb' definition

  - A call to the initializer-list std::vector constructor variant that
    caused the vector to be only two elements long.

  - Verb definitions missing from the Swagger definition that caused the
    response vector to be too small.

Spotted by ASan.
Message-Id: <1453125439-16703-1-git-send-email-penberg@scylladb.com>
2016-01-20 10:29:27 +02:00
Takuya ASADA
d2c97d9620 dist: use our own CentOS7 Base image
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453241256-23338-4-git-send-email-syuu@scylladb.com>
2016-01-20 09:41:39 +02:00
Takuya ASADA
ddbe20f65c dist: stop ntpd before running ntpdate
The new CentOS base image runs ntpd by default, so shut it down before running ntpdate.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453241256-23338-3-git-send-email-syuu@scylladb.com>
2016-01-20 09:41:33 +02:00
Takuya ASADA
88bf12aa0b dist: disable SELinux only when it is enabled
The new CentOS7 base image disables SELinux by default, and running 'setenforce 0' on it causes an error, so we would not be able to build the AMI.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453241256-23338-2-git-send-email-syuu@scylladb.com>
2016-01-20 09:41:29 +02:00
Takuya ASADA
87fdf2ee0d dist: extend coredump size limit
16GB is not enough for some larger machines, so extend it.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453115792-21989-2-git-send-email-syuu@scylladb.com>
2016-01-18 13:38:56 +02:00
Takuya ASADA
d6992189ed dist: preserve environment variables when running scylla_prepare under sudo
sysconfig parameters are passed via environment variables, but sudo resets the environment by default.
We need to preserve them across the sudo invocation.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1453115792-21989-1-git-send-email-syuu@scylladb.com>
2016-01-18 13:23:55 +02:00
Tomasz Grabiec
5bf1afa059 config: Set default logging level to info
Commit d7b403db1f changed the default in
logging::logger. It affected tests but not the scylla binary, where the
level is overwritten in main.cc.
Message-Id: <1452777008-21708-1-git-send-email-tgrabiec@scylladb.com>
2016-01-14 15:12:28 +02:00
Tomasz Grabiec
b013ed6357 cql3: Disable ALTER TABLE unless experimental features are on
2016-01-14 14:32:15 +02:00
Tomasz Grabiec
d4d0dd9cda tests: cql_test_env: Enable experimental features
2016-01-14 14:32:10 +02:00
Tomasz Grabiec
5865a43400 config: Add 'experimental' switch
2016-01-14 14:32:05 +02:00
Pekka Enberg
b81292d5d2 release: prepare for 0.16
2016-01-14 13:21:50 +02:00
250 changed files with 6097 additions and 6513 deletions

.gitmodules

@@ -1,6 +1,6 @@
[submodule "seastar"]
path = seastar
url = ../scylla-seastar
url = ../seastar
ignore = dirty
[submodule "swagger-ui"]
path = swagger-ui

IDL.md

@@ -1,103 +0,0 @@
# IDL definition
The schema we use is similar to a C++ schema.
Use a class or struct that mirrors the object you need the serializer for.
Use namespaces when applicable.
## Keywords
* class/struct - a class or a struct, as in C++;
a class/struct can have a final or stub marker
* namespace - has the same meaning as in C++
* enum class - has the same meaning as in C++
* final modifier for a class - when a class is marked as final, it will not contain a size parameter. Note that a final class cannot be extended by a future version, so use with care
* stub class - when a class is marked as stub, no code will be generated for it and it is only there as documentation.
* version attributes - marking a field with [[version id]] indicates that the field is available from a specific version
* template - a template class definition, as in C++
## Syntax
### Namespace
```
namespace ns_name { namespace-body }
```
* ns_name: either a previously unused identifier, in which case this is an original-namespace-definition, or the name of an existing namespace, in which case this is an extension-namespace-definition
* namespace-body: a possibly empty sequence of declarations of any kind (including class and struct definitions as well as nested namespaces)
### class/struct
```
class-key class-name final(optional) stub(optional) { member-specification } ;(optional)
```
* class-key: one of class or struct.
* class-name: the name of the class that is being defined, optionally followed by the keyword final, optionally followed by the keyword stub
* final: when a class is marked as final, it cannot be extended and there is no need to serialize its size; use with care.
* stub: when a class is marked as stub, no code will be generated for it; it is added for documentation only.
* member-specification: a list of members and public member accessors; see class member below.
* To be compatible with C++, a class definition can be followed by a semicolon.
### enum
`enum-key identifier enum-base { enumerator-list(optional) }`
* enum-key: only enum class is supported
* identifier: the name of the enumeration that is being declared.
* enum-base: a colon (:) followed by a type-specifier-seq that names an integral type (see the C++ standard for the full list of possible integral types).
* enumerator-list: a comma-separated list of enumerator definitions, each of which is either simply an identifier, which becomes the name of the enumerator, or an identifier with an initializer: identifier = integral value.
Note that although C++ allows a constexpr as an initializer value, it makes the documentation less readable and hence is not permitted.
### class member
`type member-access attributes(optional) default-value(optional);`
* type: any valid C++ type, following the C++ notation. Note that there must be a serializer for the type, but declaration order is not mandatory
* member-access: the way the member can be accessed. If the member is public, it can be the name itself; if not, it can be a getter function, which should be followed by parentheses. Note that getters can (and probably should) be const methods.
* attributes: attributes are defined with square brackets. Currently they are used to mark the version in which a specific member was added: [ [ version version-number ] ] marks that the member was added in the given version number.
### template
`template < parameter-list > class-declaration`
* parameter-list - a non-empty comma-separated list of template parameters.
* class-declaration - (see the class section) the declared class name becomes a template name.
## IDL example
Forward-slash (//) comments are ignored until the end of the line.
```
namespace utils {
// An example of a stub class
class UUID stub {
int64_t most_sig_bits;
int64_t least_sig_bits;
}
}
namespace gms {
//an enum example
enum class application_state:int {STATUS = 0,
LOAD,
SCHEMA,
DC};
// example of final class
class versioned_value final {
// getter and setter as public member
int version;
sstring value;
}
class heart_beat_state {
//getter as function
int32_t get_generation();
//default value example
int32_t get_heart_beat_version() = 1;
}
class endpoint_state {
heart_beat_state get_heart_beat_state();
std::map<application_state, versioned_value> get_application_state_map();
}
class gossip_digest {
inet_address get_endpoint();
int32_t get_generation();
//mark that a field was added on a specific version
int32_t get_max_version() [ [version 0.14.2] ];
}
class gossip_digest_ack {
std::vector<gossip_digest> digests();
std::map<inet_address, gms::endpoint_state> get_endpoint_state_map();
}
}
```


@@ -15,7 +15,7 @@ git submodule update --recursive
* Installing required packages:
```
sudo yum install yaml-cpp-devel lz4-devel zlib-devel snappy-devel jsoncpp-devel thrift-devel antlr3-tool antlr3-C++-devel libasan libubsan gcc-c++ gnutls-devel ninja-build ragel libaio-devel cryptopp-devel xfsprogs-devel numactl-devel hwloc-devel libpciaccess-devel libxml2-devel python3-pyparsing
sudo yum install yaml-cpp-devel lz4-devel zlib-devel snappy-devel jsoncpp-devel thrift-devel antlr3-tool antlr3-C++-devel libasan libubsan gcc-c++ gnutls-devel ninja-build ragel libaio-devel cryptopp-devel xfsprogs-devel
```
* Build Scylla


@@ -1,6 +1,6 @@
#!/bin/sh
VERSION=0.18.2
VERSION=0.16
if test -f version
then


@@ -106,7 +106,7 @@
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"query"
"paramType":"string"
}
]
}


@@ -196,10 +196,6 @@
"value": {
"type": "string",
"description": "The version value"
},
"version": {
"type": "int",
"description": "The application state version"
}
}
}


@@ -234,12 +234,12 @@
"type":"string",
"enum":[
"CLIENT_ID",
"ECHO",
"MUTATION",
"MUTATION_DONE",
"READ_DATA",
"READ_MUTATION_DATA",
"READ_DIGEST",
"GOSSIP_ECHO",
"GOSSIP_DIGEST_SYN",
"GOSSIP_DIGEST_ACK2",
"GOSSIP_SHUTDOWN",
@@ -247,6 +247,7 @@
"TRUNCATE",
"REPLICATION_FINISHED",
"MIGRATION_REQUEST",
"STREAM_INIT_MESSAGE",
"PREPARE_MESSAGE",
"PREPARE_DONE_MESSAGE",
"STREAM_MUTATION",


@@ -1,5 +1,5 @@
/*
* Copyright 2015 ScyllaDB
* Copyright 2015 Cloudius Systems
*/
/*
@@ -52,98 +52,67 @@ static std::unique_ptr<reply> exception_reply(std::exception_ptr eptr) {
return std::make_unique<reply>();
}
future<> set_server_init(http_context& ctx) {
future<> set_server(http_context& ctx) {
auto rb = std::make_shared < api_registry_builder > (ctx.api_doc);
return ctx.http_server.set_routes([rb, &ctx](routes& r) {
r.register_exeption_handler(exception_reply);
httpd::directory_handler* dir = new httpd::directory_handler(ctx.api_dir,
new content_replace("html"));
r.put(GET, "/ui", new httpd::file_handler(ctx.api_dir + "/index.html",
new content_replace("html")));
r.add(GET, url("/ui").remainder("path"), new httpd::directory_handler(ctx.api_dir,
new content_replace("html")));
rb->register_function(r, "system",
"The system related API");
set_system(ctx, r);
r.add(GET, url("/ui").remainder("path"), dir);
rb->set_api_doc(r);
});
}
rb->register_function(r, "storage_service",
"The storage service API");
set_storage_service(ctx,r);
rb->register_function(r, "commitlog",
"The commit log API");
set_commitlog(ctx,r);
rb->register_function(r, "gossiper",
"The gossiper API");
set_gossiper(ctx,r);
rb->register_function(r, "column_family",
"The column family API");
set_column_family(ctx, r);
static future<> register_api(http_context& ctx, const sstring& api_name,
const sstring api_desc,
std::function<void(http_context& ctx, routes& r)> f) {
auto rb = std::make_shared < api_registry_builder > (ctx.api_doc);
return ctx.http_server.set_routes([rb, &ctx, api_name, api_desc, f](routes& r) {
rb->register_function(r, api_name, api_desc);
f(ctx,r);
});
}
future<> set_server_storage_service(http_context& ctx) {
return register_api(ctx, "storage_service", "The storage service API", set_storage_service);
}
future<> set_server_gossip(http_context& ctx) {
return register_api(ctx, "gossiper",
"The gossiper API", set_gossiper);
}
future<> set_server_load_sstable(http_context& ctx) {
return register_api(ctx, "column_family",
"The column family API", set_column_family);
}
future<> set_server_messaging_service(http_context& ctx) {
return register_api(ctx, "messaging_service",
"The messaging service API", set_messaging_service);
}
future<> set_server_storage_proxy(http_context& ctx) {
return register_api(ctx, "storage_proxy",
"The storage proxy API", set_storage_proxy);
}
future<> set_server_stream_manager(http_context& ctx) {
return register_api(ctx, "stream_manager",
"The stream manager API", set_stream_manager);
}
future<> set_server_gossip_settle(http_context& ctx) {
auto rb = std::make_shared < api_registry_builder > (ctx.api_doc);
return ctx.http_server.set_routes([rb, &ctx](routes& r) {
rb->register_function(r, "failure_detector",
"The failure detector API");
set_failure_detector(ctx,r);
rb->register_function(r, "cache_service",
"The cache service API");
set_cache_service(ctx,r);
rb->register_function(r, "endpoint_snitch_info",
"The endpoint snitch info API");
set_endpoint_snitch(ctx, r);
});
}
future<> set_server_done(http_context& ctx) {
auto rb = std::make_shared < api_registry_builder > (ctx.api_doc);
return ctx.http_server.set_routes([rb, &ctx](routes& r) {
rb->register_function(r, "compaction_manager",
"The Compaction manager API");
set_compaction_manager(ctx, r);
rb->register_function(r, "lsa", "Log-structured allocator API");
set_lsa(ctx, r);
rb->register_function(r, "commitlog",
"The commit log API");
set_commitlog(ctx,r);
rb->register_function(r, "hinted_handoff",
"The hinted handoff API");
set_hinted_handoff(ctx, r);
rb->register_function(r, "failure_detector",
"The failure detector API");
set_failure_detector(ctx,r);
rb->register_function(r, "messaging_service",
"The messaging service API");
set_messaging_service(ctx, r);
rb->register_function(r, "storage_proxy",
"The storage proxy API");
set_storage_proxy(ctx, r);
rb->register_function(r, "cache_service",
"The cache service API");
set_cache_service(ctx,r);
rb->register_function(r, "collectd",
"The collectd API");
set_collectd(ctx, r);
rb->register_function(r, "endpoint_snitch_info",
"The endpoint snitch info API");
set_endpoint_snitch(ctx, r);
rb->register_function(r, "compaction_manager",
"The Compaction manager API");
set_compaction_manager(ctx, r);
rb->register_function(r, "hinted_handoff",
"The hinted handoff API");
set_hinted_handoff(ctx, r);
rb->register_function(r, "stream_manager",
"The stream manager API");
set_stream_manager(ctx, r);
rb->register_function(r, "system",
"The system related API");
set_system(ctx, r);
});
}


@@ -1,5 +1,5 @@
/*
* Copyright 2015 ScyllaDB
* Copyright 2015 Cloudius Systems
*/
/*
@@ -21,17 +21,31 @@
#pragma once
#include "http/httpd.hh"
#include "json/json_elements.hh"
#include "database.hh"
#include "service/storage_proxy.hh"
#include <boost/lexical_cast.hpp>
#include <boost/algorithm/string/split.hpp>
#include <boost/algorithm/string/classification.hpp>
#include "api/api-doc/utils.json.hh"
#include "utils/histogram.hh"
#include "http/exception.hh"
#include "api_init.hh"
namespace api {
struct http_context {
sstring api_dir;
sstring api_doc;
httpd::http_server_control http_server;
distributed<database>& db;
distributed<service::storage_proxy>& sp;
http_context(distributed<database>& _db, distributed<service::storage_proxy>&
_sp) : db(_db), sp(_sp) {}
};
future<> set_server(http_context& ctx);
template<class T>
std::vector<sstring> container_to_vec(const T& container) {
std::vector<sstring> res;


@@ -1,51 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "database.hh"
#include "service/storage_proxy.hh"
#include "http/httpd.hh"
namespace api {
struct http_context {
sstring api_dir;
sstring api_doc;
httpd::http_server_control http_server;
distributed<database>& db;
distributed<service::storage_proxy>& sp;
http_context(distributed<database>& _db,
distributed<service::storage_proxy>& _sp)
: db(_db), sp(_sp) {
}
};
future<> set_server_init(http_context& ctx);
future<> set_server_storage_service(http_context& ctx);
future<> set_server_gossip(http_context& ctx);
future<> set_server_load_sstable(http_context& ctx);
future<> set_server_messaging_service(http_context& ctx);
future<> set_server_storage_proxy(http_context& ctx);
future<> set_server_stream_manager(http_context& ctx);
future<> set_server_gossip_settle(http_context& ctx);
future<> set_server_done(http_context& ctx);
}


@@ -49,7 +49,7 @@ void set_compaction_manager(http_context& ctx, routes& r) {
s.ks = c->ks;
s.cf = c->cf;
s.unit = "keys";
s.task_type = sstables::compaction_name(c->type);
s.task_type = "compaction";
s.completed = c->total_keys_written;
s.total = c->total_partitions;
summaries.push_back(std::move(s));
@@ -67,14 +67,11 @@ void set_compaction_manager(http_context& ctx, routes& r) {
return make_ready_future<json::json_return_type>(json_void());
});
cm::stop_compaction.set(r, [&ctx] (std::unique_ptr<request> req) {
auto type = req->get_query_param("type");
return ctx.db.invoke_on_all([type] (database& db) {
auto& cm = db.get_compaction_manager();
cm.stop_compaction(type);
}).then([] {
return make_ready_future<json::json_return_type>(json_void());
});
cm::stop_compaction.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
warn(unimplemented::cause::API);
return make_ready_future<json::json_return_type>("");
});
cm::get_pending_tasks.set(r, [&ctx] (std::unique_ptr<request> req) {


@@ -44,7 +44,6 @@ void set_failure_detector(http_context& ctx, routes& r) {
// method that the state index are static but the name can be changed.
version_val.application_state = static_cast<std::underlying_type<gms::application_state>::type>(a.first);
version_val.value = a.second.value;
version_val.version = a.second.version;
val.application_state.push(version_val);
}
res.push_back(val);


@@ -30,7 +30,6 @@
#include "repair/repair.hh"
#include "locator/snitch_base.hh"
#include "column_family.hh"
#include "log.hh"
namespace api {
@@ -272,21 +271,15 @@ void set_storage_service(http_context& ctx, routes& r) {
});
ss::force_keyspace_cleanup.set(r, [&ctx](std::unique_ptr<request> req) {
//TBD
// FIXME
// the nodetool clean up is used in many tests
// this workaround will let it work until
// a cleanup is implemented
warn(unimplemented::cause::API);
auto keyspace = validate_keyspace(ctx, req->param);
auto column_families = split_cf(req->get_query_param("cf"));
if (column_families.empty()) {
column_families = map_keys(ctx.db.local().find_keyspace(keyspace).metadata().get()->cf_meta_data());
}
return ctx.db.invoke_on_all([keyspace, column_families] (database& db) {
std::vector<column_family*> column_families_vec;
auto& cm = db.get_compaction_manager();
for (auto entry : column_families) {
column_family* cf = &db.find_column_family(keyspace, entry);
cm.submit_cleanup_job(cf);
}
}).then([]{
return make_ready_future<json::json_return_type>(0);
});
auto column_family = req->get_query_param("cf");
return make_ready_future<json::json_return_type>(0);
});
ss::scrub.set(r, [&ctx](std::unique_ptr<request> req) {
@@ -405,13 +398,9 @@ void set_storage_service(http_context& ctx, routes& r) {
});
ss::get_logging_levels.set(r, [](std::unique_ptr<request> req) {
//TBD
unimplemented();
std::vector<ss::mapper> res;
for (auto i : logging::logger_registry().get_all_logger_names()) {
ss::mapper log;
log.key = i;
log.value = logging::level_name(logging::logger_registry().get_logger_level(i));
res.push_back(log);
}
return make_ready_future<json::json_return_type>(res);
});


@@ -47,7 +47,7 @@ static hs::progress_info get_progress_info(const streaming::progress_info& info)
res.direction = info.dir;
res.file_name = info.file_name;
res.peer = boost::lexical_cast<std::string>(info.peer);
res.session_index = 0;
res.session_index = info.session_index;
res.total_bytes = info.total_bytes;
return res;
}
@@ -70,7 +70,7 @@ static hs::stream_state get_state(
for (auto info : result_future.get_coordinator().get()->get_all_session_info()) {
hs::stream_info si;
si.peer = boost::lexical_cast<std::string>(info.peer);
si.session_index = 0;
si.session_index = info.session_index;
si.state = info.state;
si.connecting = si.peer;
set_summaries(info.receiving_summaries, si.receiving_summaries);
@@ -109,16 +109,14 @@ void set_stream_manager(http_context& ctx, routes& r) {
});
hs::get_total_incoming_bytes.set(r, [](std::unique_ptr<request> req) {
gms::inet_address peer(req->param["peer"]);
return streaming::get_stream_manager().map_reduce0([peer](streaming::stream_manager& sm) {
gms::inet_address ep(req->param["peer"]);
utils::UUID plan_id = gms::get_local_gossiper().get_host_id(ep);
return streaming::get_stream_manager().map_reduce0([plan_id](streaming::stream_manager& stream) {
int64_t res = 0;
for (auto sr : sm.get_all_streams()) {
if (sr) {
for (auto session : sr->get_coordinator()->get_all_stream_sessions()) {
if (session->peer == peer) {
res += session->get_bytes_received();
}
}
streaming::stream_result_future* s = stream.get_receiving_stream(plan_id).get();
if (s != nullptr) {
for (auto si: s->get_coordinator()->get_all_session_info()) {
res += si.get_total_size_received();
}
}
return res;
@@ -128,12 +126,12 @@ void set_stream_manager(http_context& ctx, routes& r) {
});
hs::get_all_total_incoming_bytes.set(r, [](std::unique_ptr<request> req) {
return streaming::get_stream_manager().map_reduce0([](streaming::stream_manager& sm) {
return streaming::get_stream_manager().map_reduce0([](streaming::stream_manager& stream) {
int64_t res = 0;
for (auto sr : sm.get_all_streams()) {
if (sr) {
for (auto session : sr->get_coordinator()->get_all_stream_sessions()) {
res += session->get_bytes_received();
for (auto s : stream.get_receiving_streams()) {
if (s.second.get() != nullptr) {
for (auto si: s.second.get()->get_coordinator()->get_all_session_info()) {
res += si.get_total_size_received();
}
}
}
@@ -144,16 +142,14 @@ void set_stream_manager(http_context& ctx, routes& r) {
});
hs::get_total_outgoing_bytes.set(r, [](std::unique_ptr<request> req) {
gms::inet_address peer(req->param["peer"]);
return streaming::get_stream_manager().map_reduce0([peer](streaming::stream_manager& sm) {
gms::inet_address ep(req->param["peer"]);
utils::UUID plan_id = gms::get_local_gossiper().get_host_id(ep);
return streaming::get_stream_manager().map_reduce0([plan_id](streaming::stream_manager& stream) {
int64_t res = 0;
for (auto sr : sm.get_all_streams()) {
if (sr) {
for (auto session : sr->get_coordinator()->get_all_stream_sessions()) {
if (session->peer == peer) {
res += session->get_bytes_sent();
}
}
streaming::stream_result_future* s = stream.get_sending_stream(plan_id).get();
if (s != nullptr) {
for (auto si: s->get_coordinator()->get_all_session_info()) {
res += si.get_total_size_received();
}
}
return res;
@@ -163,12 +159,12 @@ void set_stream_manager(http_context& ctx, routes& r) {
});
hs::get_all_total_outgoing_bytes.set(r, [](std::unique_ptr<request> req) {
return streaming::get_stream_manager().map_reduce0([](streaming::stream_manager& sm) {
return streaming::get_stream_manager().map_reduce0([](streaming::stream_manager& stream) {
int64_t res = 0;
for (auto sr : sm.get_all_streams()) {
if (sr) {
for (auto session : sr->get_coordinator()->get_all_stream_sessions()) {
res += session->get_bytes_sent();
for (auto s : stream.get_initiated_streams()) {
if (s.second.get() != nullptr) {
for (auto si: s.second.get()->get_coordinator()->get_all_session_info()) {
res += si.get_total_size_received();
}
}
}


@@ -91,62 +91,6 @@ class auth_migration_listener : public service::migration_listener {
static auth_migration_listener auth_migration;
/**
* Poor mans job schedule. For maximum 2 jobs. Sic.
* Still does nothing more clever than waiting 10 seconds
* like origin, then runs the submitted tasks.
*
* Only difference compared to sleep (from which this
* borrows _heavily_) is that if tasks have not run by the time
* we exit (and do static clean up) we delete the promise + cont
*
* Should be abstracted to some sort of global server function
* probably.
*/
void auth::auth::schedule_when_up(scheduled_func f) {
struct waiter {
promise<> done;
timer<> tmr;
waiter() : tmr([this] {done.set_value();})
{
tmr.arm(SUPERUSER_SETUP_DELAY);
}
~waiter() {
if (tmr.armed()) {
tmr.cancel();
done.set_exception(std::runtime_error("shutting down"));
}
logger.trace("Deleting scheduled task");
}
void kill() {
}
};
typedef std::unique_ptr<waiter> waiter_ptr;
static thread_local std::vector<waiter_ptr> waiters;
logger.trace("Adding scheduled task");
waiters.emplace_back(std::make_unique<waiter>());
auto* w = waiters.back().get();
w->done.get_future().finally([w] {
auto i = std::find_if(waiters.begin(), waiters.end(), [w](const waiter_ptr& p) {
return p.get() == w;
});
if (i != waiters.end()) {
waiters.erase(i);
}
}).then([f = std::move(f)] {
logger.trace("Running scheduled task");
return f();
}).handle_exception([](auto ep) {
return make_ready_future();
});
}
bool auth::auth::is_class_type(const sstring& type, const sstring& classname) {
if (type == classname) {
return true;
@@ -184,7 +128,7 @@ future<> auth::auth::setup() {
}).then([] {
service::get_local_migration_manager().register_listener(&auth_migration); // again, only one shard...
// instead of once-timer, just schedule this later
schedule_when_up([] {
sleep(SUPERUSER_SETUP_DELAY).then([] {
// setup default super user
return has_existing_users(USERS_CF, DEFAULT_SUPERUSER_NAME, USER_NAME).then([](bool exists) {
if (!exists) {


@@ -112,9 +112,5 @@ public:
static future<> setup_table(const sstring& name, const sstring& cql);
static future<bool> has_existing_users(const sstring& cfname, const sstring& def_user_name, const sstring& name_column_name);
// For internal use. Run function "when system is up".
typedef std::function<future<>()> scheduled_func;
static void schedule_when_up(scheduled_func);
};
}


@@ -160,8 +160,8 @@ future<> auth::password_authenticator::init() {
return auth::setup_table(CREDENTIALS_CF, create_table).then([this] {
// instead of once-timer, just schedule this later
auth::schedule_when_up([] {
return auth::has_existing_users(CREDENTIALS_CF, DEFAULT_USER_NAME, USER_NAME).then([](bool exists) {
sleep(auth::SUPERUSER_SETUP_DELAY).then([] {
auth::has_existing_users(CREDENTIALS_CF, DEFAULT_USER_NAME, USER_NAME).then([](bool exists) {
if (!exists) {
cql3::get_local_query_processor().process(sprint("INSERT INTO %s.%s (%s, %s) VALUES (?, ?) USING TIMESTAMP 0",
auth::AUTH_KS,


@@ -42,14 +42,6 @@ private:
struct chunk {
// FIXME: group fragment pointers to reduce pointer chasing when packetizing
std::unique_ptr<chunk> next;
~chunk() {
auto p = std::move(next);
while (p) {
// Avoid recursion when freeing chunks
auto p_next = std::move(p->next);
p = std::move(p_next);
}
}
size_type offset; // Also means "size" after chunk is closed
size_type size;
value_type data[0];
@@ -214,10 +206,6 @@ public:
}
}
void write(const char* ptr, size_t size) {
write(bytes_view(reinterpret_cast<const signed char*>(ptr), size));
}
// Writes given sequence of bytes with a preceding length component encoded in big-endian format
inline void write_blob(bytes_view v) {
assert((size_type)v.size() == v.size());


@@ -53,11 +53,6 @@ canonical_mutation::canonical_mutation(const mutation& m)
}())
{ }
utils::UUID canonical_mutation::column_family_id() const {
data_input in(_data);
return db::serializer<utils::UUID>::read(in);
}
mutation canonical_mutation::to_mutation(schema_ptr s) const {
data_input in(_data);


@@ -49,10 +49,16 @@ public:
// is not intended, user should sync the schema first.
mutation to_mutation(schema_ptr) const;
utils::UUID column_family_id() const;
friend class db::serializer<canonical_mutation>;
};
//
//template<>
//struct hash<canonical_mutation> {
// template<typename Hasher>
// void operator()(Hasher& h, const canonical_mutation& m) const {
// m.feed_hash(h);
// }
//};
namespace db {


@@ -34,8 +34,6 @@ enum class compaction_strategy_type {
};
class compaction_strategy_impl;
class sstable;
struct compaction_descriptor;
class compaction_strategy {
::shared_ptr<compaction_strategy_impl> _compaction_strategy_impl;
@@ -48,9 +46,7 @@ public:
compaction_strategy(compaction_strategy&&);
compaction_strategy& operator=(compaction_strategy&&);
// Return a list of sstables to be compacted after applying the strategy.
compaction_descriptor get_sstables_for_compaction(column_family& cfs, std::vector<lw_shared_ptr<sstable>> candidates);
future<> compact(column_family& cfs);
static sstring name(compaction_strategy_type type) {
switch (type) {
case compaction_strategy_type::null:


@@ -26,10 +26,29 @@
#include <algorithm>
#include <vector>
#include <boost/range/iterator_range.hpp>
#include <boost/range/adaptor/transformed.hpp>
#include "utils/serialization.hh"
#include "unimplemented.hh"
// value_traits is meant to abstract away whether we are working on 'bytes'
// elements or 'bytes_opt' elements. We don't support optional values, but
// there are some generic layers which use this code which provide us with
// data in that format. In order to avoid allocation and rewriting that data
// into a new vector just to throw it away soon after that, we accept that
// format too.
template <typename T>
struct value_traits {
static const T& unwrap(const T& t) { return t; }
};
template<>
struct value_traits<bytes_opt> {
static const bytes& unwrap(const bytes_opt& t) {
assert(t);
return *t;
}
};
enum class allow_prefixes { no, yes };
template<allow_prefixes AllowPrefixes = allow_prefixes::no>
@@ -49,7 +68,7 @@ public:
, _byte_order_equal(std::all_of(_types.begin(), _types.end(), [] (auto t) {
return t->is_byte_order_equal();
}))
, _byte_order_comparable(false)
, _byte_order_comparable(!is_prefixable && _types.size() == 1 && _types[0]->is_byte_order_comparable())
, _is_reversed(_types.size() == 1 && _types[0]->is_reversed())
{ }
@@ -69,47 +88,76 @@ public:
/*
* Format:
* <len(value1)><value1><len(value2)><value2>...<len(value_n)><value_n>
* <len(value1)><value1><len(value2)><value2>...<len(value_n-1)><value_n-1>(len(value_n))?<value_n>
*
* For non-prefixable compounds, the value corresponding to the last component of types doesn't
* have its length encoded, its length is deduced from the input range.
*
* serialize_value() and serialize_optionals() for single element rely on the fact that for a single-element
* compounds their serialized form is equal to the serialized form of the component.
*/
template<typename RangeOfSerializedComponents>
static void serialize_value(RangeOfSerializedComponents&& values, bytes::iterator& out) {
for (auto&& val : values) {
template<typename Wrapped>
void serialize_value(const std::vector<Wrapped>& values, bytes::iterator& out) {
if (AllowPrefixes == allow_prefixes::yes) {
assert(values.size() <= _types.size());
} else {
assert(values.size() == _types.size());
}
size_t n_left = _types.size();
for (auto&& wrapped : values) {
auto&& val = value_traits<Wrapped>::unwrap(wrapped);
assert(val.size() <= std::numeric_limits<uint16_t>::max());
write<uint16_t>(out, uint16_t(val.size()));
if (--n_left || AllowPrefixes == allow_prefixes::yes) {
write<uint16_t>(out, uint16_t(val.size()));
}
out = std::copy(val.begin(), val.end(), out);
}
}
template <typename RangeOfSerializedComponents>
static size_t serialized_size(RangeOfSerializedComponents&& values) {
template <typename Wrapped>
size_t serialized_size(const std::vector<Wrapped>& values) {
size_t len = 0;
for (auto&& val : values) {
size_t n_left = _types.size();
for (auto&& wrapped : values) {
auto&& val = value_traits<Wrapped>::unwrap(wrapped);
assert(val.size() <= std::numeric_limits<uint16_t>::max());
len += sizeof(uint16_t) + val.size();
if (--n_left || AllowPrefixes == allow_prefixes::yes) {
len += sizeof(uint16_t);
}
len += val.size();
}
return len;
}
bytes serialize_single(bytes&& v) {
return serialize_value({std::move(v)});
if (AllowPrefixes == allow_prefixes::no) {
assert(_types.size() == 1);
return std::move(v);
} else {
// FIXME: Optimize
std::vector<bytes> vec;
vec.reserve(1);
vec.emplace_back(std::move(v));
return ::serialize_value(*this, vec);
}
}
template<typename RangeOfSerializedComponents>
static bytes serialize_value(RangeOfSerializedComponents&& values) {
bytes b(bytes::initialized_later(), serialized_size(values));
auto i = b.begin();
serialize_value(values, i);
return b;
bytes serialize_value(const std::vector<bytes>& values) {
return ::serialize_value(*this, values);
}
template<typename T>
static bytes serialize_value(std::initializer_list<T> values) {
return serialize_value(boost::make_iterator_range(values.begin(), values.end()));
bytes serialize_value(std::vector<bytes>&& values) {
if (AllowPrefixes == allow_prefixes::no && _types.size() == 1 && values.size() == 1) {
return std::move(values[0]);
}
return ::serialize_value(*this, values);
}
bytes serialize_optionals(const std::vector<bytes_opt>& values) {
return serialize_value(values | boost::adaptors::transformed([] (const bytes_opt& bo) -> bytes_view {
if (!bo) {
throw std::logic_error("attempted to create key component from empty optional");
}
return *bo;
}));
return ::serialize_value(*this, values);
}
bytes serialize_optionals(std::vector<bytes_opt>&& values) {
if (AllowPrefixes == allow_prefixes::no && _types.size() == 1 && values.size() == 1) {
assert(values[0]);
return std::move(*values[0]);
}
return ::serialize_value(*this, values);
}
bytes serialize_value_deep(const std::vector<data_value>& values) {
// TODO: Optimize
@@ -123,19 +171,35 @@ public:
return serialize_value(partial);
}
bytes decompose_value(const value_type& values) {
return serialize_value(values);
return ::serialize_value(*this, values);
}
class iterator : public std::iterator<std::input_iterator_tag, bytes_view> {
private:
ssize_t _types_left;
bytes_view _v;
value_type _current;
private:
void read_current() {
if (_types_left == 0) {
if (!_v.empty()) {
throw marshal_exception();
}
_v = bytes_view(nullptr, 0);
return;
}
--_types_left;
uint16_t len;
{
if (_types_left == 0 && AllowPrefixes == allow_prefixes::no) {
len = _v.size();
} else {
if (_v.empty()) {
_v = bytes_view(nullptr, 0);
return;
if (AllowPrefixes == allow_prefixes::yes) {
_types_left = 0;
_v = bytes_view(nullptr, 0);
return;
} else {
throw marshal_exception();
}
}
len = read_simple<uint16_t>(_v);
if (_v.size() < len) {
@@ -147,10 +211,10 @@ public:
}
public:
struct end_iterator_tag {};
iterator(const bytes_view& v) : _v(v) {
iterator(const compound_type& t, const bytes_view& v) : _types_left(t._types.size()), _v(v) {
read_current();
}
iterator(end_iterator_tag, const bytes_view& v) : _v(nullptr, 0) {}
iterator(end_iterator_tag, const bytes_view& v) : _types_left(0), _v(nullptr, 0) {}
iterator& operator++() {
read_current();
return *this;
@@ -162,18 +226,21 @@ public:
}
const value_type& operator*() const { return _current; }
const value_type* operator->() const { return &_current; }
bool operator!=(const iterator& i) const { return _v.begin() != i._v.begin(); }
bool operator==(const iterator& i) const { return _v.begin() == i._v.begin(); }
bool operator!=(const iterator& i) const { return _v.begin() != i._v.begin() || _types_left != i._types_left; }
bool operator==(const iterator& i) const { return _v.begin() == i._v.begin() && _types_left == i._types_left; }
};
static iterator begin(const bytes_view& v) {
return iterator(v);
iterator begin(const bytes_view& v) const {
return iterator(*this, v);
}
static iterator end(const bytes_view& v) {
iterator end(const bytes_view& v) const {
return iterator(typename iterator::end_iterator_tag(), v);
}
static boost::iterator_range<iterator> components(const bytes_view& v) {
boost::iterator_range<iterator> components(const bytes_view& v) const {
return { begin(v), end(v) };
}
auto iter_items(const bytes_view& v) {
return boost::iterator_range<iterator>(begin(v), end(v));
}
value_type deserialize_value(bytes_view v) {
std::vector<bytes> result;
result.reserve(_types.size());
@@ -191,7 +258,7 @@ public:
}
auto t = _types.begin();
size_t h = 0;
for (auto&& value : components(v)) {
for (auto&& value : iter_items(v)) {
h ^= (*t)->hash(value);
++t;
}
@@ -210,6 +277,12 @@ public:
return type->compare(v1, v2);
});
}
bytes from_string(sstring_view s) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
sstring to_string(const bytes& b) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
// Returns true iff the given prefix has no missing components
bool is_full(bytes_view v) const {
assert(AllowPrefixes == allow_prefixes::yes);


@@ -25,31 +25,6 @@ from distutils.spawn import find_executable
configure_args = str.join(' ', [shlex.quote(x) for x in sys.argv[1:]])
for line in open('/etc/os-release'):
key, _, value = line.partition('=')
value = value.strip().strip('"')
if key == 'ID':
os_ids = [value]
if key == 'ID_LIKE':
os_ids += value.split(' ')
# distribution "internationalization", converting package names.
# Fedora name is key, values is distro -> package name dict.
i18n_xlat = {
'boost-devel': {
'debian': 'libboost-dev',
'ubuntu': 'libboost-dev (libboost1.55-dev on 14.04)',
},
}
def pkgname(name):
if name in i18n_xlat:
dict = i18n_xlat[name]
for id in os_ids:
if id in dict:
return dict[id]
return name
def get_flags():
with open('/proc/cpuinfo') as f:
for line in f:
@@ -294,7 +269,6 @@ scylla_core = (['database.cc',
'sstables/partition.cc',
'sstables/filter.cc',
'sstables/compaction.cc',
'sstables/compaction_manager.cc',
'log.cc',
'transport/event.cc',
'transport/event_notifier.cc',
@@ -342,7 +316,6 @@ scylla_core = (['database.cc',
'utils/big_decimal.cc',
'types.cc',
'validation.cc',
'service/priority_manager.cc',
'service/migration_manager.cc',
'service/storage_proxy.cc',
'cql3/operator.cc',
@@ -380,6 +353,7 @@ scylla_core = (['database.cc',
'utils/bloom_filter.cc',
'utils/bloom_calculations.cc',
'utils/rate_limiter.cc',
'utils/compaction_manager.cc',
'utils/file_lock.cc',
'utils/dynamic_bitset.cc',
'gms/version_generator.cc',
@@ -416,9 +390,11 @@ scylla_core = (['database.cc',
'service/client_state.cc',
'service/migration_task.cc',
'service/storage_service.cc',
'service/pending_range_calculator_service.cc',
'service/load_broadcaster.cc',
'service/pager/paging_state.cc',
'service/pager/query_pagers.cc',
'streaming/streaming.cc',
'streaming/stream_task.cc',
'streaming/stream_session.cc',
'streaming/stream_request.cc',
@@ -431,6 +407,13 @@ scylla_core = (['database.cc',
'streaming/stream_coordinator.cc',
'streaming/stream_manager.cc',
'streaming/stream_result_future.cc',
'streaming/messages/stream_init_message.cc',
'streaming/messages/retry_message.cc',
'streaming/messages/received_message.cc',
'streaming/messages/prepare_message.cc',
'streaming/messages/file_message_header.cc',
'streaming/messages/outgoing_file_message.cc',
'streaming/messages/incoming_file_message.cc',
'streaming/stream_session_state.cc',
'gc_clock.cc',
'partition_slice_builder.cc',
@@ -483,23 +466,7 @@ api = ['api/api.cc',
'api/system.cc'
]
idls = ['idl/gossip_digest.idl.hh',
'idl/uuid.idl.hh',
'idl/range.idl.hh',
'idl/keys.idl.hh',
'idl/read_command.idl.hh',
'idl/token.idl.hh',
'idl/ring_position.idl.hh',
'idl/result.idl.hh',
'idl/frozen_mutation.idl.hh',
'idl/reconcilable_result.idl.hh',
'idl/streaming.idl.hh',
'idl/paging_state.idl.hh',
'idl/frozen_schema.idl.hh',
'idl/partition_checksum.idl.hh',
]
scylla_tests_dependencies = scylla_core + api + idls + [
scylla_tests_dependencies = scylla_core + [
'tests/cql_test_env.cc',
'tests/cql_assertions.cc',
'tests/result_set_assertions.cc',
@@ -512,10 +479,11 @@ scylla_tests_seastar_deps = [
]
deps = {
'scylla': idls + ['main.cc'] + scylla_core + api,
'scylla': ['main.cc'] + scylla_core + api,
}
tests_not_using_seastar_test_framework = set([
'tests/types_test',
'tests/keys_test',
'tests/partitioner_test',
'tests/map_difference_test',
@@ -582,44 +550,16 @@ else:
args.pie = ''
args.fpie = ''
# a list element means a list of alternative packages to consider
# the first element becomes the HAVE_pkg define
# a string element is a package name with no alternatives
optional_packages = [['libsystemd', 'libsystemd-daemon']]
optional_packages = ['libsystemd']
pkgs = []
def setup_first_pkg_of_list(pkglist):
# The HAVE_pkg symbol is taken from the first alternative
upkg = pkglist[0].upper().replace('-', '_')
for pkg in pkglist:
if have_pkg(pkg):
pkgs.append(pkg)
defines.append('HAVE_{}=1'.format(upkg))
return True
return False
for pkglist in optional_packages:
if isinstance(pkglist, str):
pkglist = [pkglist]
if not setup_first_pkg_of_list(pkglist):
if len(pkglist) == 1:
print('Missing optional package {pkglist[0]}'.format(**locals()))
else:
alternatives = ':'.join(pkglist[1:])
print('Missing optional package {pkglist[0]} (or alternatives {alternatives})'.format(**locals()))
if not try_compile(compiler=args.cxx, source='#include <boost/version.hpp>'):
print('Boost not installed. Please install {}.'.format(pkgname("boost-devel")))
sys.exit(1)
if not try_compile(compiler=args.cxx, source='''\
#include <boost/version.hpp>
#if BOOST_VERSION < 105500
#error Boost version too low
#endif
'''):
print('Installed boost version too old. Please update {}.'.format(pkgname("boost-devel")))
sys.exit(1)
for pkg in optional_packages:
if have_pkg(pkg):
pkgs.append(pkg)
upkg = pkg.upper().replace('-', '_')
defines.append('HAVE_{}=1'.format(upkg))
else:
print('Missing optional package {pkg}'.format(**locals()))
defines = ' '.join(['-D' + d for d in defines])
@@ -717,9 +657,6 @@ with open(buildfile, 'w') as f:
rule swagger
command = seastar/json/json2code.py -f $in -o $out
description = SWAGGER $out
rule serializer
command = ./idl-compiler.py --ns ser -f $in -o $out
description = IDL compiler $out
rule ninja
command = {ninja} -C $subdir $target
restat = 1
@@ -756,7 +693,6 @@ with open(buildfile, 'w') as f:
compiles = {}
ragels = {}
swaggers = {}
serializers = {}
thrifts = set()
antlr3_grammars = set()
for binary in build_artifacts:
@@ -810,9 +746,6 @@ with open(buildfile, 'w') as f:
elif src.endswith('.rl'):
hh = '$builddir/' + mode + '/gen/' + src.replace('.rl', '.hh')
ragels[hh] = src
elif src.endswith('.idl.hh'):
hh = '$builddir/' + mode + '/gen/' + src.replace('.idl.hh', '.dist.hh')
serializers[hh] = src
elif src.endswith('.json'):
hh = '$builddir/' + mode + '/gen/' + src + '.hh'
swaggers[hh] = src
@@ -831,7 +764,6 @@ with open(buildfile, 'w') as f:
for g in antlr3_grammars:
gen_headers += g.headers('$builddir/{}/gen'.format(mode))
gen_headers += list(swaggers.keys())
gen_headers += list(serializers.keys())
f.write('build {}: cxx.{} {} || {} \n'.format(obj, mode, src, ' '.join(gen_headers)))
if src in extra_cxxflags:
f.write(' cxxflags = {seastar_cflags} $cxxflags $cxxflags_{mode} {extra_cxxflags}\n'.format(mode = mode, extra_cxxflags = extra_cxxflags[src], **modeval))
@@ -841,9 +773,6 @@ with open(buildfile, 'w') as f:
for hh in swaggers:
src = swaggers[hh]
f.write('build {}: swagger {}\n'.format(hh,src))
for hh in serializers:
src = serializers[hh]
f.write('build {}: serializer {} | idl-compiler.py\n'.format(hh,src))
for thrift in thrifts:
outs = ' '.join(thrift.generated('$builddir/{}/gen'.format(mode)))
f.write('build {}: thrift.{} {}\n'.format(outs, mode, thrift.source))


@@ -259,10 +259,7 @@ lists::setter_by_index::execute(mutation& m, const exploded_clustering_prefix& p
// we should not get here for frozen lists
assert(column.type->is_multi_cell()); // "Attempted to set an individual element on a frozen list";
std::experimental::optional<clustering_key> row_key;
if (!column.is_static()) {
row_key = clustering_key::from_clustering_prefix(*params._schema, prefix);
}
auto row_key = clustering_key::from_clustering_prefix(*params._schema, prefix);
auto index = _idx->bind_and_get(params._options);
auto value = _t->bind_and_get(params._options);
@@ -272,7 +269,8 @@ lists::setter_by_index::execute(mutation& m, const exploded_clustering_prefix& p
}
auto idx = net::ntoh(int32_t(*unaligned_cast<int32_t>(index->begin())));
auto&& existing_list_opt = params.get_prefetched_list(m.key(), std::move(row_key), column);
auto existing_list_opt = params.get_prefetched_list(m.key(), row_key, column);
if (!existing_list_opt) {
throw exceptions::invalid_request_exception("Attempted to set an element on a list which is null");
}
@@ -385,13 +383,8 @@ lists::discarder::requires_read() {
void
lists::discarder::execute(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) {
assert(column.type->is_multi_cell()); // "Attempted to delete from a frozen list";
std::experimental::optional<clustering_key> row_key;
if (!column.is_static()) {
row_key = clustering_key::from_clustering_prefix(*params._schema, prefix);
}
auto&& existing_list = params.get_prefetched_list(m.key(), std::move(row_key), column);
auto&& row_key = clustering_key::from_clustering_prefix(*params._schema, prefix);
auto&& existing_list = params.get_prefetched_list(m.key(), row_key, column);
// We want to call bind before possibly returning to reject queries where the value provided is not a list.
auto&& value = _t->bind(params._options);
@@ -451,11 +444,8 @@ lists::discarder_by_index::execute(mutation& m, const exploded_clustering_prefix
auto cvalue = dynamic_pointer_cast<constants::value>(index);
assert(cvalue);
std::experimental::optional<clustering_key> row_key;
if (!column.is_static()) {
row_key = clustering_key::from_clustering_prefix(*params._schema, prefix);
}
auto&& existing_list = params.get_prefetched_list(m.key(), std::move(row_key), column);
auto row_key = clustering_key::from_clustering_prefix(*params._schema, prefix);
auto&& existing_list = params.get_prefetched_list(m.key(), row_key, column);
int32_t idx = read_simple_exactly<int32_t>(*cvalue->_bytes);
if (!existing_list) {
throw exceptions::invalid_request_exception("Attempted to delete an element from a list which is null");


@@ -258,14 +258,16 @@ sets::adder::do_add(mutation& m, const exploded_clustering_prefix& row_key, cons
auto smut = set_type->serialize_mutation_form(mut);
m.set_cell(row_key, column, std::move(smut));
} else if (set_value != nullptr) {
} else {
// for frozen sets, we're overwriting the whole cell
auto v = set_type->serialize_partially_deserialized_form(
{set_value->_elements.begin(), set_value->_elements.end()},
serialization_format::internal());
m.set_cell(row_key, column, params.make_cell(std::move(v)));
} else {
m.set_cell(row_key, column, params.make_dead_cell());
if (set_value->_elements.empty()) {
m.set_cell(row_key, column, params.make_dead_cell());
} else {
m.set_cell(row_key, column, params.make_cell(std::move(v)));
}
}
}


@@ -137,12 +137,6 @@ future<bool> alter_table_statement::announce_migration(distributed<service::stor
if (schema->is_super()) {
throw exceptions::invalid_request_exception("Cannot use non-frozen collections with super column families");
}
auto it = schema->collections().find(column_name->name());
if (it != schema->collections().end() && !type->is_compatible_with(*it->second)) {
throw exceptions::invalid_request_exception(sprint("Cannot add a collection with the name %s "
"because a collection with the same name and a different type has already been used in the past", column_name));
}
}
cfm.with_column(column_name->name(), type, _is_static ? column_kind::static_column : column_kind::regular_column);


@@ -88,7 +88,7 @@ void batch_statement::verify_batch_size(const std::vector<mutation>& mutations)
auto size = v.size / 1024;
if (size > warn_threshold) {
if (v.size > warn_threshold) {
std::unordered_set<sstring> ks_cf_pairs;
for (auto&& m : mutations) {
ks_cf_pairs.insert(m.schema()->ks_name() + "." + m.schema()->cf_name());


@@ -186,23 +186,11 @@ modification_statement::make_update_parameters(
class prefetch_data_builder {
update_parameters::prefetch_data& _data;
const query::partition_slice& _ps;
schema_ptr _schema;
std::experimental::optional<partition_key> _pkey;
private:
void add_cell(update_parameters::prefetch_data::row& cells, const column_definition& def, const std::experimental::optional<collection_mutation_view>& cell) {
if (cell) {
auto ctype = static_pointer_cast<const collection_type_impl>(def.type);
if (!ctype->is_multi_cell()) {
throw std::logic_error(sprint("cannot prefetch frozen collection: %s", def.name_as_text()));
}
cells.emplace(def.id, collection_mutation{*cell});
}
};
public:
prefetch_data_builder(schema_ptr s, update_parameters::prefetch_data& data, const query::partition_slice& ps)
prefetch_data_builder(update_parameters::prefetch_data& data, const query::partition_slice& ps)
: _data(data)
, _ps(ps)
, _schema(std::move(s))
{ }
void accept_new_partition(const partition_key& key, uint32_t row_count) {
@@ -217,9 +205,20 @@ public:
const query::result_row_view& row) {
update_parameters::prefetch_data::row cells;
auto add_cell = [&cells] (column_id id, std::experimental::optional<collection_mutation_view>&& cell) {
if (cell) {
cells.emplace(id, collection_mutation{to_bytes(cell->data)});
}
};
auto static_row_iterator = static_row.iterator();
for (auto&& id : _ps.static_columns) {
add_cell(id, static_row_iterator.next_collection_cell());
}
auto row_iterator = row.iterator();
for (auto&& id : _ps.regular_columns) {
add_cell(cells, _schema->regular_column_at(id), row_iterator.next_collection_cell());
add_cell(id, row_iterator.next_collection_cell());
}
_data.rows.emplace(std::make_pair(*_pkey, key), std::move(cells));
@@ -229,16 +228,7 @@ public:
assert(0);
}
void accept_partition_end(const query::result_row_view& static_row) {
update_parameters::prefetch_data::row cells;
auto static_row_iterator = static_row.iterator();
for (auto&& id : _ps.static_columns) {
add_cell(cells, _schema->static_column_at(id), static_row_iterator.next_collection_cell());
}
_data.rows.emplace(std::make_pair(*_pkey, std::experimental::nullopt), std::move(cells));
}
void accept_partition_end(const query::result_row_view& static_row) {}
};
future<update_parameters::prefetched_rows_type>
@@ -288,7 +278,7 @@ modification_statement::read_required_rows(
bytes_ostream buf(result->buf());
query::result_view v(buf.linearize());
auto prefetched_rows = update_parameters::prefetched_rows_type({update_parameters::prefetch_data(s)});
v.consume(ps, prefetch_data_builder(s, prefetched_rows.value(), ps));
v.consume(ps, prefetch_data_builder(prefetched_rows.value(), ps));
return prefetched_rows;
});
}


@@ -226,16 +226,15 @@ select_statement::execute(distributed<service::storage_proxy>& proxy, service::q
// An aggregation query will never be paged for the user, but we always page it internally to avoid OOM.
// If the user provided a page_size we'll use that to page internally (because why not), otherwise we use our default
// Note that if there are some nodes in the cluster with a version less than 2.0, we can't use paging (CASSANDRA-6707).
auto aggregate = _selection->is_aggregate();
if (aggregate && page_size <= 0) {
if (_selection->is_aggregate() && page_size <= 0) {
page_size = DEFAULT_COUNT_PAGE_SIZE;
}
auto key_ranges = _restrictions->get_partition_key_ranges(options);
if (!aggregate && (page_size <= 0
if (page_size <= 0
|| !service::pager::query_pagers::may_need_paging(page_size,
*command, key_ranges))) {
*command, key_ranges)) {
return execute(proxy, command, std::move(key_ranges), state, options,
now);
}
@@ -243,7 +242,7 @@ select_statement::execute(distributed<service::storage_proxy>& proxy, service::q
auto p = service::pager::query_pagers::pager(_schema, _selection,
state, options, command, std::move(key_ranges));
if (aggregate) {
if (_selection->is_aggregate()) {
return do_with(
cql3::selection::result_set_builder(*_selection, now,
options.get_serialization_format()),
@@ -529,12 +528,9 @@ select_statement::raw_statement::get_ordering_comparator(schema_ptr schema,
}
bool select_statement::raw_statement::is_reversed(schema_ptr schema) {
std::experimental::optional<bool> reversed_map[schema->clustering_key_size()];
assert(_parameters->orderings().size() > 0);
parameters::orderings_type::size_type i = 0;
bool is_reversed_ = false;
bool relation_order_unsupported = false;
uint32_t i = 0;
for (auto&& e : _parameters->orderings()) {
::shared_ptr<column_identifier> column = e.first->prepare_column_identifier(schema);
bool reversed = e.second;
@@ -554,23 +550,32 @@ bool select_statement::raw_statement::is_reversed(schema_ptr schema) {
"Order by currently only support the ordering of columns following their declared order in the PRIMARY KEY");
}
bool current_reverse_status = (reversed != def->type->is_reversed());
if (i == 0) {
is_reversed_ = current_reverse_status;
}
if (is_reversed_ != current_reverse_status) {
relation_order_unsupported = true;
}
reversed_map[i] = std::experimental::make_optional(reversed != def->type->is_reversed());
++i;
}
if (relation_order_unsupported) {
throw exceptions::invalid_request_exception("Unsupported order by relation");
// GCC incorrectly complains about "*is_reversed_" below
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
// Check that all bools in reversed_map, if set, agree
std::experimental::optional<bool> is_reversed_{};
for (auto&& b : reversed_map) {
if (b) {
if (!is_reversed_) {
is_reversed_ = b;
} else {
if ((*is_reversed_) != *b) {
throw exceptions::invalid_request_exception("Unsupported order by relation");
}
}
}
}
return is_reversed_;
assert(is_reversed_);
return *is_reversed_;
#pragma GCC diagnostic pop
}
/** If ALLOW FILTERING was not specified, this verifies that it is not needed */


@@ -59,7 +59,7 @@ bool update_statement::require_full_clustering_key() const {
void update_statement::add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) {
if (s->is_dense()) {
if (!prefix || (prefix.size() == 1 && prefix.components().front().empty())) {
throw exceptions::invalid_request_exception(sprint("Missing PRIMARY KEY part %s", s->clustering_key_columns().begin()->name_as_text()));
throw exceptions::invalid_request_exception(sprint("Missing PRIMARY KEY part %s", *s->clustering_key_columns().begin()));
}
// An empty name for the compact value is what we use to recognize the case where there is not column


@@ -45,15 +45,15 @@ namespace cql3 {
std::experimental::optional<collection_mutation_view>
update_parameters::get_prefetched_list(
partition_key pkey,
std::experimental::optional<clustering_key> ckey,
const partition_key& pkey,
const clustering_key& row_key,
const column_definition& column) const
{
if (!_prefetched) {
return {};
}
auto i = _prefetched->rows.find(std::make_pair(std::move(pkey), std::move(ckey)));
auto i = _prefetched->rows.find(std::make_pair(pkey, row_key));
if (i == _prefetched->rows.end()) {
return {};
}


@@ -58,9 +58,8 @@ namespace cql3 {
*/
class update_parameters final {
public:
// Holder for data needed by CQL list updates which depend on current state of the list.
struct prefetch_data {
using key = std::pair<partition_key, std::experimental::optional<clustering_key>>;
using key = std::pair<partition_key, clustering_key>;
struct key_hashing {
partition_key::hashing pk_hash;
clustering_key::hashing ck_hash;
@@ -71,7 +70,7 @@ public:
{ }
size_t operator()(const key& k) const {
return pk_hash(k.first) ^ (k.second ? ck_hash(*k.second) : 0);
return pk_hash(k.first) ^ ck_hash(k.second);
}
};
struct key_equality {
@@ -84,8 +83,7 @@ public:
{ }
bool operator()(const key& k1, const key& k2) const {
return pk_eq(k1.first, k2.first)
&& bool(k1.second) == bool(k2.second) && (!k1.second || ck_eq(*k1.second, *k2.second));
return pk_eq(k1.first, k2.first) && ck_eq(k1.second, k2.second);
}
};
using row = std::unordered_map<column_id, collection_mutation>;
@@ -185,11 +183,8 @@ public:
return _timestamp;
}
std::experimental::optional<collection_mutation_view>
get_prefetched_list(
partition_key pkey,
std::experimental::optional<clustering_key> ckey,
const column_definition& column) const;
std::experimental::optional<collection_mutation_view> get_prefetched_list(
const partition_key& pkey, const clustering_key& row_key, const column_definition& column) const;
};
}


@@ -23,7 +23,6 @@
#include "database.hh"
#include "unimplemented.hh"
#include "core/future-util.hh"
#include "db/commitlog/commitlog_entry.hh"
#include "db/system_keyspace.hh"
#include "db/consistency_level.hh"
#include "db/serializer.hh"
@@ -59,7 +58,6 @@
#include "utils/latency.hh"
#include "utils/flush_queue.hh"
#include "schema_registry.hh"
#include "service/priority_manager.hh"
using namespace std::chrono_literals;
@@ -129,9 +127,9 @@ column_family::make_partition_presence_checker(lw_shared_ptr<sstable_list> old_s
mutation_source
column_family::sstables_as_mutation_source() {
return mutation_source([this] (schema_ptr s, const query::partition_range& r, const io_priority_class& pc) {
return make_sstable_reader(std::move(s), r, pc);
});
return [this] (schema_ptr s, const query::partition_range& r) {
return make_sstable_reader(std::move(s), r);
};
}
// define in .cc, since sstable is forward-declared in .hh
@@ -156,14 +154,10 @@ class range_sstable_reader final : public mutation_reader::impl {
const query::partition_range& _pr;
lw_shared_ptr<sstable_list> _sstables;
mutation_reader _reader;
// Use a pointer instead of copying, so we don't need to regenerate the reader if
// the priority changes.
const io_priority_class* _pc;
public:
range_sstable_reader(schema_ptr s, lw_shared_ptr<sstable_list> sstables, const query::partition_range& pr, const io_priority_class& pc)
range_sstable_reader(schema_ptr s, lw_shared_ptr<sstable_list> sstables, const query::partition_range& pr)
: _pr(pr)
, _sstables(std::move(sstables))
, _pc(&pc)
{
std::vector<mutation_reader> readers;
for (const lw_shared_ptr<sstables::sstable>& sst : *_sstables | boost::adaptors::map_values) {
@@ -190,15 +184,11 @@ class single_key_sstable_reader final : public mutation_reader::impl {
mutation_opt _m;
bool _done = false;
lw_shared_ptr<sstable_list> _sstables;
// Use a pointer instead of copying, so we don't need to regenerate the reader if
// the priority changes.
const io_priority_class* _pc;
public:
single_key_sstable_reader(schema_ptr schema, lw_shared_ptr<sstable_list> sstables, const partition_key& key, const io_priority_class& pc)
single_key_sstable_reader(schema_ptr schema, lw_shared_ptr<sstable_list> sstables, const partition_key& key)
: _schema(std::move(schema))
, _key(sstables::key::from_partition_key(*_schema, key))
, _sstables(std::move(sstables))
, _pc(&pc)
{ }
virtual future<mutation_opt> operator()() override {
@@ -217,26 +207,26 @@ public:
};
mutation_reader
column_family::make_sstable_reader(schema_ptr s, const query::partition_range& pr, const io_priority_class& pc) const {
column_family::make_sstable_reader(schema_ptr s, const query::partition_range& pr) const {
if (pr.is_singular() && pr.start()->value().has_key()) {
const dht::ring_position& pos = pr.start()->value();
if (dht::shard_of(pos.token()) != engine().cpu_id()) {
return make_empty_reader(); // range doesn't belong to this shard
}
return make_mutation_reader<single_key_sstable_reader>(std::move(s), _sstables, *pos.key(), pc);
return make_mutation_reader<single_key_sstable_reader>(std::move(s), _sstables, *pos.key());
} else {
// range_sstable_reader is not movable so we need to wrap it
return make_mutation_reader<range_sstable_reader>(std::move(s), _sstables, pr, pc);
return make_mutation_reader<range_sstable_reader>(std::move(s), _sstables, pr);
}
}
key_source column_family::sstables_as_key_source() const {
return key_source([this] (const query::partition_range& range, const io_priority_class& pc) {
return [this] (const query::partition_range& range) {
std::vector<key_reader> readers;
readers.reserve(_sstables->size());
std::transform(_sstables->begin(), _sstables->end(), std::back_inserter(readers), [&] (auto&& entry) {
auto& sst = entry.second;
auto rd = sstables::make_key_reader(_schema, sst, range, pc);
auto rd = sstables::make_key_reader(_schema, sst, range);
if (sst->is_shared()) {
rd = make_filtering_reader(std::move(rd), [] (const dht::decorated_key& dk) {
return dht::shard_of(dk.token()) == engine().cpu_id();
@@ -245,7 +235,7 @@ key_source column_family::sstables_as_key_source() const {
return rd;
});
return make_combined_reader(_schema, std::move(readers));
});
};
}
// Exposed for testing, not performance critical.
@@ -285,7 +275,7 @@ column_family::find_row(schema_ptr s, const dht::decorated_key& partition_key, c
}
mutation_reader
column_family::make_reader(schema_ptr s, const query::partition_range& range, const io_priority_class& pc) const {
column_family::make_reader(schema_ptr s, const query::partition_range& range) const {
if (query::is_wrap_around(range, *s)) {
// make_combined_reader() can't handle streams that wrap around yet.
fail(unimplemented::cause::WRAP_AROUND);
@@ -319,15 +309,14 @@ column_family::make_reader(schema_ptr s, const query::partition_range& range, co
}
if (_config.enable_cache) {
readers.emplace_back(_cache.make_reader(s, range, pc));
readers.emplace_back(_cache.make_reader(s, range));
} else {
readers.emplace_back(make_sstable_reader(s, range, pc));
readers.emplace_back(make_sstable_reader(s, range));
}
return make_combined_reader(std::move(readers));
}
// Not performance critical. Currently used for testing only.
template <typename Func>
future<bool>
column_family::for_all_partitions(schema_ptr s, Func&& func) const {
@@ -474,15 +463,7 @@ future<sstables::entry_descriptor> column_family::probe_file(sstring sstdir, sst
}
update_sstables_known_generation(comps.generation);
{
auto i = _sstables->find(comps.generation);
if (i != _sstables->end()) {
auto new_toc = sstdir + "/" + fname;
throw std::runtime_error(sprint("Attempted to add sstable generation %d twice: new=%s existing=%s",
comps.generation, new_toc, i->second->toc_filename()));
}
}
assert(_sstables->count(comps.generation) == 0);
auto fut = sstable::get_sstable_key_range(*_schema, _schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
return std::move(fut).then([this, sstdir = std::move(sstdir), comps] (range<partition_key> r) {
@@ -589,7 +570,9 @@ column_family::seal_active_memtable() {
future<stop_iteration>
column_family::try_flush_memtable_to_sstable(lw_shared_ptr<memtable> old) {
auto gen = calculate_generation_for_new_table();
// FIXME: better way of ensuring we don't attempt to
// overwrite an existing table.
auto gen = _sstable_generation++ * smp::count + engine().cpu_id();
auto newtab = make_lw_shared<sstables::sstable>(_schema->ks_name(), _schema->cf_name(),
_config.datadir, gen,
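The replacement for the FIXME above, `_sstable_generation++ * smp::count + engine().cpu_id()`, gives each shard a disjoint arithmetic progression of generation numbers, so no two shards can hand out the same generation without any cross-shard coordination. A tiny illustrative model (the struct and field names only mirror the roles in the diff, they are not Scylla types):

```cpp
#include <cstdint>

// Shard c of S shards draws generations c, c+S, c+2S, ...; the
// progressions for distinct shards never intersect.
struct generation_source {
    uint64_t next_local;   // per-shard counter (the _sstable_generation role)
    uint64_t shard;        // this shard's id (the engine().cpu_id() role)
    uint64_t shard_count;  // total shard count (the smp::count role)
    uint64_t next() { return next_local++ * shard_count + shard; }
};
```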
@@ -602,20 +585,27 @@ column_family::try_flush_memtable_to_sstable(lw_shared_ptr<memtable> old) {
_config.cf_stats->pending_memtables_flushes_bytes += memtable_size;
newtab->set_unshared();
dblog.debug("Flushing to {}", newtab->get_filename());
// Note that due to our sharded architecture, it is possible that
// in the face of a value change some shards will backup sstables
// while others won't.
//
// This is, in theory, possible to mitigate through a rwlock.
// However, this doesn't differ from the situation where all tables
// are coming from a single shard and the toggle happens in the
// middle of them.
//
// The code as is guarantees that we'll never partially backup a
// single sstable, so that is enough of a guarantee.
auto&& priority = service::get_local_memtable_flush_priority();
return newtab->write_components(*old, incremental_backups_enabled(), priority).then([this, newtab, old] {
return newtab->open_data();
return newtab->write_components(*old).then([this, newtab, old] {
return newtab->open_data().then([this, newtab] {
// Note that due to our sharded architecture, it is possible that
// in the face of a value change some shards will backup sstables
// while others won't.
//
// This is, in theory, possible to mitigate through a rwlock.
// However, this doesn't differ from the situation where all tables
// are coming from a single shard and the toggle happens in the
// middle of them.
//
// The code as is guarantees that we'll never partially backup a
// single sstable, so that is enough of a guarantee.
if (!incremental_backups_enabled()) {
return make_ready_future<>();
}
auto dir = newtab->get_dir() + "/backups/";
return touch_directory(dir).then([dir, newtab] {
return newtab->create_links(dir);
});
});
}).then_wrapped([this, old, newtab, memtable_size] (future<> ret) {
_config.cf_stats->pending_memtables_flushes_count--;
_config.cf_stats->pending_memtables_flushes_bytes -= memtable_size;
@@ -720,104 +710,68 @@ column_family::reshuffle_sstables(int64_t start) {
});
}
void
column_family::rebuild_sstable_list(const std::vector<sstables::shared_sstable>& new_sstables,
const std::vector<sstables::shared_sstable>& sstables_to_remove) {
// Build a new list of _sstables: We remove from the existing list the
// tables we compacted (by now, there might be more sstables flushed
// later), and we add the new tables generated by the compaction.
// We create a new list rather than modifying it in-place, so that
// on-going reads can continue to use the old list.
auto current_sstables = _sstables;
_sstables = make_lw_shared<sstable_list>();
// zeroing live_disk_space_used and live_sstable_count because the
// sstable list is re-created below.
_stats.live_disk_space_used = 0;
_stats.live_sstable_count = 0;
std::unordered_set<sstables::shared_sstable> s(
sstables_to_remove.begin(), sstables_to_remove.end());
for (const auto& oldtab : *current_sstables) {
// Checks if oldtab is an sstable not being compacted.
if (!s.count(oldtab.second)) {
update_stats_for_new_sstable(oldtab.second->data_size());
_sstables->emplace(oldtab.first, oldtab.second);
}
}
for (const auto& newtab : new_sstables) {
// FIXME: rename the new sstable(s). Verify a rename doesn't cause
// problems for the sstable object.
update_stats_for_new_sstable(newtab->data_size());
_sstables->emplace(newtab->generation(), newtab);
}
for (const auto& oldtab : sstables_to_remove) {
oldtab->mark_for_deletion();
}
}
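The copy-on-write rebuild above can be sketched in isolation. This is a minimal illustration, not the patch's actual code: generation numbers stand in for sstable objects, and the names (`rebuild_list`, `new_generations`) are hypothetical.

```cpp
#include <map>
#include <memory>
#include <set>
#include <vector>

// Sketch of the copy-on-write list rebuild: readers holding the old
// shared list are unaffected; a fresh list is built with the compacted
// tables dropped and the compaction output added, then published.
using sstable_list = std::map<long, int>; // generation -> "sstable"

std::shared_ptr<sstable_list> rebuild_list(
        const std::shared_ptr<sstable_list>& current,
        const std::vector<long>& new_generations,
        const std::vector<long>& removed_generations) {
    auto rebuilt = std::make_shared<sstable_list>();
    std::set<long> removed(removed_generations.begin(),
                           removed_generations.end());
    for (const auto& entry : *current) {
        if (!removed.count(entry.first)) { // keep tables not compacted away
            rebuilt->emplace(entry.first, entry.second);
        }
    }
    for (long gen : new_generations) { // add compaction output
        rebuilt->emplace(gen, 0);
    }
    return rebuilt; // the old list stays valid for in-flight reads
}
```

Publishing a new list instead of mutating in place is what lets on-going reads keep iterating the old list safely.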
future<>
column_family::compact_sstables(sstables::compaction_descriptor descriptor, bool cleanup) {
column_family::compact_sstables(sstables::compaction_descriptor descriptor) {
if (!descriptor.sstables.size()) {
// if there is nothing to compact, just return.
return make_ready_future<>();
}
return with_lock(_sstables_lock.for_read(), [this, descriptor = std::move(descriptor), cleanup] {
return with_lock(_sstables_lock.for_read(), [this, descriptor = std::move(descriptor)] {
auto sstables_to_compact = make_lw_shared<std::vector<sstables::shared_sstable>>(std::move(descriptor.sstables));
auto create_sstable = [this] {
auto gen = this->calculate_generation_for_new_table();
auto new_tables = make_lw_shared<std::vector<
std::pair<unsigned, sstables::shared_sstable>>>();
auto create_sstable = [this, new_tables] {
// FIXME: this generation calculation should be in a function.
auto gen = _sstable_generation++ * smp::count + engine().cpu_id();
// FIXME: use "tmp" marker in names of incomplete sstable
auto sst = make_lw_shared<sstables::sstable>(_schema->ks_name(), _schema->cf_name(), _config.datadir, gen,
sstables::sstable::version_types::ka,
sstables::sstable::format_types::big);
sst->set_unshared();
new_tables->emplace_back(gen, sst);
return sst;
};
return sstables::compact_sstables(*sstables_to_compact, *this, create_sstable, descriptor.max_sstable_bytes, descriptor.level,
cleanup).then([this, sstables_to_compact] (auto new_sstables) {
this->rebuild_sstable_list(new_sstables, *sstables_to_compact);
return sstables::compact_sstables(*sstables_to_compact, *this,
create_sstable, descriptor.max_sstable_bytes, descriptor.level).then([this, new_tables, sstables_to_compact] {
// Build a new list of _sstables: We remove from the existing list the
// tables we compacted (by now, there might be more sstables flushed
// later), and we add the new tables generated by the compaction.
// We create a new list rather than modifying it in-place, so that
// on-going reads can continue to use the old list.
auto current_sstables = _sstables;
_sstables = make_lw_shared<sstable_list>();
// zeroing live_disk_space_used and live_sstable_count because the
// sstable list is re-created below.
_stats.live_disk_space_used = 0;
_stats.live_sstable_count = 0;
std::unordered_set<sstables::shared_sstable> s(
sstables_to_compact->begin(), sstables_to_compact->end());
for (const auto& oldtab : *current_sstables) {
// Checks if oldtab is an sstable not being compacted.
if (!s.count(oldtab.second)) {
update_stats_for_new_sstable(oldtab.second->data_size());
_sstables->emplace(oldtab.first, oldtab.second);
}
}
for (const auto& newtab : *new_tables) {
// FIXME: rename the new sstable(s). Verify a rename doesn't cause
// problems for the sstable object.
update_stats_for_new_sstable(newtab.second->data_size());
_sstables->emplace(newtab.first, newtab.second);
}
for (const auto& oldtab : *sstables_to_compact) {
oldtab->mark_for_deletion();
}
});
});
}
static bool needs_cleanup(const lw_shared_ptr<sstables::sstable>& sst,
const lw_shared_ptr<std::vector<range<dht::token>>>& owned_ranges,
schema_ptr s) {
auto first = sst->get_first_partition_key(*s);
auto last = sst->get_last_partition_key(*s);
auto first_token = dht::global_partitioner().get_token(*s, first);
auto last_token = dht::global_partitioner().get_token(*s, last);
range<dht::token> sst_token_range = range<dht::token>::make(first_token, last_token);
// return true iff sst partition range isn't fully contained in any of the owned ranges.
for (auto& r : *owned_ranges) {
if (r.contains(sst_token_range, dht::token_comparator())) {
return false;
}
}
return true;
}
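The containment test in needs_cleanup() can be sketched with plain integer spans. This is an illustrative simplification, assuming closed ranges; the struct and function names here are hypothetical stand-ins for `range<dht::token>` and the real helper.

```cpp
#include <vector>

// Simplified closed token span [first, last]; a stand-in for the
// range<dht::token> used in the patch.
struct token_range {
    long first, last;
    bool contains(const token_range& other) const {
        return first <= other.first && other.last <= last;
    }
};

// Mirrors the logic above: an sstable needs cleanup iff its token
// span is not fully contained in any locally owned range.
bool sstable_needs_cleanup(const token_range& sst,
                           const std::vector<token_range>& owned_ranges) {
    for (const auto& r : owned_ranges) {
        if (r.contains(sst)) {
            return false; // every key in sst is still owned locally
        }
    }
    return true;
}
```

An sstable fully inside one owned range can be skipped; one that straddles an ownership boundary must be rewritten to discard the keys this node no longer owns.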
future<> column_family::cleanup_sstables(sstables::compaction_descriptor descriptor) {
std::vector<range<dht::token>> r = service::get_local_storage_service().get_local_ranges(_schema->ks_name());
auto owned_ranges = make_lw_shared<std::vector<range<dht::token>>>(std::move(r));
auto sstables_to_cleanup = make_lw_shared<std::vector<sstables::shared_sstable>>(std::move(descriptor.sstables));
return parallel_for_each(*sstables_to_cleanup, [this, owned_ranges = std::move(owned_ranges), sstables_to_cleanup] (auto& sst) {
if (!owned_ranges->empty() && !needs_cleanup(sst, owned_ranges, _schema)) {
return make_ready_future<>();
}
std::vector<sstables::shared_sstable> sstable_to_compact({ sst });
return this->compact_sstables(sstables::compaction_descriptor(std::move(sstable_to_compact)), true);
});
}
future<>
column_family::load_new_sstables(std::vector<sstables::entry_descriptor> new_tables) {
return parallel_for_each(new_tables, [this] (auto comps) {
@@ -857,26 +811,18 @@ void column_family::start_compaction() {
void column_family::trigger_compaction() {
// Submitting compaction job to compaction manager.
// #934 - always inc the pending counter, to help
// indicate the want for compaction.
_stats.pending_compactions++;
do_trigger_compaction(); // see below
}
void column_family::do_trigger_compaction() {
// But only submit if we're not locked out
if (!_compaction_disabled) {
_stats.pending_compactions++;
_compaction_manager.submit(this);
}
}
future<> column_family::run_compaction(sstables::compaction_descriptor descriptor) {
assert(_stats.pending_compactions > 0);
return compact_sstables(std::move(descriptor)).then([this] {
// only do this on success. (no exceptions)
// in that case, we rely on it being still set
// for requeueing
_stats.pending_compactions--;
future<> column_family::run_compaction() {
sstables::compaction_strategy strategy = _compaction_strategy;
return do_with(std::move(strategy), [this] (sstables::compaction_strategy& cs) {
return cs.compact(*this).then([this] {
_stats.pending_compactions--;
});
});
}
@@ -1014,9 +960,6 @@ future<> column_family::populate(sstring sstdir) {
return make_ready_future<>();
});
});
}).then([this] {
// Make sure this is called even if CF is empty
mark_ready_for_writes();
});
}
@@ -1089,37 +1032,19 @@ future<> database::populate_keyspace(sstring datadir, sstring ks_name) {
dblog.error("Keyspace {}: Skipping malformed CF {} ", ksdir, de.name);
return make_ready_future<>();
}
sstring cfname = comps[0];
sstring uuidst = comps[1];
auto sstdir = ksdir + "/" + de.name;
try {
auto&& uuid = [&] {
try {
return find_uuid(ks_name, cfname);
} catch (const std::out_of_range& e) {
std::throw_with_nested(no_such_column_family(ks_name, cfname));
}
}();
auto& cf = find_column_family(uuid);
// #870: Check that the directory name matches
// the current, expected UUID of the CF.
if (utils::UUID(uuidst) == uuid) {
// FIXME: Increase parallelism.
auto sstdir = ksdir + "/" + de.name;
dblog.info("Keyspace {}: Reading CF {} ", ksdir, cfname);
return cf.populate(sstdir);
}
// Nope. Warn and ignore.
dblog.info("Keyspace {}: Skipping obsolete version of CF {} ({})", ksdir, cfname, uuidst);
} catch (marshal_exception&) {
// Bogus UUID part of directory name
dblog.warn("{}, CF {}: malformed UUID: {}. Ignoring", ksdir, comps[0], uuidst);
auto& cf = find_column_family(ks_name, cfname);
dblog.info("Keyspace {}: Reading CF {} ", ksdir, cfname);
// FIXME: Increase parallelism.
return cf.populate(sstdir);
} catch (no_such_column_family&) {
dblog.warn("{}, CF {}: schema not loaded!", ksdir, comps[0]);
return make_ready_future<>();
}
return make_ready_future<>();
});
}
return make_ready_future<>();
@@ -1201,14 +1126,6 @@ database::init_system_keyspace() {
return populate_keyspace(_cfg->data_file_directories()[0], db::system_keyspace::NAME).then([this]() {
return init_commitlog();
});
}).then([this] {
auto& ks = find_keyspace(db::system_keyspace::NAME);
return parallel_for_each(ks.metadata()->cf_meta_data(), [this] (auto& pair) {
auto cfm = pair.second;
auto& cf = this->find_column_family(cfm);
cf.mark_ready_for_writes();
return make_ready_future<>();
});
});
}
@@ -1589,7 +1506,7 @@ column_family::query(schema_ptr s, const query::read_command& cmd, const std::ve
return do_with(query_state(std::move(s), cmd, partition_ranges), [this] (query_state& qs) {
return do_until(std::bind(&query_state::done, &qs), [this, &qs] {
auto&& range = *qs.current_partition_range++;
qs.reader = make_reader(qs.schema, range, service::get_local_sstable_query_read_priority());
qs.reader = make_reader(qs.schema, range);
qs.range_empty = false;
return do_until([&qs] { return !qs.limit || qs.range_empty; }, [&qs] {
return qs.reader().then([&qs](mutation_opt mo) {
@@ -1618,9 +1535,9 @@ column_family::query(schema_ptr s, const query::read_command& cmd, const std::ve
mutation_source
column_family::as_mutation_source() const {
return mutation_source([this] (schema_ptr s, const query::partition_range& range, const io_priority_class& pc) {
return this->make_reader(std::move(s), range, pc);
});
return [this] (schema_ptr s, const query::partition_range& range) {
return this->make_reader(std::move(s), range);
};
}
future<lw_shared_ptr<query::result>>
@@ -1677,8 +1594,7 @@ std::ostream& operator<<(std::ostream& out, const atomic_cell_or_collection& c)
}
std::ostream& operator<<(std::ostream& os, const mutation& m) {
const ::schema& s = *m.schema();
fprint(os, "{%s.%s key %s data ", s.ks_name(), s.cf_name(), m.decorated_key());
fprint(os, "{mutation: schema %p key %s data ", m.schema().get(), m.decorated_key());
os << m.partition() << "}";
return os;
}
@@ -1697,47 +1613,6 @@ std::ostream& operator<<(std::ostream& out, const database& db) {
return out;
}
void
column_family::apply(const mutation& m, const db::replay_position& rp) {
utils::latency_counter lc;
_stats.writes.set_latency(lc);
active_memtable().apply(m, rp);
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}
void
column_family::apply(const frozen_mutation& m, const schema_ptr& m_schema, const db::replay_position& rp) {
utils::latency_counter lc;
_stats.writes.set_latency(lc);
check_valid_rp(rp);
active_memtable().apply(m, m_schema, rp);
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}
void
column_family::seal_on_overflow() {
if (active_memtable().occupancy().total_space() >= _config.max_memtable_size) {
// FIXME: if sparse, do some in-memory compaction first
// FIXME: maybe merge with other in-memory memtables
seal_active_memtable();
}
}
void
column_family::check_valid_rp(const db::replay_position& rp) const {
if (rp < _highest_flushed_rp) {
throw replay_position_reordered_exception();
}
}
future<> database::apply_in_memory(const frozen_mutation& m, const schema_ptr& m_schema, const db::replay_position& rp) {
try {
auto& cf = find_column_family(m.column_family_id());
@@ -1759,8 +1634,9 @@ future<> database::do_apply(schema_ptr s, const frozen_mutation& m) {
s->ks_name(), s->cf_name(), s->version()));
}
if (cf.commitlog() != nullptr) {
commitlog_entry_writer cew(s, m);
return cf.commitlog()->add_entry(uuid, cew).then([&m, this, s](auto rp) {
bytes_view repr = m.representation();
auto write_repr = [repr] (data_output& out) { out.write(repr.begin(), repr.end()); };
return cf.commitlog()->add_mutation(uuid, repr.size(), write_repr).then([&m, this, s](auto rp) {
try {
return this->apply_in_memory(m, s, rp);
} catch (replay_position_reordered_exception&) {
@@ -1809,9 +1685,6 @@ void database::unthrottle() {
}
future<> database::apply(schema_ptr s, const frozen_mutation& m) {
if (dblog.is_enabled(logging::log_level::trace)) {
dblog.trace("apply {}", m.pretty_printer(s));
}
return throttle().then([this, &m, s = std::move(s)] {
return do_apply(std::move(s), m);
});
@@ -2296,8 +2169,7 @@ void column_family::clear() {
// NOTE: does not need to be futurized, but might eventually, depending on
// if we implement notifications, whatnot.
future<db::replay_position> column_family::discard_sstables(db_clock::time_point truncated_at) {
assert(_compaction_disabled > 0);
assert(!compaction_manager_queued());
assert(_stats.pending_compactions == 0);
return with_lock(_sstables_lock.for_read(), [this, truncated_at] {
db::replay_position rp;

View File

@@ -56,6 +56,7 @@
#include "tombstone.hh"
#include "atomic_cell.hh"
#include "query-request.hh"
#include "query-result.hh"
#include "keys.hh"
#include "mutation.hh"
#include "memtable.hh"
@@ -63,7 +64,7 @@
#include "mutation_reader.hh"
#include "row_cache.hh"
#include "compaction_strategy.hh"
#include "sstables/compaction_manager.hh"
#include "utils/compaction_manager.hh"
#include "utils/exponential_backoff_retry.hh"
#include "utils/histogram.hh"
#include "sstables/estimated_histogram.hh"
@@ -159,8 +160,8 @@ private:
// the read lock, and the ones that wish to stop that process will take the write lock.
rwlock _sstables_lock;
mutable row_cache _cache; // Cache covers only sstables.
std::experimental::optional<int64_t> _sstable_generation = {};
int64_t _sstable_generation = 1;
unsigned _mutation_count = 0;
db::replay_position _highest_flushed_rp;
// Provided by the database that owns this commitlog
db::commitlog* _commitlog;
@@ -171,9 +172,6 @@ private:
int _compaction_disabled = 0;
class memtable_flush_queue;
std::unique_ptr<memtable_flush_queue> _flush_queue;
// Store generation of sstables being compacted at the moment. That's needed to prevent a
// sstable from being compacted twice.
std::unordered_set<unsigned long> _compacting_generations;
private:
void update_stats_for_new_sstable(uint64_t new_sstable_data_size);
void add_sstable(sstables::sstable&& sstable);
@@ -185,66 +183,26 @@ private:
// update the sstable generation, making sure that new sstables don't overwrite this one.
void update_sstables_known_generation(unsigned generation) {
if (!_sstable_generation) {
_sstable_generation = 1;
}
_sstable_generation = std::max<uint64_t>(*_sstable_generation, generation / smp::count + 1);
_sstable_generation = std::max<uint64_t>(_sstable_generation, generation / smp::count + 1);
}
uint64_t calculate_generation_for_new_table() {
assert(_sstable_generation);
// FIXME: better way of ensuring we don't attempt to
// overwrite an existing table.
return (*_sstable_generation)++ * smp::count + engine().cpu_id();
}
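The per-shard generation scheme above can be sketched standalone. A minimal sketch, assuming a fixed shard count; `shard_count`, `next_generation`, and `bump_past` are illustrative names standing in for `smp::count`, `engine().cpu_id()`, and the two member functions.

```cpp
#include <cstdint>

// Illustrative stand-in for smp::count.
constexpr uint64_t shard_count = 4;

// Each shard draws generations from its own arithmetic progression
// (counter * shard_count + shard_id), so two different shards can
// never produce the same generation number.
uint64_t next_generation(uint64_t& counter, uint64_t shard_id) {
    return counter++ * shard_count + shard_id;
}

// Mirror of update_sstables_known_generation(): after populating the
// existing sstables, push the counter past anything seen on disk.
uint64_t bump_past(uint64_t counter, uint64_t seen_generation) {
    uint64_t floor = seen_generation / shard_count + 1;
    return counter > floor ? counter : floor;
}
```

With 4 shards, shard 0 produces 4, 8, 12, … while shard 1 produces 5, 9, 13, …, which is why no cross-shard coordination is needed.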
// Rebuild existing _sstables with new_sstables added to it and sstables_to_remove removed from it.
void rebuild_sstable_list(const std::vector<sstables::shared_sstable>& new_sstables,
const std::vector<sstables::shared_sstable>& sstables_to_remove);
private:
// Creates a mutation reader which covers sstables.
// Caller needs to ensure that column_family remains live (FIXME: relax this).
// The 'range' parameter must be live as long as the reader is used.
// Mutations returned by the reader will all have given schema.
mutation_reader make_sstable_reader(schema_ptr schema, const query::partition_range& range, const io_priority_class& pc) const;
mutation_reader make_sstable_reader(schema_ptr schema, const query::partition_range& range) const;
mutation_source sstables_as_mutation_source();
key_source sstables_as_key_source() const;
partition_presence_checker make_partition_presence_checker(lw_shared_ptr<sstable_list> old_sstables);
std::chrono::steady_clock::time_point _sstable_writes_disabled_at;
void do_trigger_compaction();
public:
// This function should be called when this column family is ready for writes, IOW,
// to produce SSTables. Extensive details about why this is important can be found
// in Scylla's Github Issue #1014
//
// Nothing should be writing to SSTables before we have the chance to populate the
// existing SSTables and calculate what the next generation number should be.
//
// However, if that happens, we want to protect against it in a way that does not
// involve overwriting existing tables. This is one of the ways to do it: every
// column family starts in an unwriteable state, and when it can finally be written
// to, we mark it as writeable.
//
// Note that this *cannot* be a part of add_column_family. That adds a column family
// to a db in memory only, and if anybody is about to write to a CF, that was most
// likely already called. We need to call this explicitly when we are sure we're ready
// to issue disk operations safely.
void mark_ready_for_writes() {
update_sstables_known_generation(0);
}
// Creates a mutation reader which covers all data sources for this column family.
// Caller needs to ensure that column_family remains live (FIXME: relax this).
// Note: for data queries use query() instead.
// The 'range' parameter must be live as long as the reader is used.
// Mutations returned by the reader will all have given schema.
// If I/O needs to be issued to read anything in the specified range, the operations
// will be scheduled under the priority class given by pc.
mutation_reader make_reader(schema_ptr schema,
const query::partition_range& range = query::full_partition_range,
const io_priority_class& pc = default_priority_class()) const;
mutation_reader make_reader(schema_ptr schema, const query::partition_range& range = query::full_partition_range) const;
mutation_source as_mutation_source() const;
@@ -332,15 +290,7 @@ public:
// not a real compaction policy.
future<> compact_all_sstables();
// Compact all sstables provided in the vector.
// If cleanup is set to true, compaction_sstables will run on behalf of a cleanup job,
// meaning that irrelevant keys will be discarded.
future<> compact_sstables(sstables::compaction_descriptor descriptor, bool cleanup = false);
// Performs a cleanup on each sstable of this column family, excluding
// those ones that are irrelevant to this node or being compacted.
// Cleanup is about discarding keys that are no longer relevant for a
// given sstable, e.g. after node loses part of its token range because
// of a newly added node.
future<> cleanup_sstables(sstables::compaction_descriptor descriptor);
future<> compact_sstables(sstables::compaction_descriptor descriptor);
future<bool> snapshot_exists(sstring name);
@@ -363,7 +313,7 @@ public:
void start_compaction();
void trigger_compaction();
future<> run_compaction(sstables::compaction_descriptor descriptor);
future<> run_compaction();
void set_compaction_strategy(sstables::compaction_strategy_type strategy);
const sstables::compaction_strategy& get_compaction_strategy() const {
return _compaction_strategy;
@@ -389,19 +339,11 @@ public:
Result run_with_compaction_disabled(Func && func) {
++_compaction_disabled;
return _compaction_manager.remove(this).then(std::forward<Func>(func)).finally([this] {
// #934. The pending counter is actually a great indicator into whether we
// actually need to trigger a compaction again.
if (--_compaction_disabled == 0 && _stats.pending_compactions > 0) {
// we're turning it on again, use function that does not increment
// the counter further.
do_trigger_compaction();
if (--_compaction_disabled == 0) {
trigger_compaction();
}
});
}
std::unordered_set<unsigned long>& compacting_generations() {
return _compacting_generations;
}
private:
// One does not need to wait on this future if all we are interested in, is
// initiating the write. The writes initiated here will eventually
@@ -411,11 +353,16 @@ private:
// But it is possible to synchronously wait for the seal to complete by
// waiting on this future. This is useful in situations where we want to
// synchronously flush data to disk.
//
// FIXME: A better interface would guarantee that all writes before this
// one are also complete
future<> seal_active_memtable();
// filter manifest.json files out
static bool manifest_json_filter(const sstring& fname);
seastar::gate _in_flight_seals;
// Iterate over all partitions. Protocol is the same as std::all_of(),
// so that iteration can be stopped by returning false.
// Func signature: bool (const decorated_key& dk, const mutation_partition& mp)
@@ -728,6 +675,53 @@ public:
// FIXME: stub
class secondary_index_manager {};
inline
void
column_family::apply(const mutation& m, const db::replay_position& rp) {
utils::latency_counter lc;
_stats.writes.set_latency(lc);
active_memtable().apply(m, rp);
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}
inline
void
column_family::seal_on_overflow() {
++_mutation_count;
if (active_memtable().occupancy().total_space() >= _config.max_memtable_size) {
// FIXME: if sparse, do some in-memory compaction first
// FIXME: maybe merge with other in-memory memtables
_mutation_count = 0;
seal_active_memtable();
}
}
inline
void
column_family::check_valid_rp(const db::replay_position& rp) const {
if (rp < _highest_flushed_rp) {
throw replay_position_reordered_exception();
}
}
inline
void
column_family::apply(const frozen_mutation& m, const schema_ptr& m_schema, const db::replay_position& rp) {
utils::latency_counter lc;
_stats.writes.set_latency(lc);
check_valid_rp(rp);
active_memtable().apply(m, m_schema, rp);
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}
future<> update_schema_version_and_announce(distributed<service::storage_proxy>& proxy);
#endif /* DATABASE_HH_ */

View File

@@ -45,7 +45,6 @@
#include <boost/range/adaptor/sliced.hpp>
#include "batchlog_manager.hh"
#include "canonical_mutation.hh"
#include "service/storage_service.hh"
#include "service/storage_proxy.hh"
#include "system_keyspace.hh"
@@ -118,14 +117,14 @@ mutation db::batchlog_manager::get_batch_log_mutation_for(const std::vector<muta
auto key = partition_key::from_singular(*schema, id);
auto timestamp = api::new_timestamp();
auto data = [this, &mutations] {
std::vector<canonical_mutation> fm(mutations.begin(), mutations.end());
std::vector<frozen_mutation> fm(mutations.begin(), mutations.end());
const auto size = std::accumulate(fm.begin(), fm.end(), size_t(0), [](size_t s, auto& m) {
return s + serializer<canonical_mutation>{m}.size();
return s + serializer<frozen_mutation>{m}.size();
});
bytes buf(bytes::initialized_later(), size);
data_output out(buf);
for (auto& m : fm) {
serializer<canonical_mutation>{m}(out);
serializer<frozen_mutation>{m}(out);
}
return buf;
}();
@@ -153,60 +152,49 @@ future<> db::batchlog_manager::replay_all_failed_batches() {
auto batch = [this, limiter](const cql3::untyped_result_set::row& row) {
auto written_at = row.get_as<db_clock::time_point>("written_at");
auto id = row.get_as<utils::UUID>("id");
// enough time for the actual write + batchlog entry mutation delivery (two separate requests).
auto timeout = get_batch_log_timeout();
if (db_clock::now() < written_at + timeout) {
logger.debug("Skipping replay of {}, too fresh", id);
return make_ready_future<>();
}
// check version of serialization format
if (!row.has("version")) {
logger.warn("Skipping logged batch because of unknown version");
return make_ready_future<>();
}
auto version = row.get_as<int32_t>("version");
if (version != net::messaging_service::current_version) {
logger.warn("Skipping logged batch because of incorrect version");
return make_ready_future<>();
}
// not used currently. ever?
//auto version = row.has("version") ? row.get_as<uint32_t>("version") : /*MessagingService.VERSION_12*/6u;
auto id = row.get_as<utils::UUID>("id");
auto data = row.get_blob("data");
logger.debug("Replaying batch {}", id);
auto fms = make_lw_shared<std::deque<canonical_mutation>>();
auto fms = make_lw_shared<std::deque<frozen_mutation>>();
data_input in(data);
while (in.has_next()) {
fms->emplace_back(serializer<canonical_mutation>::read(in));
fms->emplace_back(serializer<frozen_mutation>::read(in));
}
auto mutations = make_lw_shared<std::vector<mutation>>();
auto size = data.size();
return map_reduce(*fms, [this, written_at] (canonical_mutation& fm) {
return system_keyspace::get_truncated_at(fm.column_family_id()).then([written_at, &fm] (db_clock::time_point t) ->
std::experimental::optional<std::reference_wrapper<canonical_mutation>> {
if (written_at > t) {
return { std::ref(fm) };
} else {
return {};
}
});
},
std::vector<mutation>(),
[this] (std::vector<mutation> mutations, std::experimental::optional<std::reference_wrapper<canonical_mutation>> fm) {
if (fm) {
schema_ptr s = _qp.db().local().find_schema(fm.value().get().column_family_id());
mutations.emplace_back(fm.value().get().to_mutation(s));
return repeat([this, fms = std::move(fms), written_at, mutations]() mutable {
if (fms->empty()) {
return make_ready_future<stop_iteration>(stop_iteration::yes);
}
return mutations;
}).then([this, id, limiter, written_at, size, fms] (std::vector<mutation> mutations) {
if (mutations.empty()) {
auto& fm = fms->front();
auto mid = fm.column_family_id();
return system_keyspace::get_truncated_at(mid).then([this, &fm, written_at, mutations](db_clock::time_point t) {
warn(unimplemented::cause::SCHEMA_CHANGE);
auto schema = local_schema_registry().get(fm.schema_version());
if (written_at > t) {
mutations->emplace_back(fm.unfreeze(schema));
}
}).then([fms] {
fms->pop_front();
return make_ready_future<stop_iteration>(stop_iteration::no);
});
}).then([this, id, mutations, limiter, written_at, size] {
if (mutations->empty()) {
return make_ready_future<>();
}
const auto ttl = [this, &mutations, written_at]() -> clock_type {
const auto ttl = [this, mutations, written_at]() -> clock_type {
/*
* Calculate ttl for the mutations' hints (and reduce ttl by the time the mutations spent in the batchlog).
* This ensures that deletes aren't "undone" by an old batch replay.
@@ -228,8 +216,8 @@ future<> db::batchlog_manager::replay_all_failed_batches() {
// Our normal write path does not add much redundancy to the dispatch, and rate is handled after send
// in both cases.
// FIXME: verify that the above is reasonably true.
return limiter->reserve(size).then([this, mutations = std::move(mutations), id] {
return _qp.proxy().local().mutate(mutations, db::consistency_level::ANY);
return limiter->reserve(size).then([this, mutations, id] {
return _qp.proxy().local().mutate(std::move(*mutations), db::consistency_level::ANY);
});
}).then([this, id] {
// delete batch

View File

@@ -64,8 +64,6 @@
#include "utils/crc.hh"
#include "utils/runtime.hh"
#include "log.hh"
#include "commitlog_entry.hh"
#include "service/priority_manager.hh"
static logging::logger logger("commitlog");
@@ -157,9 +155,6 @@ public:
bool _shutdown = false;
semaphore _new_segment_semaphore;
semaphore _write_semaphore;
semaphore _flush_semaphore;
scollectd::registrations _regs;
// TODO: verify that we're ok with not-so-great granularity
@@ -175,11 +170,7 @@ public:
uint64_t bytes_slack = 0;
uint64_t segments_created = 0;
uint64_t segments_destroyed = 0;
uint64_t pending_writes = 0;
uint64_t pending_flushes = 0;
uint64_t pending_allocations = 0;
uint64_t write_limit_exceeded = 0;
uint64_t flush_limit_exceeded = 0;
uint64_t pending_operations = 0;
uint64_t total_size = 0;
uint64_t buffer_list_bytes = 0;
uint64_t total_size_on_disk = 0;
@@ -187,73 +178,33 @@ public:
stats totals;
future<> begin_write() {
void begin_op() {
_gate.enter();
++totals.pending_writes; // redundant, given the semaphore, but easier to read
if (totals.pending_writes >= cfg.max_active_writes) {
++totals.write_limit_exceeded;
logger.trace("Write ops overflow: {}. Will block.", totals.pending_writes);
}
return _write_semaphore.wait();
++totals.pending_operations;
}
void end_write() {
_write_semaphore.signal();
--totals.pending_writes;
void end_op() {
--totals.pending_operations;
_gate.leave();
}
future<> begin_flush() {
_gate.enter();
++totals.pending_flushes;
if (totals.pending_flushes >= cfg.max_active_flushes) {
++totals.flush_limit_exceeded;
logger.trace("Flush ops overflow: {}. Will block.", totals.pending_flushes);
}
return _flush_semaphore.wait();
}
void end_flush() {
_flush_semaphore.signal();
--totals.pending_flushes;
_gate.leave();
}
bool should_wait_for_write() const {
return _write_semaphore.waiters() > 0 || _flush_semaphore.waiters() > 0;
}
segment_manager(config c)
: cfg([&c] {
config cfg(c);
if (cfg.commit_log_location.empty()) {
cfg.commit_log_location = "/var/lib/scylla/commitlog";
}
if (cfg.max_active_writes == 0) {
cfg.max_active_writes = // TODO: call someone to get an idea...
25 * smp::count;
}
cfg.max_active_writes = std::max(uint64_t(1), cfg.max_active_writes / smp::count);
if (cfg.max_active_flushes == 0) {
cfg.max_active_flushes = // TODO: call someone to get an idea...
5 * smp::count;
}
cfg.max_active_flushes = std::max(uint64_t(1), cfg.max_active_flushes / smp::count);
return cfg;
}())
, max_size(std::min<size_t>(std::numeric_limits<position_type>::max(), std::max<size_t>(cfg.commitlog_segment_size_in_mb, 1) * 1024 * 1024))
, max_mutation_size(max_size >> 1)
, max_disk_size(size_t(std::ceil(cfg.commitlog_total_space_in_mb / double(smp::count))) * 1024 * 1024)
, _write_semaphore(cfg.max_active_writes)
, _flush_semaphore(cfg.max_active_flushes)
: cfg(c), max_size(
std::min<size_t>(std::numeric_limits<position_type>::max(),
std::max<size_t>(cfg.commitlog_segment_size_in_mb,
1) * 1024 * 1024)), max_mutation_size(
max_size >> 1), max_disk_size(
size_t(
std::ceil(
cfg.commitlog_total_space_in_mb
/ double(smp::count))) * 1024 * 1024)
{
assert(max_size > 0);
if (cfg.commit_log_location.empty()) {
cfg.commit_log_location = "/var/lib/scylla/commitlog";
}
logger.trace("Commitlog {} maximum disk size: {} MB / cpu ({} cpus)",
cfg.commit_log_location, max_disk_size / (1024 * 1024),
smp::count);
_regs = create_counters();
}
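The max_disk_size computation in the constructor above splits the configured total evenly across CPUs, rounding up. A minimal sketch of just that expression; the function name is illustrative, not part of the patch.

```cpp
#include <cmath>
#include <cstddef>

// Per-shard commitlog disk budget: the configured total (in MB) is
// divided by the CPU count, rounding up so the per-shard shares
// cover the whole configured budget, then converted to bytes.
size_t per_shard_disk_size(size_t total_space_in_mb, unsigned cpus) {
    return size_t(std::ceil(total_space_in_mb / double(cpus))) * 1024 * 1024;
}
```

Rounding up means the shards' combined budget can slightly exceed the configured total (e.g. 10 MB over 4 shards gives 3 MB each), which trades a little overshoot for a simple per-shard limit.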
~segment_manager() {
@@ -287,8 +238,6 @@ public:
}
std::vector<sstring> get_active_names() const;
uint64_t get_num_dirty_segments() const;
uint64_t get_num_active_segments() const;
using buffer_type = temporary_buffer<char>;
@@ -392,39 +341,9 @@ class db::commitlog::segment: public enable_lw_shared_from_this<segment> {
std::unordered_map<cf_id_type, position_type> _cf_dirty;
time_point _sync_time;
seastar::gate _gate;
uint64_t _write_waiters = 0;
semaphore _queue;
std::unordered_set<table_schema_version> _known_schema_versions;
friend std::ostream& operator<<(std::ostream&, const segment&);
friend class segment_manager;
future<> begin_flush() {
// This maintains the semantics of only using the write-lock
// as a gate for flushing, i.e. once we've begun a flush for position X
// we are ok with writes to positions > X
return _dwrite.write_lock().then(std::bind(&segment_manager::begin_flush, _segment_manager)).finally([this] {
_dwrite.write_unlock();
});
}
void end_flush() {
_segment_manager->end_flush();
}
future<> begin_write() {
// This maintains the semantics of only using the write-lock
// as a gate for flushing, i.e. once we've begun a flush for position X
// we are ok with writes to positions > X
return _dwrite.read_lock().then(std::bind(&segment_manager::begin_write, _segment_manager));
}
void end_write() {
_segment_manager->end_write();
_dwrite.read_unlock();
}
public:
struct cf_mark {
const segment& s;
@@ -446,7 +365,7 @@ public:
segment(segment_manager* m, const descriptor& d, file && f, bool active)
: _segment_manager(m), _desc(std::move(d)), _file(std::move(f)), _sync_time(
clock_type::now()), _queue(0)
clock_type::now())
{
++_segment_manager->totals.segments_created;
logger.debug("Created new {} segment {}", active ? "active" : "reserve", *this);
@@ -464,19 +383,9 @@ public:
}
}
bool is_schema_version_known(schema_ptr s) {
return _known_schema_versions.count(s->version());
}
void add_schema_version(schema_ptr s) {
_known_schema_versions.emplace(s->version());
}
void forget_schema_versions() {
_known_schema_versions.clear();
}
bool must_sync() {
if (_segment_manager->cfg.mode == sync_mode::BATCH) {
return false;
return true;
}
auto now = clock_type::now();
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
@@ -492,9 +401,8 @@ public:
*/
future<sseg_ptr> finish_and_get_new() {
_closed = true;
return maybe_wait_for_write(sync()).then([](sseg_ptr s) {
return s->_segment_manager->active_segment();
});
sync();
return _segment_manager->active_segment();
}
void reset_sync_time() {
_sync_time = clock_type::now();
@@ -509,7 +417,7 @@ public:
logger.trace("Sync not needed {}: ({} / {})", *this, position(), _flush_pos);
return make_ready_future<sseg_ptr>(shared_from_this());
}
return cycle().then([](sseg_ptr seg) {
return cycle().then([](auto seg) {
return seg->flush();
});
}
@@ -532,14 +440,16 @@ public:
// This is not 100% necessary, we really only need the ones below our flush pos,
// but since we pretty much assume that task ordering will make this the case anyway...
return begin_flush().then(
return _dwrite.write_lock().then(
[this, me, pos]() mutable {
_dwrite.write_unlock(); // release it already.
pos = std::max(pos, _file_pos);
if (pos <= _flush_pos) {
logger.trace("{} already synced! ({} < {})", *this, pos, _flush_pos);
return make_ready_future<sseg_ptr>(std::move(me));
}
return _file.flush().then_wrapped([this, pos, me](future<> f) {
_segment_manager->begin_op();
return _file.flush().then_wrapped([this, pos, me](auto f) {
try {
f.get();
// TODO: retry/ignore/fail/stop - optional behaviour in origin.
@@ -552,50 +462,16 @@ public:
logger.error("Failed to flush commits to disk: {}", std::current_exception());
throw;
}
}).finally([this, me] {
_segment_manager->end_op();
});
}).finally([this] {
end_flush();
});
});
}
/**
* Allocate a new buffer
*/
void new_buffer(size_t s) {
assert(_buffer.empty());
auto overhead = segment_overhead_size;
if (_file_pos == 0) {
overhead += descriptor_header_size;
}
auto a = align_up(s + overhead, alignment);
auto k = std::max(a, default_size);
for (;;) {
try {
_buffer = _segment_manager->acquire_buffer(k);
break;
} catch (std::bad_alloc&) {
logger.warn("Could not allocate {} k bytes output buffer ({} k required)", k / 1024, a / 1024);
if (k > a) {
k = std::max(a, k / 2);
logger.debug("Trying reduced size: {} k", k / 1024);
continue;
}
throw;
}
}
_buf_pos = overhead;
auto * p = reinterpret_cast<uint32_t *>(_buffer.get_write());
std::fill(p, p + overhead, 0);
_segment_manager->totals.total_size += k;
}
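The retry loop in `new_buffer()` degrades gracefully under memory pressure: round the request up to the alignment, start from the larger of that and the default buffer size, and on `bad_alloc` halve the attempt, never dropping below the aligned minimum actually required. A hedged standalone sketch of the same loop (names and the `try_alloc` callback are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <new>

// Round v up to a multiple of a (a must be a power of two).
constexpr size_t align_up(size_t v, size_t a) {
    return (v + a - 1) & ~(a - 1);
}

// Try progressively smaller buffers: start at max(aligned need, default),
// halve on failure, but never go below the aligned minimum. Throws
// bad_alloc only when even the minimum cannot be satisfied.
size_t acquire_with_fallback(size_t need, size_t def, size_t alignment,
                             const std::function<bool(size_t)>& try_alloc) {
    size_t a = align_up(need, alignment);
    size_t k = std::max(a, def);
    for (;;) {
        if (try_alloc(k)) {
            return k;               // got a buffer of k bytes
        }
        if (k > a) {
            k = std::max(a, k / 2); // retry with a reduced size
            continue;
        }
        throw std::bad_alloc();
    }
}
```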
/**
* Send any buffer contents to disk and get a new tmp buffer
*/
// See class comment for info
future<sseg_ptr> cycle() {
future<sseg_ptr> cycle(size_t s = 0) {
auto size = clear_buffer_slack();
auto buf = std::move(_buffer);
auto off = _file_pos;
@@ -603,6 +479,36 @@ public:
_file_pos += size;
_buf_pos = 0;
// if we need new buffer, get one.
// TODO: keep a queue of available buffers?
if (s > 0) {
auto overhead = segment_overhead_size;
if (_file_pos == 0) {
overhead += descriptor_header_size;
}
auto a = align_up(s + overhead, alignment);
auto k = std::max(a, default_size);
for (;;) {
try {
_buffer = _segment_manager->acquire_buffer(k);
break;
} catch (std::bad_alloc&) {
logger.warn("Could not allocate {} k bytes output buffer ({} k required)", k / 1024, a / 1024);
if (k > a) {
k = std::max(a, k / 2);
logger.debug("Trying reduced size: {} k", k / 1024);
continue;
}
throw;
}
}
_buf_pos = overhead;
auto * p = reinterpret_cast<uint32_t *>(_buffer.get_write());
std::fill(p, p + overhead, 0);
_segment_manager->totals.total_size += k;
}
auto me = shared_from_this();
assert(!me.owned());
@@ -639,15 +545,13 @@ public:
out.write(uint32_t(_file_pos));
out.write(crc.checksum());
forget_schema_versions();
// acquire read lock
return begin_write().then([this, size, off, buf = std::move(buf), me]() mutable {
return _dwrite.read_lock().then([this, size, off, buf = std::move(buf), me]() mutable {
auto written = make_lw_shared<size_t>(0);
auto p = buf.get();
_segment_manager->begin_op();
return repeat([this, size, off, written, p]() mutable {
auto&& priority_class = service::get_local_commitlog_priority();
return _file.dma_write(off + *written, p + *written, size - *written, priority_class).then_wrapped([this, size, written](future<size_t>&& f) {
return _file.dma_write(off + *written, p + *written, size - *written).then_wrapped([this, size, written](auto&& f) {
try {
auto bytes = std::get<0>(f.get());
*written += bytes;
@@ -671,59 +575,20 @@ public:
});
}).finally([this, buf = std::move(buf)]() mutable {
_segment_manager->release_buffer(std::move(buf));
_segment_manager->end_op();
});
}).then([me] {
return make_ready_future<sseg_ptr>(std::move(me));
}).finally([me, this]() {
end_write(); // release
});
}
future<sseg_ptr> maybe_wait_for_write(future<sseg_ptr> f) {
if (_segment_manager->should_wait_for_write()) {
++_write_waiters;
logger.trace("Too many pending writes. Must wait.");
return f.finally([this] {
if (--_write_waiters == 0) {
_queue.signal(_queue.waiters());
}
});
}
return make_ready_future<sseg_ptr>(shared_from_this());
}
/**
* If an allocation causes a write, and the write causes a block,
* any allocations post that need to wait for this to finish,
* otherwise we will just continue building up more write queue
* eventually (+ lose more ordering)
*
* Some caution here, since maybe_wait_for_write actually
* releases _all_ queued up ops when finishing, we could get
* "bursts" of alloc->write, causing build-ups anyway.
* This should be measured properly. For now I am hoping this
* will work out as these should "block as a group". However,
* buffer memory usage might grow...
*/
bool must_wait_for_alloc() {
return _write_waiters > 0;
}
future<sseg_ptr> wait_for_alloc() {
auto me = shared_from_this();
++_segment_manager->totals.pending_allocations;
logger.trace("Previous allocation is blocking. Must wait.");
return _queue.wait().then([me] { // TODO: do we need a finally?
--me->_segment_manager->totals.pending_allocations;
return make_ready_future<sseg_ptr>(me);
_dwrite.read_unlock(); // release
});
}
/**
* Add a "mutation" to the segment.
*/
future<replay_position> allocate(const cf_id_type& id, shared_ptr<entry_writer> writer) {
const auto size = writer->size(*this);
future<replay_position> allocate(const cf_id_type& id, size_t size,
serializer_func func) {
const auto s = size + entry_overhead_size; // total size
if (s > _segment_manager->max_mutation_size) {
return make_exception_future<replay_position>(
@@ -732,26 +597,23 @@ public:
+ " bytes is too large for the maximum size of "
+ std::to_string(_segment_manager->max_mutation_size)));
}
std::experimental::optional<future<sseg_ptr>> op;
if (must_sync()) {
op = sync();
} else if (must_wait_for_alloc()) {
op = wait_for_alloc();
} else if (!is_still_allocating() || position() + s > _segment_manager->max_size) { // would we make the file too big?
// do this in next segment instead.
op = finish_and_get_new();
} else if (_buffer.empty()) {
new_buffer(s);
} else if (s > (_buffer.size() - _buf_pos)) { // enough data?
op = maybe_wait_for_write(cycle());
}
if (op) {
return op->then([id, writer = std::move(writer)] (sseg_ptr new_seg) mutable {
return new_seg->allocate(id, std::move(writer));
});
// would we make the file too big?
for (;;) {
if (position() + s > _segment_manager->max_size) {
// do this in next segment instead.
return finish_and_get_new().then(
[id, size, func = std::move(func)](auto new_seg) {
return new_seg->allocate(id, size, func);
});
}
// enough data?
if (s > (_buffer.size() - _buf_pos)) {
// TODO: if we have too many writes running, maybe we should
// wait for this?
cycle(s);
continue; // re-check file size overflow
}
break;
}
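The re-check loop above makes a three-way decision for each entry: it either fits in the current buffer, forces the buffer to be cycled to disk (then re-checks), or would overflow the segment entirely and must go to a fresh one. A small sketch of that decision, with illustrative names rather than Scylla's:

```cpp
#include <cassert>
#include <cstddef>

enum class placement { fits, needs_cycle, needs_new_segment };

// position: current write offset in the segment; entry: total entry size
// (payload + overhead); buf_size/buf_pos describe the in-memory buffer.
placement place(size_t position, size_t entry, size_t max_segment,
                size_t buf_size, size_t buf_pos) {
    if (position + entry > max_segment) {
        return placement::needs_new_segment; // do this in next segment
    }
    if (entry > buf_size - buf_pos) {
        return placement::needs_cycle;       // flush buffer, then re-check
    }
    return placement::fits;
}
```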
_gate.enter(); // this might throw. I guess we accept this?
@@ -772,7 +634,7 @@ public:
out.write(crc.checksum());
// actual data
writer->write(*this, out);
func(out);
crc.process_bytes(p + 2 * sizeof(uint32_t), size);
@@ -783,8 +645,9 @@ public:
_gate.leave();
if (_segment_manager->cfg.mode == sync_mode::BATCH) {
return sync().then([rp](sseg_ptr) {
// finally, check if we're required to sync.
if (must_sync()) {
return sync().then([rp](auto seg) {
return make_ready_future<replay_position>(rp);
});
}
@@ -873,7 +736,7 @@ db::commitlog::segment_manager::list_descriptors(sstring dirname) {
}
return make_ready_future<std::experimental::optional<directory_entry_type>>(de.type);
};
return entry_type(de).then([this, de](std::experimental::optional<directory_entry_type> type) {
return entry_type(de).then([this, de](auto type) {
if (type == directory_entry_type::regular && de.name[0] != '.') {
try {
_result.emplace_back(de.name);
@@ -890,7 +753,7 @@ db::commitlog::segment_manager::list_descriptors(sstring dirname) {
}
};
return engine().open_directory(dirname).then([this, dirname](file dir) {
return engine().open_directory(dirname).then([this, dirname](auto dir) {
auto h = make_lw_shared<helper>(std::move(dirname), std::move(dir));
return h->done().then([h]() {
return make_ready_future<std::vector<db::commitlog::descriptor>>(std::move(h->_result));
@@ -899,7 +762,7 @@ db::commitlog::segment_manager::list_descriptors(sstring dirname) {
}
future<> db::commitlog::segment_manager::init() {
return list_descriptors(cfg.commit_log_location).then([this](std::vector<descriptor> descs) {
return list_descriptors(cfg.commit_log_location).then([this](auto descs) {
segment_id_type id = std::chrono::duration_cast<std::chrono::milliseconds>(runtime::get_boot_time().time_since_epoch()).count() + 1;
for (auto& d : descs) {
id = std::max(id, replay_position(d.id).base_id());
@@ -969,23 +832,9 @@ scollectd::registrations db::commitlog::segment_manager::create_counters() {
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "queue_length", "pending_writes")
, make_typed(data_type::GAUGE, totals.pending_writes)
, per_cpu_plugin_instance, "queue_length", "pending_operations")
, make_typed(data_type::GAUGE, totals.pending_operations)
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "queue_length", "pending_flushes")
, make_typed(data_type::GAUGE, totals.pending_flushes)
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "total_operations", "write_limit_exceeded")
, make_typed(data_type::DERIVE, totals.write_limit_exceeded)
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "total_operations", "flush_limit_exceeded")
, make_typed(data_type::DERIVE, totals.flush_limit_exceeded)
),
add_polled_metric(type_instance_id("commitlog"
, per_cpu_plugin_instance, "memory", "total_size")
, make_typed(data_type::GAUGE, totals.total_size)
@@ -1114,7 +963,7 @@ std::ostream& db::operator<<(std::ostream& out, const db::replay_position& p) {
void db::commitlog::segment_manager::discard_unused_segments() {
logger.trace("Checking for unused segments ({} active)", _segments.size());
auto i = std::remove_if(_segments.begin(), _segments.end(), [=](sseg_ptr s) {
auto i = std::remove_if(_segments.begin(), _segments.end(), [=](auto s) {
if (s->can_delete()) {
logger.debug("Segment {} is unused", *s);
return true;
@@ -1208,7 +1057,7 @@ void db::commitlog::segment_manager::on_timer() {
return this->allocate_segment(false).then([this](sseg_ptr s) {
if (!_shutdown) {
// insertion sort.
auto i = std::upper_bound(_reserve_segments.begin(), _reserve_segments.end(), s, [](sseg_ptr s1, sseg_ptr s2) {
auto i = std::upper_bound(_reserve_segments.begin(), _reserve_segments.end(), s, [](auto s1, auto s2) {
const descriptor& d1 = s1->_desc;
const descriptor& d2 = s2->_desc;
return d1.id < d2.id;
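The "insertion sort" comment above describes keeping `_reserve_segments` ordered by descriptor id: find the first element greater than the new one with `std::upper_bound` and insert there, so the vector stays sorted after every insertion. A minimal sketch with plain ids standing in for segment descriptors:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Insert id into the already-sorted vector v, preserving the order.
// upper_bound returns the first position whose element compares greater,
// mirroring the comparator on descriptor ids used above.
void insert_sorted(std::vector<int>& v, int id) {
    auto i = std::upper_bound(v.begin(), v.end(), id,
                              [](int a, int b) { return a < b; });
    v.insert(i, id);
}
```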
@@ -1220,7 +1069,7 @@ void db::commitlog::segment_manager::on_timer() {
--_reserve_allocating;
});
});
}).handle_exception([](std::exception_ptr ep) {
}).handle_exception([](auto ep) {
logger.warn("Exception in segment reservation: {}", ep);
});
arm();
@@ -1237,19 +1086,6 @@ std::vector<sstring> db::commitlog::segment_manager::get_active_names() const {
return res;
}
uint64_t db::commitlog::segment_manager::get_num_dirty_segments() const {
return std::count_if(_segments.begin(), _segments.end(), [](sseg_ptr s) {
return !s->is_still_allocating() && !s->is_clean();
});
}
uint64_t db::commitlog::segment_manager::get_num_active_segments() const {
return std::count_if(_segments.begin(), _segments.end(), [](sseg_ptr s) {
return s->is_still_allocating();
});
}
db::commitlog::segment_manager::buffer_type db::commitlog::segment_manager::acquire_buffer(size_t s) {
auto i = _temp_buffers.begin();
auto e = _temp_buffers.end();
@@ -1292,44 +1128,8 @@ void db::commitlog::segment_manager::release_buffer(buffer_type&& b) {
*/
future<db::replay_position> db::commitlog::add(const cf_id_type& id,
size_t size, serializer_func func) {
class serializer_func_entry_writer final : public entry_writer {
serializer_func _func;
size_t _size;
public:
serializer_func_entry_writer(size_t sz, serializer_func func)
: _func(std::move(func)), _size(sz)
{ }
virtual size_t size(segment&) override { return _size; }
virtual void write(segment&, output& out) override {
_func(out);
}
};
auto writer = ::make_shared<serializer_func_entry_writer>(size, std::move(func));
return _segment_manager->active_segment().then([id, writer] (auto s) {
return s->allocate(id, writer);
});
}
future<db::replay_position> db::commitlog::add_entry(const cf_id_type& id, const commitlog_entry_writer& cew)
{
class cl_entry_writer final : public entry_writer {
commitlog_entry_writer _writer;
public:
cl_entry_writer(const commitlog_entry_writer& wr) : _writer(wr) { }
virtual size_t size(segment& seg) override {
_writer.set_with_schema(!seg.is_schema_version_known(_writer.schema()));
return _writer.size();
}
virtual void write(segment& seg, output& out) override {
if (_writer.with_schema()) {
seg.add_schema_version(_writer.schema());
}
_writer.write(out);
}
};
auto writer = ::make_shared<cl_entry_writer>(cew);
return _segment_manager->active_segment().then([id, writer] (auto s) {
return s->allocate(id, writer);
return _segment_manager->active_segment().then([=](auto s) {
return s->allocate(id, size, std::move(func));
});
}
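The `entry_writer` indirection in the removed lines above exists because an entry's size can depend on per-segment state: the schema (column mapping) is written only the first time a given segment sees that schema version, so both `size()` and `write()` take the segment. A simplified sketch of that pattern (types and names are illustrative, with strings standing in for the real serializers):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

struct segment_state {
    bool schema_known = false; // has this segment recorded the schema yet?
};

// size() and write() both see the segment, so the size quoted up front
// matches what write() will actually emit for this particular segment.
struct entry_writer {
    virtual size_t size(segment_state&) = 0;
    virtual void write(segment_state&, std::string& out) = 0;
    virtual ~entry_writer() = default;
};

struct schema_aware_writer final : entry_writer {
    std::string schema;  // serialized column mapping (illustrative)
    std::string payload; // serialized mutation (illustrative)
    size_t size(segment_state& s) override {
        // Pay for the schema only the first time this segment sees it.
        return payload.size() + (s.schema_known ? 0 : schema.size());
    }
    void write(segment_state& s, std::string& out) override {
        if (!s.schema_known) {
            out += schema;
            s.schema_known = true;
        }
        out += payload;
    }
};
```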
@@ -1400,18 +1200,11 @@ future<> db::commitlog::shutdown() {
return _segment_manager->shutdown();
}
size_t db::commitlog::max_record_size() const {
return _segment_manager->max_mutation_size - segment::entry_overhead_size;
}
uint64_t db::commitlog::max_active_writes() const {
return _segment_manager->cfg.max_active_writes;
}
uint64_t db::commitlog::max_active_flushes() const {
return _segment_manager->cfg.max_active_flushes;
}
future<> db::commitlog::clear() {
return _segment_manager->clear();
}
@@ -1593,6 +1386,10 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
return skip(slack);
}
if (start_off > pos) {
return skip(size - entry_header_size);
}
return fin.read_exactly(size - entry_header_size).then([this, size, crc = std::move(crc), rp](temporary_buffer<char> buf) mutable {
advance(buf);
@@ -1662,28 +1459,7 @@ uint64_t db::commitlog::get_flush_count() const {
}
uint64_t db::commitlog::get_pending_tasks() const {
return _segment_manager->totals.pending_writes
+ _segment_manager->totals.pending_flushes;
}
uint64_t db::commitlog::get_pending_writes() const {
return _segment_manager->totals.pending_writes;
}
uint64_t db::commitlog::get_pending_flushes() const {
return _segment_manager->totals.pending_flushes;
}
uint64_t db::commitlog::get_pending_allocations() const {
return _segment_manager->totals.pending_allocations;
}
uint64_t db::commitlog::get_write_limit_exceeded_count() const {
return _segment_manager->totals.write_limit_exceeded;
}
uint64_t db::commitlog::get_flush_limit_exceeded_count() const {
return _segment_manager->totals.flush_limit_exceeded;
return _segment_manager->totals.pending_operations;
}
uint64_t db::commitlog::get_num_segments_created() const {
@@ -1694,14 +1470,6 @@ uint64_t db::commitlog::get_num_segments_destroyed() const {
return _segment_manager->totals.segments_destroyed;
}
uint64_t db::commitlog::get_num_dirty_segments() const {
return _segment_manager->get_num_dirty_segments();
}
uint64_t db::commitlog::get_num_active_segments() const {
return _segment_manager->get_num_active_segments();
}
future<std::vector<db::commitlog::descriptor>> db::commitlog::list_existing_descriptors() const {
return list_existing_descriptors(active_config().commit_log_location);
}


@@ -48,7 +48,6 @@
#include "core/stream.hh"
#include "utils/UUID.hh"
#include "replay_position.hh"
#include "commitlog_entry.hh"
class file;
@@ -115,10 +114,6 @@ public:
// Max number of segments to keep in pre-alloc reserve.
// Not (yet) configurable from scylla.conf.
uint64_t max_reserve_segments = 12;
// Max active writes/flushes. Default value
// zero means try to figure it out ourselves
uint64_t max_active_writes = 0;
uint64_t max_active_flushes = 0;
sync_mode mode = sync_mode::PERIODIC;
};
@@ -186,13 +181,6 @@ public:
});
}
/**
* Add an entry to the commit log.
*
* @param entry_writer a writer responsible for writing the entry
*/
future<replay_position> add_entry(const cf_id_type& id, const commitlog_entry_writer& entry_writer);
/**
* Modifies the per-CF dirty cursors of any commit log segments for the column family according to the position
* given. Discards any commit log segments that are no longer used.
@@ -245,37 +233,14 @@ public:
uint64_t get_completed_tasks() const;
uint64_t get_flush_count() const;
uint64_t get_pending_tasks() const;
uint64_t get_pending_writes() const;
uint64_t get_pending_flushes() const;
uint64_t get_pending_allocations() const;
uint64_t get_write_limit_exceeded_count() const;
uint64_t get_flush_limit_exceeded_count() const;
uint64_t get_num_segments_created() const;
uint64_t get_num_segments_destroyed() const;
/**
* Get number of inactive (finished), segments lingering
* due to still being dirty
*/
uint64_t get_num_dirty_segments() const;
/**
* Get number of active segments, i.e. still being allocated to
*/
uint64_t get_num_active_segments() const;
/**
* Returns the largest amount of data that can be written in a single "mutation".
*/
size_t max_record_size() const;
/**
* Return max allowed pending writes (per this shard)
*/
uint64_t max_active_writes() const;
/**
* Return max allowed pending flushes (per this shard)
*/
uint64_t max_active_flushes() const;
future<> clear();
const config& active_config() const;
@@ -318,11 +283,6 @@ public:
const sstring&, commit_load_reader_func, position_type = 0);
private:
commitlog(config);
struct entry_writer {
virtual size_t size(segment&) = 0;
virtual void write(segment&, output&) = 0;
};
};
}


@@ -1,88 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <experimental/optional>
#include "frozen_mutation.hh"
#include "schema.hh"
namespace stdx = std::experimental;
class commitlog_entry_writer {
schema_ptr _schema;
db::serializer<column_mapping> _column_mapping_serializer;
const frozen_mutation& _mutation;
bool _with_schema = true;
public:
commitlog_entry_writer(schema_ptr s, const frozen_mutation& fm)
: _schema(std::move(s)), _column_mapping_serializer(_schema->get_column_mapping()), _mutation(fm)
{ }
void set_with_schema(bool value) {
_with_schema = value;
}
bool with_schema() {
return _with_schema;
}
schema_ptr schema() const {
return _schema;
}
size_t size() const {
size_t size = data_output::serialized_size<bool>();
if (_with_schema) {
size += _column_mapping_serializer.size();
}
size += _mutation.representation().size();
return size;
}
void write(data_output& out) const {
out.write(_with_schema);
if (_with_schema) {
_column_mapping_serializer.write(out);
}
auto bv = _mutation.representation();
out.write(bv.begin(), bv.end());
}
};
class commitlog_entry_reader {
frozen_mutation _mutation;
stdx::optional<column_mapping> _column_mapping;
public:
commitlog_entry_reader(const temporary_buffer<char>& buffer)
: _mutation(bytes())
{
data_input in(buffer);
bool has_column_mapping = in.read<bool>();
if (has_column_mapping) {
_column_mapping = db::serializer<::column_mapping>::read(in);
}
auto bv = in.read_view(in.avail());
_mutation = frozen_mutation(bytes(bv.begin(), bv.end()));
}
const stdx::optional<column_mapping>& get_column_mapping() const { return _column_mapping; }
const frozen_mutation& mutation() const { return _mutation; }
};
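The writer/reader pair above frames each entry as a leading flag byte saying whether a serialized column mapping follows, then the raw mutation bytes running to the end of the entry. A hedged round-trip sketch of that framing, with strings standing in for the real serializers and a one-byte length prefix on the mapping added as an assumption so this standalone version can locate the payload:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <utility>

// flag byte | [mapping length, mapping bytes] | mutation bytes
std::string encode(const std::optional<std::string>& mapping,
                   const std::string& mutation) {
    std::string out;
    out.push_back(mapping ? 1 : 0);
    if (mapping) {
        out.push_back(static_cast<char>(mapping->size()));
        out += *mapping;
    }
    out += mutation;
    return out;
}

std::pair<std::optional<std::string>, std::string>
decode(const std::string& in) {
    size_t pos = 0;
    std::optional<std::string> mapping;
    if (in.at(pos++)) { // flag set: a column mapping follows
        size_t n = static_cast<unsigned char>(in.at(pos++));
        mapping = in.substr(pos, n);
        pos += n;
    }
    return {mapping, in.substr(pos)}; // rest is the mutation
}
```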


@@ -56,14 +56,10 @@
#include "db/serializer.hh"
#include "cql3/query_processor.hh"
#include "log.hh"
#include "converting_mutation_partition_applier.hh"
#include "schema_registry.hh"
#include "commitlog_entry.hh"
static logging::logger logger("commitlog_replayer");
class db::commitlog_replayer::impl {
std::unordered_map<table_schema_version, column_mapping> _column_mappings;
public:
impl(seastar::sharded<cql3::query_processor>& db);
@@ -74,19 +70,6 @@ public:
uint64_t skipped_mutations = 0;
uint64_t applied_mutations = 0;
uint64_t corrupt_bytes = 0;
stats& operator+=(const stats& s) {
invalid_mutations += s.invalid_mutations;
skipped_mutations += s.skipped_mutations;
applied_mutations += s.applied_mutations;
corrupt_bytes += s.corrupt_bytes;
return *this;
}
stats operator+(const stats& s) const {
stats tmp = *this;
tmp += s;
return tmp;
}
};
future<> process(stats*, temporary_buffer<char> buf, replay_position rp);
@@ -165,6 +148,8 @@ future<> db::commitlog_replayer::impl::init() {
future<db::commitlog_replayer::impl::stats>
db::commitlog_replayer::impl::recover(sstring file) {
logger.info("Replaying {}", file);
replay_position rp{commitlog::descriptor(file)};
auto gp = _min_pos[rp.shard_id()];
@@ -197,29 +182,19 @@ db::commitlog_replayer::impl::recover(sstring file) {
}
future<> db::commitlog_replayer::impl::process(stats* s, temporary_buffer<char> buf, replay_position rp) {
auto shard = rp.shard_id();
if (rp < _min_pos[shard]) {
logger.trace("entry {} is less than global min position. skipping", rp);
s->skipped_mutations++;
return make_ready_future<>();
}
try {
commitlog_entry_reader cer(buf);
auto& fm = cer.mutation();
auto cm_it = _column_mappings.find(fm.schema_version());
if (cm_it == _column_mappings.end()) {
if (!cer.get_column_mapping()) {
throw std::runtime_error(sprint("unknown schema version {}", fm.schema_version()));
}
logger.debug("new schema version {} in entry {}", fm.schema_version(), rp);
cm_it = _column_mappings.emplace(fm.schema_version(), *cer.get_column_mapping()).first;
}
auto shard_id = rp.shard_id();
if (rp < _min_pos[shard_id]) {
logger.trace("entry {} is less than global min position. skipping", rp);
s->skipped_mutations++;
return make_ready_future<>();
}
frozen_mutation fm(bytes(reinterpret_cast<const int8_t *>(buf.get()), buf.size()));
auto uuid = fm.column_family_id();
auto& map = _rpm[shard_id];
auto& map = _rpm[shard];
auto i = map.find(uuid);
if (i != map.end() && rp <= i->second) {
logger.trace("entry {} at {} is younger than recorded replay position {}. skipping", fm.column_family_id(), rp, i->second);
@@ -228,8 +203,7 @@ future<> db::commitlog_replayer::impl::process(stats* s, temporary_buffer<char>
}
auto shard = _qp.local().db().local().shard_of(fm);
return _qp.local().db().invoke_on(shard, [this, cer = std::move(cer), cm_it, rp, shard, s] (database& db) -> future<> {
auto& fm = cer.mutation();
return _qp.local().db().invoke_on(shard, [fm = std::move(fm), rp, shard, s] (database& db) -> future<> {
// TODO: might need better verification that the deserialized mutation
// is schema compatible. My guess is that just applying the mutation
// will not do this.
@@ -245,11 +219,8 @@ future<> db::commitlog_replayer::impl::process(stats* s, temporary_buffer<char>
// their "replay_position" attribute will be empty, which is
// lower than anything the new session will produce.
if (cf.schema()->version() != fm.schema_version()) {
const column_mapping& cm = cm_it->second;
mutation m(fm.decorated_key(*cf.schema()), cf.schema());
converting_mutation_partition_applier v(cm, *cf.schema(), m.partition());
fm.partition().accept(cm, v);
cf.apply(std::move(m));
// TODO: Convert fm to current schema
fail(unimplemented::cause::SCHEMA_CHANGE);
} else {
cf.apply(fm, cf.schema());
}
@@ -292,41 +263,32 @@ future<db::commitlog_replayer> db::commitlog_replayer::create_replayer(seastar::
}
future<> db::commitlog_replayer::recover(std::vector<sstring> files) {
logger.info("Replaying {}", join(", ", files));
return map_reduce(files, [this](auto f) {
logger.debug("Replaying {}", f);
return _impl->recover(f).then([f](impl::stats stats) {
if (stats.corrupt_bytes != 0) {
logger.warn("Corrupted file: {}. {} bytes skipped.", f, stats.corrupt_bytes);
}
logger.debug("Log replay of {} complete, {} replayed mutations ({} invalid, {} skipped)"
, f
, stats.applied_mutations
, stats.invalid_mutations
, stats.skipped_mutations
);
return make_ready_future<impl::stats>(stats);
}).handle_exception([f](auto ep) -> future<impl::stats> {
logger.error("Error recovering {}: {}", f, ep);
try {
std::rethrow_exception(ep);
} catch (std::invalid_argument&) {
logger.error("Scylla cannot process {}. Make sure to fully flush all Cassandra commit log files to sstable before migrating.", f);
throw;
} catch (...) {
throw;
}
});
}, impl::stats(), std::plus<impl::stats>()).then([](impl::stats totals) {
logger.info("Log replay complete, {} replayed mutations ({} invalid, {} skipped)"
, totals.applied_mutations
, totals.invalid_mutations
, totals.skipped_mutations
);
return parallel_for_each(files, [this](auto f) {
return this->recover(f);
});
}
future<> db::commitlog_replayer::recover(sstring f) {
return recover(std::vector<sstring>{ f });
return _impl->recover(f).then([f](impl::stats stats) {
if (stats.corrupt_bytes != 0) {
logger.warn("Corrupted file: {}. {} bytes skipped.", f, stats.corrupt_bytes);
}
logger.info("Log replay of {} complete, {} replayed mutations ({} invalid, {} skipped)"
, f
, stats.applied_mutations
, stats.invalid_mutations
, stats.skipped_mutations
);
}).handle_exception([f](auto ep) {
logger.error("Error recovering {}: {}", f, ep);
try {
std::rethrow_exception(ep);
} catch (std::invalid_argument&) {
logger.error("Scylla cannot process {}. Make sure to fully flush all Cassandra commit log files to sstable before migrating.", f);
throw;
} catch (...) {
throw;
}
});
}


@@ -268,7 +268,7 @@ public:
"Counter writes read the current values before incrementing and writing them back. The recommended value is (16 × number_of_drives)." \
) \
/* Common automatic backup settings */ \
val(incremental_backups, bool, false, Used, \
val(incremental_backups, bool, false, Unused, \
"Backs up data updated since the last snapshot was taken. When enabled, Cassandra creates a hard link to each SSTable flushed or streamed locally in a backups/ subdirectory of the keyspace data. Removing these links is the operator's responsibility.\n" \
"Related information: Enabling incremental backups" \
) \
@@ -718,7 +718,6 @@ public:
val(replace_address_first_boot, sstring, "", Used, "Like replace_address option, but if the node has been bootstrapped successfully it will be ignored. Same as -Dcassandra.replace_address_first_boot.") \
val(override_decommission, bool, false, Used, "Set true to force a decommissioned node to join the cluster") \
val(ring_delay_ms, uint32_t, 30 * 1000, Used, "Time a node waits to hear from other nodes before joining the ring in milliseconds. Same as -Dcassandra.ring_delay_ms in cassandra.") \
val(shutdown_announce_in_ms, uint32_t, 2 * 1000, Used, "Time a node waits after sending gossip shutdown message in milliseconds. Same as -Dcassandra.shutdown_announce_in_ms in cassandra.") \
val(developer_mode, bool, false, Used, "Relax environment checks. Setting to true can reduce performance and reliability significantly.") \
val(skip_wait_for_gossip_to_settle, int32_t, -1, Used, "An integer to configure the wait for gossip to settle. -1: wait normally, 0: do not wait at all, n: wait for at most n polls. Same as -Dcassandra.skip_wait_for_gossip_to_settle in cassandra.") \
val(experimental, bool, false, Used, "Set to true to unlock experimental features.") \


@@ -50,20 +50,16 @@ namespace db {
namespace marshal {
type_parser::type_parser(sstring_view str, size_t idx)
: _str{str.begin(), str.end()}
type_parser::type_parser(const sstring& str, size_t idx)
: _str{str}
, _idx{idx}
{ }
type_parser::type_parser(sstring_view str)
type_parser::type_parser(const sstring& str)
: type_parser{str, 0}
{ }
data_type type_parser::parse(const sstring& str) {
return type_parser(sstring_view(str)).parse();
}
data_type type_parser::parse(sstring_view str) {
return type_parser(str).parse();
}


@@ -62,15 +62,14 @@ class type_parser {
public static final TypeParser EMPTY_PARSER = new TypeParser("", 0);
#endif
type_parser(sstring_view str, size_t idx);
type_parser(const sstring& str, size_t idx);
public:
explicit type_parser(sstring_view str);
explicit type_parser(const sstring& str);
/**
* Parse a string containing an type definition.
*/
static data_type parse(const sstring& str);
static data_type parse(sstring_view str);
#if 0
public static AbstractType<?> parse(CharSequence compareWith) throws SyntaxException, ConfigurationException


@@ -415,16 +415,16 @@ future<std::vector<frozen_mutation>> convert_schema_to_mutations(distributed<ser
if (partition_key == system_keyspace::NAME) {
continue;
}
results.emplace_back(std::move(p.mut()));
results.emplace_back(p.mut());
}
return results;
});
};
auto reduce = [] (auto&& result, auto&& mutations) {
std::move(mutations.begin(), mutations.end(), std::back_inserter(result));
std::copy(mutations.begin(), mutations.end(), std::back_inserter(result));
return std::move(result);
};
return map_reduce(ALL.begin(), ALL.end(), map, std::vector<frozen_mutation>{}, reduce);
return map_reduce(ALL.begin(), ALL.end(), map, std::move(std::vector<frozen_mutation>{}), reduce);
}
future<schema_result>
@@ -703,8 +703,6 @@ static void merge_tables(distributed<service::storage_proxy>& proxy,
auto& ks = db.find_keyspace(s->ks_name());
auto cfg = ks.make_column_family_config(*s);
db.add_column_family(s, cfg);
auto& cf = db.find_column_family(s);
cf.mark_ready_for_writes();
ks.make_directory_for_column_family(s->cf_name(), s->id()).get();
service::get_local_migration_manager().notify_create_column_family(s);
}
@@ -1329,10 +1327,8 @@ schema_ptr create_table_from_mutations(schema_mutations sm, std::experimental::o
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
auto comparator = table_row.get_nonnull<sstring>("comparator");
bool is_compound = cell_comparator::check_compound(comparator);
bool is_compound = cell_comparator::check_compound(table_row.get_nonnull<sstring>("comparator"));
builder.set_is_compound(is_compound);
cell_comparator::read_collections(builder, comparator);
#if 0
CellNameType comparator = CellNames.fromAbstractType(fullRawComparator, isDense);


@@ -251,6 +251,39 @@ std::ostream& operator<<(std::ostream& out, const ring_position& pos) {
return out << "}";
}
size_t ring_position::serialized_size() const {
size_t size = serialize_int32_size; /* _key length */
if (_key) {
size += _key.value().representation().size();
} else {
size += sizeof(int8_t); /* _token_bound */
}
return size + _token.serialized_size();
}
void ring_position::serialize(bytes::iterator& out) const {
_token.serialize(out);
if (_key) {
auto v = _key.value().representation();
serialize_int32(out, v.size());
out = std::copy(v.begin(), v.end(), out);
} else {
serialize_int32(out, 0);
serialize_int8(out, static_cast<int8_t>(_token_bound));
}
}
ring_position ring_position::deserialize(bytes_view& in) {
auto token = token::deserialize(in);
auto size = read_simple<uint32_t>(in);
if (size == 0) {
auto bound = dht::ring_position::token_bound(read_simple<int8_t>(in));
return ring_position(std::move(token), bound);
} else {
return ring_position(std::move(token), partition_key::from_bytes(to_bytes(read_simple_bytes(in, size))));
}
}
unsigned shard_of(const token& t) {
return global_partitioner().shard_of(t);
}
@@ -263,6 +296,29 @@ int token_comparator::operator()(const token& t1, const token& t2) const {
return tri_compare(t1, t2);
}
void token::serialize(bytes::iterator& out) const {
uint8_t kind = _kind == dht::token::kind::before_all_keys ? 0 :
_kind == dht::token::kind::key ? 1 : 2;
serialize_int8(out, kind);
serialize_int16(out, _data.size());
out = std::copy(_data.begin(), _data.end(), out);
}
token token::deserialize(bytes_view& in) {
uint8_t kind = read_simple<uint8_t>(in);
size_t size = read_simple<uint16_t>(in);
return token(kind == 0 ? dht::token::kind::before_all_keys :
kind == 1 ? dht::token::kind::key :
dht::token::kind::after_all_keys,
to_bytes(read_simple_bytes(in, size)));
}
size_t token::serialized_size() const {
return serialize_int8_size // token kind
+ serialize_int16_size // token size
+ _data.size();
}
bool ring_position::equal(const schema& s, const ring_position& other) const {
return tri_compare(s, other) == 0;
}
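The serialization the hunks above add encodes a `ring_position` as the token (16-bit length plus bytes), then a 32-bit key length; a zero length means "no key" and is followed by a single `int8` token bound. A round-trip sketch of that wire format, with plain `std::string`/`std::vector<uint8_t>` as simplified stand-ins for Scylla's `token`, `partition_key`, and `bytes` types:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Simplified model of the diff's ring_position wire format.
struct ring_pos {
    std::string token_data;  // stands in for dht::token's _data
    std::string key;         // empty == no partition key
    int8_t token_bound = 0;  // only meaningful when key is empty

    void serialize(std::vector<uint8_t>& out) const {
        // token: 16-bit big-endian length + bytes (mirrors token::serialize)
        uint16_t tlen = token_data.size();
        out.push_back(tlen >> 8);
        out.push_back(tlen & 0xff);
        out.insert(out.end(), token_data.begin(), token_data.end());
        // key: 32-bit big-endian length, then key bytes or the bound byte
        uint32_t klen = key.size();
        for (int s = 24; s >= 0; s -= 8) {
            out.push_back((klen >> s) & 0xff);
        }
        if (klen) {
            out.insert(out.end(), key.begin(), key.end());
        } else {
            out.push_back(static_cast<uint8_t>(token_bound));
        }
    }

    static ring_pos deserialize(const uint8_t*& in) {
        ring_pos r;
        uint16_t tlen = (in[0] << 8) | in[1];
        in += 2;
        r.token_data.assign(in, in + tlen);
        in += tlen;
        uint32_t klen = 0;
        for (int i = 0; i < 4; i++) {
            klen = (klen << 8) | *in++;
        }
        if (klen == 0) {
            r.token_bound = static_cast<int8_t>(*in++);
        } else {
            r.key.assign(in, in + klen);
            in += klen;
        }
        return r;
    }
};
```

The zero-length sentinel is what lets `deserialize` distinguish the key-bearing case from the token-bound case without any extra flag byte, matching the branch on `size == 0` in the diff.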


@@ -97,6 +97,11 @@ public:
bool is_maximum() const {
return _kind == kind::after_all_keys;
}
void serialize(bytes::iterator& out) const;
static token deserialize(bytes_view& in);
size_t serialized_size() const;
};
token midpoint_unsigned(const token& t1, const token& t2);
@@ -333,12 +338,6 @@ public:
, _key(std::experimental::make_optional(std::move(key)))
{ }
ring_position(dht::token token, token_bound bound, std::experimental::optional<partition_key> key)
: _token(std::move(token))
, _token_bound(bound)
, _key(std::move(key))
{ }
ring_position(const dht::decorated_key& dk)
: _token(dk._token)
, _key(std::experimental::make_optional(dk._key))
@@ -380,6 +379,10 @@ public:
// "less" comparator corresponding to tri_compare()
bool less_compare(const schema&, const ring_position&) const;
size_t serialized_size() const;
void serialize(bytes::iterator& out) const;
static ring_position deserialize(bytes_view& in);
friend std::ostream& operator<<(std::ostream&, const ring_position&);
};


@@ -107,7 +107,7 @@ public:
, _tokens(std::move(tokens))
, _address(address)
, _description(std::move(description))
, _stream_plan(_description) {
, _stream_plan(_description, true) {
}
range_streamer(distributed<database>& db, token_metadata& tm, inet_address address, sstring description)


@@ -30,20 +30,19 @@ if [ ! -f variables.json ]; then
fi
if [ ! -d packer ]; then
wget https://releases.hashicorp.com/packer/0.8.6/packer_0.8.6_linux_amd64.zip
wget https://dl.bintray.com/mitchellh/packer/packer_0.8.6_linux_amd64.zip
mkdir packer
cd packer
unzip -x ../packer_0.8.6_linux_amd64.zip
cd -
fi
echo "sudo yum remove -y abrt" > scylla_deploy.sh
if [ $LOCALRPM = 0 ]; then
echo "sudo sh -x -e /home/centos/scylla_install_pkg" >> scylla_deploy.sh
echo "sudo sh -x -e /home/centos/scylla_install_pkg; sudo sh -x -e /usr/lib/scylla/scylla_setup -a" > scylla_deploy.sh
else
echo "sudo sh -x -e /home/centos/scylla_install_pkg -l /home/centos" >> scylla_deploy.sh
echo "sudo sh -x -e /home/centos/scylla_install_pkg -l /home/centos; sudo sh -x -e /usr/lib/scylla/scylla_setup -a" > scylla_deploy.sh
fi
echo "sudo sh -x -e /usr/lib/scylla/scylla_setup -a" >> scylla_deploy.sh
chmod a+rx scylla_deploy.sh
packer/packer build -var-file=variables.json scylla.json

dist/ami/scylla.json vendored

@@ -8,52 +8,16 @@
"security_group_id": "{{user `security_group_id`}}",
"region": "{{user `region`}}",
"associate_public_ip_address": "{{user `associate_public_ip_address`}}",
"source_ami": "ami-f3102499",
"source_ami": "ami-8ef1d6e4",
"user_data_file": "user_data.txt",
"instance_type": "{{user `instance_type`}}",
"ssh_username": "centos",
"ssh_timeout": "5m",
"ami_name": "scylla_{{isotime | clean_ami_name}}",
"enhanced_networking": true,
"launch_block_device_mappings": [
{
"device_name": "/dev/sda1",
"volume_size": 10,
"delete_on_termination": true
}
],
"ami_block_device_mappings": [
{
"device_name": "/dev/sdb",
"virtual_name": "ephemeral0"
},
{
"device_name": "/dev/sdc",
"virtual_name": "ephemeral1"
},
{
"device_name": "/dev/sdd",
"virtual_name": "ephemeral2"
},
{
"device_name": "/dev/sde",
"virtual_name": "ephemeral3"
},
{
"device_name": "/dev/sdf",
"virtual_name": "ephemeral4"
},
{
"device_name": "/dev/sdg",
"virtual_name": "ephemeral5"
},
{
"device_name": "/dev/sdh",
"virtual_name": "ephemeral6"
},
{
"device_name": "/dev/sdi",
"virtual_name": "ephemeral7"
"volume_size": 10
}
]
}


@@ -4,6 +4,7 @@
. /etc/os-release
. /etc/sysconfig/scylla-server
if [ ! -f /etc/default/grub ]; then
echo "Unsupported bootloader"
exit 1
@@ -17,7 +18,7 @@ fi
sed -e "s#^GRUB_CMDLINE_LINUX=\"#GRUB_CMDLINE_LINUX=\"hugepagesz=2M hugepages=$NR_HUGEPAGES #" /etc/default/grub > /tmp/grub
mv /tmp/grub /etc/default/grub
if [ "$ID" = "ubuntu" ]; then
grub-mkconfig -o /boot/grub/grub.cfg
grub2-mkconfig -o /boot/grub/grub.cfg
else
grub2-mkconfig -o /boot/grub2/grub.cfg
fi


@@ -29,7 +29,7 @@ if [ "$NAME" = "Ubuntu" ]; then
else
yum install -y ntp ntpdate || true
if [ $AMI -eq 1 ]; then
sed -e s#centos.pool.ntp.org#amazon.pool.ntp.org# /etc/ntp.conf > /tmp/ntp.conf
sed -e s#fedora.pool.ntp.org#amazon.pool.ntp.org# /etc/ntp.conf > /tmp/ntp.conf
mv /tmp/ntp.conf /etc/ntp.conf
fi
if [ "`systemctl is-active ntpd`" = "active" ]; then


@@ -1,8 +1,39 @@
#!/bin/sh -e
if [ "$AMI" = "yes" ] && [ -f /etc/scylla/ami_disabled ]; then
rm /etc/scylla/ami_disabled
exit 1
if [ "$AMI" = "yes" ]; then
RAIDCNT=`grep xvdb /proc/mdstat | wc -l`
RAIDDEV=`grep xvdb /proc/mdstat | awk '{print $1}'`
if [ $RAIDCNT -ge 1 ]; then
echo "RAID already constructed."
if [ "`mount|grep /var/lib/scylla`" = "" ]; then
mount -o noatime /dev/$RAIDDEV /var/lib/scylla
fi
else
echo "RAID is not constructed, going to initialize..."
if [ "$AMI_KEEP_VERSION" != "yes" ]; then
yum update -y
fi
DISKS=""
for i in /dev/xvd{b..z}; do
if [ -b $i ]; then
echo "Found disk $i"
if [ "$DISKS" = "" ]; then
DISKS=$i
else
DISKS="$DISKS,$i"
fi
fi
done
if [ "$DISKS" != "" ]; then
/usr/lib/scylla/scylla_raid_setup -d $DISKS
fi
/usr/lib/scylla/scylla-ami/ds2_configure.py
fi
fi
if [ "$NETWORK_MODE" = "virtio" ]; then


@@ -43,13 +43,6 @@ if [ "`mount|grep /var/lib/scylla`" != "" ]; then
echo "/var/lib/scylla is already mounted"
exit 1
fi
. /etc/os-release
if [ "$NAME" = "Ubuntu" ]; then
apt-get -y install mdadm xfsprogs
else
yum -y install mdadm xfsprogs
fi
mdadm --create --verbose --force --run $RAID --level=0 -c256 --raid-devices=$NR_DISK $DISKS
blockdev --setra 65536 $RAID
mkfs.xfs $RAID -f


@@ -35,6 +35,7 @@ while getopts d:n:al:h OPT; do
esac
done
SYSCONFIG_SETUP_ARGS="-n $NIC"
. /etc/os-release
if [ "$ID" != "ubuntu" ]; then
@@ -44,13 +45,16 @@ if [ "$ID" != "ubuntu" ]; then
mv /tmp/selinux /etc/sysconfig/
fi
if [ $AMI -eq 1 ]; then
SYSCONFIG_SETUP_ARGS="$SYSCONFIG_SETUP_ARGS -N -a"
if [ "$LOCAL_PKG" = "" ]; then
yum update -y
else
SYSCONFIG_SETUP_ARGS="$SYSCONFIG_SETUP_ARGS -k"
fi
grep -v ' - mounts' /etc/cloud/cloud.cfg > /tmp/cloud.cfg
mv /tmp/cloud.cfg /etc/cloud/cloud.cfg
mv /home/centos/scylla-ami/scylla-ami-setup.service /usr/lib/systemd/system/
mv /home/centos/scylla-ami /usr/lib/scylla/scylla-ami
chmod a+rx /usr/lib/scylla/scylla-ami/ds2_configure.py
systemctl daemon-reload
systemctl enable scylla-ami-setup.service
fi
systemctl enable scylla-server.service
systemctl enable scylla-jmx.service
@@ -68,5 +72,5 @@ else
/usr/lib/scylla/scylla_coredump_setup -s
/usr/lib/scylla/scylla_ntp_setup -a
/usr/lib/scylla/scylla_bootparam_setup -a
/usr/lib/scylla/scylla_sysconfig_setup -n $NIC
fi
/usr/lib/scylla/scylla_sysconfig_setup $SYSCONFIG_SETUP_ARGS


@@ -3,17 +3,17 @@
# Copyright (C) 2015 ScyllaDB
print_usage() {
echo "scylla-sysconfig-setup -n eth0 -m posix -p 64 -u scylla -g scylla -r /var/lib/scylla -c /etc/scylla -N -a -k"
echo "scylla-sysconfig-setup -n eth0 -m posix -p 64 -u scylla -g scylla -d /var/lib/scylla -c /etc/scylla -N -a -k"
echo " -n specify NIC"
echo " -m network mode (posix, dpdk)"
echo " -p number of hugepages"
echo " -u user (dpdk requires root)"
echo " -g group (dpdk requires root)"
echo " -r scylla home directory"
echo " -d scylla home directory"
echo " -c scylla config directory"
echo " -N setup NIC's interrupts, RPS, XPS"
echo " -a AMI instance mode"
echo " -d disk count"
echo " -k keep package version on AMI"
exit 1
}
@@ -23,9 +23,19 @@ if [ "$ID" = "ubuntu" ]; then
else
SYSCONFIG=/etc/sysconfig
fi
. $SYSCONFIG/scylla-server
DISK_COUNT=0
NIC=eth0
NETWORK_MODE=posix
NR_HUGEPAGES=64
USER=scylla
GROUP=scylla
SCYLLA_HOME=/var/lib/scylla
SCYLLA_CONF=/etc/scylla
SETUP_NIC=0
SET_NIC="no"
AMI=no
AMI_KEEP_VERSION=no
SCYLLA_ARGS=
while getopts n:m:p:u:g:d:c:Nakh OPT; do
case "$OPT" in
"n")
@@ -43,7 +53,7 @@ while getopts n:m:p:u:g:d:c:Nakh OPT; do
"g")
GROUP=$OPTARG
;;
"r")
"d")
SCYLLA_HOME=$OPTARG
;;
"c")
@@ -55,8 +65,8 @@ while getopts n:m:p:u:g:d:c:Nakh OPT; do
"a")
AMI=yes
;;
"d")
DISK_COUNT=$OPTARG
"k")
AMI_KEEP_VERSION=yes
;;
"h")
print_usage
@@ -69,29 +79,11 @@ echo Setting parameters on $SYSCONFIG/scylla-server
ETHDRV=`/usr/lib/scylla/dpdk_nic_bind.py --status | grep if=$NIC | sed -e "s/^.*drv=//" -e "s/ .*$//"`
ETHPCIID=`/usr/lib/scylla/dpdk_nic_bind.py --status | grep if=$NIC | awk '{print $1}'`
NR_CPU=`cat /proc/cpuinfo |grep processor|wc -l`
NR_SHARDS=$NR_CPU
if [ $NR_CPU -ge 8 ] && [ "$SET_NIC" = "no" ]; then
NR_SHARDS=$((NR_CPU - 1))
if [ $NR_CPU -ge 8 ]; then
NR=$((NR_CPU - 1))
SET_NIC="yes"
SCYLLA_ARGS="$SCYLLA_ARGS --cpuset 1-$NR_SHARDS --smp $NR_SHARDS"
SCYLLA_ARGS="--cpuset 1-$NR --smp $NR"
fi
if [ "$AMI" = "yes" ] && [ $DISK_COUNT -gt 0 ]; then
NR_DISKS=$DISK_COUNT
if [ $NR_DISKS -lt 2 ]; then NR_DISKS=2; fi
NR_REQS=$((32 * $NR_DISKS / 2))
NR_IO_QUEUES=$NR_SHARDS
if [ $(($NR_REQS/$NR_IO_QUEUES)) -lt 4 ]; then
NR_IO_QUEUES=$(($NR_REQS / 4))
fi
NR_REQS=$(($(($NR_REQS / $NR_IO_QUEUES)) * $NR_IO_QUEUES))
SCYLLA_IO="$SCYLLA_IO --num-io-queues $NR_IO_QUEUES --max-io-requests $NR_REQS"
fi
sed -e s#^NETWORK_MODE=.*#NETWORK_MODE=$NETWORK_MODE# \
-e s#^ETHDRV=.*#ETHDRV=$ETHDRV# \
-e s#^ETHPCIID=.*#ETHPCIID=$ETHPCIID# \
@@ -101,8 +93,8 @@ sed -e s#^NETWORK_MODE=.*#NETWORK_MODE=$NETWORK_MODE# \
-e s#^SCYLLA_HOME=.*#SCYLLA_HOME=$SCYLLA_HOME# \
-e s#^SCYLLA_CONF=.*#SCYLLA_CONF=$SCYLLA_CONF# \
-e s#^SET_NIC=.*#SET_NIC=$SET_NIC# \
-e "s#^SCYLLA_ARGS=.*#SCYLLA_ARGS=\"$SCYLLA_ARGS\"#" \
-e "s#^SCYLLA_IO=.*#SCYLLA_IO=\"$SCYLLA_IO\"#" \
-e s#^AMI=.*#AMI=$AMI# \
-e s#^SCYLLA_ARGS=.*#SCYLLA_ARGS="$SCYLLA_ARGS"# \
-e s#^AMI=.*#AMI="$AMI"# \
-e s#^AMI_KEEP_VERSION=.*#AMI_KEEP_VERSION="$AMI_KEEP_VERSION"# \
$SYSCONFIG/scylla-server > /tmp/scylla-server
mv /tmp/scylla-server $SYSCONFIG/scylla-server


@@ -34,14 +34,11 @@ SCYLLA_HOME=/var/lib/scylla
# scylla config dir
SCYLLA_CONF=/etc/scylla
# scylla arguments (for posix mode)
SCYLLA_ARGS="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info --collectd-address=127.0.0.1:25826 --collectd=1 --collectd-poll-period 3000 --network-stack posix"
## scylla arguments (for dpdk mode)
#SCYLLA_ARGS="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info --collectd-address=127.0.0.1:25826 --collectd=1 --collectd-poll-period 3000 --network-stack native --dpdk-pmd"
# scylla io
SCYLLA_IO=
# additional arguments
SCYLLA_ARGS=""
# setup as AMI instance
AMI=no
# do not upgrade Scylla packages on AMI startup
AMI_KEEP_VERSION=no


@@ -43,7 +43,7 @@ if [ "$ID" = "centos" ]; then
if [ $REBUILD = 1 ]; then
./dist/redhat/centos_dep/build_dependency.sh
else
sudo curl https://s3.amazonaws.com/downloads.scylladb.com/rpm/unstable/centos/master/latest/scylla.repo -o /etc/yum.repos.d/scylla.repo
sudo curl https://s3.amazonaws.com/downloads.scylladb.com/rpm/centos/scylla.repo -o /etc/yum.repos.d/scylla.repo
fi
fi
VERSION=$(./SCYLLA-VERSION-GEN)


@@ -1,5 +1,5 @@
--- binutils.spec.orig 2015-09-30 14:48:25.000000000 +0000
+++ binutils.spec 2016-01-20 14:42:17.856037134 +0000
--- binutils.spec 2015-10-19 05:45:55.106745163 +0000
+++ binutils.spec.1 2015-10-19 05:45:55.807742899 +0000
@@ -17,7 +17,7 @@
%define enable_deterministic_archives 1
@@ -7,7 +7,7 @@
-Name: %{?cross}binutils%{?_with_debug:-debug}
+Name: scylla-%{?cross}binutils%{?_with_debug:-debug}
Version: 2.25
Release: 15%{?dist}
Release: 5%{?dist}
License: GPLv3+
@@ -29,6 +29,7 @@
# instead.
@@ -17,7 +17,7 @@
Source2: binutils-2.19.50.0.1-output-format.sed
Patch01: binutils-2.20.51.0.2-libtool-lib64.patch
@@ -89,6 +90,9 @@
@@ -82,6 +83,9 @@
BuildRequires: texinfo >= 4.0, gettext, flex, bison, zlib-devel
# BZ 920545: We need pod2man in order to build the manual pages.
BuildRequires: /usr/bin/pod2man
@@ -27,7 +27,7 @@
# Required for: ld-bootstrap/bootstrap.exp bootstrap with --static
# It should not be required for: ld-elf/elf.exp static {preinit,init,fini} array
%if %{run_testsuite}
@@ -112,8 +116,8 @@
@@ -105,8 +109,8 @@
%if "%{build_gold}" == "both"
Requires(post): coreutils
@@ -38,7 +38,7 @@
%endif
# On ARM EABI systems, we do want -gnueabi to be part of the
@@ -138,11 +142,12 @@
@@ -131,11 +135,12 @@
%package devel
Summary: BFD and opcodes static and dynamic libraries and header files
Group: System Environment/Libraries
@@ -50,10 +50,10 @@
Requires: zlib-devel
-Requires: binutils = %{version}-%{release}
+Requires: scylla-binutils = %{version}-%{release}
# BZ 1215242: We need touch...
Requires: coreutils
@@ -426,11 +431,11 @@
%description devel
This package contains BFD and opcodes static and dynamic libraries.
@@ -411,11 +416,11 @@
%post
%if "%{build_gold}" == "both"
%__rm -f %{_bindir}/%{?cross}ld
@@ -68,7 +68,7 @@
%endif
%if %{isnative}
/sbin/ldconfig
@@ -448,8 +453,8 @@
@@ -433,8 +438,8 @@
%preun
%if "%{build_gold}" == "both"
if [ $1 = 0 ]; then


@@ -1,5 +1,5 @@
--- boost.spec.orig 2016-01-15 18:41:47.000000000 +0000
+++ boost.spec 2016-01-20 14:46:47.397663246 +0000
--- boost.spec 2015-05-03 17:32:13.000000000 +0000
+++ boost.spec.1 2015-10-19 06:03:12.670534256 +0000
@@ -6,6 +6,11 @@
# We should be able to install directly.
%define boost_docdir __tmp_docdir
@@ -20,9 +20,9 @@
+Name: scylla-boost
+%define orig_name boost
Summary: The free peer-reviewed portable C++ source libraries
Version: 1.58.0
%define version_enc 1_58_0
Release: 11%{?dist}
Version: 1.57.0
%define version_enc 1_57_0
Release: 6%{?dist}
License: Boost and MIT and Python
-%define toplev_dirname %{name}_%{version_enc}
@@ -93,8 +93,8 @@
+Requires: scylla-boost-wave%{?_isa} = %{version}-%{release}
BuildRequires: m4
BuildRequires: libstdc++-devel
@@ -156,6 +164,7 @@
BuildRequires: libstdc++-devel%{?_isa}
@@ -151,6 +159,7 @@
%package atomic
Summary: Run-Time component of boost atomic library
Group: System Environment/Libraries
@@ -102,7 +102,7 @@
%description atomic
@@ -167,7 +176,8 @@
@@ -162,7 +171,8 @@
%package chrono
Summary: Run-Time component of boost chrono library
Group: System Environment/Libraries
@@ -112,7 +112,7 @@
%description chrono
@@ -176,6 +186,7 @@
@@ -171,6 +181,7 @@
%package container
Summary: Run-Time component of boost container library
Group: System Environment/Libraries
@@ -120,7 +120,7 @@
%description container
@@ -188,6 +199,7 @@
@@ -183,6 +194,7 @@
%package context
Summary: Run-Time component of boost context switching library
Group: System Environment/Libraries
@@ -128,7 +128,7 @@
%description context
@@ -197,6 +209,7 @@
@@ -192,6 +204,7 @@
%package coroutine
Summary: Run-Time component of boost coroutine library
Group: System Environment/Libraries
@@ -136,7 +136,7 @@
%description coroutine
Run-Time support for Boost.Coroutine, a library that provides
@@ -208,6 +221,7 @@
@@ -203,6 +216,7 @@
%package date-time
Summary: Run-Time component of boost date-time library
Group: System Environment/Libraries
@@ -144,7 +144,7 @@
%description date-time
@@ -217,7 +231,8 @@
@@ -212,7 +226,8 @@
%package filesystem
Summary: Run-Time component of boost filesystem library
Group: System Environment/Libraries
@@ -154,7 +154,7 @@
%description filesystem
@@ -228,7 +243,8 @@
@@ -223,7 +238,8 @@
%package graph
Summary: Run-Time component of boost graph library
Group: System Environment/Libraries
@@ -164,7 +164,7 @@
%description graph
@@ -248,9 +264,10 @@
@@ -243,9 +259,10 @@
%package locale
Summary: Run-Time component of boost locale library
Group: System Environment/Libraries
@@ -178,7 +178,7 @@
%description locale
@@ -260,6 +277,7 @@
@@ -255,6 +272,7 @@
%package log
Summary: Run-Time component of boost logging library
Group: System Environment/Libraries
@@ -186,7 +186,7 @@
%description log
@@ -270,6 +288,7 @@
@@ -265,6 +283,7 @@
%package math
Summary: Math functions for boost TR1 library
Group: System Environment/Libraries
@@ -194,7 +194,7 @@
%description math
@@ -279,6 +298,7 @@
@@ -274,6 +293,7 @@
%package program-options
Summary: Run-Time component of boost program_options library
Group: System Environment/Libraries
@@ -202,7 +202,7 @@
%description program-options
@@ -289,6 +309,7 @@
@@ -284,6 +304,7 @@
%package python
Summary: Run-Time component of boost python library
Group: System Environment/Libraries
@@ -210,7 +210,7 @@
%description python
@@ -303,6 +324,7 @@
@@ -298,6 +319,7 @@
%package python3
Summary: Run-Time component of boost python library for Python 3
Group: System Environment/Libraries
@@ -218,7 +218,7 @@
%description python3
@@ -315,8 +337,9 @@
@@ -310,8 +332,9 @@
%package python3-devel
Summary: Shared object symbolic links for Boost.Python 3
Group: System Environment/Libraries
@@ -230,7 +230,7 @@
%description python3-devel
@@ -327,6 +350,7 @@
@@ -322,6 +345,7 @@
%package random
Summary: Run-Time component of boost random library
Group: System Environment/Libraries
@@ -238,7 +238,7 @@
%description random
@@ -335,6 +359,7 @@
@@ -330,6 +354,7 @@
%package regex
Summary: Run-Time component of boost regular expression library
Group: System Environment/Libraries
@@ -246,7 +246,7 @@
%description regex
@@ -343,6 +368,7 @@
@@ -338,6 +363,7 @@
%package serialization
Summary: Run-Time component of boost serialization library
Group: System Environment/Libraries
@@ -254,7 +254,7 @@
%description serialization
@@ -351,6 +377,7 @@
@@ -346,6 +372,7 @@
%package signals
Summary: Run-Time component of boost signals and slots library
Group: System Environment/Libraries
@@ -262,7 +262,7 @@
%description signals
@@ -359,6 +386,7 @@
@@ -354,6 +381,7 @@
%package system
Summary: Run-Time component of boost system support library
Group: System Environment/Libraries
@@ -270,7 +270,7 @@
%description system
@@ -369,6 +397,7 @@
@@ -364,6 +392,7 @@
%package test
Summary: Run-Time component of boost test library
Group: System Environment/Libraries
@@ -278,7 +278,7 @@
%description test
@@ -378,7 +407,8 @@
@@ -373,7 +402,8 @@
%package thread
Summary: Run-Time component of boost thread library
Group: System Environment/Libraries
@@ -288,7 +288,7 @@
%description thread
@@ -390,8 +420,9 @@
@@ -385,8 +415,9 @@
%package timer
Summary: Run-Time component of boost timer library
Group: System Environment/Libraries
@@ -300,7 +300,7 @@
%description timer
@@ -402,11 +433,12 @@
@@ -397,11 +428,12 @@
%package wave
Summary: Run-Time component of boost C99/C++ pre-processing library
Group: System Environment/Libraries
@@ -318,7 +318,7 @@
%description wave
@@ -417,27 +449,20 @@
@@ -412,27 +444,20 @@
%package devel
Summary: The Boost C++ headers and shared development libraries
Group: Development/Libraries
@@ -352,7 +352,7 @@
%description static
Static Boost C++ libraries.
@@ -448,11 +473,7 @@
@@ -443,11 +468,7 @@
%if 0%{?rhel} >= 6
BuildArch: noarch
%endif
@@ -365,7 +365,7 @@
%description doc
This package contains the documentation in the HTML format of the Boost C++
@@ -465,7 +486,7 @@
@@ -460,7 +481,7 @@
%if 0%{?rhel} >= 6
BuildArch: noarch
%endif
@@ -374,18 +374,19 @@
%description examples
This package contains example source files distributed with boost.
@@ -476,8 +497,9 @@
@@ -471,9 +492,10 @@
%package openmpi
Summary: Run-Time component of Boost.MPI library
Group: System Environment/Libraries
+Requires: scylla-env
Requires: openmpi%{?_isa}
BuildRequires: openmpi-devel
-Requires: boost-serialization%{?_isa} = %{version}-%{release}
+Requires: scylla-boost-serialization%{?_isa} = %{version}-%{release}
%description openmpi
@@ -487,10 +509,11 @@
@@ -483,10 +505,11 @@
%package openmpi-devel
Summary: Shared library symbolic links for Boost.MPI
Group: System Environment/Libraries
@@ -401,7 +402,7 @@
%description openmpi-devel
@@ -500,9 +523,10 @@
@@ -496,9 +519,10 @@
%package openmpi-python
Summary: Python run-time component of Boost.MPI library
Group: System Environment/Libraries
@@ -415,7 +416,7 @@
%description openmpi-python
@@ -512,8 +536,9 @@
@@ -508,8 +532,9 @@
%package graph-openmpi
Summary: Run-Time component of parallel boost graph library
Group: System Environment/Libraries
@@ -427,11 +428,12 @@
%description graph-openmpi
@@ -530,10 +555,10 @@
@@ -526,11 +551,11 @@
%package mpich
Summary: Run-Time component of Boost.MPI library
Group: System Environment/Libraries
+Requires: scylla-env
Requires: mpich%{?_isa}
BuildRequires: mpich-devel
-Requires: boost-serialization%{?_isa} = %{version}-%{release}
-Provides: boost-mpich2 = %{version}-%{release}
@@ -441,7 +443,7 @@
%description mpich
@@ -543,12 +568,12 @@
@@ -540,12 +565,12 @@
%package mpich-devel
Summary: Shared library symbolic links for Boost.MPI
Group: System Environment/Libraries
@@ -460,7 +462,7 @@
%description mpich-devel
@@ -558,11 +583,11 @@
@@ -555,11 +580,11 @@
%package mpich-python
Summary: Python run-time component of Boost.MPI library
Group: System Environment/Libraries
@@ -477,7 +479,7 @@
%description mpich-python
@@ -572,10 +597,10 @@
@@ -569,10 +594,10 @@
%package graph-mpich
Summary: Run-Time component of parallel boost graph library
Group: System Environment/Libraries
@@ -492,7 +494,7 @@
%description graph-mpich
@@ -589,7 +614,8 @@
@@ -586,7 +611,8 @@
%package build
Summary: Cross platform build system for C++ projects
Group: Development/Tools
@@ -502,7 +504,7 @@
BuildArch: noarch
%description build
@@ -613,6 +639,7 @@
@@ -600,6 +626,7 @@
%package jam
Summary: A low-level build tool
Group: Development/Tools
@@ -510,7 +512,7 @@
%description jam
Boost.Jam (BJam) is the low-level build engine tool for Boost.Build.
@@ -1186,7 +1213,7 @@
@@ -1134,7 +1161,7 @@
%files devel
%defattr(-, root, root, -)
%doc LICENSE_1_0.txt


@@ -12,36 +12,28 @@ sudo yum install -y wget yum-utils rpm-build rpmdevtools gcc gcc-c++ make patch
mkdir -p build/srpms
cd build/srpms
if [ ! -f binutils-2.25-15.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/binutils/2.25/15.fc23/src/binutils-2.25-15.fc23.src.rpm
if [ ! -f binutils-2.25-5.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/b/binutils-2.25-5.fc22.src.rpm
fi
if [ ! -f isl-0.14-4.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/isl/0.14/4.fc23/src/isl-0.14-4.fc23.src.rpm
if [ ! -f isl-0.14-3.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/i/isl-0.14-3.fc22.src.rpm
fi
if [ ! -f gcc-5.3.1-2.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/gcc/5.3.1/2.fc23/src/gcc-5.3.1-2.fc23.src.rpm
if [ ! -f gcc-5.1.1-4.fc22.src.rpm ]; then
wget https://s3.amazonaws.com/scylla-centos-dep/gcc-5.1.1-4.fc22.src.rpm
fi
if [ ! -f boost-1.58.0-11.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/boost/1.58.0/11.fc23/src/boost-1.58.0-11.fc23.src.rpm
if [ ! -f boost-1.57.0-6.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/b/boost-1.57.0-6.fc22.src.rpm
fi
if [ ! -f ninja-build-1.6.0-2.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/ninja-build/1.6.0/2.fc23/src/ninja-build-1.6.0-2.fc23.src.rpm
if [ ! -f ninja-build-1.5.3-2.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/n/ninja-build-1.5.3-2.fc22.src.rpm
fi
if [ ! -f ragel-6.8-5.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/ragel/6.8/5.fc23/src/ragel-6.8-5.fc23.src.rpm
fi
if [ ! -f gdb-7.10.1-30.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/gdb/7.10.1/30.fc23/src/gdb-7.10.1-30.fc23.src.rpm
fi
if [ ! -f pyparsing-2.0.3-2.fc23.src.rpm ]; then
wget https://kojipkgs.fedoraproject.org//packages/pyparsing/2.0.3/2.fc23/src/pyparsing-2.0.3-2.fc23.src.rpm
if [ ! -f ragel-6.8-3.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/r/ragel-6.8-3.fc22.src.rpm
fi
cd -
@@ -54,8 +46,6 @@ sudo yum install -y flex bison dejagnu zlib-static glibc-static sharutils bc lib
sudo yum install -y gcc-objc
sudo yum install -y asciidoc
sudo yum install -y gettext
sudo yum install -y rpm-devel python34-devel guile-devel readline-devel ncurses-devel expat-devel texlive-collection-latexrecommended xz-devel libselinux-devel
sudo yum install -y dos2unix
if [ ! -f $RPMBUILD/RPMS/noarch/scylla-env-1.0-1.el7.centos.noarch.rpm ]; then
cd dist/redhat/centos_dep
@@ -65,62 +55,48 @@ if [ ! -f $RPMBUILD/RPMS/noarch/scylla-env-1.0-1.el7.centos.noarch.rpm ]; then
fi
do_install scylla-env-1.0-1.el7.centos.noarch.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-binutils-2.25-15.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/binutils-2.25-15.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-binutils-2.25-5.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/binutils-2.25-5.fc22.src.rpm
patch $RPMBUILD/SPECS/binutils.spec < dist/redhat/centos_dep/binutils.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/binutils.spec
fi
do_install scylla-binutils-2.25-15.el7.centos.x86_64.rpm
do_install scylla-binutils-2.25-5.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-isl-0.14-4.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/isl-0.14-4.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-isl-0.14-3.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/isl-0.14-3.fc22.src.rpm
patch $RPMBUILD/SPECS/isl.spec < dist/redhat/centos_dep/isl.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/isl.spec
fi
do_install scylla-isl-0.14-4.el7.centos.x86_64.rpm
do_install scylla-isl-devel-0.14-4.el7.centos.x86_64.rpm
do_install scylla-isl-0.14-3.el7.centos.x86_64.rpm
do_install scylla-isl-devel-0.14-3.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-gcc-5.3.1-2.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/gcc-5.3.1-2.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-gcc-5.1.1-4.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/gcc-5.1.1-4.fc22.src.rpm
patch $RPMBUILD/SPECS/gcc.spec < dist/redhat/centos_dep/gcc.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/gcc.spec
fi
do_install scylla-*5.3.1-2*
do_install scylla-*5.1.1-4*
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-boost-1.58.0-11.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/boost-1.58.0-11.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-boost-1.57.0-6.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/boost-1.57.0-6.fc22.src.rpm
patch $RPMBUILD/SPECS/boost.spec < dist/redhat/centos_dep/boost.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/boost.spec
fi
do_install scylla-boost*
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-ninja-build-1.6.0-2.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/ninja-build-1.6.0-2.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-ninja-build-1.5.3-2.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/ninja-build-1.5.3-2.fc22.src.rpm
patch $RPMBUILD/SPECS/ninja-build.spec < dist/redhat/centos_dep/ninja-build.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/ninja-build.spec
fi
do_install scylla-ninja-build-1.6.0-2.el7.centos.x86_64.rpm
do_install scylla-ninja-build-1.5.3-2.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-ragel-6.8-5.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/ragel-6.8-5.fc23.src.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-ragel-6.8-3.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/ragel-6.8-3.fc22.src.rpm
patch $RPMBUILD/SPECS/ragel.spec < dist/redhat/centos_dep/ragel.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/ragel.spec
fi
do_install scylla-ragel-6.8-5.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/x86_64/scylla-gdb-7.10.1-30.el7.centos.x86_64.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/gdb-7.10.1-30.fc23.src.rpm
patch $RPMBUILD/SPECS/gdb.spec < dist/redhat/centos_dep/gdb.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/gdb.spec
fi
do_install scylla-gdb-7.10.1-30.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/noarch/python34-pyparsing-2.0.3-2.el7.centos.noarch.rpm ]; then
rpm --define "_topdir $RPMBUILD" -ivh build/srpms/pyparsing-2.0.3-2.fc23.src.rpm
patch $RPMBUILD/SPECS/pyparsing.spec < dist/redhat/centos_dep/pyparsing.diff
rpmbuild --define "_topdir $RPMBUILD" -ba $RPMBUILD/SPECS/pyparsing.spec
fi
do_install python34-pyparsing-2.0.3-2.el7.centos.noarch.rpm
do_install scylla-ragel-6.8-3.el7.centos.x86_64.rpm
if [ ! -f $RPMBUILD/RPMS/noarch/scylla-antlr3-tool-3.5.2-1.el7.centos.noarch.rpm ]; then
mkdir build/scylla-antlr3-tool-3.5.2


@@ -1,14 +1,30 @@
--- gcc.spec.orig 2015-12-08 16:03:46.000000000 +0000
+++ gcc.spec 2016-01-21 08:47:49.160667342 +0000
@@ -1,6 +1,7 @@
%global DATE 20151207
%global SVNREV 231358
%global gcc_version 5.3.1
--- gcc.spec 2015-10-19 06:31:44.889189647 +0000
+++ gcc.spec.1 2015-10-19 07:56:17.445991665 +0000
@@ -1,22 +1,15 @@
%global DATE 20150618
%global SVNREV 224595
%global gcc_version 5.1.1
+%define _prefix /opt/scylladb
# Note, gcc_release must be integer, if you want to add suffixes to
# %{release}, append them after %{gcc_release} on Release: line.
%global gcc_release 2
@@ -84,7 +85,8 @@
%global gcc_release 4
%global _unpackaged_files_terminate_build 0
%global _performance_build 1
%global multilib_64_archs sparc64 ppc64 ppc64p7 s390x x86_64
-%ifarch %{ix86} x86_64 ia64 ppc ppc64 ppc64p7 alpha %{arm} aarch64
-%global build_ada 1
-%else
%global build_ada 0
-%endif
-%ifarch %{ix86} x86_64 ppc ppc64 ppc64le ppc64p7 s390 s390x %{arm} aarch64
-%global build_go 1
-%else
%global build_go 0
-%endif
%ifarch %{ix86} x86_64 ia64
%global build_libquadmath 1
%else
@@ -82,7 +75,8 @@
%global multilib_32_arch i686
%endif
Summary: Various compilers (C, C++, Objective-C, Java, ...)
@@ -18,7 +34,7 @@
Version: %{gcc_version}
Release: %{gcc_release}%{?dist}
# libgcc, libgfortran, libgomp, libstdc++ and crtstuff have
@@ -99,6 +101,7 @@
@@ -97,6 +91,7 @@
%global isl_version 0.14
URL: http://gcc.gnu.org
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
@@ -26,7 +42,7 @@
# Need binutils with -pie support >= 2.14.90.0.4-4
# Need binutils which can omit dot symbols and overlap .opd on ppc64 >= 2.15.91.0.2-4
# Need binutils which handle -msecure-plt on ppc >= 2.16.91.0.2-2
@@ -110,7 +113,7 @@
@@ -108,7 +103,7 @@
# Need binutils which support .cfi_sections >= 2.19.51.0.14-33
# Need binutils which support --no-add-needed >= 2.20.51.0.2-12
# Need binutils which support -plugin
@@ -35,7 +51,7 @@
# While gcc doesn't include statically linked binaries, during testing
# -static is used several times.
BuildRequires: glibc-static
@@ -145,15 +148,15 @@
@@ -143,15 +138,15 @@
BuildRequires: libunwind >= 0.98
%endif
%if %{build_isl}
@@ -55,7 +71,7 @@
# Need .eh_frame ld optimizations
# Need proper visibility support
# Need -pie support
@@ -168,7 +171,7 @@
@@ -166,7 +161,7 @@
# Need binutils that support .cfi_sections
# Need binutils that support --no-add-needed
# Need binutils that support -plugin
@@ -64,7 +80,7 @@
# Make sure gdb will understand DW_FORM_strp
Conflicts: gdb < 5.1-2
Requires: glibc-devel >= 2.2.90-12
@@ -176,17 +179,15 @@
@@ -174,17 +169,15 @@
# Make sure glibc supports TFmode long double
Requires: glibc >= 2.3.90-35
%endif
@@ -86,7 +102,7 @@
Requires(post): /sbin/install-info
Requires(preun): /sbin/install-info
AutoReq: true
@@ -228,12 +229,12 @@
@@ -226,12 +219,12 @@
The gcc package contains the GNU Compiler Collection version 5.
You'll need this package in order to compile C code.
@@ -101,7 +117,7 @@
%endif
Obsoletes: libmudflap
Obsoletes: libmudflap-devel
@@ -241,17 +242,19 @@
@@ -239,17 +232,19 @@
Obsoletes: libgcj < %{version}-%{release}
Obsoletes: libgcj-devel < %{version}-%{release}
Obsoletes: libgcj-src < %{version}-%{release}
@@ -125,7 +141,7 @@
Autoreq: true
%description c++
@@ -259,50 +262,55 @@
@@ -257,50 +252,55 @@
It includes support for most of the current C++ specification,
including templates and exception handling.
@@ -193,7 +209,7 @@
Autoreq: true
%description objc
@@ -313,29 +321,32 @@
@@ -311,29 +311,32 @@
%package objc++
Summary: Objective-C++ support for GCC
Group: Development/Languages
@@ -233,7 +249,7 @@
%endif
Requires(post): /sbin/install-info
Requires(preun): /sbin/install-info
@@ -345,260 +356,286 @@
@@ -343,260 +346,286 @@
The gcc-gfortran package provides support for compiling Fortran
programs with the GNU Compiler Collection.
@@ -592,7 +608,7 @@
Cpp is the GNU C-Compatible Compiler Preprocessor.
Cpp is a macro processor which is used automatically
by the C compiler to transform your program before actual
@@ -623,8 +660,9 @@
@@ -621,8 +650,9 @@
%package gnat
Summary: Ada 83, 95, 2005 and 2012 support for GCC
Group: Development/Languages
@@ -604,7 +620,7 @@
Requires(post): /sbin/install-info
Requires(preun): /sbin/install-info
Autoreq: true
@@ -633,82 +671,90 @@
@@ -631,40 +661,44 @@
GNAT is a GNU Ada 83, 95, 2005 and 2012 front-end to GCC. This package includes
development tools, the documents and Ada compiler.
@@ -658,13 +674,8 @@
+Requires: scylla-libgo-devel = %{version}-%{release}
Requires(post): /sbin/install-info
Requires(preun): /sbin/install-info
-Requires(post): %{_sbindir}/update-alternatives
-Requires(postun): %{_sbindir}/update-alternatives
+Requires(post): /sbin/update-alternatives
+Requires(postun): /sbin/update-alternatives
Autoreq: true
%description go
Requires(post): %{_sbindir}/update-alternatives
@@ -675,38 +709,42 @@
The gcc-go package provides support for compiling Go programs
with the GNU Compiler Collection.
@@ -717,7 +728,7 @@
Requires: gmp-devel >= 4.1.2-8, mpfr-devel >= 2.2.1, libmpc-devel >= 0.8.1
%description plugin-devel
@@ -728,7 +774,8 @@
@@ -726,7 +764,8 @@
Summary: Debug information for package %{name}
Group: Development/Debug
AutoReqProv: 0
@@ -727,21 +738,21 @@
%description debuginfo
This package provides debug information for package %{name}.
@@ -958,11 +1005,11 @@
@@ -961,11 +1000,10 @@
--enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu \
--enable-plugin --enable-initfini-array \
--disable-libgcj \
-%if 0%{fedora} >= 21 && 0%{fedora} <= 22
--with-default-libstdcxx-abi=gcc4-compatible \
--with-default-libstdcxx-abi=c++98 \
-%endif
%if %{build_isl}
--with-isl \
- --with-isl \
+ --with-isl-include=/opt/scylladb/include/ \
+ --with-isl-lib=/opt/scylladb/lib64/ \
%else
--without-isl \
%endif
@@ -971,11 +1018,9 @@
@@ -974,11 +1012,9 @@
%else
--disable-libmpx \
%endif
@@ -753,7 +764,7 @@
%ifarch %{arm}
--disable-sjlj-exceptions \
%endif
@@ -1006,9 +1051,6 @@
@@ -1009,9 +1045,6 @@
%if 0%{?rhel} >= 7
--with-cpu-32=power8 --with-tune-32=power8 --with-cpu-64=power8 --with-tune-64=power8 \
%endif
@@ -763,7 +774,7 @@
%endif
%ifarch ppc
--build=%{gcc_target_platform} --target=%{gcc_target_platform} --with-cpu=default32
@@ -1270,16 +1312,15 @@
@@ -1273,16 +1306,15 @@
mv %{buildroot}%{_prefix}/%{_lib}/libmpx.spec $FULLPATH/
%endif
@@ -786,7 +797,7 @@
%endif
%ifarch ppc
rm -f $FULLPATH/libgcc_s.so
@@ -1819,7 +1860,7 @@
@@ -1816,7 +1848,7 @@
chmod 755 %{buildroot}%{_prefix}/bin/c?9
cd ..
@@ -795,7 +806,7 @@
%find_lang cpplib
# Remove binaries we will not be including, so that they don't end up in
@@ -1869,11 +1910,7 @@
@@ -1866,11 +1898,7 @@
# run the tests.
make %{?_smp_mflags} -k check ALT_CC_UNDER_TEST=gcc ALT_CXX_UNDER_TEST=g++ \
@@ -807,7 +818,7 @@
echo ====================TESTING=========================
( LC_ALL=C ../contrib/test_summary || : ) 2>&1 | sed -n '/^cat.*EOF/,/^EOF/{/^cat.*EOF/d;/^EOF/d;/^LAST_UPDATED:/d;p;}'
echo ====================TESTING END=====================
@@ -1900,13 +1937,13 @@
@@ -1897,13 +1925,13 @@
--info-dir=%{_infodir} %{_infodir}/gcc.info.gz || :
fi
@@ -823,21 +834,7 @@
if [ $1 = 0 -a -f %{_infodir}/cpp.info.gz ]; then
/sbin/install-info --delete \
--info-dir=%{_infodir} %{_infodir}/cpp.info.gz || :
@@ -1945,19 +1982,19 @@
fi
%post go
-%{_sbindir}/update-alternatives --install \
+/sbin/update-alternatives --install \
%{_prefix}/bin/go go %{_prefix}/bin/go.gcc 92 \
--slave %{_prefix}/bin/gofmt gofmt %{_prefix}/bin/gofmt.gcc
%preun go
if [ $1 = 0 ]; then
- %{_sbindir}/update-alternatives --remove go %{_prefix}/bin/go.gcc
+ /sbin/update-alternatives --remove go %{_prefix}/bin/go.gcc
fi
@@ -1954,7 +1982,7 @@
# Because glibc Prereq's libgcc and /sbin/ldconfig
# comes from glibc, it might not exist yet when
# libgcc is installed
@@ -846,7 +843,7 @@
if posix.access ("/sbin/ldconfig", "x") then
local pid = posix.fork ()
if pid == 0 then
@@ -1967,7 +2004,7 @@
@@ -1964,7 +1992,7 @@
end
end
@@ -855,7 +852,7 @@
if posix.access ("/sbin/ldconfig", "x") then
local pid = posix.fork ()
if pid == 0 then
@@ -1977,120 +2014,120 @@
@@ -1974,120 +2002,120 @@
end
end
@@ -1014,7 +1011,7 @@
%defattr(-,root,root,-)
%{_prefix}/bin/cc
%{_prefix}/bin/c89
@@ -2414,7 +2451,7 @@
@@ -2409,7 +2437,7 @@
%{!?_licensedir:%global license %%doc}
%license gcc/COPYING* COPYING.RUNTIME
@@ -1023,7 +1020,7 @@
%defattr(-,root,root,-)
%{_prefix}/lib/cpp
%{_prefix}/bin/cpp
@@ -2425,10 +2462,10 @@
@@ -2420,10 +2448,10 @@
%dir %{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}
%{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}/cc1
@@ -1037,7 +1034,7 @@
%{!?_licensedir:%global license %%doc}
%license gcc/COPYING* COPYING.RUNTIME
@@ -2469,7 +2506,7 @@
@@ -2461,7 +2489,7 @@
%endif
%doc rpm.doc/changelogs/gcc/cp/ChangeLog*
@@ -1046,7 +1043,7 @@
%defattr(-,root,root,-)
%{_prefix}/%{_lib}/libstdc++.so.6*
%dir %{_datadir}/gdb
@@ -2481,7 +2518,7 @@
@@ -2473,7 +2501,7 @@
%dir %{_prefix}/share/gcc-%{gcc_version}/python
%{_prefix}/share/gcc-%{gcc_version}/python/libstdcxx
@@ -1055,7 +1052,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/include/c++
%dir %{_prefix}/include/c++/%{gcc_version}
@@ -2507,7 +2544,7 @@
@@ -2488,7 +2516,7 @@
%endif
%doc rpm.doc/changelogs/libstdc++-v3/ChangeLog* libstdc++-v3/README*
@@ -1064,7 +1061,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2528,7 +2565,7 @@
@@ -2509,7 +2537,7 @@
%endif
%if %{build_libstdcxx_docs}
@@ -1073,7 +1070,7 @@
%defattr(-,root,root)
%{_mandir}/man3/*
%doc rpm.doc/libstdc++-v3/html
@@ -2567,7 +2604,7 @@
@@ -2548,7 +2576,7 @@
%dir %{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}
%{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}/cc1objplus
@@ -1082,7 +1079,7 @@
%defattr(-,root,root,-)
%{_prefix}/%{_lib}/libobjc.so.4*
@@ -2621,11 +2658,11 @@
@@ -2602,11 +2630,11 @@
%endif
%doc rpm.doc/gfortran/*
@@ -1096,7 +1093,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2671,12 +2708,12 @@
@@ -2652,12 +2680,12 @@
%{_prefix}/libexec/gcc/%{gcc_target_platform}/%{gcc_version}/gnat1
%doc rpm.doc/changelogs/gcc/ada/ChangeLog*
@@ -1111,7 +1108,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2702,7 +2739,7 @@
@@ -2683,7 +2711,7 @@
%exclude %{_prefix}/lib/gcc/%{gcc_target_platform}/%{gcc_version}/adalib/libgnarl.a
%endif
@@ -1120,7 +1117,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2726,7 +2763,7 @@
@@ -2707,7 +2735,7 @@
%endif
%endif
@@ -1129,7 +1126,7 @@
%defattr(-,root,root,-)
%{_prefix}/%{_lib}/libgomp.so.1*
%{_prefix}/%{_lib}/libgomp-plugin-host_nonshm.so.1*
@@ -2734,14 +2771,14 @@
@@ -2715,14 +2743,14 @@
%doc rpm.doc/changelogs/libgomp/ChangeLog*
%if %{build_libquadmath}
@@ -1146,7 +1143,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2754,7 +2791,7 @@
@@ -2735,7 +2763,7 @@
%endif
%doc rpm.doc/libquadmath/ChangeLog*
@@ -1155,7 +1152,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2773,12 +2810,12 @@
@@ -2754,12 +2782,12 @@
%endif
%if %{build_libitm}
@@ -1170,7 +1167,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2791,7 +2828,7 @@
@@ -2772,7 +2800,7 @@
%endif
%doc rpm.doc/libitm/ChangeLog*
@@ -1179,7 +1176,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2810,11 +2847,11 @@
@@ -2791,11 +2819,11 @@
%endif
%if %{build_libatomic}
@@ -1193,7 +1190,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2834,11 +2871,11 @@
@@ -2815,11 +2843,11 @@
%endif
%if %{build_libasan}
@@ -1207,7 +1204,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2860,11 +2897,11 @@
@@ -2841,11 +2869,11 @@
%endif
%if %{build_libubsan}
@@ -1221,7 +1218,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2886,11 +2923,11 @@
@@ -2867,11 +2895,11 @@
%endif
%if %{build_libtsan}
@@ -1235,7 +1232,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2902,11 +2939,11 @@
@@ -2883,11 +2911,11 @@
%endif
%if %{build_liblsan}
@@ -1249,7 +1246,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2918,11 +2955,11 @@
@@ -2899,11 +2927,11 @@
%endif
%if %{build_libcilkrts}
@@ -1263,7 +1260,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -2942,12 +2979,12 @@
@@ -2923,12 +2951,12 @@
%endif
%if %{build_libmpx}
@@ -1278,7 +1275,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -3009,12 +3046,12 @@
@@ -2990,12 +3018,12 @@
%endif
%doc rpm.doc/go/*
@@ -1293,7 +1290,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -3042,7 +3079,7 @@
@@ -3023,7 +3051,7 @@
%{_prefix}/lib/gcc/%{gcc_target_platform}/%{gcc_version}/libgo.so
%endif
@@ -1302,7 +1299,7 @@
%defattr(-,root,root,-)
%dir %{_prefix}/lib/gcc
%dir %{_prefix}/lib/gcc/%{gcc_target_platform}
@@ -3060,12 +3097,12 @@
@@ -3041,12 +3069,12 @@
%endif
%endif


@@ -1,29 +0,0 @@
--- gdb.spec.orig 2015-12-06 04:10:30.000000000 +0000
+++ gdb.spec 2016-01-20 14:49:12.745843903 +0000
@@ -16,7 +16,10 @@
}
Summary: A GNU source-level debugger for C, C++, Fortran, Go and other languages
-Name: %{?scl_prefix}gdb
+Name: %{?scl_prefix}scylla-gdb
+%define orig_name gdb
+Requires: scylla-env
+%define _prefix /opt/scylladb
# Freeze it when GDB gets branched
%global snapsrc 20150706
@@ -572,12 +575,8 @@
BuildRequires: rpm-devel%{buildisa}
BuildRequires: zlib-devel%{buildisa} libselinux-devel%{buildisa}
%if 0%{!?_without_python:1}
-%if 0%{?rhel:1} && 0%{?rhel} <= 7
-BuildRequires: python-devel%{buildisa}
-%else
-%global __python %{__python3}
-BuildRequires: python3-devel%{buildisa}
-%endif
+BuildRequires: python34-devel%{?_isa}
+%global __python /usr/bin/python3.4
%if 0%{?rhel:1} && 0%{?rhel} <= 7
# Temporarily before python files get moved to libstdc++.rpm
# libstdc++%{bits_other} is not present in Koji, the .spec script generating


@@ -1,5 +1,5 @@
--- isl.spec.orig 2016-01-20 14:41:16.891802146 +0000
+++ isl.spec 2016-01-20 14:43:13.838336396 +0000
--- isl.spec 2015-01-06 16:24:49.000000000 +0000
+++ isl.spec.1 2015-10-18 12:12:38.000000000 +0000
@@ -1,5 +1,5 @@
Summary: Integer point manipulation library
-Name: isl


@@ -1,56 +1,34 @@
--- ninja-build.spec.orig 2016-01-20 14:41:16.892802134 +0000
+++ ninja-build.spec 2016-01-20 14:44:42.453227192 +0000
@@ -1,19 +1,18 @@
-Name: ninja-build
+Name: scylla-ninja-build
Version: 1.6.0
Release: 2%{?dist}
Summary: A small build system with a focus on speed
License: ASL 2.0
URL: http://martine.github.com/ninja/
Source0: https://github.com/martine/ninja/archive/v%{version}.tar.gz#/ninja-%{version}.tar.gz
-Source1: ninja.vim
# Rename mentions of the executable name to be ninja-build.
Patch1000: ninja-1.6.0-binary-rename.patch
+Requires: scylla-env
BuildRequires: asciidoc
BuildRequires: gtest-devel
BuildRequires: python2-devel
-BuildRequires: re2c >= 0.11.3
-Requires: emacs-filesystem
-Requires: vim-filesystem
+#BuildRequires: scylla-re2c >= 0.11.3
+%define _prefix /opt/scylladb
%description
Ninja is a small build system with a focus on speed. It differs from other
@@ -32,15 +31,8 @@
./ninja -v ninja_test
%install
-# TODO: Install ninja_syntax.py?
-mkdir -p %{buildroot}/{%{_bindir},%{_datadir}/bash-completion/completions,%{_datadir}/emacs/site-lisp,%{_datadir}/vim/vimfiles/syntax,%{_datadir}/vim/vimfiles/ftdetect,%{_datadir}/zsh/site-functions}
-
+mkdir -p %{buildroot}/opt/scylladb/bin
install -pm755 ninja %{buildroot}%{_bindir}/ninja-build
-install -pm644 misc/bash-completion %{buildroot}%{_datadir}/bash-completion/completions/ninja-bash-completion
-install -pm644 misc/ninja-mode.el %{buildroot}%{_datadir}/emacs/site-lisp/ninja-mode.el
-install -pm644 misc/ninja.vim %{buildroot}%{_datadir}/vim/vimfiles/syntax/ninja.vim
-install -pm644 %{SOURCE1} %{buildroot}%{_datadir}/vim/vimfiles/ftdetect/ninja.vim
-install -pm644 misc/zsh-completion %{buildroot}%{_datadir}/zsh/site-functions/_ninja
%check
# workaround possible too low default limits
@@ -50,12 +42,6 @@
%files
%doc COPYING HACKING.md README doc/manual.html
%{_bindir}/ninja-build
-%{_datadir}/bash-completion/completions/ninja-bash-completion
-%{_datadir}/emacs/site-lisp/ninja-mode.el
-%{_datadir}/vim/vimfiles/syntax/ninja.vim
-%{_datadir}/vim/vimfiles/ftdetect/ninja.vim
-# zsh does not have a -filesystem package
-%{_datadir}/zsh/
%changelog
* Mon Nov 16 2015 Ben Boeckel <mathstuf@gmail.com> - 1.6.0-2
1c1
< Name: ninja-build
---
> Name: scylla-ninja-build
8d7
< Source1: ninja.vim
10a10
> Requires: scylla-env
14,16c14,15
< BuildRequires: re2c >= 0.11.3
< Requires: emacs-filesystem
< Requires: vim-filesystem
---
> #BuildRequires: scylla-re2c >= 0.11.3
> %define _prefix /opt/scylladb
35,37c34
< # TODO: Install ninja_syntax.py?
< mkdir -p %{buildroot}/{%{_bindir},%{_datadir}/bash-completion/completions,%{_datadir}/emacs/site-lisp,%{_datadir}/vim/vimfiles/syntax,%{_datadir}/vim/vimfiles/ftdetect,%{_datadir}/zsh/site-functions}
<
---
> mkdir -p %{buildroot}/opt/scylladb/bin
39,43d35
< install -pm644 misc/bash-completion %{buildroot}%{_datadir}/bash-completion/completions/ninja-bash-completion
< install -pm644 misc/ninja-mode.el %{buildroot}%{_datadir}/emacs/site-lisp/ninja-mode.el
< install -pm644 misc/ninja.vim %{buildroot}%{_datadir}/vim/vimfiles/syntax/ninja.vim
< install -pm644 %{SOURCE1} %{buildroot}%{_datadir}/vim/vimfiles/ftdetect/ninja.vim
< install -pm644 misc/zsh-completion %{buildroot}%{_datadir}/zsh/site-functions/_ninja
53,58d44
< %{_datadir}/bash-completion/completions/ninja-bash-completion
< %{_datadir}/emacs/site-lisp/ninja-mode.el
< %{_datadir}/vim/vimfiles/syntax/ninja.vim
< %{_datadir}/vim/vimfiles/ftdetect/ninja.vim
< # zsh does not have a -filesystem package
< %{_datadir}/zsh/


@@ -1,40 +0,0 @@
--- pyparsing.spec.orig 2016-01-25 19:11:14.663651658 +0900
+++ pyparsing.spec 2016-01-25 19:12:49.853875369 +0900
@@ -1,4 +1,4 @@
-%if 0%{?fedora}
+%if 0%{?centos}
%global with_python3 1
%endif
@@ -15,7 +15,7 @@
BuildRequires: dos2unix
BuildRequires: glibc-common
%if 0%{?with_python3}
-BuildRequires: python3-devel
+BuildRequires: python34-devel
%endif # if with_python3
%description
@@ -30,11 +30,11 @@
The package contains documentation for pyparsing.
%if 0%{?with_python3}
-%package -n python3-pyparsing
+%package -n python34-pyparsing
Summary: An object-oriented approach to text processing (Python 3 version)
Group: Development/Libraries
-%description -n python3-pyparsing
+%description -n python34-pyparsing
pyparsing is a module that can be used to easily and directly configure syntax
definitions for any number of text parsing applications.
@@ -90,7 +90,7 @@
%{python_sitelib}/pyparsing.py*
%if 0%{?with_python3}
-%files -n python3-pyparsing
+%files -n python34-pyparsing
%doc CHANGES README LICENSE
%{python3_sitelib}/pyparsing*egg-info
%{python3_sitelib}/pyparsing.py*


@@ -1,11 +1,11 @@
--- ragel.spec.orig 2015-06-18 22:12:28.000000000 +0000
+++ ragel.spec 2016-01-20 14:49:53.980327766 +0000
--- ragel.spec 2014-08-18 11:55:49.000000000 +0000
+++ ragel.spec.1 2015-10-18 12:18:23.000000000 +0000
@@ -1,17 +1,20 @@
-Name: ragel
+Name: scylla-ragel
+%define orig_name ragel
Version: 6.8
Release: 5%{?dist}
Release: 3%{?dist}
Summary: Finite state machine compiler
Group: Development/Tools

dist/redhat/scripts/scylla_run vendored Executable file

@@ -0,0 +1,14 @@
#!/bin/sh -e
args="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info $SCYLLA_ARGS"
if [ "$NETWORK_MODE" = "posix" ]; then
args="$args --network-stack posix"
elif [ "$NETWORK_MODE" = "virtio" ]; then
args="$args --network-stack native"
elif [ "$NETWORK_MODE" = "dpdk" ]; then
args="$args --network-stack native --dpdk-pmd"
fi
export HOME=/var/lib/scylla
exec /usr/bin/scylla $args
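The wrapper above maps `$NETWORK_MODE` to Seastar network-stack flags before exec'ing scylla. The same mapping, restated as a standalone case statement (the function name is ours, for illustration only):

```shell
#!/bin/sh
# Sketch of scylla_run's NETWORK_MODE -> network-stack flag mapping.
network_stack_args() {
    case "$1" in
        posix)  echo "--network-stack posix" ;;
        virtio) echo "--network-stack native" ;;
        dpdk)   echo "--network-stack native --dpdk-pmd" ;;
        *)      echo "" ;;   # unrecognized/empty modes add no flag, as in the script
    esac
}

# As in scylla_run: base logging flags plus whatever the mode contributes.
args="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info $SCYLLA_ARGS"
args="$args $(network_stack_args "$NETWORK_MODE")"
```

Both `virtio` and `dpdk` select the native stack; only `dpdk` additionally enables the poll-mode driver.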


@@ -9,10 +9,9 @@ URL: http://www.scylladb.com/
Source0: %{name}-@@VERSION@@-@@RELEASE@@.tar
BuildRequires: libaio-devel libstdc++-devel cryptopp-devel hwloc-devel numactl-devel libpciaccess-devel libxml2-devel zlib-devel thrift-devel yaml-cpp-devel lz4-devel snappy-devel jsoncpp-devel systemd-devel xz-devel openssl-devel libcap-devel libselinux-devel libgcrypt-devel libgpg-error-devel elfutils-devel krb5-devel libcom_err-devel libattr-devel pcre-devel elfutils-libelf-devel bzip2-devel keyutils-libs-devel xfsprogs-devel make gnutls-devel systemd-devel
%{?fedora:BuildRequires: boost-devel ninja-build ragel antlr3-tool antlr3-C++-devel python3 gcc-c++ libasan libubsan python3-pyparsing}
%{?rhel:BuildRequires: scylla-libstdc++-static scylla-boost-devel scylla-ninja-build scylla-ragel scylla-antlr3-tool scylla-antlr3-C++-devel python34 scylla-gcc-c++ >= 5.1.1, python34-pyparsing}
Requires: systemd-libs hwloc
Conflicts: abrt
%{?fedora:BuildRequires: boost-devel ninja-build ragel antlr3-tool antlr3-C++-devel python3 gcc-c++ libasan libubsan}
%{?rhel:BuildRequires: scylla-libstdc++-static scylla-boost-devel scylla-ninja-build scylla-ragel scylla-antlr3-tool scylla-antlr3-C++-devel python34 scylla-gcc-c++ >= 5.1.1}
Requires: systemd-libs xfsprogs mdadm hwloc
%description
@@ -52,6 +51,7 @@ install -m644 conf/scylla.yaml $RPM_BUILD_ROOT%{_sysconfdir}/scylla/
install -m644 conf/cassandra-rackdc.properties $RPM_BUILD_ROOT%{_sysconfdir}/scylla/
install -m644 dist/redhat/systemd/scylla-server.service $RPM_BUILD_ROOT%{_unitdir}/
install -m755 dist/common/scripts/* $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 dist/redhat/scripts/* $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 seastar/scripts/posix_net_conf.sh $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 seastar/dpdk/tools/dpdk_nic_bind.py $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 build/release/scylla $RPM_BUILD_ROOT%{_bindir}
@@ -140,6 +140,7 @@ rm -rf $RPM_BUILD_ROOT
%{_unitdir}/scylla-server.service
%{_bindir}/scylla
%{_prefix}/lib/scylla/scylla_prepare
%{_prefix}/lib/scylla/scylla_run
%{_prefix}/lib/scylla/scylla_stop
%{_prefix}/lib/scylla/scylla_setup
%{_prefix}/lib/scylla/scylla_coredump_setup


@@ -1,6 +1,6 @@
[Unit]
Description=Scylla Server
After=network.target
After=network.target libvirtd.service
[Service]
Type=notify
@@ -8,11 +8,9 @@ LimitMEMLOCK=infinity
LimitNOFILE=200000
LimitAS=infinity
LimitNPROC=8096
WorkingDirectory=/var/lib/scylla
Environment="HOME=/var/lib/scylla"
EnvironmentFile=/etc/sysconfig/scylla-server
ExecStartPre=/usr/bin/sudo -E /usr/lib/scylla/scylla_prepare
ExecStart=/usr/bin/scylla $SCYLLA_ARGS $SCYLLA_IO
ExecStart=/usr/lib/scylla/scylla_run
ExecStopPost=/usr/bin/sudo -E /usr/lib/scylla/scylla_stop
TimeoutStartSec=900
KillMode=process


@@ -9,19 +9,6 @@ if [ -e debian ] || [ -e build/release ]; then
rm -rf debian build
mkdir build
fi
sudo apt-get -y update
if [ ! -f /usr/bin/git ]; then
sudo apt-get -y install git
fi
if [ ! -f /usr/bin/mk-build-deps ]; then
sudo apt-get -y install devscripts
fi
if [ ! -f /usr/bin/equivs-build ]; then
sudo apt-get -y install equivs
fi
if [ ! -f /usr/bin/add-apt-repository ]; then
sudo apt-get -y install software-properties-common
fi
RELEASE=`lsb_release -r|awk '{print $2}'`
CODENAME=`lsb_release -c|awk '{print $2}'`
@@ -43,24 +30,28 @@ cp dist/ubuntu/changelog.in debian/changelog
sed -i -e "s/@@VERSION@@/$SCYLLA_VERSION/g" debian/changelog
sed -i -e "s/@@RELEASE@@/$SCYLLA_RELEASE/g" debian/changelog
sed -i -e "s/@@CODENAME@@/$CODENAME/g" debian/changelog
cp dist/ubuntu/rules.in debian/rules
cp dist/ubuntu/control.in debian/control
if [ "$RELEASE" = "15.10" ]; then
sed -i -e "s/@@COMPILER@@/g++/g" debian/rules
sed -i -e "s/@@COMPILER@@/g++/g" debian/control
else
sed -i -e "s/@@COMPILER@@/g++-4.9/g" debian/rules
sed -i -e "s/@@COMPILER@@/g++-4.9/g" debian/control
fi
sudo apt-get -y update
./dist/ubuntu/dep/build_dependency.sh
DEP="libyaml-cpp-dev liblz4-dev libsnappy-dev libcrypto++-dev libjsoncpp-dev libaio-dev ragel ninja-build git liblz4-1 libaio1 hugepages software-properties-common libgnutls28-dev libhwloc-dev libnuma-dev libpciaccess-dev"
if [ "$RELEASE" = "14.04" ]; then
DEP="$DEP libboost1.55-dev libboost-program-options1.55.0 libboost-program-options1.55-dev libboost-system1.55.0 libboost-system1.55-dev libboost-thread1.55.0 libboost-thread1.55-dev libboost-test1.55.0 libboost-test1.55-dev libboost-filesystem1.55-dev libboost-filesystem1.55.0 libsnappy1"
else
DEP="$DEP libboost-dev libboost-program-options-dev libboost-system-dev libboost-thread-dev libboost-test-dev libboost-filesystem-dev libboost-filesystem-dev libsnappy1v5"
fi
if [ "$RELEASE" = "15.10" ]; then
DEP="$DEP libjsoncpp0v5 libcrypto++9v5 libyaml-cpp0.5v5 antlr3"
else
DEP="$DEP libjsoncpp0 libcrypto++9 libyaml-cpp0.5"
fi
sudo apt-get -y install $DEP
if [ "$RELEASE" != "15.10" ]; then
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get -y update
sudo apt-get -y install g++-4.9
fi
echo Y | sudo mk-build-deps -i -r
sudo apt-get -y install g++-4.9
debuild -r fakeroot -us -uc


@@ -4,11 +4,11 @@ Homepage: http://scylladb.com
Section: database
Priority: optional
Standards-Version: 3.9.5
Build-Depends: debhelper (>= 9), libyaml-cpp-dev, liblz4-dev, libsnappy-dev, libcrypto++-dev, libjsoncpp-dev, libaio-dev, libthrift-dev, thrift-compiler, antlr3, antlr3-c++-dev, ragel, ninja-build, git, libboost-program-options1.55-dev | libboost-program-options-dev, libboost-filesystem1.55-dev | libboost-filesystem-dev, libboost-system1.55-dev | libboost-system-dev, libboost-thread1.55-dev | libboost-thread-dev, libboost-test1.55-dev | libboost-test-dev, libgnutls28-dev, libhwloc-dev, libnuma-dev, libpciaccess-dev, xfslibs-dev, python3-pyparsing, libxml2-dev, @@COMPILER@@
Build-Depends: debhelper (>= 9), libyaml-cpp-dev, liblz4-dev, libsnappy-dev, libcrypto++-dev, libjsoncpp-dev, libaio-dev, libthrift-dev, thrift-compiler, antlr3, antlr3-c++-dev, ragel, g++-4.9, ninja-build, git, libboost-program-options1.55-dev | libboost-program-options-dev, libboost-filesystem1.55-dev | libboost-filesystem-dev, libboost-system1.55-dev | libboost-system-dev, libboost-thread1.55-dev | libboost-thread-dev, libboost-test1.55-dev | libboost-test-dev, libgnutls28-dev, libhwloc-dev, libnuma-dev, libpciaccess-dev
Package: scylla-server
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, hugepages, adduser, hwloc-nox
Depends: ${shlibs:Depends}, ${misc:Depends}, hugepages, adduser, mdadm, xfsprogs, hwloc-nox
Description: Scylla database server binaries
Scylla is a highly scalable, eventually consistent, distributed,
partitioned row DB.


@@ -5,13 +5,12 @@ SCRIPTS = $(CURDIR)/debian/scylla-server/usr/lib/scylla
SWAGGER = $(SCRIPTS)/swagger-ui
API = $(SCRIPTS)/api
SYSCTL = $(CURDIR)/debian/scylla-server/etc/sysctl.d
SUDOERS = $(CURDIR)/debian/scylla-server/etc/sudoers.d
LIMITS= $(CURDIR)/debian/scylla-server/etc/security/limits.d
LIBS = $(CURDIR)/debian/scylla-server/usr/lib
CONF = $(CURDIR)/debian/scylla-server/etc/scylla
override_dh_auto_build:
./configure.py --disable-xen --enable-dpdk --mode=release --static-stdc++ --compiler=@@COMPILER@@
./configure.py --disable-xen --enable-dpdk --mode=release --static-stdc++ --compiler=g++-4.9
ninja
override_dh_auto_clean:
@@ -26,9 +25,6 @@ override_dh_auto_install:
mkdir -p $(SYSCTL) && \
cp $(CURDIR)/dist/ubuntu/sysctl.d/99-scylla.conf $(SYSCTL)
mkdir -p $(SUDOERS) && \
cp $(CURDIR)/dist/common/sudoers.d/scylla $(SUDOERS)
mkdir -p $(CONF) && \
cp $(CURDIR)/conf/scylla.yaml $(CONF)
cp $(CURDIR)/conf/cassandra-rackdc.properties $(CONF)
@@ -40,8 +36,7 @@ override_dh_auto_install:
cp -r $(CURDIR)/licenses $(DOC)
mkdir -p $(SCRIPTS) && \
cp $(CURDIR)/seastar/scripts/dpdk_nic_bind.py $(SCRIPTS)
cp $(CURDIR)/seastar/scripts/posix_net_conf.sh $(SCRIPTS)
cp $(CURDIR)/seastar/dpdk/tools/dpdk_nic_bind.py $(SCRIPTS)
cp $(CURDIR)/dist/common/scripts/* $(SCRIPTS)
cp $(CURDIR)/dist/ubuntu/scripts/* $(SCRIPTS)


@@ -11,31 +11,23 @@ umask 022
console log
expect stop
setuid scylla
setgid scylla
limit core unlimited unlimited
limit memlock unlimited unlimited
limit nofile 200000 200000
limit as unlimited unlimited
limit nproc 8096 8096
chdir /var/lib/scylla
env HOME=/var/lib/scylla
pre-start script
cd /var/lib/scylla
. /etc/default/scylla-server
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS SCYLLA_IO
sudo /usr/lib/scylla/scylla_prepare
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS
/usr/lib/scylla/scylla_prepare
end script
script
cd /var/lib/scylla
. /etc/default/scylla-server
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS SCYLLA_IO
exec /usr/bin/scylla $SCYLLA_ARGS $SCYLLA_IO
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS
exec /usr/lib/scylla/scylla_run
end script
post-stop script
cd /var/lib/scylla
. /etc/default/scylla-server
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS SCYLLA_IO
sudo /usr/lib/scylla/scylla_stop
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS
/usr/lib/scylla/scylla_stop
end script


@@ -1,8 +1,15 @@
#!/bin/sh -e
RELEASE=`lsb_release -r|awk '{print $2}'`
DEP="build-essential debhelper openjdk-7-jre-headless build-essential autoconf automake pkg-config libtool bison flex libevent-dev libglib2.0-dev libqt4-dev python-dev python-dbg php5-dev devscripts python-support xfslibs-dev"
if [ "$RELEASE" = "14.04" ]; then
DEP="$DEP libboost1.55-dev libboost-test1.55-dev"
else
DEP="$DEP libboost-dev libboost-test-dev"
fi
sudo apt-get -y install $DEP
sudo apt-get install -y gdebi-core
if [ "$RELEASE" = "14.04" ]; then
if [ ! -f build/antlr3_3.5.2-1_all.deb ]; then
rm -rf build/antlr3-3.5.2
@@ -10,7 +17,6 @@ if [ "$RELEASE" = "14.04" ]; then
cp -a dist/ubuntu/dep/antlr3-3.5.2/* build/antlr3-3.5.2
cd build/antlr3-3.5.2
wget http://www.antlr3.org/download/antlr-3.5.2-complete-no-st3.jar
echo Y | sudo mk-build-deps -i -r
debuild -r fakeroot --no-tgz-check -us -uc
cd -
fi
@@ -27,7 +33,6 @@ if [ ! -f build/antlr3-c++-dev_3.5.2-1_all.deb ]; then
cd -
cp -a dist/ubuntu/dep/antlr3-c++-dev-3.5.2/debian build/antlr3-c++-dev-3.5.2
cd build/antlr3-c++-dev-3.5.2
echo Y | sudo mk-build-deps -i -r
debuild -r fakeroot --no-tgz-check -us -uc
cd -
fi
@@ -41,15 +46,8 @@ if [ ! -f build/libthrift0_1.0.0-dev_amd64.deb ]; then
tar xpf thrift-0.9.1.tar.gz
cd thrift-0.9.1
patch -p0 < ../../dist/ubuntu/dep/thrift.diff
echo Y | sudo mk-build-deps -i -r
debuild -r fakeroot --no-tgz-check -us -uc
cd ../..
fi
if [ "$RELEASE" = "14.04" ]; then
sudo gdebi -n build/antlr3_*.deb
fi
sudo gdebi -n build/antlr3-c++-dev_*.deb
sudo gdebi -n build/libthrift0_*.deb
sudo gdebi -n build/libthrift-dev_*.deb
sudo gdebi -n build/thrift-compiler_*.deb
sudo dpkg -i build/*.deb


@@ -1,5 +1,6 @@
--- debian/changelog 2013-08-15 23:04:29.000000000 +0900
+++ ../thrift-0.9.1-ubuntu/debian/changelog 2016-01-15 23:22:11.189982999 +0900
diff -Nur ./debian/changelog ../thrift-0.9.1/debian/changelog
--- ./debian/changelog 2013-08-15 23:04:29.000000000 +0900
+++ ../thrift-0.9.1/debian/changelog 2015-10-29 23:03:25.797937232 +0900
@@ -1,65 +1,4 @@
-thrift (1.0.0-dev) stable; urgency=low
- * update version
@@ -69,8 +70,9 @@
-
- -- Esteve Fernandez <esteve@fluidinfo.com> Thu, 15 Jan 2009 11:34:24 +0100
+ -- Takuya ASADA <syuu@scylladb.com> Wed, 28 Oct 2015 05:11:38 +0900
--- debian/control 2013-08-18 23:58:22.000000000 +0900
+++ ../thrift-0.9.1-ubuntu/debian/control 2016-01-15 23:32:47.373982999 +0900
diff -Nur ./debian/control ../thrift-0.9.1/debian/control
--- ./debian/control 2013-08-18 23:58:22.000000000 +0900
+++ ../thrift-0.9.1/debian/control 2015-10-28 00:54:05.950464999 +0900
@@ -1,12 +1,10 @@
Source: thrift
Section: devel
@@ -84,7 +86,7 @@
+Build-Depends: debhelper (>= 5), build-essential, autoconf,
+ automake, pkg-config, libtool, bison, flex, libboost-dev | libboost1.55-dev,
+ libboost-test-dev | libboost-test1.55-dev, libevent-dev,
+ libglib2.0-dev, libqt4-dev, libssl-dev, python-support
+ libglib2.0-dev, libqt4-dev
Maintainer: Thrift Developer's <dev@thrift.apache.org>
Homepage: http://thrift.apache.org/
Vcs-Git: https://git-wip-us.apache.org/repos/asf/thrift.git
@@ -203,8 +205,9 @@
- build services that work efficiently and seamlessly.
- .
- This package contains the PHP bindings for Thrift.
--- debian/rules 2013-08-15 23:04:29.000000000 +0900
+++ ../thrift-0.9.1-ubuntu/debian/rules 2016-01-15 23:22:11.189982999 +0900
diff -Nur ./debian/rules ../thrift-0.9.1/debian/rules
--- ./debian/rules 2013-08-15 23:04:29.000000000 +0900
+++ ../thrift-0.9.1/debian/rules 2015-10-28 00:54:05.950464999 +0900
@@ -45,18 +45,6 @@
# Compile C (glib) library
$(MAKE) -C $(CURDIR)/lib/c_glib

dist/ubuntu/scripts/scylla_run vendored Executable file

@@ -0,0 +1,19 @@
#!/bin/bash -e
args="--log-to-syslog 1 --log-to-stdout 0 --default-log-level info $SCYLLA_ARGS"
if [ "$NETWORK_MODE" = "posix" ]; then
args="$args --network-stack posix"
elif [ "$NETWORK_MODE" = "virtio" ]; then
args="$args --network-stack native"
elif [ "$NETWORK_MODE" = "dpdk" ]; then
args="$args --network-stack native --dpdk-pmd"
fi
export HOME=/var/lib/scylla
ulimit -c unlimited
ulimit -l unlimited
ulimit -n 200000
ulimit -m unlimited
ulimit -u 8096
exec sudo -E -u $USER /usr/bin/scylla $args


@@ -118,3 +118,26 @@ std::ostream& operator<<(std::ostream& out, const frozen_mutation::printer& pr)
frozen_mutation::printer frozen_mutation::pretty_printer(schema_ptr s) const {
return { *this, std::move(s) };
}
template class db::serializer<frozen_mutation>;
template<>
db::serializer<frozen_mutation>::serializer(const frozen_mutation& mutation)
: _item(mutation), _size(sizeof(uint32_t) /* size */ + mutation.representation().size()) {
}
template<>
void db::serializer<frozen_mutation>::write(output& out, const frozen_mutation& mutation) {
bytes_view v = mutation.representation();
out.write(v);
}
template<>
void db::serializer<frozen_mutation>::read(frozen_mutation& m, input& in) {
m = read(in);
}
template<>
frozen_mutation db::serializer<frozen_mutation>::read(input& in) {
return frozen_mutation(bytes_serializer::read(in));
}


@@ -67,3 +67,14 @@ public:
};
frozen_mutation freeze(const mutation& m);
namespace db {
typedef serializer<frozen_mutation> frozen_mutation_serializer;
template<> serializer<frozen_mutation>::serializer(const frozen_mutation &);
template<> void serializer<frozen_mutation>::write(output&, const type&);
template<> void serializer<frozen_mutation>::read(frozen_mutation&, input&);
template<> frozen_mutation serializer<frozen_mutation>::read(input&);
}


@@ -23,6 +23,8 @@
#include "db/schema_tables.hh"
#include "schema_mutations.hh"
template class db::serializer<frozen_schema>;
frozen_schema::frozen_schema(const schema_ptr& s)
: _data([&s] {
bytes_ostream out;
@@ -44,7 +46,19 @@ frozen_schema::frozen_schema(bytes b)
: _data(std::move(b))
{ }
bytes_view frozen_schema::representation() const
{
return _data;
template<>
db::serializer<frozen_schema>::serializer(const frozen_schema& v)
: _item(v)
, _size(db::serializer<bytes>(v._data).size())
{ }
template<>
void
db::serializer<frozen_schema>::write(output& out, const frozen_schema& v) {
db::serializer<bytes>(v._data).write(out);
}
template<>
frozen_schema db::serializer<frozen_schema>::read(input& in) {
return frozen_schema(db::serializer<bytes>::read(in));
}


@@ -30,8 +30,9 @@
// It's safe to access from another shard by const&.
class frozen_schema {
bytes _data;
private:
frozen_schema(bytes);
public:
explicit frozen_schema(bytes);
frozen_schema(const schema_ptr&);
frozen_schema(frozen_schema&&) = default;
frozen_schema(const frozen_schema&) = default;
@@ -39,5 +40,14 @@ public:
frozen_schema& operator=(frozen_schema&&) = default;
schema_ptr unfreeze() const;
friend class db::serializer<frozen_schema>;
bytes_view representation() const;
};
namespace db {
template<> serializer<frozen_schema>::serializer(const frozen_schema&);
template<> void serializer<frozen_schema>::write(output&, const frozen_schema&);
template<> frozen_schema serializer<frozen_schema>::read(input&);
extern template class serializer<frozen_schema>;
}


@@ -55,10 +55,21 @@ static const std::map<application_state, sstring> application_state_names = {
{application_state::REMOVAL_COORDINATOR, "REMOVAL_COORDINATOR"},
{application_state::INTERNAL_IP, "INTERNAL_IP"},
{application_state::RPC_ADDRESS, "RPC_ADDRESS"},
{application_state::X_11_PADDING, "X_11_PADDING"},
{application_state::SEVERITY, "SEVERITY"},
{application_state::NET_VERSION, "NET_VERSION"},
{application_state::HOST_ID, "HOST_ID"},
{application_state::TOKENS, "TOKENS"},
{application_state::X1, "X1"},
{application_state::X2, "X2"},
{application_state::X3, "X3"},
{application_state::X4, "X4"},
{application_state::X5, "X5"},
{application_state::X6, "X6"},
{application_state::X7, "X7"},
{application_state::X8, "X8"},
{application_state::X9, "X9"},
{application_state::X10, "X10"},
};
std::ostream& operator<<(std::ostream& os, const application_state& m) {


@@ -61,4 +61,42 @@ std::ostream& operator<<(std::ostream& os, const endpoint_state& x) {
return os;
}
void endpoint_state::serialize(bytes::iterator& out) const {
/* serialize the HeartBeatState */
_heart_beat_state.serialize(out);
/* serialize the map of ApplicationState objects */
int32_t app_state_size = _application_state.size();
serialize_int32(out, app_state_size);
for (auto& entry : _application_state) {
const application_state& state = entry.first;
const versioned_value& value = entry.second;
serialize_int32(out, int32_t(state));
value.serialize(out);
}
}
endpoint_state endpoint_state::deserialize(bytes_view& v) {
heart_beat_state hbs = heart_beat_state::deserialize(v);
endpoint_state es = endpoint_state(hbs);
int32_t app_state_size = read_simple<int32_t>(v);
for (int32_t i = 0; i < app_state_size; ++i) {
auto state = static_cast<application_state>(read_simple<int32_t>(v));
auto value = versioned_value::deserialize(v);
es.add_application_state(state, value);
}
return es;
}
size_t endpoint_state::serialized_size() const {
long size = _heart_beat_state.serialized_size();
size += serialize_int32_size;
for (auto& entry : _application_state) {
const versioned_value& value = entry.second;
size += serialize_int32_size;
size += value.serialized_size();
}
return size;
}
}
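The count-prefixed map framing used by `endpoint_state::serialize` above (a 32-bit entry count followed by each key/value pair) can be sketched standalone. The buffer and helper names below are hypothetical stand-ins for Scylla's `serialize_int32`/`read_simple<int32_t>` utilities, and the key/value types are simplified to `int32_t`:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <map>
#include <vector>

// Hypothetical stand-ins for serialize_int32()/read_simple<int32_t>():
// append or consume a host-order int32 on a plain byte vector.
static void put_i32(std::vector<char>& buf, int32_t v) {
    char tmp[sizeof v];
    std::memcpy(tmp, &v, sizeof v);
    buf.insert(buf.end(), tmp, tmp + sizeof v);
}
static int32_t get_i32(const std::vector<char>& buf, size_t& pos) {
    int32_t v;
    std::memcpy(&v, buf.data() + pos, sizeof v);
    pos += sizeof v;
    return v;
}

// Count-prefixed map framing, as in endpoint_state::serialize(): write the
// entry count, then each (key, value) pair in map iteration order.
static void write_map(std::vector<char>& buf, const std::map<int32_t, int32_t>& m) {
    put_i32(buf, int32_t(m.size()));
    for (auto& e : m) {
        put_i32(buf, e.first);
        put_i32(buf, e.second);
    }
}
static std::map<int32_t, int32_t> read_map(const std::vector<char>& buf, size_t& pos) {
    std::map<int32_t, int32_t> m;
    int32_t n = get_i32(buf, pos);
    for (int32_t i = 0; i < n; ++i) {
        int32_t k = get_i32(buf, pos);
        int32_t v = get_i32(buf, pos);
        m.emplace(k, v);
    }
    return m;
}
```

A round trip through `write_map`/`read_map` reproduces the map and consumes exactly the bytes written, which is the property `serialized_size()` relies on.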


@@ -81,14 +81,6 @@ public:
, _is_alive(true) {
}
endpoint_state(heart_beat_state&& initial_hb_state,
const std::map<application_state, versioned_value>& application_state)
: _heart_beat_state(std::move(initial_hb_state))
,_application_state(application_state)
, _update_timestamp(clk::now())
, _is_alive(true) {
}
heart_beat_state& get_heart_beat_state() {
return _heart_beat_state;
}
@@ -149,6 +141,13 @@ public:
}
friend std::ostream& operator<<(std::ostream& os, const endpoint_state& x);
// The following replaces EndpointStateSerializer from the Java code
void serialize(bytes::iterator& out) const;
static endpoint_state deserialize(bytes_view& v);
size_t serialized_size() const;
};
} // gms


@@ -65,7 +65,7 @@ private:
// because everyone seems pretty accustomed to the default of 8, and users who have
// already tuned their phi_convict_threshold for their own environments won't need to
// change.
static constexpr double PHI_FACTOR{M_LOG10El};
static constexpr double PHI_FACTOR{1.0 / std::log(10.0)};
public:
arrival_window(int size)
@@ -102,8 +102,7 @@ private:
// because everyone seems pretty accustomed to the default of 8, and users who have
// already tuned their phi_convict_threshold for their own environments won't need to
// change.
static constexpr double PHI_FACTOR{M_LOG10El};
static constexpr double PHI_FACTOR{1.0 / std::log(10.0)}; // 0.434...
std::map<inet_address, arrival_window> _arrival_samples;
std::list<i_failure_detection_event_listener*> _fd_evnt_listeners;
double _phi = 8;
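The hunk above replaces the glibc extension constant `M_LOG10El` (the `long double` value of log10(e)) with the arithmetically identical expression `1.0 / std::log(10.0)`, i.e. 1/ln(10), roughly 0.4343, so the phi computation is unchanged. One caveat: `std::log` is not `constexpr` in standard C++, so the `constexpr` initializer relies on GCC evaluating the builtin at compile time. A minimal check of the equivalence (the helper name is ours):

```cpp
#include <cassert>
#include <cmath>

// 1/ln(10) equals log10(e) (the value of M_LOG10E/M_LOG10El), so the
// replacement expression preserves PHI_FACTOR numerically.
inline double phi_factor() {
    return 1.0 / std::log(10.0);
}
```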


@@ -97,6 +97,50 @@ public:
friend inline std::ostream& operator<<(std::ostream& os, const gossip_digest& d) {
return os << d._endpoint << ":" << d._generation << ":" << d._max_version;
}
// The following replaces GossipDigestSerializer from the Java code
void serialize(bytes::iterator& out) const {
_endpoint.serialize(out);
serialize_int32(out, _generation);
serialize_int32(out, _max_version);
}
static gossip_digest deserialize(bytes_view& v) {
auto endpoint = inet_address::deserialize(v);
auto generation = read_simple<int32_t>(v);
auto max_version = read_simple<int32_t>(v);
return gossip_digest(endpoint, generation, max_version);
}
size_t serialized_size() const {
return _endpoint.serialized_size() + serialize_int32_size + serialize_int32_size;
}
}; // class gossip_digest
// serialization helper for std::vector<gossip_digest>
class gossip_digest_serialization_helper {
public:
static void serialize(bytes::iterator& out, const std::vector<gossip_digest>& digests) {
serialize_int32(out, int32_t(digests.size()));
for (auto& digest : digests) {
digest.serialize(out);
}
}
static std::vector<gossip_digest> deserialize(bytes_view& v) {
int32_t size = read_simple<int32_t>(v);
std::vector<gossip_digest> digests;
for (int32_t i = 0; i < size; ++i)
digests.push_back(gossip_digest::deserialize(v));
return digests;
}
static size_t serialized_size(const std::vector<gossip_digest>& digests) {
size_t size = serialize_int32_size;
for (auto& digest : digests)
size += digest.serialized_size();
return size;
}
};
} // namespace gms


@@ -54,4 +54,44 @@ std::ostream& operator<<(std::ostream& os, const gossip_digest_ack& ack) {
return os << "}";
}
void gossip_digest_ack::serialize(bytes::iterator& out) const {
// 1) Digest
gossip_digest_serialization_helper::serialize(out, _digests);
// 2) Map size
serialize_int32(out, int32_t(_map.size()));
// 3) Map contents
for (auto& entry : _map) {
const inet_address& ep = entry.first;
const endpoint_state& st = entry.second;
ep.serialize(out);
st.serialize(out);
}
}
gossip_digest_ack gossip_digest_ack::deserialize(bytes_view& v) {
// 1) Digest
std::vector<gossip_digest> _digests = gossip_digest_serialization_helper::deserialize(v);
// 2) Map size
int32_t map_size = read_simple<int32_t>(v);
// 3) Map contents
std::map<inet_address, endpoint_state> _map;
for (int32_t i = 0; i < map_size; ++i) {
inet_address ep = inet_address::deserialize(v);
endpoint_state st = endpoint_state::deserialize(v);
_map.emplace(std::move(ep), std::move(st));
}
return gossip_digest_ack(std::move(_digests), std::move(_map));
}
size_t gossip_digest_ack::serialized_size() const {
size_t size = gossip_digest_serialization_helper::serialized_size(_digests);
size += serialize_int32_size;
for (auto& entry : _map) {
const inet_address& ep = entry.first;
const endpoint_state& st = entry.second;
size += ep.serialized_size() + st.serialized_size();
}
return size;
}
} // namespace gms


@@ -72,6 +72,13 @@ public:
return _map;
}
// The following replaces GossipDigestAckSerializer from the Java code
void serialize(bytes::iterator& out) const;
static gossip_digest_ack deserialize(bytes_view& v);
size_t serialized_size() const;
friend std::ostream& operator<<(std::ostream& os, const gossip_digest_ack& ack);
};


@@ -49,4 +49,39 @@ std::ostream& operator<<(std::ostream& os, const gossip_digest_ack2& ack2) {
return os << "}";
}
void gossip_digest_ack2::serialize(bytes::iterator& out) const {
// 1) Map size
serialize_int32(out, int32_t(_map.size()));
// 2) Map contents
for (auto& entry : _map) {
const inet_address& ep = entry.first;
const endpoint_state& st = entry.second;
ep.serialize(out);
st.serialize(out);
}
}
gossip_digest_ack2 gossip_digest_ack2::deserialize(bytes_view& v) {
// 1) Map size
int32_t map_size = read_simple<int32_t>(v);
// 2) Map contents
std::map<inet_address, endpoint_state> _map;
for (int32_t i = 0; i < map_size; ++i) {
inet_address ep = inet_address::deserialize(v);
endpoint_state st = endpoint_state::deserialize(v);
_map.emplace(std::move(ep), std::move(st));
}
return gossip_digest_ack2(std::move(_map));
}
size_t gossip_digest_ack2::serialized_size() const {
size_t size = serialize_int32_size;
for (auto& entry : _map) {
const inet_address& ep = entry.first;
const endpoint_state& st = entry.second;
size += ep.serialized_size() + st.serialized_size();
}
return size;
}
} // namespace gms


@@ -69,6 +69,13 @@ public:
return _map;
}
// The following replaces GossipDigestAck2Serializer from the Java code
void serialize(bytes::iterator& out) const;
static gossip_digest_ack2 deserialize(bytes_view& v);
size_t serialized_size() const;
friend std::ostream& operator<<(std::ostream& os, const gossip_digest_ack2& ack2);
};


@@ -50,4 +50,22 @@ std::ostream& operator<<(std::ostream& os, const gossip_digest_syn& syn) {
return os << "}";
}
void gossip_digest_syn::serialize(bytes::iterator& out) const {
serialize_string(out, _cluster_id);
serialize_string(out, _partioner);
gossip_digest_serialization_helper::serialize(out, _digests);
}
gossip_digest_syn gossip_digest_syn::deserialize(bytes_view& v) {
sstring cluster_id = read_simple_short_string(v);
sstring partioner = read_simple_short_string(v);
std::vector<gossip_digest> digests = gossip_digest_serialization_helper::deserialize(v);
return gossip_digest_syn(cluster_id, partioner, std::move(digests));
}
size_t gossip_digest_syn::serialized_size() const {
return serialize_string_size(_cluster_id) + serialize_string_size(_partioner) +
gossip_digest_serialization_helper::serialized_size(_digests);
}
} // namespace gms


@@ -72,18 +72,17 @@ public:
return _partioner;
}
sstring get_cluster_id() const {
return cluster_id();
}
sstring get_partioner() const {
return partioner();
}
std::vector<gossip_digest> get_gossip_digests() const {
return _digests;
}
// The following replaces GossipDigestSynSerializer from the Java code
void serialize(bytes::iterator& out) const;
static gossip_digest_syn deserialize(bytes_view& v);
size_t serialized_size() const;
friend std::ostream& operator<<(std::ostream& os, const gossip_digest_syn& syn);
};


@@ -233,7 +233,7 @@ future<> gossiper::handle_ack_msg(msg_addr id, gossip_digest_ack ack_msg) {
}
void gossiper::init_messaging_service_handler() {
ms().register_gossip_echo([] {
ms().register_echo([] {
return smp::submit_to(0, [] {
auto& gossiper = gms::get_local_gossiper();
gossiper.set_last_processed_message_at();
@@ -279,7 +279,7 @@ void gossiper::init_messaging_service_handler() {
void gossiper::uninit_messaging_service_handler() {
auto& ms = net::get_local_messaging_service();
ms.unregister_gossip_echo();
ms.unregister_echo();
ms.unregister_gossip_shutdown();
ms.unregister_gossip_digest_syn();
ms.unregister_gossip_digest_ack2();
@@ -409,14 +409,8 @@ future<> gossiper::apply_state_locally(const std::map<inet_address, endpoint_sta
// Runs inside seastar::async context
void gossiper::remove_endpoint(inet_address endpoint) {
// do subscribers first so anything in the subscriber that depends on gossiper state won't get confused
// We cannot run on_remove callbacks here because on_remove in
// storage_service might take the gossiper::timer_callback_lock
seastar::async([this, endpoint] {
_subscribers.for_each([endpoint] (auto& subscriber) {
subscriber->on_remove(endpoint);
});
}).handle_exception([] (auto ep) {
logger.warn("Fail to call on_remove callback: {}", ep);
_subscribers.for_each([endpoint] (auto& subscriber) {
subscriber->on_remove(endpoint);
});
if(_seeds.count(endpoint)) {
@@ -484,7 +478,7 @@ void gossiper::do_status_check() {
}
void gossiper::run() {
_callback_running = seastar::async([this, g = this->shared_from_this()] {
seastar::async([this, g = this->shared_from_this()] {
logger.trace("=== Gossip round START");
//wait on messaging service to start listening
@@ -594,10 +588,7 @@ void gossiper::run() {
logger.trace("ep={}, eps={}", x.first, x.second);
}
}
if (_enabled) {
_scheduled_gossip_task.arm(INTERVAL);
}
return make_ready_future<>();
_scheduled_gossip_task.arm(INTERVAL);
});
}
@@ -671,7 +662,8 @@ void gossiper::convict(inet_address endpoint, double phi) {
return;
}
auto& state = it->second;
logger.debug("Convicting {} with status {} - alive {}", endpoint, get_gossip_status(state), state.is_alive());
// FIXME: Add getGossipStatus
// logger.debug("Convicting {} with status {} - alive {}", endpoint, getGossipStatus(epState), state.is_alive());
if (!state.is_alive()) {
return;
}
@@ -1057,7 +1049,7 @@ void gossiper::mark_alive(inet_address addr, endpoint_state& local_state) {
msg_addr id = get_msg_addr(addr);
logger.trace("Sending a EchoMessage to {}", id);
auto ok = make_shared<bool>(false);
ms().send_gossip_echo(id).then_wrapped([this, id, ok] (auto&& f) mutable {
ms().send_echo(id).then_wrapped([this, id, ok] (auto&& f) mutable {
try {
f.get();
logger.trace("Got EchoMessage Reply");
@@ -1121,7 +1113,7 @@ void gossiper::handle_major_state_change(inet_address ep, const endpoint_state&
logger.info("Node {} is now part of the cluster", ep);
}
}
logger.trace("Adding endpoint state for {}, status = {}", ep, get_gossip_status(eps));
logger.trace("Adding endpoint state for {}", ep);
endpoint_state_map[ep] = eps;
auto& ep_state = endpoint_state_map.at(ep);
@@ -1438,13 +1430,12 @@ future<> gossiper::do_stop_gossiping() {
return make_ready_future<>();
}).get();
}
auto& cfg = service::get_local_storage_service().db().local().get_config();
sleep(std::chrono::milliseconds(cfg.shutdown_announce_in_ms())).get();
// FIXME: Integer.getInteger("cassandra.shutdown_announce_in_ms", 2000)
sleep(INTERVAL * 2).get();
} else {
logger.warn("No local state or state is in silent shutdown, not announcing shutdown");
}
_scheduled_gossip_task.cancel();
_callback_running.get();
get_gossiper().invoke_on_all([] (gossiper& g) {
if (engine().cpu_id() == 0) {
get_local_failure_detector().unregister_failure_detection_event_listener(&g);


@@ -99,7 +99,6 @@ private:
bool _enabled = false;
std::set<inet_address> _seeds_from_config;
sstring _cluster_name;
future<> _callback_running = make_ready_future<>();
public:
sstring get_cluster_name();
sstring get_partitioner_name();


@@ -90,6 +90,22 @@ public:
friend inline std::ostream& operator<<(std::ostream& os, const heart_beat_state& h) {
return os << "{ generation = " << h._generation << ", version = " << h._version << " }";
}
// The following replaces HeartBeatStateSerializer from the Java code
void serialize(bytes::iterator& out) const {
serialize_int32(out, _generation);
serialize_int32(out, _version);
}
static heart_beat_state deserialize(bytes_view& v) {
auto generation = read_simple<int32_t>(v);
auto version = read_simple<int32_t>(v);
return heart_beat_state(generation, version);
}
size_t serialized_size() const {
return serialize_int32_size + serialize_int32_size;
}
};
} // gms


@@ -37,9 +37,6 @@ public:
inet_address(int32_t ip)
: _addr(uint32_t(ip)) {
}
explicit inet_address(uint32_t ip)
: _addr(ip) {
}
inet_address(net::ipv4_address&& addr) : _addr(std::move(addr)) {}
const net::ipv4_address& addr() const {
@@ -60,6 +57,19 @@ public:
bool is_broadcast_address() {
return _addr == net::ipv4::broadcast_address();
}
void serialize(bytes::iterator& out) const {
int8_t inet_address_size = sizeof(inet_address);
serialize_int8(out, inet_address_size);
serialize_int32(out, _addr.ip);
}
static inet_address deserialize(bytes_view& v) {
int8_t inet_address_size = read_simple<int8_t>(v);
assert(inet_address_size == sizeof(inet_address));
return inet_address(read_simple<int32_t>(v));
}
size_t serialized_size() const {
return serialize_int8_size + serialize_int32_size;
}
friend inline bool operator==(const inet_address& x, const inet_address& y) {
return x._addr == y._addr;
}


@@ -36,7 +36,6 @@
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "gms/versioned_value.hh"
#include "message/messaging_service.hh"
namespace gms {
@@ -53,8 +52,19 @@ constexpr const char* versioned_value::HIBERNATE;
constexpr const char* versioned_value::SHUTDOWN;
constexpr const char* versioned_value::REMOVAL_COORDINATOR;
versioned_value versioned_value::factory::network_version() {
return versioned_value(sprint("%s",net::messaging_service::current_version));
void versioned_value::serialize(bytes::iterator& out) const {
serialize_string(out, value);
serialize_int32(out, version);
}
versioned_value versioned_value::deserialize(bytes_view& v) {
auto value = read_simple_short_string(v);
auto version = read_simple<int32_t>(v);
return versioned_value(std::move(value), version);
}
size_t versioned_value::serialized_size() const {
return serialize_string_size(value) + serialize_int32_size;
}
}


@@ -46,6 +46,7 @@
#include "gms/inet_address.hh"
#include "dht/i_partitioner.hh"
#include "to_string.hh"
#include "message/messaging_service.hh"
#include "version.hh"
#include <unordered_set>
#include <vector>
@@ -95,7 +96,7 @@ public:
value == other.value;
}
public:
private:
versioned_value(const sstring& value, int version = version_generator::get_next_version())
: version(version), value(value) {
#if 0
@@ -111,10 +112,8 @@ public:
: version(version), value(std::move(value)) {
}
versioned_value()
: version(-1) {
}
public:
int compare_to(const versioned_value &value) {
return version - value.version;
}
@@ -229,7 +228,9 @@ public:
return versioned_value(version::release());
}
versioned_value network_version();
versioned_value network_version() {
return versioned_value(sprint("%s",net::messaging_service::current_version));
}
versioned_value internal_ip(const sstring &private_ip) {
return versioned_value(private_ip);
@@ -239,6 +240,14 @@ public:
return versioned_value(to_sstring(value));
}
};
// The following replaces VersionedValueSerializer from the Java code
public:
void serialize(bytes::iterator& out) const;
static versioned_value deserialize(bytes_view& v);
size_t serialized_size() const;
}; // class versioned_value
} // namespace gms


@@ -1,329 +0,0 @@
#!/usr/bin/python3
#
# Copyright 2016 ScyllaDB
#
#
# This file is part of Scylla.
#
# Scylla is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Scylla is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Scylla. If not, see <http://www.gnu.org/licenses/>.
import json
import sys
import re
import glob
import argparse
import os
from string import Template
import pyparsing as pp
EXTENSION = '.idl.hh'
READ_BUFF = 'input_buffer'
WRITE_BUFF = 'output_buffer'
SERIALIZER = 'serialize'
DESERIALIZER = 'deserialize'
SETSIZE = 'set_size'
SIZETYPE = 'size_type'
parser = argparse.ArgumentParser(description="""Generate serializer helper function""")
parser.add_argument('-o', help='Output file', default='')
parser.add_argument('-f', help='input file', default='')
parser.add_argument('--ns', help="""namespace, when set function will be created
under the given namespace""", default='')
parser.add_argument('file', nargs='*', help="combine one or more file names for the genral include files")
config = parser.parse_args()
def fprint(f, *args):
for arg in args:
f.write(arg)
def fprintln(f, *args):
for arg in args:
f.write(arg)
f.write('\n')
def print_cw(f):
fprintln(f, """
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* This is an auto-generated code, do not modify directly.
*/
#pragma once
""")
def parse_file(file_name):
first = pp.Word(pp.alphas + "_", exact=1)
rest = pp.Word(pp.alphanums + "_")
number = pp.Word(pp.nums)
identifier = pp.Combine(first + pp.Optional(rest))
lbrace = pp.Literal('{').suppress()
rbrace = pp.Literal('}').suppress()
cls = pp.Literal('class')
colon = pp.Literal(":")
semi = pp.Literal(";").suppress()
langle = pp.Literal("<")
rangle = pp.Literal(">")
equals = pp.Literal("=")
comma = pp.Literal(",")
lparen = pp.Literal("(")
rparen = pp.Literal(")")
lbrack = pp.Literal("[")
rbrack = pp.Literal("]")
mins = pp.Literal("-")
struct = pp.Literal('struct')
template = pp.Literal('template')
final = pp.Literal('final').setResultsName("final")
stub = pp.Literal('stub').setResultsName("stub")
with_colon = pp.Word(pp.alphanums + "_" + ":")
btype = with_colon
type = pp.Forward()
nestedParens = pp.nestedExpr('<', '>')
tmpl = pp.Group(btype + langle.suppress() + pp.Group(pp.delimitedList(type)) + rangle.suppress())
type << (tmpl | btype)
enum_lit = pp.Literal('enum')
enum_class = pp.Group(enum_lit + cls)
ns = pp.Literal("namespace")
enum_init = equals.suppress() + pp.Optional(mins) + number
enum_value = pp.Group(identifier + pp.Optional(enum_init))
enum_values = pp.Group(lbrace + pp.delimitedList(enum_value) + pp.Optional(comma) + rbrace)
content = pp.Forward()
member_name = pp.Combine(pp.Group(identifier + pp.Optional(lparen + rparen)))
attrib = pp.Group(lbrack.suppress() + lbrack.suppress() + pp.SkipTo(']') + rbrack.suppress() + rbrack.suppress())
namespace = pp.Group(ns.setResultsName("type") + identifier.setResultsName("name") + lbrace + pp.Group(pp.OneOrMore(content)).setResultsName("content") + rbrace)
enum = pp.Group(enum_class.setResultsName("type") + identifier.setResultsName("name") + colon.suppress() + identifier.setResultsName("underline_type") + enum_values.setResultsName("enum_values") + pp.Optional(semi).suppress())
default_value = equals.suppress() + pp.SkipTo(';')
class_member = pp.Group(type.setResultsName("type") + member_name.setResultsName("name") + pp.Optional(attrib).setResultsName("attribute") + pp.Optional(default_value).setResultsName("default") + semi.suppress()).setResultsName("member")
template_param = pp.Group(identifier.setResultsName("type") + identifier.setResultsName("name"))
template_def = pp.Group(template + langle + pp.Group(pp.delimitedList(template_param)).setResultsName("params") + rangle)
class_content = pp.Forward()
class_def = pp.Group(pp.Optional(template_def).setResultsName("template") + (cls | struct).setResultsName("type") + with_colon.setResultsName("name") + pp.Optional(final) + pp.Optional(stub) + lbrace + pp.Group(pp.OneOrMore(class_content)).setResultsName("members") + rbrace + pp.Optional(semi))
content << (enum | class_def | namespace)
class_content << (enum | class_def | class_member)
rt = pp.OneOrMore(content)
singleLineComment = "//" + pp.restOfLine
rt.ignore(singleLineComment)
rt.ignore(pp.cStyleComment)
return rt.parseFile(file_name, parseAll=True)
def combine_ns(namespaces):
return "::".join(namespaces)
def open_namespaces(namespaces):
return "".join(map(lambda a: "namespace " + a + " { ", namespaces))
def close_namespaces(namespaces):
return "".join(map(lambda a: "}", namespaces))
def set_namespace(namespaces):
ns = combine_ns(namespaces)
ns_open = open_namespaces(namespaces)
ns_close = close_namespaces(namespaces)
return [ns, ns_open, ns_close]
def declare_class(hout, name, ns_open, ns_close):
clas_def = ns_open + name + ";" + ns_close
fprintln(hout, "\n", clas_def)
def declear_methods(hout, name, template_param = ""):
if config.ns != '':
fprintln(hout, "namespace ", config.ns, " {")
fprintln(hout, Template("""
template <typename Output$tmp_param>
void $ser_func(Output& buf, const $name& v);
template <typename Input$tmp_param>
$name $deser_func(Input& buf, boost::type<$name>);""").substitute({'ser_func': SERIALIZER, 'deser_func' : DESERIALIZER, 'name' : name, 'sizetype' : SIZETYPE, 'tmp_param' : template_param }))
if config.ns != '':
fprintln(hout, "}")
def handle_enum(enum, hout, cout, namespaces , parent_template_param = []):
[ns, ns_open, ns_close] = set_namespace(namespaces)
temp_def = ',' + ", ".join(map(lambda a: a[0] + " " + a[1], parent_template_param)) if parent_template_param else ""
name = enum["name"] if ns == "" else ns + "::" + enum["name"]
declear_methods(hout, name, temp_def)
fprintln(cout, Template("""
template<typename Output$temp_def>
void $ser_func(Output& buf, const $name& v) {
serialize(buf, static_cast<$type>(v));
}
template<typename Input$temp_def>
$name $deser_func(Input& buf, boost::type<$name>) {
return static_cast<$name>(deserialize(buf, boost::type<$type>()));
}""").substitute({'ser_func': SERIALIZER, 'deser_func' : DESERIALIZER, 'name' : name, 'size_type' : SIZETYPE, 'type': enum['underline_type'], 'temp_def' : temp_def}))
def join_template(lst):
return "<" + ", ".join([param_type(l) for l in lst]) + ">"
def param_type(lst):
if isinstance(lst, str):
return lst
if len(lst) == 1:
return lst[0]
return lst[0] + join_template(lst[1])
def is_class(obj):
return obj["type"] == "class" or obj["type"] == "struct"
def is_enum(obj):
try:
return not isinstance(obj["type"], str) and "".join(obj["type"]) == 'enumclass'
except:
return False
def handle_class(cls, hout, cout, namespaces=[], parent_template_param = []):
if "stub" in cls:
return
[ns, ns_open, ns_close] = set_namespace(namespaces)
tpl = "template" in cls
template_param_list = (cls["template"][0]["params"].asList() if tpl else [])
template_param = ", ".join(map(lambda a: a[0] + " " + a[1], template_param_list + parent_template_param)) if (template_param_list + parent_template_param) else ""
template = "template <"+ template_param +">\n" if tpl else ""
    template_class_param = "<" + ",".join(map(lambda a: a[1], template_param_list)) + ">" if tpl else ""
    temp_def = ',' + template_param if template_param != "" else ""
    if ns == "":
        name = cls["name"]
    else:
        name = ns + "::" + cls["name"]
    full_name = name + template_class_param
    for param in cls["members"]:
        if is_class(param):
            handle_class(param, hout, cout, namespaces + [cls["name"] + template_class_param], parent_template_param + template_param_list)
        elif is_enum(param):
            handle_enum(param, hout, cout, namespaces + [cls["name"] + template_class_param], parent_template_param + template_param_list)
    declear_methods(hout, name + template_class_param, temp_def)
    modifier = "final" in cls
    fprintln(cout, Template("""
template<typename Output$temp_def>
void $func(Output& buf, const $name& obj) {""").substitute({'func' : SERIALIZER, 'name' : full_name, 'temp_def': temp_def}))
    if not modifier:
        fprintln(cout, Template(""" $set_size(buf, obj);""").substitute({'func' : SERIALIZER, 'set_size' : SETSIZE, 'name' : name, 'sizetype' : SIZETYPE}))
    for param in cls["members"]:
        if is_class(param) or is_enum(param):
            continue
        fprintln(cout, Template(""" $func(buf, obj.$var);""").substitute({'func' : SERIALIZER, 'var' : param["name"]}))
    fprintln(cout, "}")
    fprintln(cout, Template("""
template<typename Input$temp_def>
$name$temp_param $func(Input& buf, boost::type<$name$temp_param>) {""").substitute({'func' : DESERIALIZER, 'name' : name, 'temp_def': temp_def, 'temp_param' : template_class_param}))
    if not modifier:
        fprintln(cout, Template(""" $size_type size = $func(buf, boost::type<$size_type>());
 Input in = buf.read_substream(size - sizeof($size_type));""").substitute({'func' : DESERIALIZER, 'size_type' : SIZETYPE}))
    else:
        fprintln(cout, """ Input& in = buf;""")
    params = []
    for index, param in enumerate(cls["members"]):
        if is_class(param) or is_enum(param):
            continue
        local_param = "__local_" + str(index)
        if "attribute" in param:
            deflt = param["default"][0] if "default" in param else param["type"] + "()"
            fprintln(cout, Template(""" $typ $local = (in.size()>0) ?
     $func(in, boost::type<$typ>()) : $default;""").substitute({'func' : DESERIALIZER, 'typ': param_type(param["type"]), 'local' : local_param, 'default': deflt}))
        else:
            fprintln(cout, Template(""" $typ $local = $func(in, boost::type<$typ>());""").substitute({'func' : DESERIALIZER, 'typ': param_type(param["type"]), 'local' : local_param}))
        params.append("std::move(" + local_param + ")")
    fprintln(cout, Template("""
    $name$temp_param res {$params};
    return res;
}""").substitute({'name' : name, 'params': ", ".join(params), 'temp_param' : template_class_param}))
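As a small standalone illustration of how the `string.Template` snippets above expand, the sketch below renders the serializer signature template. The `SERIALIZER` value and the class name are assumptions for this example, not values taken from the real script.

```python
from string import Template

SERIALIZER = "serialize"  # assumed placeholder; the real name is defined elsewhere

# Same template shape as the serializer signature emitted by handle_class.
tmpl = Template("""
template<typename Output$temp_def>
void $func(Output& buf, const $name& obj) {""")

out = tmpl.substitute({
    'func': SERIALIZER,
    'name': 'utils::UUID',  # hypothetical class being serialized
    'temp_def': '',         # empty for a non-template class
})
print(out)
```

Note that `substitute` leaves the literal `Output` prefix intact and only replaces the `$`-prefixed placeholders, which is why the generated signature reads `template<typename Output>` when `temp_def` is empty.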

def handle_objects(tree, hout, cout, namespaces=[]):
    for obj in tree:
        if is_class(obj):
            handle_class(obj, hout, cout, namespaces)
        elif is_enum(obj):
            handle_enum(obj, hout, cout, namespaces)
        elif obj["type"] == "namespace":
            handle_objects(obj["content"], hout, cout, namespaces + [obj["name"]])
        else:
            print("unknown type ", obj, obj["type"])
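The recursive dispatch in `handle_objects` can be sketched in isolation as below. The node dictionaries and the `is_class`/`is_enum` predicates here are simplified assumptions for illustration, not the real parser output.

```python
# Simplified stand-ins for the real predicates (assumed node shapes).
def is_class(obj):
    return obj.get("type") == "class"

def is_enum(obj):
    return obj.get("type") == "enum"

def walk(tree, namespaces=None):
    """Yield (kind, qualified_name) for every class/enum, recursing into namespaces."""
    namespaces = namespaces or []
    for obj in tree:
        if is_class(obj) or is_enum(obj):
            yield obj["type"], "::".join(namespaces + [obj["name"]])
        elif obj["type"] == "namespace":
            yield from walk(obj["content"], namespaces + [obj["name"]])

tree = [
    {"type": "namespace", "name": "utils", "content": [
        {"type": "class", "name": "UUID"},
    ]},
    {"type": "enum", "name": "mutation_kind"},
]
print(list(walk(tree)))
# → [('class', 'utils::UUID'), ('enum', 'mutation_kind')]
```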

def load_file(name):
    if config.o:
        cout = open(config.o.replace('.hh', '.impl.hh'), "w+")
        hout = open(config.o, "w+")
    else:
        cout = open(name.replace(EXTENSION, '.dist.impl.hh'), "w+")
        hout = open(name.replace(EXTENSION, '.dist.hh'), "w+")
    print_cw(hout)
    fprintln(hout, """
/*
 * The generated code should be included in a header file after
 * the object definition.
 */
""")
    print_cw(cout)
    if config.ns != '':
        fprintln(cout, "namespace ", config.ns, " {")
    data = parse_file(name)
    if data:
        handle_objects(data, hout, cout)
    if config.ns != '':
        fprintln(cout, "}")
    cout.close()
    hout.close()
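The default output naming used by `load_file` (when `config.o` is not set) can be sketched as below. `EXTENSION` is assumed to be `'.idl.hh'` here purely for the example; the real value is defined elsewhere in the script.

```python
EXTENSION = '.idl.hh'  # assumption for this sketch

def output_names(name):
    """Mirror load_file's default naming:
    foo.idl.hh -> (declarations header, implementation header)."""
    return (name.replace(EXTENSION, '.dist.hh'),
            name.replace(EXTENSION, '.dist.impl.hh'))

print(output_names('mutation.idl.hh'))
# → ('mutation.dist.hh', 'mutation.dist.impl.hh')
```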

def general_include(files):
    name = config.o if config.o else "serializer.dist.hh"
    cout = open(name.replace('.hh', '.impl.hh'), "w+")
    hout = open(name, "w+")
    print_cw(cout)
    print_cw(hout)
    for n in files:
        fprintln(hout, '#include "' + n + '"')
        fprintln(cout, '#include "' + n.replace(".dist.hh", '.dist.impl.hh') + '"')
    cout.close()
    hout.close()

if config.file:
    general_include(config.file)
elif config.f != '':
    load_file(config.f)
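The umbrella headers written by `general_include` pair each generated `.dist.hh` with its `.dist.impl.hh` counterpart; a minimal sketch of that pairing (the file name is a hypothetical example):

```python
def include_lines(files):
    """hout gets the generated .dist.hh headers;
    cout gets their .dist.impl.hh twins."""
    hdr = ['#include "' + n + '"' for n in files]
    impl = ['#include "' + n.replace(".dist.hh", ".dist.impl.hh") + '"' for n in files]
    return hdr, impl

hdr, impl = include_lines(["mutation.dist.hh"])
print(hdr[0])   # → #include "mutation.dist.hh"
print(impl[0])  # → #include "mutation.dist.impl.hh"
```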


@@ -1,24 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
class frozen_mutation final {
bytes representation();
};


@@ -1,24 +0,0 @@
/*
* Copyright 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
class frozen_schema final {
bytes representation();
};
