scylladb/main.cc
Vlad Zolotarov 3520d4de10 locator: introduce a global distributed<snitch_ptr> i_endpoint_snitch::snitch_instance()
Snitch class semantics are defined to be per-node. To implement that we
introduce a static member in the i_endpoint_snitch class that holds the
pointer to the relevant snitch class instance.

Since the snitch contents are not always purely const, the instance has
to be per-shard; therefore we make it a "distributed". All the I/O takes
place on a single shard, and if there are changes they are propagated to
the rest of the shards.

The application is responsible for initializing this distributed<snitch_ptr>
before it is used for the first time.

This patch effectively reverts most of the "locator: futurize
snitch creation" patch (a2594015f9), namely the part that modified the
code creating the snitch instance. Since the snitch is now created
explicitly by the application, and all the rest of the code simply
assumes that the above global is initialized, those changes are no
longer needed and the code gets back to being as nice and simple as it
was before that patch.

So, to summarize, this patch does the following:
   - Reverts the changes introduced by a2594015f9 that made every creation of a
     replication strategy also create a snitch and store it in the strategy
     object. More specifically, methods like keyspace::create_replication_strategy()
     no longer return a future<>, which significantly simplifies the calling code.
   - Introduces the global distributed<snitch_ptr> object:
      - It belongs to the i_endpoint_snitch class.
      - A corresponding interface was added to access both the global and the
        shard-local instances.
      - locator::abstract_replication_strategy::create_replication_strategy() no
        longer accepts snitch_ptr&&; it gets the corresponding shard-local
        instance of the snitch and passes it to the replication strategy's
        constructor by itself.
      - The existing snitch infrastructure was adjusted to the new semantics:
         - create_snitch() was modified to create and start all per-shard snitch
           instances and update the global variable.
         - A static i_endpoint_snitch::stop_snitch() function was introduced that
           properly stops the global distributed snitch.
         - Code was added to gossiping_property_file_snitch that distributes
           changed data to all per-shard snitch objects.
         - All existing snitch classes were made to properly maintain their state
           so that they can shut down cleanly.
         - Both urchin and cql_query_test were patched to initialize a snitch
           instance before all other services.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>

New in v6:
   - Rebased to the current master.
   - Extended the commit message a little (the summary).

New in v5:
   - database::create_keyspace(): added a missing _keyspaces.emplace()

New in v4:
   - Kept database::create_keyspace() returning future<> per Glauber's request,
     and added a description to this method noting that it needs to change when
     Glauber adds his bits that require this interface.
2015-06-22 23:18:31 +03:00


/*
 * Copyright 2014 Cloudius Systems
 */

#include "database.hh"
#include "core/app-template.hh"
#include "core/distributed.hh"
#include "thrift/server.hh"
#include "transport/server.hh"
#include "http/httpd.hh"
#include "api/api.hh"
#include "db/config.hh"
#include "message/messaging_service.hh"
#include "service/storage_service.hh"
#include "dns.hh"
#include <cstdio>

namespace bpo = boost::program_options;

static future<>
read_config(bpo::variables_map& opts, db::config& cfg) {
    if (opts.count("options-file") == 0) {
        return make_ready_future<>();
    }
    return cfg.read_from_file(opts["options-file"].as<sstring>());
}

int main(int ac, char** av) {
    std::setvbuf(stdout, nullptr, _IOLBF, 1000);
    app_template app;
    auto opt_add = app.add_options();
    auto cfg = make_lw_shared<db::config>();
    cfg->add_options(opt_add)
        ("api-port", bpo::value<uint16_t>()->default_value(10000), "Http Rest API port")
        // TODO : default, always read?
        ("options-file", bpo::value<sstring>(), "cassandra.yaml file to read options from")
        ;
    distributed<database> db;
    distributed<cql3::query_processor> qp;
    distributed<service::storage_proxy> proxy;
    api::http_context ctx(db);
    return app.run(ac, av, [&] {
        auto&& opts = app.configuration();
        return read_config(opts, *cfg).then([&cfg, &db, &qp, &proxy, &ctx, &opts]() {
            uint16_t thrift_port = cfg->rpc_port();
            uint16_t cql_port = cfg->native_transport_port();
            uint16_t api_port = opts["api-port"].as<uint16_t>();
            sstring listen_address = cfg->listen_address();
            sstring rpc_address = cfg->rpc_address();
            auto seed_provider = cfg->seed_provider();
            using namespace locator;
            // The snitch has to be initialized before any other service.
            return i_endpoint_snitch::create_snitch(cfg->endpoint_snitch()).then([&db, cfg] () {
                return service::init_storage_service().then([&db, cfg] {
                    return db.start(std::move(*cfg)).then([&db] {
                        engine().at_exit([&db] {
                            return db.stop().then([] {
                                return i_endpoint_snitch::stop_snitch();
                            });
                        });
                        return db.invoke_on_all(&database::init_from_data_directory);
                    });
                });
            }).then([listen_address, seed_provider] {
                return net::init_messaging_service(listen_address, seed_provider);
            }).then([&proxy, &db] {
                return proxy.start(std::ref(db)).then([&proxy] {
                    engine().at_exit([&proxy] { return proxy.stop(); });
                });
            }).then([&db, &proxy, &qp] {
                return qp.start(std::ref(proxy), std::ref(db)).then([&qp] {
                    engine().at_exit([&qp] { return qp.stop(); });
                });
            }).then([rpc_address] {
                return dns::gethostbyname(rpc_address);
            }).then([&db, &proxy, &qp, cql_port, thrift_port] (dns::hostent e) {
                auto rpc_address = e.addresses[0].in.s_addr;
                auto cserver = new distributed<cql_server>;
                cserver->start(std::ref(proxy), std::ref(qp)).then([server = std::move(cserver), cql_port, rpc_address] () mutable {
                    server->invoke_on_all(&cql_server::listen, ipv4_addr{rpc_address, cql_port});
                }).then([cql_port] {
                    std::cout << "CQL server listening on port " << cql_port << " ...\n";
                });
                auto tserver = new distributed<thrift_server>;
                tserver->start(std::ref(db)).then([server = std::move(tserver), thrift_port, rpc_address] () mutable {
                    server->invoke_on_all(&thrift_server::listen, ipv4_addr{rpc_address, thrift_port});
                }).then([thrift_port] {
                    std::cout << "Thrift server listening on port " << thrift_port << " ...\n";
                });
            }).then([&db, api_port, &ctx] {
                ctx.http_server.start().then([api_port, &ctx] {
                    return set_server(ctx);
                }).then([&ctx, api_port] {
                    ctx.http_server.listen(api_port);
                }).then([api_port] {
                    std::cout << "Seastar HTTP server listening on port " << api_port << " ...\n";
                });
            }).or_terminate();
        });
    });
}