One version returns only the ranges:
std::vector<range<token>>
Another version returns a map:
std::unordered_map<range<token>, std::unordered_set<inet_address>>
which is converted from a
std::unordered_multimap<range<token>, inet_address>
They are needed by token_metadata::pending_endpoints_for,
storage_service::get_all_ranges_with_strict_sources_for and
storage_service::decommission.
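A minimal sketch of the second conversion, with generic Range and Endpoint types standing in for range<token> and inet_address (the real helper names and signatures in the Scylla code may differ):

    #include <unordered_map>
    #include <unordered_set>

    // Illustrative only: collapse a multimap of (range, endpoint) pairs into
    // a map from each range to the set of endpoints that own it.  Range and
    // Endpoint stand in for range<token> and inet_address, both assumed
    // hashable.
    template <typename Range, typename Endpoint>
    std::unordered_map<Range, std::unordered_set<Endpoint>>
    to_range_map(const std::unordered_multimap<Range, Endpoint>& mm) {
        std::unordered_map<Range, std::unordered_set<Endpoint>> ret;
        for (const auto& e : mm) {
            ret[e.first].insert(e.second);
        }
        return ret;
    }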
Given the current token_metadata and the new token which will be
inserted into the ring after bootstrap, calculate the ranges this new
node will be responsible for.
This is needed by boot_strapper::bootstrap().
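A minimal sketch of the idea, with tokens modelled as plain integers and the ring as a sorted set (the real boot_strapper code works on token_metadata and wrapping ranges, so the details differ):

    #include <set>
    #include <utility>

    // Illustrative only: for a token t about to be inserted, the new node
    // becomes responsible for the range (predecessor-of-t, t], wrapping
    // around to the last token of the ring when t is the smallest.
    std::pair<long, long> range_for_new_token(const std::set<long>& ring, long t) {
        if (ring.empty()) {
            return {t, t};                        // degenerate: whole ring
        }
        auto it = ring.lower_bound(t);
        long pred = (it == ring.begin()) ? *ring.rbegin() : *std::prev(it);
        return {pred, t};                         // open on the left: (pred, t]
    }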
"This series adds EC2Snich.
Since both GossipingPropertyFileSnitch and EC2SnitchXXX snitches family
are using the same property file it was logical to share the corresponding
code. Most of this series does just that... "
While trying to debug an unrelated bug, I was annoyed by the fact that parsing
caching options keeps throwing exceptions all the time. Those exceptions have no
reason to happen: we try to convert the value to a number, and if we fail we
fall back to one of the two blessed strings.
We could just as easily test for those strings beforehand and avoid all of
that.
While we're at it, the exception message should show the value of "r", not "k".
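A minimal sketch of the intended order, assuming the two blessed strings are "ALL" and "NONE" and using an assumed sentinel for "ALL" (the real caching-options parser is structured differently):

    #include <cstdlib>
    #include <string>

    // Illustrative only: test for the accepted strings first, and only then
    // attempt the numeric conversion, so the common path never throws.
    static bool parse_cache_amount(const std::string& v, long& out) {
        if (v == "ALL")  { out = -1; return true; }   // assumed sentinel
        if (v == "NONE") { out = 0;  return true; }
        char* end = nullptr;
        out = std::strtol(v.c_str(), &end, 10);
        return end != v.c_str() && *end == '\0';      // require a full numeric parse
    }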
Signed-off-by: Glauber Costa <glommer@scylladb.com>
Currently, we are calculating truncated_at during truncate() independently on
each shard. It will work if we're lucky, but it is fairly easy to trigger cases
in which each shard ends up with a slightly different time.
The main problem here is that this time is used as the snapshot name when auto
snapshots are enabled. Prior to my last fixes, this would just generate two
separate directories, which is wrong but not severe.
But after the fix, it means that both shards will wait for one another to
synchronize, and this will hang the database.
Fix this by making sure that the truncation time is calculated before
invoke_on_all in all the places that need it.
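A standalone illustration of the pattern (the real code runs over Seastar shards via invoke_on_all; std::async merely stands in for that here): take the timestamp once and hand the same value to every shard.

    #include <chrono>
    #include <future>
    #include <vector>

    int main() {
        auto truncated_at = std::chrono::system_clock::now();   // taken exactly once
        std::vector<std::future<void>> shards;
        for (int i = 0; i < 4; ++i) {
            // every "shard" captures the identical timestamp, so snapshot
            // names derived from it cannot diverge
            shards.push_back(std::async(std::launch::async, [truncated_at] {
                (void)truncated_at;
            }));
        }
        for (auto& f : shards) {
            f.get();
        }
    }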
Signed-off-by: Glauber Costa <glommer@scylladb.com>
Checks the following:
- That EC2Snitch is able to retrieve the availability zone from EC2.
- That the resulting DC and RACK values are distributed to all
shards.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
This snitch reads the EC2 availability zone and sets the DC
and RACK as follows:
If the availability zone is "us-east-1d", then
DC="us-east" and RACK="1d".
If cassandra-rackdc.properties contains a "dc_suffix" field, then
its value is appended to the DC.
For instance, if dc_suffix=_1_cassandra, then in the example above
DC=us-east_1_cassandra.
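A minimal sketch of that rule (not the actual snitch code; it assumes the availability zone always contains a '-'):

    #include <string>
    #include <utility>

    // Illustrative only: split the availability zone at its last '-', take
    // everything before it as the DC and everything after it as the RACK,
    // then append an optional dc_suffix to the DC.
    std::pair<std::string, std::string>
    az_to_dc_rack(const std::string& az, const std::string& dc_suffix = "") {
        auto pos = az.rfind('-');
        std::string dc = az.substr(0, pos) + dc_suffix;   // "us-east" [+ suffix]
        std::string rack = az.substr(pos + 1);            // "1d"
        return {dc, rack};
    }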
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
This is the configuration file used by GossipingPropertyFileSnitch and
the EC2SnitchXXX family of snitches.
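A hypothetical example of what the file might contain; only the dc_suffix key is taken from the description in this series, while the dc and rack keys are the ones documented for Cassandra's GossipingPropertyFileSnitch and are assumed to apply here as well:

    # cassandra-rackdc.properties (illustrative content)
    dc=us-east
    rack=1d
    # optional, used by the EC2 snitches:
    dc_suffix=_1_cassandra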
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
- Move the property file parsing code into the production_snitch_base class.
- Make the parsing code more general (see the sketch below):
  - Save the parsed keys in a hash table.
  - Check for only two types of errors:
    - A repeated key.
    - An unsupported key (add a set of all supported keys and check
      every parsed key against it).
- Add the production_snitch_base.cc file.
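A rough sketch of the generalized parsing described above (the real code lives in production_snitch_base and its file handling and error reporting are Seastar-based; this standalone version only shows the two checks):

    #include <fstream>
    #include <stdexcept>
    #include <string>
    #include <unordered_map>
    #include <unordered_set>

    // Illustrative only: every "key=value" line goes into a hash table; the
    // only errors checked are a repeated key and an unsupported key.
    std::unordered_map<std::string, std::string>
    parse_property_file(const std::string& path,
                        const std::unordered_set<std::string>& supported_keys) {
        std::unordered_map<std::string, std::string> parsed;
        std::ifstream f(path);
        std::string line;
        while (std::getline(f, line)) {
            if (line.empty() || line[0] == '#') {
                continue;                                  // skip blanks and comments
            }
            auto eq = line.find('=');
            std::string key = line.substr(0, eq);
            std::string value = (eq == std::string::npos) ? "" : line.substr(eq + 1);
            if (!supported_keys.count(key)) {
                throw std::runtime_error("unsupported key: " + key);
            }
            if (!parsed.emplace(key, value).second) {
                throw std::runtime_error("repeating key: " + key);
            }
        }
        return parsed;
    }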
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
This function returns the directory containing the configuration
files. It takes into account the environment variables as follows
(sketched below):
- If SCYLLA_CONF is defined, this is the directory.
- Else, if SCYLLA_HOME is defined, then $SCYLLA_HOME/conf is the directory.
- Else "conf" is the directory, namely the configuration files are
looked up in ./conf.
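A minimal sketch of that lookup order (the real get_conf_dir() may return a path type rather than a plain string):

    #include <cstdlib>
    #include <string>

    std::string get_conf_dir() {
        if (const char* conf = std::getenv("SCYLLA_CONF")) {
            return conf;                              // explicit override
        }
        if (const char* home = std::getenv("SCYLLA_HOME")) {
            return std::string(home) + "/conf";       // $SCYLLA_HOME/conf
        }
        return "conf";                                // relative ./conf
    }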
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
New in v2:
- Updated get_conf_dir() description.
We are generating a general object ({}), whereas Cassandra 2.1.x generates an
array ([]). Let's do that as well to avoid surprising parsers.
Signed-off-by: Glauber Costa <glommer@scylladb.com>
We still need to write a manifest when there are no files in the snapshot.
But because we never reach the touch_directory part of the sstables loop in
that case, nobody would have created jsondir.
Since all the file handling is now done in the seal_snapshot phase, we should
just make sure the directory exists before initiating any other disk activity.
Signed-off-by: Glauber Costa <glommer@scylladb.com>
We currently have an optimization that returns early when there are no tables
to be snapshotted.
However, because of the way we are writing the manifest now, this would cause
the shard that happens to have tables to wait forever. So we should get
rid of it: all shards need to pass through the synchronization point.
Signed-off-by: Glauber Costa <glommer@scylladb.com>
If we are hashing more than one CF, the snapshots themselves will all have the same name.
This will cause the files from one of them to spill into the other when writing the manifest.
The proper hash key is the jsondir: it is unique per manifest file.
Signed-off-by: Glauber Costa <glommer@scylladb.com>
There's no need to pass keyspace_metadata to notify_drop_keyspace()
because all we are interested in is the name. The keyspace has been
dropped, so there's not much we could do with its metadata either.
This simplifies the next patch, which wires up the drop keyspace notification.
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
"snapshotting the files themselves is easy: if more than one CF happens to link
an SSTable twice, all but one will fail, and we will end up with one copy.
The problem for us, is that the snapshot procedure is supposed to leave a
manifest file inside its directory. So if we just call snapshot() from
multiple shards, only the last one will succeed, writing its own SSTables to
the manifest leaving all other shards' SSTables unaccounted for.
Moreover, for things like drop table, the operation should only proceed when
the snapshot is complete. That includes the manifest file being correctly
written, and for this reason we need to wait for all shards to finish their
snapshotting before we can move on."
Currently, the snapshot code has all shards writing the manifest file. This is
wrong, because every write except the last will be overwritten. This patch
fixes it by synchronizing all the writes and leaving just one of the shards with
the task of closing the manifest.
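A standalone illustration of that scheme using plain threads (the real code runs one task per Seastar shard and writes manifest.json into jsondir): each shard contributes the names of the files it linked, everyone meets at the synchronization point, and a single writer emits the manifest from the combined list.

    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    int main() {
        std::mutex m;
        std::vector<std::string> all_files;          // shared across "shards"
        std::vector<std::thread> shards;
        for (int s = 0; s < 4; ++s) {
            shards.emplace_back([&, s] {
                std::lock_guard<std::mutex> g(m);
                all_files.push_back("sstable-from-shard-" + std::to_string(s));
            });
        }
        for (auto& t : shards) {
            t.join();                                // the synchronization point
        }
        // only one writer is left: emit a manifest covering every shard's files
        for (const auto& f : all_files) {
            (void)f;                                 // write f into manifest.json
        }
    }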
Signed-off-by: Glauber Costa <glommer@scylladb.com>
The way manifest creation is currently done is wrong: instead of a final
manifest containing all files from all shards, the current code writes a
manifest containing just the files from the shard that happens to be the
unlucky loser of the writing race.
In preparation for fixing that, separate the manifest creation code from the rest.
Signed-off-by: Glauber Costa <glommer@scylladb.com>
We do need to sync jsondir after we write the manifest file (previously done,
but with a question), and before we start writing it (not previously done), to
guarantee that the manifest file won't reference any file that is not visible yet.
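A hedged sketch of that ordering (sync_directory() is the Seastar helper; write_manifest() is a hypothetical stand-in for the manifest-writing step, not the actual seal_snapshot code):

    return sync_directory(jsondir).then([jsondir] {
        return write_manifest(jsondir);              // hypothetical helper
    }).then([jsondir] {
        return sync_directory(jsondir);              // make the manifest durable
    });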
Signed-off-by: Glauber Costa <glommer@scylladb.com>