Asias He 00a14000cd storage_service: Replicate and advertise tokens early in the boot-up process
When a node is restarted, there is a race between starting the gossip
service (other nodes will mark this node as up again and send it requests)
and replicating the tokens to all shards. Here is an example:

- The cluster has two nodes, n1 and n2
- n2 goes down; n1 marks n2 as down
- n2 starts again and starts its gossip service; n1 marks n2 as up and sends
  reads/writes to n2, but n2 has not yet replicated the token_metadata to all
  of its shards.
- n2 complains:
  token_metadata - sorted_tokens is empty in first_token_index!
  token_metadata - sorted_tokens is empty in first_token_index!
  token_metadata - sorted_tokens is empty in first_token_index!
  token_metadata - sorted_tokens is empty in first_token_index!
  token_metadata - sorted_tokens is empty in first_token_index!
  token_metadata - sorted_tokens is empty in first_token_index!
  storage_proxy - Failed to apply mutation from $ip#4: std::runtime_error
  (sorted_tokens is empty in first_token_index!)
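
The error comes from a per-shard token lookup on a copy of the ring that has
not been populated yet. A minimal sketch of the kind of guard that produces
it (the names and types here are simplified stand-ins, not the actual Scylla
implementation):

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>
  #include <stdexcept>
  #include <vector>

  using token = int64_t;  // stand-in for Scylla's token type

  // Each shard keeps its own sorted copy of the ring. A read/write that
  // arrives before that copy is populated hits the empty-ring guard.
  std::size_t first_token_index(const std::vector<token>& sorted_tokens, token start) {
      if (sorted_tokens.empty()) {
          throw std::runtime_error("sorted_tokens is empty in first_token_index!");
      }
      // Find the first token >= start, wrapping around the ring if needed.
      auto it = std::lower_bound(sorted_tokens.begin(), sorted_tokens.end(), start);
      return it == sorted_tokens.end() ? 0 : std::size_t(it - sorted_tokens.begin());
  }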

The code path looks like this:

0 storage_service::init_server
1     prepare_to_join()
2         add gossip application state of NET_VERSION, SCHEMA, and so on
3         _gossiper.start_gossiping().get()
4     join_token_ring()
5         _token_metadata.update_normal_tokens(tokens, get_broadcast_address())
6         replicate_to_all_cores().get()
7         storage_service::set_gossip_tokens(), which adds the gossip application state of TOKENS and STATUS

The race described above is between line 3 and line 6: gossip starts at
line 3, but the tokens are not replicated to all shards until line 6.
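
For context, "replicated to all shards" means giving every shard its own
local copy of the ring, so requests can be routed without crossing cores. A
conceptual sketch in Seastar style (the token_metadata stand-in and the
thread-local copy are hypothetical simplifications of the real
replicate_to_all_cores()):

  #include <seastar/core/future.hh>
  #include <seastar/core/smp.hh>
  #include <cstdint>
  #include <vector>

  // Minimal stand-in; the real token_metadata holds much more state.
  struct token_metadata { std::vector<int64_t> sorted_tokens; };
  static thread_local token_metadata local_tm;  // one instance per shard

  // Copy one shard's view of the ring into every shard's thread-local
  // instance; the lambda (and its captured copy) runs once per shard.
  seastar::future<> replicate_to_all_cores(token_metadata source) {
      return seastar::smp::invoke_on_all([source] {
          local_tm = source;
      });
  }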

To fix this, we replicate the token_metadata to all shards early, right
after it is filled with the tokens read from the system table and before
gossip starts. That way, by the time other nodes see this restarting node
as up, the tokens have already been replicated to all shards.
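
In other words, the fixed boot path performs lines 5-7 before line 3. A
hedged sketch of the new ordering (stub bodies only, not Scylla's real code):

  // Stubs standing in for the real calls; only the ordering matters.
  void update_normal_tokens()   { /* fill token_metadata from the system table */ }
  void replicate_to_all_cores() { /* copy token_metadata to every shard */ }
  void set_gossip_tokens()      { /* advertise the TOKENS and STATUS state */ }
  void start_gossiping()        { /* peers may now mark this node up */ }

  void boot_after_fix() {
      update_normal_tokens();
      replicate_to_all_cores();  // every shard is ready before peers see us up
      set_gossip_tokens();       // TOKENS/STATUS present even if we crash later
      start_gossiping();         // only now can peers send reads/writes
  }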

In addition, this patch fixes an issue where other nodes might see a node
missing the TOKENS and STATUS application states in gossip if that node
failed in the middle of a restart, i.e., it was killed after line 3 and
before line 7. As a result, the node could not be replaced.

Tests: update_cluster_layout_tests.py
Fixes: #4709
Fixes: #4723
(cherry picked from commit 3b39a59135)

Scylla

Quick-start

$ git submodule update --init --recursive
$ sudo ./install-dependencies.sh
$ ./configure.py --mode=release
$ ninja-build -j4 # Assuming 4 system threads.
$ ./build/release/scylla
$ # Rejoice!

Please see HACKING.md for detailed information on building and developing Scylla.

Running Scylla

  • Run Scylla
./build/release/scylla

  • Run Scylla with one CPU and ./tmp as the data directory
./build/release/scylla --datadir tmp --commitlog-directory tmp --smp 1
  • For more run options:
./build/release/scylla --help

Building Fedora RPM

As a prerequisite, you need to install Mock on your machine:

# Install mock:
sudo yum install mock

# Add user to the "mock" group:
sudo usermod -a -G mock $USER && newgrp mock

Then, to build an RPM, run:

./dist/redhat/build_rpm.sh

The built RPM is stored in the build/mock/<configuration>/result directory. For example, on Fedora 21 mock reports the following:

INFO: Done(scylla-server-0.00-1.fc21.src.rpm) Config(default) 20 minutes 7 seconds
INFO: Results and/or logs in: build/mock/fedora-21-x86_64/result

Building Fedora-based Docker image

Build a Docker image with:

cd dist/docker
docker build -t <image-name> .

Run the image with:

docker run -p $(hostname -i):9042:9042 -i -t <image-name>

Contributing to Scylla

Guidelines for contributing
