When has_relevant_range_on_this_shard() finds a relevant range, it still
unnecessarily iterates through to the end. I verified manually that this could
mean thousands of pointless iterations when streaming data to a just-added
node. The relevant code could be simplified by de-futurizing it, but I think
it remains a future so that the task scheduler can preempt it if necessary.
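The fix amounts to returning as soon as a match is found. A minimal sketch in Python (names are illustrative; the real code is Scylla's C++ and stays futurized for preemption):

```python
# Illustrative sketch only - not Scylla's actual C++ code.
def has_relevant_range_on_this_shard(ranges, is_relevant):
    for r in ranges:
        if is_relevant(r):
            # Early exit: previously the loop kept iterating through
            # the remaining (possibly thousands of) ranges.
            return True
    return False
```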
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200220224048.28804-2-raphaelsc@scylladb.com>
Previously, alternator server was not directly sharded - and instead
kept a helper http server control class, which stored sharded http
server inside. That design is confusing and makes it hard to expand
alternator server with new sharded attributes, so from now on
the alternator server is itself sharded.
Tests: alternator-test(local, smp==1&smp==4)
Fixes #5913
Message-Id: <b50e0e29610c0dfea61f3a1571f8ca3640356782.1582788575.git.sarna@scylladb.com>
The CMake build system in seastar.git exports the package to CMake
package registry. However, we don't use it when building from scylla.git
(we link to seastar directly) and get the following warning when
building with "dbuild" (that does not bind mount $HOME/.cmake):
CMake Warning at CMakeLists.txt:1180 (export):
Cannot create package registry file:
/home/penberg/.cmake/packages/Seastar/3b6ede62290636bbf1ab4f0e4e6a9e0b
No such file or directory
Let's just disable the package registry for our builds by setting the
CMAKE_EXPORT_NO_PACKAGE_REGISTRY CMake option as discussed here to make
the warning go away:
https://cmake.org/cmake/help/v3.4/variable/CMAKE_EXPORT_NO_PACKAGE_REGISTRY.html
Message-Id: <20200227092743.27320-1-penberg@scylladb.com>
To install scylla using install.sh easily, we need to run the following steps:
- add scylla user/group
- configure scylla.yaml
- run scylla_post_install.sh
But we don't want to run them when building the .rpm/.deb packages, so
we also need to add a --packaging option to skip them.
Fixes #5830
"
Here is a simple introduction to the node operations scylla supports and
some of the issues.
- Replace operation
It is used to replace a dead node. The token ring does not change. It
pulls data from only one of the replicas which might not be the
latest copy.
- Rebuild operation
It is used to get all the data this node owns from other nodes. It
pulls data from only one of the replicas which might not be the
latest copy.
- Bootstrap operation
It is used to add a new node into the cluster. The token ring
changes. It does not suffer from the "not the latest replica" issue. The
new node pulls data from existing nodes that are losing the token range.
It suffers from failed streaming: we split the ranges into 10 groups and
stream one group at a time; if streaming a group fails, the whole group
is restreamed, causing unnecessary data transmission on the wire.
Bootstrap is not resumable: if it fails after 99.99% of the data is
streamed and we restart the node, we need to stream all the data again
even though the node already has 99.99% of it.
- Decommission operation
It is used to remove a live node from the cluster. The token ring
changes. It does not suffer from the "not the latest replica" issue.
The leaving node pushes data to existing nodes.
It suffers from the same resumability issue as the bootstrap operation.
- Removenode operation
It is used to remove a dead node out of the cluster. Existing nodes
pull data from other existing nodes for the new ranges they own. They
pull from one of the replicas, which might not be the latest copy.
To solve all the issues above, we can use repair-based node operations.
The idea behind repair based node operations is simple: use repair to
sync data between replicas instead of streaming.
The benefits:
- Latest copy is guaranteed
- Resumable in nature
- No extra data is streamed on wire
E.g., rebuilding twice will not stream the same data twice
- Unified code path for all the node operations
- Free repair operation during bootstrap, replace operation and so on.
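The restream waste that repair-based operations avoid can be modeled in a few lines (hypothetical Python sketch, not Scylla code): ranges are split into 10 groups, a group is streamed as a unit, and a mid-group failure forces the whole group to be resent.

```python
# Hypothetical model of group-based streaming (not Scylla code).
def stream_ranges(ranges, send_range, groups=10):
    """Split `ranges` into `groups` chunks; retry a whole chunk on failure."""
    size = max(1, (len(ranges) + groups - 1) // groups)
    chunks = [ranges[i:i + size] for i in range(0, len(ranges), size)]
    sent = 0
    for chunk in chunks:
        while True:
            try:
                for r in chunk:
                    send_range(r)
                    sent += 1
                break
            except IOError:
                # Restream the whole group: ranges already sent in this
                # chunk are transmitted again - the waste repair avoids.
                pass
    return sent
```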
Fixes: #3003
Fixes: #4208
Tests: update_cluster_layout_tests.py + replace_address_test.py + manual test
"
* 'repair_for_node_ops' of https://github.com/asias/scylla:
docs: Add doc for repair_based_node_ops
storage_service: Enable node repair based ops for bootstrap
storage_service: Enable node repair based ops for decommission
storage_service: Enable node repair based ops for replace
storage_service: Enable node repair based ops for removenode
storage_service: Enable node repair based ops for rebuild
storage_service: Use the same tokens as previous bootstrap
storage_service: Add is_repair_based_node_ops_enabled helper
config: Add enable_repair_based_node_ops
repair: Add replace_with_repair
repair: Add rebuild_with_repair
repair: Add do_rebuild_replace_with_repair
repair: Add removenode_with_repair
repair: Add decommission_with_repair
repair: Add do_decommission_removenode_with_repair
repair: Add bootstrap_with_repair
repair: Introduce sync_data_using_repair
repair: Propagate exception in tracker::run
* seastar 7a3b4b4e4e...affc3a5107 (6):
> Merge "Add the possibility to remove rules from routes" from Pavel
> stall_detector: expose correct clock type to use
> queue: add has_blocked_consumer() function
> Merge "core: reduce memory use for idle connections" from Avi
> testing: Enable abort_on_internal_error on tests
> core: Add a on_internal_error helper
When we test Alternator on its HTTPS port (i.e., pytest --https),
we don't want requests to verify the pedigree of the SSL certificate.
Our "dynamodb" fixture (conftest.py) takes care of this for most of
the tests, but a few tests create their own requests and need to pass the
"verify=False" option on their own. In some tests, we forgot to do
this, and this patch fixes three tests which failed with "pytest --https".
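The pattern the fixed tests need can be sketched as a small helper (illustrative Python; the actual tests simply pass verify=False directly to each requests call):

```python
# Illustrative helper - the real tests pass verify=False directly to
# requests library calls (e.g. requests.post(url, verify=False)).
def request_kwargs(https, **kwargs):
    if https:
        # Self-signed certificate: tell requests not to verify it.
        kwargs.setdefault("verify", False)
    return kwargs
```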
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200226142330.27846-1-nyh@scylladb.com>
Merged pull request https://github.com/scylladb/scylla/pull/5897
from Juliusz Stasiewicz:
Column operation now contains operation::row_delete (== 2)
after queries like delete from tbl where pk=x and ck=y;. Before
this patch row deletes were treated as updates, which was incorrect
because updates do not contain row tombstones (and row deletes do).
Refs #5709
Merged patch series from Piotr Sarna:
Alternator shutdown routines were only registered in main.cc,
but it's not enough - other operations, like decommission,
also rely on shutting down client servers.
In order to remedy the situation, a notion of client shutdown
listeners is introduced to storage service.
A shutdown listener implements a callback used by the storage
service when client servers need to shut down, and at the same
time it does not force storage service to keep a reference
for the client service itself.
NOTE: the interface can also be used later to provide
proper shutdown routines for redis and any other future APIs.
Fixes #5886
Tests: alternator-test(local, including a shutdown during the run)
Piotr Sarna (4):
storage_service: make shutdown_client_servers() thread-only
storage_service: add client shutdown hook
main: make alternator shutdown hook-based
main: reduce scope of alternator services
main.cc | 18 +++++++++---------
service/storage_service.cc | 22 +++++++++++++++++-----
service/storage_service.hh | 15 ++++++++++++++-
3 files changed, 40 insertions(+), 15 deletions(-)
On some environments systemd-coredump does not work with a symlink
directory, so we can use a bind mount instead.
Also, it's better to verify that systemd-coredump is working by
generating a coredump.
Fixes #5753
With the new shutdown routines in place, alternator executor
and server do not need to be declared outside of the `if` clause
which conditionally sets up alternator.
In order to properly handle not only shutdown, but also
decommission, drain and similar operations, alternator
shutdown is now registered as a client shutdown hook,
which allows storage service to trigger its shutdown routines.
Fixes #5886
The shutdown hook interface can be used later by additional
client interfaces (e.g. alternator, redis) to register
shutdown routines for various operations: Scylla shutdown,
node decommission, drain, etc. It also decouples
the services themselves from being part of the storage
service, since it's huge enough as it is.
Until now, PutItem or UpdateItem could be used to insert almost any JSON
as an attribute's value - even values that do not match DynamoDB's typed
value specification.
Among other things, the new validation allows us to reject empty sets,
strings or byte arrays - which are (somewhat artificially) forbidden in
DynamoDB.
Also added tests for the empty sets, strings and byte arrays that should
be rejected.
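A sketch of the kind of check added (simplified Python model; the real validation is Alternator's C++ and covers DynamoDB's full typed-value grammar):

```python
def validate_attribute_value(v):
    # Simplified model: a value must be a single {type: value} pair,
    # and empty sets, strings and byte arrays are rejected.
    if not isinstance(v, dict) or len(v) != 1:
        raise ValueError("expected a single {type: value} pair")
    t, val = next(iter(v.items()))
    if t in ("S", "B") and val in ("", b""):
        raise ValueError("empty strings and byte arrays are not allowed")
    if t in ("SS", "NS", "BS") and val == []:
        raise ValueError("empty sets are not allowed")
    return v
```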
Fixes #5896
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200225150525.4926-1-nyh@scylladb.com>
DynamoDB does not support empty sets. Operations which remove elements
from a set attribute should remove the attribute when the last item is
removed - not leave an empty set as it incorrectly does now.
Incidentally, the same patch fixes another bug - deleting elements from
a non-existent set attribute should be allowed (and do nothing), not fail
as it does now.
This patch also includes tests for both bugs.
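Both behaviors can be modeled in a few lines (illustrative Python, not Alternator's actual code):

```python
def delete_from_set(item, attr, elems):
    """DELETE elements from a set attribute, DynamoDB-style.

    Illustrative model: deleting from a non-existent attribute is a
    no-op (second bug), and removing the last element removes the whole
    attribute, since DynamoDB does not support empty sets (first bug).
    """
    s = item.get(attr)
    if s is None:
        return  # non-existent attribute: allowed, does nothing
    s.difference_update(elems)
    if not s:
        del item[attr]  # never leave an empty set behind
```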
Fixes #5895
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200225125343.31629-1-nyh@scylladb.com>
We have not yet implemented the DELETE-with-value and ADD operations in
UpdateItem's old-style "AttributeUpdates" parameter - see issue #5864
and issue #5893, respectively.
This patch includes comprehensive tests for both features. The new tests
pass on DynamoDB, but currently xfail on Alternator - until these
features are implemented.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200225105546.25651-1-nyh@scylladb.com>
Currently `get_text_range()` uses heuristics about which ELF section
actually contains the text for the main executable. It appears that this
fails from time to time and we have to adjust the heuristics.
We don't really have to guess however, a much better method of
determining the section hosting text is to find a vtable pointer and
locate the section it resides in. For this, we use the
`reactor::_backend` as a canary. When this is not available, we fall
back to the pre-existing heuristics.
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200225164719.114500-1-bdenes@scylladb.com>
Fixes #5669
This implements non-atomic collection and UDT handling for
both cdc preimage + delta.
To be able to express deltas in a meaningful way (and reconstruct
using it), non-atomic values are represented somewhat
differently from regular values:
* maps - stored as is (frozen)
* sets - stored as is (frozen)
* lists - stored as map<timeuuid, value> (frozen)
this allows reconstructing the list, as otherwise
things like list[0] = value cannot be represented
in a meaningful way
* udt - stored as tuple<tuple<field0>, tuple<field1>...> (frozen)
UDTs are normally just tuples + metadata, but we need to
distinguish the case of outer tuple element == null, meaning
"no info/does not partake in mutation" from tuple element
being a tuple(null) (i.e. empty tuple), meaning "set field to
null"
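A small Python model of the list and UDT encodings above (illustrative; integers stand in for timeuuids, which sort by time):

```python
# Lists: stored as map<timeuuid, value>; the list is reconstructed by
# iterating the map in key order. (Integers stand in for timeuuids.)
def reconstruct_list(delta_map):
    return [v for _, v in sorted(delta_map.items())]

# UDTs: tuple of single-element tuples. None means "no info / field not
# part of the mutation"; (None,) means "set the field to null".
def apply_udt_delta(old, delta):
    return tuple(o if d is None else d[0] for o, d in zip(old, delta))
```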
* seastar 8b6bc659c7...7a3b4b4e4e (3):
> Merge "Add custom stack size to seastar threads" from Piotr
Ref #5742.
> expiring_fifo: Optimize memory usage for single-element lists
Ref #4235.
> Close connection, when reach to max retransmits
- Bootstrap operation
It is used to add a new node into the cluster. The token ring changes.
It does not suffer from the "not the latest replica" issue. The new node
pulls data from existing nodes that are losing the token range.
It suffers from failed streaming: we split the ranges into 10 groups and
stream one group at a time; if streaming a group fails, the whole group
is restreamed, causing unnecessary data transmission on the wire.
Bootstrap is not resumable: if it fails after 99.99% of the data is
streamed and we restart the node, we need to stream all the data again
even though the node already has 99.99% of it.
Fixes: #3003
Fixes: #4208
Tests: update_cluster_layout_tests.py + replace_address_test.py + manual test
- Decommission operation
It is used to remove a live node from the cluster. The token ring
changes. It does not suffer from the "not the latest replica" issue.
The leaving node pushes data to existing nodes.
Fixes: #3003
Fixes: #4208
Tests: update_cluster_layout_tests.py + replace_address_test.py + manual test
- Replace operation
It is used to replace a dead node. The token ring does not change. It
pulls data from only one of the replicas which might not be the
latest copy.
Fixes: #3003
Fixes: #4208
Tests: update_cluster_layout_tests.py + replace_address_test.py + manual test
This patch adds a deprecation warning to DTCS. In a follow up step,
we will start requiring a flag for it to be enabled to make sure users
notice.
For now we'll just be nice and add a warning for the log watchers.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20200224164405.9656-1-glauber@scylladb.com>
Since we set 'eth0' as the default NIC name, we get the following error
when running scylla_setup in non-interactive mode without the --nic
parameter:
$ sudo scylla_setup --setup-nic-and-disks --no-raid-setup --no-verify-package --no-io-setup
NIC eth0 doesn't exist.
This looks strange, since the user did not actually specify 'eth0'; they
might have forgotten to specify --nic.
I think we should show the usage when eth0 is not available on the system.
Fixes #5828
Changes the name of the storage_proxy::mutate_hint_from_scratch function
to a clearer one: send_hint_to_all_replicas.
Tests: unit(dev)
It seems like *.service is conflicting at install time because the file
is installed twice, by both debian/*.service and
debian/scylla-server.install.
We don't need to use *.install, so we can just drop the line.
Fixes #5640
Aggregate functions on counters do not exist. Until now counters
could, at best, fall back to blob->blob overloads, e.g.:
```
cqlsh> select max(cnt) from ks.tbl;
system.max(cnt)
----------------------
0x000000000000000a
(1 rows)
cqlsh> select sum(entities) from ks.tbl;
InvalidRequest: Error from server: code=2200 [Invalid query]
message="Invalid call to function sum, none of its type signatures match
[...]
```
Meanwhile, counters are compatible with bigints (aka `long_type`),
so bigint overloads can be used on them (e.g. sum(bigint)->bigint).
This is achieved here by a special rule in overload resolution, which
makes `selector` perceive counters as an `EXACT_MATCH` to the counter's
underlying type (`long_type`, aka bigint).
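The rule can be sketched like this (illustrative Python; the actual change lives in the C++ overload-resolution code):

```python
# Illustrative sketch: a counter argument resolves as an EXACT_MATCH
# against its underlying type, bigint (long_type).
def is_exact_match(arg_type, param_type):
    if arg_type == "counter":
        arg_type = "bigint"  # counters resolve against bigint overloads
    return arg_type == param_type
```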
Until now, attempts to print a counter update cell would end up
calling abort() because `atomic_cell_view::value()` has no
specialized visitor for `imr::pod<int64_t>::basic_view<is_mutable>`,
i.e. the counter update IMR type. Such a visitor is not easy to write
if we want to intercept counters only (and not all int64_t values).
Anyway, a linearized byte representation of a counter cell would not
be helpful without knowing whether it consists of counter shards or a
counter update (delta) - and this must be known upon `deserialize`.
This commit introduces a simple approach: it determines the cell type
at a high level (from `atomic_cell_view`) and prints counter contents
via `counter_cell_view` or `atomic_cell_view::counter_update_value()`.
Fixes #5616
By default, `/usr/lib/rpm/find-debuginfo.sh` will tamper with
the binary's build-id when stripping its debug info, as it is passed
the `--build-id-seed <version>.<release>` option.
To prevent that we need to set the following macros:
- unset `_unique_build_ids`
- set `_no_recompute_build_ids` to 1
Fixes #5881
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
The official documentation language of Scylla is English, not French.
So correct the word "existant", which appeared several times throughout
Alternator's tests, to "existent".
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200221224221.31237-6-nyh@scylladb.com>
This patch completes the support for the ReturnValues parameter for
the UpdateItem operation. This parameter has five settings - NONE, ALL_OLD,
ALL_NEW, UPDATED_OLD and UPDATED_NEW. Before this patch we already
supported NONE and ALL_OLD - and this patch completes the support for the
three remaining modes: ALL_NEW, UPDATED_OLD and UPDATED_NEW.
The patch also continues to improve test_returnvalues.py with additional
corner cases discovered during the development. After this patch, only
one xfailing test remains - testing updates to nested document paths,
which we do not yet support (even without the ReturnValues parameter).
After this patch, the support of ReturnValues is complete - for all
operations (UpdateItem, PutItem and DeleteItem) and all of its possible
settings.
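The five modes can be summarized in a short sketch (illustrative Python model of the DynamoDB-documented semantics; attribute handling simplified):

```python
def compute_return_values(mode, old_item, new_item, updated_attrs):
    """Illustrative model of UpdateItem's ReturnValues modes."""
    if mode == "NONE":
        return None                      # return nothing
    if mode == "ALL_OLD":
        return old_item                  # whole item before the update
    if mode == "ALL_NEW":
        return new_item                  # whole item after the update
    if mode == "UPDATED_OLD":
        return {k: old_item[k] for k in updated_attrs if k in old_item}
    if mode == "UPDATED_NEW":
        return {k: new_item[k] for k in updated_attrs if k in new_item}
    raise ValueError("unknown ReturnValues mode: " + mode)
```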
Fixes #5053
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200221224221.31237-5-nyh@scylladb.com>
The rjson::set_with_string_name() utility function copies the given
string into the JSON key. The existing implementation required that this
input string be an std::string&, but a std::string_view would be fine too,
and I want to use it in new code to avoid yet another unnecessary copy.
Adding the overloads also exposes a few places where things were
implicitly converted to std::string and now cause an ambiguity - and
clearing up this ambiguity also allowed me to find places where this
conversion was unnecessary.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200221224221.31237-4-nyh@scylladb.com>
UpdateItem operations usually need to add a row marker:
* An empty UpdateItem is supposed to create a new empty item (row).
Such an empty item needs to have a row marker.
* An UpdateItem to add an attribute x and then later an UpdateItem
to remove this attribute x should leave an empty item behind.
This means the first UpdateItem needed to add a row marker, so
it will be left behind after the second UpdateItem.
So the existing code always added a row marker in UpdateItem.
However, there is one case where we should NOT create the row marker:
When the UpdateItem operation only has attribute deletions, and nothing
else, and it is applied to a key with no pre-existing item, DynamoDB
does not create this item. So neither should we.
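The rule condenses into one predicate (illustrative Python, not the actual Alternator code):

```python
def needs_row_marker(actions, item_exists):
    """Decide whether an UpdateItem write should include a row marker.

    `actions` is the list of attribute actions (e.g. "PUT", "DELETE").
    An empty UpdateItem still creates an empty item, so it needs a
    marker. The one exception: an update with only deletions against a
    key with no pre-existing item must not create the item.
    """
    only_deletions = bool(actions) and all(a == "DELETE" for a in actions)
    return not (only_deletions and not item_exists)
```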
This patch includes a new test for this test_update_item_non_existent,
which passes on DynamoDB, failed on Alternator before this patch, and
passes after the patch.
Fixes #5862.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200221224221.31237-3-nyh@scylladb.com>
In issue #5698 I raised a theory that we might have a bug when
BatchWriteItem is given two writes to the *same* key but in two different
tables. The test added here verifies that this theory was wrong, and
this case already works correctly.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200221224221.31237-2-nyh@scylladb.com>
This series adds an option to the API that supports deleting
a specific table from a snapshot.
The implementation works in a similar way to the option
to specify specific keyspaces when deleting a snapshot.
The motivation is to allow reducing disk-space when using
the snapshot for backup. A dtest PR is sent to the dtest
repository.
Fixes #5658
Original PR #5805
Tests: (database_test) (dtest snapshot_test.py:TestSnapshot.test_cleaning_snapshot_by_cf)
* amnonh/delete_table_snapshot:
test/boost/database_test: adopt new clear_snapshot signature
api/storage_service: Support specifying a table when deleting a snapshot
storage_service: Add optional table name to clear snapshot
The error message (silently) changed to "DB index is out of range" in
the following commit:
c7a4e694ad
The new error message is part of Redis 4.0, released in 2017, so let's
switch Scylla to use the new one.
Message-Id: <20200211133946.746-1-penberg@scylladb.com>
- Removenode operation
It is used to remove a dead node out of the cluster. Existing nodes
pull data from other existing nodes for the new ranges they own. They
pull from one of the replicas, which might not be the latest copy.
Fixes: #3003
Fixes: #4208
Tests: update_cluster_layout_tests.py + replace_address_test.py + manual test
- Rebuild operation
It is used to get all the data this node owns from other nodes. It
pulls data from only one of the replicas which might not be the
latest copy.
Fixes: #3003
Fixes: #4208
Tests: update_cluster_layout_tests.py + replace_address_test.py + manual test