this series applies some random cleanups to bloom_filter. these cleanups were side products of the author's work on #13314.
Closes #13315
* github.com:scylladb/scylladb:
bloom_filter: mark internal helper function static
bloom_filter: add more constness to false positive rate tables
bloom_filter: use vector::back() when appropriate
before this change, we used `round(random.random(), 5)` for
the value of the `bloom_filter_fp_chance` config option. there is
a chance that this expression returns a number lower than or equal
to 6.71e-05.
but we do have a minimum for this option, which is defined by
`utils::bloom_calculations::probs`. the minimal false positive
rate is 6.71e-05.
we are observing test failures where we use 0 for
the option, and scylla rightly rejects it with the error message
```
bloom_filter_fp_chance must be larger than 6.71e-05 and less than or equal to 1.0 (got 0)
```
so, in this change, to address the test failure, we always use a number
slightly greater than the minimum, to ensure that the randomly picked
number is in the range of supported false positive rates.
Fixes #13313
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes #13314
When creating the reader, the lifecycle policy might return one that was saved on the last page and survived in the cache. This reader might have skipped some fast-forwarding ranges while sitting in the cache. To avoid using a reader reading a stale range (from the read's POV), check its read range and fast forward it if necessary.
Fixes: https://github.com/scylladb/scylladb/issues/12916
Closes #12932
* github.com:scylladb/scylladb:
readers/multishard: shard_reader: fast-forward created reader to current range
readers/multishard: reader_lifecycle_policy: add get_read_range()
test/boost/multishard_mutation_query_test: paging: handle range becoming wrapping
Wasm compilation is a slow, low-priority task, so it should
not compete with reactor threads or the networking core.
To achieve that, we increase the niceness of the thread by 10.
An alternative solution would be to set the priority using
pthread_setschedparam, but that's not currently feasible:
as long as we're using the SCHED_OTHER policy for our
threads, we cannot select any priority other than 0.
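On Linux, raising a thread's niceness can be done with setpriority(); this standalone sketch (an assumption about the approach, not the actual Scylla/Seastar code) shows the idea:

```cpp
#include <sys/resource.h>
#include <cerrno>

// Illustrative sketch (Linux-specific): bump the calling thread's niceness
// by `delta`. With PRIO_PROCESS and id 0, this affects the calling thread
// on Linux, since threads there do not share the nice value.
bool increase_niceness(int delta) {
    errno = 0;
    int cur = getpriority(PRIO_PROCESS, 0);
    if (cur == -1 && errno != 0) {
        return false;  // getpriority() itself failed
    }
    // Raising niceness (lowering priority) needs no special privileges.
    return setpriority(PRIO_PROCESS, 0, cur + delta) == 0;
}
```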
Closes #13307
Fixes https://github.com/scylladb/scylladb/issues/13106
This commit removes the information that BYPASS CACHE
is an Enterprise-only feature and replaces it
with a link to the BYPASS CACHE description.
Closes #13316
<iterator> was introduced back in
1cf02cb9d8, but lexicographical_compare.hh
was extracted out in bdfc0aa748. since we
don't have any users of <iterator> in types.hh anymore, let's remove it.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes #13327
s/%{version}/%{version}-%{release}/ in `Requires:` sections.
this enforces runtime dependencies on exactly the same release across scylla packages.
Fixes #13222
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes #13229
* github.com:scylladb/scylladb:
dist/redhat: split Requires section into multiple lines
dist/redhat: enforce dependency on %{release} also
When creating the reader, the lifecycle policy might return one that was
saved on the last page and survived in the cache. This reader might have
skipped some fast-forwarding ranges while sitting in the cache. To avoid
using a reader reading a stale range (from the read's POV), check its
read range and fast forward it if necessary.
After each page, the read range is adjusted so it continues from/after
the last read partition. Sometimes this can result in the range becoming
wrapped like this: (pk, pk]. In this case, we can just drop this range
and continue with the rest of the ranges (if there are multiple ones).
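The adjustment can be illustrated with a simplified sketch; the types and names here (`page_range`, `continue_after`) are hypothetical stand-ins, not Scylla's interval classes:

```cpp
#include <optional>

// Hypothetical model of a partition range with an exclusive start and an
// inclusive end, using int in place of a real partition key.
struct page_range {
    int start_exclusive;
    int end_inclusive;
};

// After a page ends at `last`, continue strictly after it. If the original
// range also ended at `last`, the result would be (last, last] -- a wrapped,
// empty range -- so drop it instead of passing it on.
std::optional<page_range> continue_after(page_range r, int last) {
    page_range adjusted{last, r.end_inclusive};
    if (adjusted.start_exclusive >= adjusted.end_inclusive) {
        return std::nullopt;
    }
    return adjusted;
}
```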
There was an attempt to cut the feature-service -> system-keyspace dependency (#13172), which turned out to require more changes. Here's a preparatory set squeezed out of that future work.
This set
- leaves only batch-enabling API in feature service
- keeps the need for async context in feature service
- narrows down system keyspace features API to only load and store records
- relaxes features updating logic in sys.ks.
- cosmetic
Closes #13264
* github.com:scylladb/scylladb:
feature_service: Indentation fix after previous patch
feature_service: Move async context into enable()
system_keyspace: Refactor local features load/save helpers
feature_service: Mark supported_feature_set() const
feature_service: Remove single feature enabling method
boot: Enable features in batch
gossiper: Enable features in batch
This commit removes the Enterprise upgrade guides from
the Open Source documentation. The Enterprise upgrade guides
should only be available in the Enterprise documentation,
with the source files stored in scylla-enterprise.git.
In addition, this commit:
- adds the links to the Enterprise user guides in the Enterprise
documentation at https://enterprise.docs.scylladb.com/
- adds the redirections for the removed pages to avoid
breaking any links.
This commit must be reverted in scylla-enterprise.git.
Closes #13298
no need to use `size - 1` for accessing the last element of a vector;
let's just use `vector::back()` for more compact code.
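for illustration, the two forms are equivalent for a non-empty vector:

```cpp
#include <vector>

// Equivalent ways to read the last element of a non-empty vector;
// back() states the intent directly.
int last_by_index(const std::vector<int>& v) { return v[v.size() - 1]; }
int last_by_back(const std::vector<int>& v) { return v.back(); }
```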
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
When propagating a view update to a paired view
replica fails, an error message is printed.
This message is printed for every mutation,
which causes log spam when a node goes down.
This isn't a fatal error - it's normal that
a remote view replica goes down, it'll hopefully
receive the updates later through hints.
I'm unsure whether the error message should
be printed at all, but for now we can
just rate limit it, which will improve
the log spam situation.
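A generic interval-based rate limiter captures the idea; this is a sketch under the assumption of a steady clock, not Seastar's actual rate-limited logging API:

```cpp
#include <chrono>

// Illustrative sketch: allow one log line per interval, dropping the rest.
class log_rate_limiter {
    std::chrono::steady_clock::time_point _next{};
    std::chrono::milliseconds _interval;
public:
    explicit log_rate_limiter(std::chrono::milliseconds interval)
        : _interval(interval) {}

    // Returns true if a message may be logged now; subsequent calls within
    // the interval return false.
    bool should_log() {
        auto now = std::chrono::steady_clock::now();
        if (now < _next) {
            return false;
        }
        _next = now + _interval;
        return true;
    }
};
```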
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
Closes #13175
The validate_column_family() tries to find a schema and throws if it
doesn't exist. Whether it exists is determined by the exception thrown by
database::find_schema(), but there's a throw-less way of doing it.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #13295
The patch series introduces linearisable topology changes using
the raft protocol. The state machine driven by raft is described in
"service: Introduce topology state machine". Some explanations about
the implementation can be found in "storage_service: raft topology:
implement topology management through raft".
The code is not ready for production. There is not much in terms of error
handling, and integration with the rest of the system has not even started.
For full integration request fencing will need to be implemented and
token_metadata has to be extended to support not just "pending" nodes
but concepts of "read replica set" and "write replica set".
The code may be far from usable, but it is hidden behind the
"experimental raft" flag, and having it in tree will relieve me of the
constant rebase burden.
* 'raft-topology-v6' of github.com:scylladb/scylla-dev:
storage_service: fix indentation from previous patch
storage_service: raft topology: implement topology management through raft
service: raft: make group0_guard move assignable
service: raft: wire up apply() and snapshot transfer for topology in group0 state machine
storage_service: raft topology: introduce a function that applies topology cmd to local state machine
storage_service: raft topology: introduce a raft monitor and topology coordinator fibers
storage_service: raft topology: introduce snapshot transfer code for the topology table
raft topology: add RAFT_TOPOLOGY_CMD verb that will be used by topology coordinator to communicate with nodes
bootstrapper: Add get_random_bootstrap_tokens function
service: raft: add support for topology_change command into raft_group0_client
service: raft: introduce topology_change group0 command
system_keyspace: add a table to persist topology change state machine's state
service: Introduce topology state machine data structures
storage_proxy: not consult topology on local table write
The code here implements the state machine described in "service:
Introduce topology state machine". A topology operation is requested
by writing into the topology_request field through raft. After that,
the topology_change_transition() function running on a leader is responsible
for driving the operation to completion. There is not much in terms of
error handling here yet. If something fails, the code will just keep trying.
topology_change_state_load(), which is (eventually) called on all nodes each
time the state machine's state changes, is the glue between the raft view of
the topology and the rest of the "legacy" system. The code there creates a
token_metadata object from the raft view and fills in the peers table, which
is needed for drivers. The gossiper is almost completely cut off from
topology management, but the code still updates the node's state there to
'normal' and 'left' for some legacy functionality to continue working.
Note that handlers for those states are disabled in raft mode.
raft_topology_cmd_handler() is called by the topology coordinator, and this
is where the streaming happens. The kind of streaming depends on the
state the node is in. The function is re-entrant: it can be called
more than once, and will either start a new operation (if it is the first
invocation or the previous one failed) or wait for the previous
operation to complete.
The new code is hidden behind "experimental raft" and should not change
how the system works if disabled.
Some indentation here is intentionally left wrong and will be fixed by
the next patch.
The function applies the command to persistent storage and calls the stub
function topology_change_state_load(), which will load the new state into
memory in later patches.
The raft monitor fiber watches the local raft server's state and starts the
topology coordinator fiber when the server becomes a leader, and stops it
when it is no longer a leader.
The coordinator fiber waits for topology state changes, but there will
be none yet.
Empty for now. Will be used later by the topology coordinator to
communicate with other nodes to instruct them to start streaming,
or start to fence read/writes.
This patch increases the connection timeout in the get_cql_cluster()
function in test/cql-pytest/run.py. This function is used to test
that Scylla came up, and also test/alternator/run uses it to set
up the authentication - which can only be done through CQL.
The Python driver has 2-second and 5-second default timeouts that should
have been more than enough for everybody (TM), but in #13239 we saw
that in one case it apparently wasn't enough. So to be extra safe,
let's increase the default connection-related timeouts to 60 seconds.
Note this change only affects the Scylla *boot* in the test/*/run
scripts, and it does not affect the actual tests - those have different
code to connect to Scylla (see cql_session() in test/cql-pytest/util.py),
and we already increased the timeouts there in #11289.
Fixes #13239
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #13291
This reverts commit c6087cf3a0.
Said commit can cause a deadlock when 2 or more repairs compete for
locks on 2 or more nodes. Consider the following scenario:
Nodes n1 and n2 are in the cluster, 1 shard per node, rf = 2, and each
shard has 1 available unit for the reader lock:
n1 starts repair r1
r1-n1 (instance of r1 on node1) takes the reader lock on node1
n2 starts repair r2
r2-n2 (instance of r2 on node2) takes the reader lock on node2
r1-n2 will fail to take the reader lock on node2
r2-n1 will fail to take the reader lock on node1
As a result, r1 and r2 could not make progress and deadlock happens.
The complexity comes from the fact that a repair job needs locks on more
than one node. It is not guaranteed that all the participant nodes can
take the lock in one shot.
There is no simple solution to this, so we have to revert this locking
mechanism and look for another way to prevent reader thrashing when
repairing nodes with mismatching shard counts.
Fixes: #12693
Closes #13266
Cassandra detects when a batch has both an IF EXISTS and IF NOT EXISTS
on the same row, and complains this is not a useful request (after all,
it can never succeed, because the batch can only succeed if both conditions
are true, and that can't be if one checks IF EXISTS and the other
IF NOT EXISTS).
This patch adds a test, test_lwt_with_batch_conflict_1, which checks
that this case results in an error. It passes on Cassandra, but xfails
on Scylla which doesn't report an error in this case.
A second test, test_lwt_with_batch_conflict_2, shows that the detection
of the EXISTS / NOT EXISTS conflict is special, and other conflicts
such as having both "r=1" and "r=2" for the same row, are NOT detected
by Cassandra.
Refs #13011.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #13270
It's declared in a header, but is not used outside of the .cc. A forward
declaration in the header would be enough.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #13289
for better readability.
also, add `#include <concepts>`, as we should include what we use
instead of relying on other headers doing this on our behalf.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes #13277
clang warns when an implicit conversion changes the precision of the
converted number. in this case, before being multiplied,
`std::numeric_limits<unsigned long>::max() >> 1` is implicitly
promoted to double so it can obtain the common type of double and
unsigned long, and the compiler warns:
```
/home/kefu/dev/scylladb/test/boost/network_topology_strategy_test.cc:129:84: error: implicit conversion from 'unsigned long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Werror,-Wimplicit-const-int-float-conversion]
return static_cast<unsigned long>(d*(std::numeric_limits<unsigned long>::max() >> 1)) << 1;
~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~
```
but
1. we don't really care about the precision here, we just want to map a
double to a token represented by an int64_t
2. the maximum possible number being converted is less than
   9223372036854775807, which is the maximum value of int64_t (in
   general an alias of `long long`), not to mention that on 32-bit
   platforms LONG_MAX is 2147483647, where after shifting right the
   result would be 1073741823
so this is a false alarm. in order to silence it, we explicitly
cast the RHS of `*` operator to double.
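the silenced expression looks roughly like this; a standalone sketch of the pattern with a hypothetical name (`token_from_fraction`), not the exact test code:

```cpp
#include <cstdint>
#include <limits>

// Map a double in [0, 1) to a token-like value. Casting the shifted maximum
// to double explicitly tells the compiler that the precision change is
// intended, silencing -Wimplicit-const-int-float-conversion.
uint64_t token_from_fraction(double d) {
    return static_cast<unsigned long>(
               d * static_cast<double>(std::numeric_limits<unsigned long>::max() >> 1))
           << 1;
}
```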
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes #13221
Hey y'all!
@malusev998 and I are maintaining an updated version of the [PHP Driver](https://github.com/he4rt/scylladb-php-driver) together with the @he4rt community, and it has had a bunch of improvements over these last months.
Before, it worked only on PHP 7.1 (the DataStax branch); on our branch it works on PHP 8.1 and 8.2.
We are also using the ScyllaDB C++ Driver in this project, and I think it's a good idea to point new users to this project, since it's the most actively maintained PHP driver now.
What do y'all think about that?
Closes #13218
* github.com:scylladb/scylladb:
fix: links to php driver
fix: adding php versions into driver's description
docs: scylladb better php driver