Attempting to call advance_to() on the index after it is positioned at
EOF can result in an assert failure, because the operation attempts to
move backwards in the index file (to re-read the last index page, which
was already read). This only happens if the index cache entry belonging
to the last index page has been evicted; otherwise the advance operation
just looks up said entry and returns it.
To prevent this, we add an early return conditioned on eof() to all the
partition-level advance-to methods.
A regression unit test reproducing the above-described crash is also
added.
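The shape of the guard can be sketched like this (a minimal illustration; `index_sketch` and its members are made-up names, not the actual index reader interface):

```cpp
#include <cassert>

// Minimal illustration of the early-return guard; names are illustrative.
struct index_sketch {
    int pos = 0;      // current index page
    int n_pages = 3;  // total number of index pages
    bool eof() const { return pos >= n_pages; }

    void advance_to(int target) {
        if (eof()) {
            return;  // already past the last page: never seek backwards
        }
        assert(target >= pos);  // moving backwards would trip this assert
        pos = target;
    }
};
```

Without the `eof()` check, calling `advance_to()` again after reaching EOF could ask for a position before the current one and trip the assert.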
If we are redefining the log table, we need to ensure any dropped
columns are registered in the "dropped_columns" table, otherwise clients
will not be able to read data written before the redefinition.
Includes unit test.
Should probably be backported to all CDC-enabled versions.
Fixes #10473
Closes #10474
When updating an updateable value via CQL, the new value comes as a
string that's then boost::lexical_cast-ed to the desired type. If the
cast throws, the resulting exception is printed in the logs, which is
very likely uncalled for.
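The shape of the fix can be sketched as follows (std::stoi stands in for boost::lexical_cast to keep the sketch self-contained; the function name is made up):

```cpp
#include <stdexcept>
#include <string>

// Illustrative sketch: convert the incoming string, but surface a clean
// user-facing error instead of letting the raw cast exception hit the logs.
int parse_updateable_value(const std::string& s) {
    try {
        return std::stoi(s);  // stands in for boost::lexical_cast<int>(s)
    } catch (const std::exception&) {
        throw std::invalid_argument("invalid value for updateable parameter: " + s);
    }
}
```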
fixes: #10394
tests: manual
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Message-Id: <20220503142942.8145-1-xemul@scylladb.com>
"
- Alternator gets gossiper for its proxy dependency
- Forward service method that takes global gossiper can re-use
proxy method (forward -> proxy reference is already there)
- Table code is patched to require gossiper argument
- Snitch gets a dependency reference on snitch_ptr and some extra
care for snitch driver vs snitch-ptr interaction and gossip test
- Cql test env should carry gossiper reference on-board
- A few places can re-use the existing local gossiper reference
- Scylla-gdb needs to get gossiper from debug namespace and needs
_not_ to get feature service from gossiper
"
* 'br-gossiper-deglobal-2' of https://github.com/xemul/scylla:
code: De-globalize gossiper
scylla-gdb, main: Get feature service without gossiper help
test: Use cql-test-env gossiper
cql test env: Keep gossiper reference on board
code: Use gossiper reference where possible
snitch: Use local gossiper in drivers
snitch: Keep gossiper reference
test: Remove snitch from manual gossip test
gossiper: Use container() instead of the global pointer
main, cql_test_env: Start snitch later
snitch: Move snitch_base::get_endpoint_info()
forward service: Re-use proxy's helper with duplicated code
table: Don't use global gossiper
alternator: Don't use global gossiper
This is a translation of Cassandra's CQL unit test source file
validation/operations/SelectTest.java into our cql-pytest framework.
This large test file includes 78 tests for various types of SELECT
operations. Four additional tests require UDF in Java syntax,
and were skipped.
All 78 tests pass on Cassandra. 25 of the tests fail on Scylla,
reproducing 3 already-known Scylla issues and 8 previously-unknown
issues:
Previously known issues:
Refs #2962: Collection column indexing
Refs #4244: Add support for mixing token, multi- and single-column
restrictions
Refs #8627: Cleanly reject updates with indexed values where
value > 64k
Newly-discovered issues:
Refs #10354: SELECT DISTINCT should allow filter on static columns,
not just partition keys
Refs #10357: Spurious static row returned from query with filtering,
despite not matching filter
Refs #10358: Comparison with UNSET_VALUE should produce an error
Refs #10359: "CONTAINS NULL" and "CONTAINS KEY NULL" restrictions
should match nothing
Refs #10361: Null or UNSET_VALUE subscript should generate an
invalid request error
Refs #10366: Enforce Key-length limits during SELECT
Refs #10443: SELECT with IN and ORDER BY orders rows per partition
instead of for the entire response
Refs #10448: The CQL token() function should validate its parameters
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #10449
No code uses the global gossiper instance, so it can be removed. The
main and cql-test-env code now have their own real local instances.
This change also requires adding the debug:: pointer and fixing
scylla-gdb.py to find the correct global location.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This is needed to avoid messing with the removed global gossiper in the
next patch. Other than this, it's better to access services by their own
debug:: pointers, not via under-the-hood dependency chains.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
There's yet another test env -- the alternator one -- which needs the
gossiper. It currently uses the global reference, but can grab the
gossiper reference from the cql-test-env, which participates in its
initialization.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The reference is already available at env initialization, but it's not
kept on the env instance itself. It will be used by the next patch.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Some places in the code have a function-local gossiper reference but
continue to use the global instance. Re-use the local reference (it's
going to become a sharded<> instance soon).
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Each driver has a pointer to this shard's snitch_ptr which, in turn,
holds a reference to the gossiper. This lets drivers stop using the
global gossiper instance.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The reference is put on the snitch_ptr because it is the sharded<>
thing and because the gossiper reference is the same for all snitch
drivers. Also, a driver getting the gossiper from snitch_ptr looks
simpler than getting it from some base class.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Snitch depends on gossiper and system keyspace, so it needs to be
started after those two do.
Fixes #10402
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
get_live_endpoints matches the same method on the proxy side. Since the
forward service carries a proxy reference, it can use the proxy's method
(which needs to be made public for that sake).
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
table::get_hit_rate needs the gossiper to get the hit-rate state from.
There's no way to carry a gossiper reference on the table itself, so
it's up to the callers of that method to provide it. Fortunately,
there's only one caller -- the proxy -- but the call chain to carry the
reference is not very short ... oh, well.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
"
Today, both operations pick the highest level as the ideal level for
placing the output, but the level should be derived from the input size
instead.
The formula for calculating the ideal level is:
ceil(log base(fan_out) of (total_input_size / max_fragment_size))
where fan_out = 10 by default,
total_input_size = total size of the input data and
max_fragment_size = maximum size of a fragment (160M by default),
such that 20 fragments will be placed at level 2, as level 1
capacity is only 10 fragments.
By placing the output in an incorrect level, tons of backlog will be
generated for LCS, because it will have to either promote or demote
fragments until the levels are properly balanced.
"
* 'optimize_lcs_major_and_reshape/v2' of https://github.com/raphaelsc/scylla:
compaction: LCS: avoid needless work post major compaction completion
compaction: LCS: avoid needless work post reshape completion
compaction: LCS: extract calculation of ideal level for input
compaction: LCS: Fix off-by-one in formula used to calculate ideal level
This series enforces a minimum size of the unprivileged section when
performing the `shrink()` operation.
When the cache is shrunk, we still drop entries from the unprivileged
section first (as before this commit); however, if this section is
already small (smaller than `max_size / 2`), we drop entries from the
privileged section instead.
This is necessary because, before this change, the unprivileged section
could be starved. For example, if the cache could store at most 50
entries and there were 49 entries in the privileged section, then after
adding 5 entries (which would go to the unprivileged section) 4 of them
would get evicted and only the 5th one would stay. This caused problems
with BATCH statements, where all prepared statements in the batch have
to stay in the cache at the same time for the batch to execute
correctly.
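The policy above can be sketched as follows (a minimal illustration where each entry counts as size 1; the names do not match the actual loading_cache code):

```cpp
#include <cstddef>

// Illustrative sketch of the shrink policy; each entry counts as size 1.
struct cache_sizes {
    size_t max_size = 0;
    size_t unprivileged = 0;  // tracked separately instead of one _current_size
    size_t privileged = 0;

    void shrink() {
        while (unprivileged + privileged > max_size) {
            // Drop from the unprivileged section first, but once it is already
            // small (below max_size / 2), drop from the privileged section.
            if (unprivileged >= max_size / 2 || privileged == 0) {
                --unprivileged;  // evict oldest unprivileged entry
            } else {
                --privileged;    // evict oldest privileged entry
            }
        }
    }
};
```

With this policy, in the 49+5 scenario above the privileged section gives up 4 entries and all 5 unprivileged entries survive.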
To correctly check whether the unprivileged section might get too small
after dropping an entry, the `_current_size` variable, which tracked the
overall size of the cache, is split into two variables,
`_unprivileged_section_size` and `_privileged_section_size`, tracking
the section sizes separately.
New tests are added to check this new behavior and the bookkeeping of
the section sizes. A test is added that sets up a CQL environment with a
very small prepared statement cache, reproduces the issue in #10440, and
stresses the cache.
Fixes #10440.
Closes #10456
* github.com:scylladb/scylla:
loading_cache_test: test prepared stmts cache
loading_cache: force minimum size of unprivileged
loading_cache: extract dropping entries to lambdas
loading_cache: separately track size of sections
loading_cache: fix typo in 'privileged'
This series gets rid of the remaining usage of flat_mutation_reader v1 in compaction
Test: sstable_compaction_test
Closes #10454
* github.com:scylladb/scylla:
compaction: sanitize headers from flat_mutation_reader v1
flat_mutation_reader: get rid of class filter
compaction: cleanup_compaction: make_partition_filter: return flat_mutation_reader_v2::filter
There were some doubts about which internal prepared statements are
cached and which aren't. In addition, some queries that should have been
cached (IMO) weren't. This PR adds some verbosity to the
caching-enabling parameter, as well as adding caching to some queries.
As a follow-up, I would suggest making internal queries compile-time
strings with a compile-time hash; this would make the cache lookup no
longer dependent on the query's textual length, as it is today, which
makes sense given that the queries are static even today.
Closes #10465
Fixes #10335.
* github.com:scylladb/scylla:
internal queries: add caching to some queries
query_processor: remove default internal query caching behavior
query_processor: make execute_internal caching parameter more verbose
Some of the internal queries didn't have caching enabled even though
there is a chance of the query executing in large bursts or relatively
often; an example of the former is `default_authorized::authorize` and
of the latter is `system_distributed_keyspace::get_service_levels`.
Fixes #10335
Signed-off-by: Eliran Sinvani <eliransin@scylladb.com>
Also, pass `node_ops_cmd` by value to get rid of lifetime issues
when converting to coroutine.
Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
When executing internal queries, it is important that the developer
decides whether to cache the query internally, since internal queries
are cached indefinitely. It is also important that the programmer is
aware of whether caching is going to happen.
The code contained two "groups" of `query_processor::execute_internal`:
one group has caching by default and the other doesn't.
Here we add overloads to eliminate the default values for the caching
behaviour, forcing an explicit parameter for the caching decision.
All the call sites were changed to reflect the original caching default
that was there.
Signed-off-by: Eliran Sinvani <eliransin@scylladb.com>
`execute_internal` has a parameter to indicate whether caching a
prepared statement is needed for a specific call. However, this
parameter was a boolean, so it was easy to miss its meaning at the
various call sites. This replaces the parameter type with a more verbose
one, so it is clear from the call site what decision was made.
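A minimal sketch of the idea (the enum and function names here are assumptions, not necessarily the ones used in query_processor):

```cpp
// Illustrative sketch: replacing a bare bool with an enum class makes the
// caching decision readable at every call site.
enum class cache_internal { no, yes };

// Before: execute_internal(query, true);               // what does `true` mean?
// After:  execute_internal(query, cache_internal::yes); // self-documenting
inline bool should_cache(cache_internal c) {
    return c == cache_internal::yes;
}
```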
Said method has to evict all querier cache entries belonging to the
to-be-dropped table. This was already the case, but there was a window
where new entries could sneak in, causing a stale reference to the table
to be dereferenced later, when those entries were evicted due to TTL.
This window is now closed: the entries are evicted after the method has
waited for all ongoing operations on said table to stop.
Fixes: #10450
Closes #10451
* github.com:scylladb/scylla:
replica/database: drop_column_family(): drop querier cache entries after waiting for ops
replica/database: finish coroutinizing drop_column_family()
replica/database: make remove(const column_family&) private
Fix hangs on Scylla node startup with Raft enabled that were caused by:
- a deadlock when enabling the USES_RAFT feature,
- a non-voter server forgetting who the leader is and not being able to forward a `modify_config` entry to become a voter.
Read the commit messages for details.
Fixes: #10379
Refs: #10355
Closes #10380
* github.com:scylladb/scylla:
raft: actively search for a leader if it is not known for a tick duration
raft: server: return immediately from `wait_for_leader` if leader is known
service: raft: don't support/advertise USES_RAFT feature
Add a new test that sets up a CQL environment with a very small
prepared statements cache. The test reproduces the scenario described in
#10440, where the privileged section of the prepared statement cache
grows large and could starve the unprivileged section, making it
impossible to execute BATCH statements. Additionally, at the end of the
test, prepared statements / "simulated batches" of prepared statements
are executed a random number of times, stressing the cache.
To create a CQL environment with a small prepared statement cache,
cql_test_config is extended to allow setting a custom memory_config
value.
This patch enforces a minimum size of the unprivileged section when
performing the shrink() operation.
When the cache is shrunk, we still drop entries from the unprivileged
section first (as before this commit); however, if this section is
already small (smaller than max_size / 2), we drop entries from the
privileged section instead.
For example, if the cache could store at most 50 entries and there were
49 entries in the privileged section, then after adding 5 entries (which
would go to the unprivileged section) 4 of them would get evicted and
only the 5th one would stay. This caused problems with BATCH statements,
where all prepared statements in the batch have to stay in the cache at
the same time for the batch to execute correctly.
New tests are added to check this behavior and bookkeeping of section
sizes.
Fixes #10440.
This patch splits the _current_size variable, which tracked the overall
size of the cache, into two variables: _unprivileged_section_size and
_privileged_section_size.
Their sum is equal to the old _current_size, but now you can get the
size of each section separately.
lru_entry's cache_size() is replaced with owning_section_size(), which
indicates in which counter the size of the lru_entry is currently
stored.
That's done by picking the ideal level for the input, so that
LCS won't have to either promote or demote data later because
the output was placed in a level that is not the best candidate
for the size of the output data.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
That's done by picking the ideal level for the reshape input, so
that LCS won't have to either promote or demote data later because
the output was placed in a level that is not the best candidate
for the size of the output data.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
The ideal level is calculated as:
ceil(log base 10 of ((input_size + max_fragment_size - 1) / max_fragment_size))
such that 20 fragments will be placed at level 2, as level 1
capacity is only 10 fragments.
The goal of extracting it is that the formula will be useful for
major compaction in addition to reshape.
To calculate the ideal level, we used the formula:
log 10 (input_size / max_fragment_size)
where input_size / max_fragment_size calculates the number of fragments.
The problem is that the calculation can miss the last fragment, so the
wrong level may be picked if the last fragment would cause the target
level to exceed its capacity.
To fix it, let's tweak the formula to:
log 10 ((input_size + max_fragment_size - 1) / max_fragment_size)
such that the actual number of fragments will be calculated.
If the wrong level is picked, it can cause unnecessary write
amplification, as LCS will later have to promote data into the next
level.
Problem spotted by Benny Halevy.
Fixes #10458.
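The fixed formula can be sketched as follows (a minimal illustration with made-up names, not the actual compaction code; level N is assumed to hold fan_out^N fragments, so level 1 holds 10 by default):

```cpp
#include <cmath>
#include <cstdint>

// Illustrative sketch of the fixed ideal-level formula. Rounding the
// fragment count up accounts for the last, partial fragment.
unsigned ideal_level_for_input(uint64_t input_size, uint64_t max_fragment_size,
                               unsigned fan_out = 10) {
    uint64_t fragments = (input_size + max_fragment_size - 1) / max_fragment_size;
    if (fragments <= 1) {
        return 0;  // a single fragment needs no promotion
    }
    return (unsigned)std::ceil(std::log((double)fragments) /
                               std::log((double)fan_out));
}
```

For example, 20 fragments of input land at level 2, matching the capacity argument above, while an input of exactly 10 fragments stays at level 1; adding one extra byte (an 11th, partial fragment) bumps the result to level 2.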
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
"
After the recent conversion of the row-cache, two v1 mutation sources
remained: the memtable and the kl sstable reader.
This series converts both to a native v2 implementation. The conversion
is shallow: both continue to read and process the underlying (v1) data
in v1, the fragments are converted to v2 right before being pushed to
the reader's buffer. This conversion is simple, surgical and low-risk.
It is also better than the upgrade_to_v2() used previously.
Following this, the remaining v1 reader implementations are removed,
with the exception of downgrade_to_v1(), which is the only one left
at this point. Removing it requires converting all mutation sinks to
accept a v2 stream.
upgrade_to_v2() is now not used in any production code. It is still
needed to properly test downgrade_to_v1() (which is still used), so we
can't remove it yet. Instead it is hidden as a private method of
mutation_source. This still allows the above-mentioned testing to
continue, while preventing anyone from being tempted to introduce new
usage.
tests: https://jenkins.scylladb.com/job/releng/job/Scylla-CI/191
"
* 'convert-remaining-v1-mutation-sources/v2' of https://github.com/denesb/scylla:
readers: make upgrade_to_v2() private
test/lib/mutation_source_test: remove upgrade_to_v2 tests
readers: remove v1 forwardable reader
readers: remove v1 empty_reader
readers: remove v1 delegating_reader
sstables/kl: make reader impl v2 native
sstables/kl: return v2 reader from factory methods
sstables: move mp_row_consumer_reader_k_l to kl/reader.cc
partition_snapshot_reader: convert implementation to native v2
mutation_fragment_v2: range_tombstone_change: add minimal_memory_usage()
flat_mutation_reader's make_scrubbing_reader no longer exists,
so there is no need to include flat_mutation_reader.hh
or to forward-declare the class.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
We filter only on the partition key, so it doesn't matter,
but we want to get rid of flat_mutation_reader v1.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
If an SSTable write fails, it will leave a partial sstable which
contains a temporary TOC in addition to other partially written
components. The temporary TOC's content is written upfront, to allow us
to delete all partial components using that content if the write fails.
After commit e5fc4b6, a partial sstable cannot be deleted because the
code incorrectly assumes that all files being deleted unconditionally
have a TOC, but that's not true for partial files that need to be
removed.
The consequence of this is that the space of partial files cannot be
reclaimed, making it harder for Scylla to recover from ENOSPC, which
could be done by selecting a set of files for compaction with a higher
chance of succeeding given the free space.
Let's fix this by taking the temp TOC into account for partial files.
Fixes #10410.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Closes #10411
* github.com:scylladb/scylla:
sstables: Fix deletion of partial SSTables
sstables: Fix fsync_directory()
sstables: Rename dirname() to a more descriptive name
Slice restrictions are not allowed on the "duration" type, nor on a
collection, tuple or UDT of durations. We made an effort to print a
helpful message for the specific case encountered, such as "Slice
restrictions are not supported on UDTs containing duration".
But the if()s were reversed, meaning that a UDT - which is also a
tuple - was reported as a tuple instead of as a UDT, as we intended
(and as Cassandra reports it).
The wrong message was reproduced in the unit test translated from
Cassandra, select_test.py::testFilteringOnUdtContainingDurations
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20220428071807.1769157-1-nyh@scylladb.com>