Currently, because serialize_visitor::operator() is not implemented for counters, we cannot convert a counter returned by a WASM UDF to bytes when returning from wasm::run_script().
We could disallow using counters as WASM UDF return types, but an easier solution which we're already using in Lua UDFs is treating the returned counters as 64-bit integers when deserializing. This patch implements the latter approach and adds a test for it.
Closes #12806
* github.com:scylladb/scylladb:
wasm udf: deserialize counters as integers
test_wasm.py: add utility function for reading WASM UDF saved in files
This patch is based on #12681, only last 3 commits are relevant.
As described in #12709, currently, when a UDF used in a UDA is replaced, the UDA is not updated until the whole node is restarted.
This patch fixes the issue by updating all affected UDAs when a UDF is replaced.
Additionally, it includes a few convenience changes.
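To make the fix concrete, here is a hedged illustration (the keyspace, function bodies, and `session` driver handle are hypothetical, not taken from the patch):

```python
# Hypothetical CQL, executed through a cassandra-driver session.
session.execute("""CREATE FUNCTION ks.accumulate(acc bigint, v int)
                   RETURNS NULL ON NULL INPUT RETURNS bigint
                   LANGUAGE lua AS 'return acc + v'""")
session.execute("""CREATE AGGREGATE ks.my_sum(int)
                   SFUNC accumulate STYPE bigint INITCOND 0""")

# Before the fix, ks.my_sum kept using the old state function until the
# node was restarted; now the replacement takes effect immediately.
session.execute("""CREATE OR REPLACE FUNCTION ks.accumulate(acc bigint, v int)
                   RETURNS NULL ON NULL INPUT RETURNS bigint
                   LANGUAGE lua AS 'return acc + 2 * v'""")
```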
Closes #12710
* github.com:scylladb/scylladb:
uda: change the UDF used in a UDA if it's replaced
functions: add helper same_signature method
uda: return aggregate functions as shared pointers
LWT `IF` (column_condition) duplicates the expression prepare and evaluation code. Annoyingly,
LWT IF semantics are a little different than the rest of CQL: a NULL equals NULL, whereas usually
NULL = NULL evaluates to NULL.
This series converts `IF` prepare and evaluate to use the standard expression code. We employ
expression rewriting to adjust for the slightly different semantics.
In a few places, we adjust LWT semantics to harmonize them with the rest of CQL. These are pointed
out in their own separate patches so the changes don't get lost in the flood.
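To make the semantic difference concrete, a hedged illustration (`session` is a hypothetical driver session; `t` has an int column `v`):

```python
session.execute("INSERT INTO t (pk) VALUES (1)")  # v is left NULL

# In regular CQL filtering, NULL = NULL evaluates to NULL, which is not
# TRUE, so such a predicate never matches. In an LWT IF clause, NULL
# equals NULL, so this conditional update applies:
rs = session.execute("UPDATE t SET v = 7 WHERE pk = 1 IF v = null")
assert rs.was_applied
```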
Closes #12356
* github.com:scylladb/scylladb:
cql3: lwt: move IF clause expression construction to grammar
cql3: column_condition: evaluate column_condition as a single expression
cql3: lwt: allow negative list indexes in IF clause
cql3: lwt: do not short-circuit col[NULL] in IF clause
cql3: column_condition: convert _column to an expression
cql3: expr: generalize evaluation of subscript expressions
cql3: expr: introduce adjust_for_collection_as_maps()
cql3: update_parameters: use evaluation_inputs compatible row prefetch
cql3: expr: protect extract_column_value() from partial clustering keys
cql3: expr: extract extract_column_value() from evaluation machinery
cql3: selection: introduce selection_from_partition_slice
cql3: expr: move check for ordering on duration types from restrictions to prepare
cql3: expr: remove restrictions oper_is_slice() in favor of expr::is_slice()
cql3: column_condition: optimize LIKE with constant pattern after preparing
cql3: expr: add optimizer for LIKE with constant pattern
test: lib: add helper to evaluate an expression with bind variables but no table
cql3: column_condition: make the left-hand-side part of column_condition::raw
cql3: lwt: relax constraints on map subscripts and LIKE patterns
cql3: expr: fix search_and_replace() for subscripts
cql3: expr: fix function evaluation with NULL inputs
cql3: expr: add LWT IF clause variants of binary operators
cql3: expr: change evaluate_binop_sides to return more NULL information
In issue #12601, a dtest involving paging of ListStreams showed
incorrect results - the paged results had one duplicate stream and one
missing stream. We believe that the cause of this bug was that the
unsorted map of tables can change order between pages. In this patch
we add a test test_list_streams_paged_with_new_table which can
demonstrate this bug - by adding a lot of tables in mid-paging, we
cause the unsorted map to be reshuffled and the paging to break.
This is not the same situation as in #12601 (which did not involve
new tables) but we believe it demonstrates the same bug - and check
its fix. Indeed this passes with the fix in pull request #12614 and
fails without it.
This patch also adds a second test, test_stream_arn_unchanging.
That test eliminates a guess we had for the cause of #12601: we
thought that the stream ARN might change on a table when its schema
version changes, but the new test confirms that it actually behaves
as expected (the stream ARN doesn't change).
Refs #12601
Refs #12614
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #12616
This patch adds yet another reproducer for issue #10649, where
the combination of filtering and LIMIT returns fewer results when
a secondary index is added to the table.
Whereas the previous tests we had for this issue involved a regular
(global) index, the new test uses a local index (a Scylla-only feature).
It shows that the same bug exists also for local indexes, as noticed
by a user in #12766.
Refs #10649
Refs #12766
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #12783
Recently we enabled RBNO by default in all topology operations. This
made the operations a bit slower (repair-based topology ops are a bit
slower than classic streaming - they do more work), and in debug mode
with a large number of concurrent tests running, they might time out.
The timeout for bootstrap was already increased before; do the same for
decommission/removenode. The previously used timeout was 300 seconds
(the default used by the aiohttp library when it makes HTTP
requests); now we use the TOPOLOGY_TIMEOUT constant from ScyllaServer,
which is 1000 seconds.
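A hedged sketch of the change in spirit (the endpoint and helper below are illustrative, not the exact test/pylib code):

```python
from aiohttp import ClientSession, ClientTimeout

TOPOLOGY_TIMEOUT = 1000  # seconds; aiohttp's default total timeout is 300

async def remove_node(session: ClientSession, api: str, host_id: str) -> None:
    # Pass an explicit timeout instead of relying on aiohttp's default.
    async with session.post(f"{api}/storage_service/remove_node",
                            params={"host_id": host_id},
                            timeout=ClientTimeout(total=TOPOLOGY_TIMEOUT)) as resp:
        resp.raise_for_status()
```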
Closes #12765
* github.com:scylladb/scylladb:
test/pylib: use larger timeout for decommission/removenode
test/pylib: scylla_cluster: rename START_TIMEOUT to TOPOLOGY_TIMEOUT
Issue #7659, which we solved long ago, was about a query which included
a non-EQ restriction and wrongly picked up one of the indexes. It had
a short C++ regression test, but here we add a more elaborate Python
test for the same bug. The advantages of the Python test are:
1. The Python test can be run against any version of Scylla (e.g., to
check whether a certain version contains a backport of the fix).
2. The Python test reproduces not only a "benign" query error, but also
an assertion-failed crash which happened when the non-EQ restriction
was an "IN".
3. The Python test reproduces the same bug not just for a regular
index, but also for a local index.
I checked that, as expected, these tests pass on master, but fail
(and crash Scylla) in old branches before the fix for #7659.
Refs #7659.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #12797
It's only used by a single test and apparently dates from the time
when seastar lacked the future::discard_result() sugar.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Closes #12803
Currently, because serialize_visitor::operator() is not implemented
for counters, we cannot convert a counter returned by a WASM UDF
to bytes when returning from wasm::run_script().
We could disallow using counters as WASM UDF return types, but an
easier solution which we're already using in Lua UDFs is treating
the returned counters as 64-bit integers when deserializing. This
patch implements the latter approach and adds a test for it.
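Conceptually (a minimal Python sketch, not the actual C++ serialization code), the value of a counter on the wire is an 8-byte big-endian signed integer, so the i64 returned by the WASM UDF round-trips exactly like a bigint:

```python
import struct

def deserialize_counter_as_bigint(raw: bytes) -> int:
    # Counters and bigints share the same 8-byte big-endian wire form.
    (value,) = struct.unpack('>q', raw)
    return value

def serialize_bigint(value: int) -> bytes:
    return struct.pack('>q', value)

assert deserialize_counter_as_bigint(serialize_bigint(-42)) == -42
```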
Currently, we're repeating the same os.path, open, read, replace
each time we read a WASM UDF from a file.
To reduce code bloat, this patch adds a utility function
"read_function_from_file" that finds the file and reads it given
a function name and an optional new name, for cases when we want
to use a different name in cql (mostly for unique_names).
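A hedged sketch of such a helper (the file layout and exact signature are illustrative; the real function lives in test_wasm.py):

```python
import os

def read_function_from_file(func_name, rename=None):
    # Locate the WAT source next to the test file, read it, and
    # optionally rename the function (e.g. to a unique_name()).
    path = os.path.join(os.path.dirname(__file__), f"{func_name}.wat")
    with open(path) as f:
        source = f.read()
    if rename is not None:
        source = source.replace(func_name, rename)
    return source
```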
Move long running topology tests out of `test_topology.py` and into their own files, so they can be run in parallel.
While there, merge simple schema tests.
Closes #12804
* github.com:scylladb/scylladb:
test/topology: rename topology test file
test/topology: lint and type for topology tests
test/topology: move topology ip tests to own file
test/topology: move topology test remove garbage...
test/topology: move topology rejoin test to own file
test/topology: merge topology schema tests and...
test/topology: isolate topology smp params test
test/topology: move topology helpers to common file
Instead of the grammar passing expression bits to column_condition,
have the grammar construct an unprepared expression and pass it as
a whole. column_condition::raw then uses prepare_expression() to
prepare it.
The call to validate_operation_on_durations() is eliminated, since it's
already done by prepare_expression().
Some tests are adjusted for slightly different wording.
The LWT IF clause errors out on a negative list index. This deviates
from non-LWT subscript evaluation, from PostgreSQL, and from the
handling of a too-large index, all of which evaluate the subscript
operation to NULL.
Make things more consistent by also evaluating list[-1] to NULL.
A test is adjusted.
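A hedged illustration of the new behavior (`session` is a hypothetical driver session; `l` is a `list<int>` column):

```python
session.execute("INSERT INTO t (pk, l) VALUES (1, [10, 20, 30])")

# l[-1], like a too-large index, now evaluates to NULL instead of
# raising an error; NULL = 10 is FALSE, so the update is not applied.
rs = session.execute("UPDATE t SET l = [] WHERE pk = 1 IF l[-1] = 10")
assert not rs.was_applied
```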
Currently if an LWT IF clause contains a subscript with NULL
as the key, then the entire IF clause is evaluated as FALSE.
This is incorrect, because col[NULL] = NULL would simplify
to NULL = NULL, which is interpreted as TRUE using the LWT
comparisons. Even with SQL NULL handling, "col[NULL] IS NULL"
should evaluate to true, but since we short-circuit as soon
as we encounter the NULL key, we cannot complete the evaluation.
Fix by setting cell_value to null instead of returning immediately.
Tests that check for this were adjusted. Since the test changed
behavior from not applying the statement to applying it, a new
statement is added that undoes the previous one, so downstream
statements are not affected.
After this change, all components of column_condition are expressions.
One LWT-specific hack was removed from the evaluation path:
- lists being represented as maps is made transparent by
converting during evaluation with adjust_for_collections_as_maps()
column_condition::applies_to() previously handled a missing row
by materializing a NULL for the column being evaluated; now it
materializes a NULL row instead, since evaluation of the column is
moved to common code.
A few more cases in lwt_test became legal, though I'm not sure
exactly why in this patch.
Both LWT IF clause and SELECT WHERE clause check that a duration type
isn't used in an ordered comparison, since duration types are unordered
(is 1mo more or less than 30d?). As a first step towards centralizing this
check, move the check from restrictions into prepare. When LWT starts using
prepare, the duplication will be removed.
The error message was changed: the word "slice" is an internal term, and
a comparison does not necessarily have to be in a restriction (which is
also an internal term).
Tests were adjusted.
Compiling a pattern is expensive and so we should try to do it
at prepare time, if the pattern is a constant. Add an optimizer
that looks for such cases and replaces them with a unary function
that embeds the compiled pattern.
This isn't integrated yet with prepare_expr(), since the filtering
code isn't ready for generic expressions. Its first user will be LWT,
which contains the optimization already (filtering had it as well,
but lost it sometime during the expression rewrite).
A unit test is added.
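The idea, sketched in Python with illustrative names (the real optimizer operates on prepared C++ expressions):

```python
import re

def like_to_regex(pattern: str) -> str:
    # In CQL LIKE, '%' matches any sequence and '_' a single character.
    return re.escape(pattern).replace('%', '.*').replace('_', '.')

def prepare_like(constant_pattern: str):
    compiled = re.compile(like_to_regex(constant_pattern))  # compiled once
    return lambda value: compiled.fullmatch(value) is not None

matcher = prepare_like('abc%')  # pattern compiled at "prepare" time
assert matcher('abcdef') and not matcher('xabcdef')
```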
Sometimes we want to defeat the expression optimizer's ability to
fold constant expressions. A bind variable is a convenient way to
do this, without the complexity of faking a schema and row inputs.
Add a helper to evaluate an expression with bind variable parameters,
doing all the paperwork for us.
A companion make_bind_variable() is added to likewise simplify
creating bind variables for tests.
Previously, we rejected map subscripts that are NULL, as well as
LIKE patterns that are NULL. General SQL expression evaluation
allows NULL everywhere, and doesn't raise errors - an expression
involving NULL generally yields NULL. Change the behavior to
follow that. Since the new behavior was previously disallowed,
no one should have been relying on it and there is no compatibility
problem.
Update the tests and note it as a CQL extension.
LWT IF clause interprets equality differently from SQL (and the
rest of CQL): it thinks NULL equals NULL. Currently, it implements
binary operators all by itself so the fact that oper_t::EQ (and
friends) means something else in the rest of the code doesn't
bother it. However, we can't unify the code (in
column_condition.cc) with the rest of expression evaluation if
the meaning changes in different places.
To prepare for this, introduce a null_handling_style field to
binary_operator that defaults to `sql` but can be changed to
`lwt_nulls` to indicate this special semantic.
A few unit tests are added. LWT itself still isn't modified.
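A conceptual Python sketch of the two styles (the real field lives on the C++ binary_operator; names here are illustrative):

```python
from enum import Enum, auto

class NullHandling(Enum):
    SQL = auto()        # comparisons involving NULL yield NULL
    LWT_NULLS = auto()  # NULL equals NULL, and nothing else

def evaluate_eq(lhs, rhs, style=NullHandling.SQL):
    if lhs is None or rhs is None:
        if style is NullHandling.LWT_NULLS:
            return lhs is None and rhs is None
        return None  # SQL-style: the comparison itself yields NULL
    return lhs == rhs

assert evaluate_eq(None, None, NullHandling.LWT_NULLS) is True
assert evaluate_eq(None, None, NullHandling.SQL) is None
```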
Move group0 members tests to their own file.
Also move the slow test for removenode with nodes not present in
group0 (a server after a sudden stop) to a separate file.
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
`paxos_response_handler::learn_decision` was calling
`cdc_service::augment_mutation_call` concurrently with
`storage_proxy::mutate_internal`. `augment_mutation_call` was selecting
rows from the base table in order to create the preimage, while
`mutate_internal` was writing rows to the table. It was therefore
possible for the preimage to observe the update that it accompanied,
which doesn't make any sense, because the preimage is supposed to show
the state before the update.
Fix this by performing the operations sequentially. We can still perform
the CDC mutation write concurrently with the base mutation write.
`cdc_with_lwt_test` was sometimes failing in debug mode due to this bug
and was marked flaky. Unmark it.
Also fix a comment in `cdc_with_lwt_test`.
Fixes #12098
Closes #12768
* github.com:scylladb/scylladb:
test/cql-pytest: test_cdc: regression test for #12098
test/cql: cdc_with_lwt_test: fix comment
service: storage_proxy: sequence CDC preimage select with Paxos learn
... move them to their own file.
Schema verification tests for restart, add, and hard stop of a server
can be done with the same cluster. Merge them into the same test case.
While there, move them to a separate file to be run independently, as
this is a slow test.
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
Improve logging by printing the cluster at the end of each test.
Stop performing operations like attempting queries or dropping keyspaces on dirty clusters. Dirty clusters might be completely dead and these operations would only cause more "errors" to happen after a failed test, making it harder to find the real cause of failure.
Mark cluster as dirty when a test that uses it fails - after a failed test, we shouldn't assume that the cluster is in a usable state, so we shouldn't reuse it for another test.
Rely on the `is_dirty` flag in `PythonTest`s and `CQLApprovalTest`s, similarly to what `TopologyTest`s do.
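A hedged sketch of the bookkeeping (the hook and attribute wiring are illustrative; the real logic is spread across test.py and test/pylib):

```python
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        cluster = getattr(item, "cluster", None)
        if cluster is not None:
            # A failed test leaves the cluster in an unknown state:
            # mark it dirty so it is recycled, never reused.
            cluster.is_dirty = True
```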
Closes #12652
* github.com:scylladb/scylladb:
test.py: rely on ScyllaCluster.is_dirty flag for recycling clusters
test/topology: don't drop random_tables keyspace after a failed test
test/pylib: mark cluster as dirty after a failed test
test: pylib, topology: don't perform operations after test on a dirty cluster
test/pylib: print cluster at the end of test
Recently we enabled RBNO by default in all topology operations. This
made the operations a bit slower (repair-based topology ops are a bit
slower than classic streaming - they do more work), and in debug mode
with a large number of concurrent tests running, they might time out.
The timeout for bootstrap was already increased before; do the same for
decommission/removenode. The previously used timeout was 300 seconds
(the default used by the aiohttp library when it makes HTTP
requests); now we use the TOPOLOGY_TIMEOUT constant from ScyllaServer,
which is 1000 seconds.
Force a snapshot with schema changes while the server is down, then verify the schema when bringing the server back up.
Closes #12726
* github.com:scylladb/scylladb:
pytest/topology: check snapshot transfer
raft conf error injection for snapshot
test/pylib: one-shot error injection helper
Perform multiple LWT inserts to different keys ensuring none of them
observes a preimage.
On my machine this test reproduces the problem more than 50% of the time
in debug mode.
The helper wrapper facilitates using sharded<sstable_directory> in several test cases; the helper and its callers have deserved some cleanup over time.
Closes #12791
* github.com:scylladb/scylladb:
sstable_directory_test: Reindent and de-multiline
sstable_directory_test: Enlighten and rename sstable_from_existing_file
sstable_directory_test: Remove constant parallelism parameter
Merging rows from different partition versions should preserve the LRU link of
the entry from the newer version. We need this in case we're merging two last
dummy entries where the older dummy is already unlinked from the LRU. The
newer dummy could be the last entry which is still holding the partition
entry linked in the LRU.
The mutation_partition_v2 merging didn't take the LRU link from the newer
entry, and we could end up with the partition entry not having any entries
linked in the LRU.
Introduced in f73e2c992f.
Fixes #12778
Closes #12785
The system keyspace uses the local replication strategy and thus
does not need to be repaired. It was possible to invoke repair
of this keyspace through the API, which led to a runtime error since
peer_events and scylla_table_schema_history have different sharding logic.
For keyspaces with a local replication strategy,
repair_service::do_repair_start now returns immediately.
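A hedged sketch of the regression check over the REST API (the port and assertions are illustrative):

```python
from aiohttp import ClientSession

async def check_system_repair_is_noop(host: str) -> None:
    async with ClientSession() as session:
        # `system` uses LocalStrategy, so repair should finish immediately
        # instead of failing on the differently-sharded tables.
        url = f"http://{host}:10000/storage_service/repair_async/system"
        async with session.post(url) as resp:
            resp.raise_for_status()
```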
Closes #12459
* github.com:scylladb/scylladb:
test: rest_api: check if repair of system keyspace returns before corresponding task is created
repair: finish repair immediately on local keyspaces
Many tests using the sstable directory wrapper have broken indentation
after previous patching. Fix it. No functional changes.
Also, while at it, convert multiline wrapper calls into one-liners;
after the previous patch they are short enough for that.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
It used to be the sstable maker for sstable::test_env / cql_test_env;
now sstables for tests are made via the sstables manager explicitly, so
it can be renamed to something more relevant to its current status.
Also, remove the explicit qualifier from its constructors to make call
sites shorter.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Now it's scattered between the distributed loader and the sstable directory code, making the latter quite bloated. Keeping everything in the distributed loader makes the sstable_directory code compact and easier to patch to support an object storage backend.
Closes #12771
* github.com:scylladb/scylladb:
sstable_directory: Rename remove_input_sstables_from_reshaping()
sstable_directory: Make use of remove_sstables() helper
sstable_directory: Merge output sstables collecting methods
distributed_loader: Remove max_compaction_threshold argument from reshard()
distributed_loader: Remove compaction_manager& argument from reshard()
sstable_directory: Move the .reshard() to distributed_loader
sstable_directory: Add helper to load foreign sstable
sstable_directory: Add io-prio argument to .reshard()
sstable_directory: Move reshard() to distributed_loader.cc
distributed_loader: Remove compaction_manager& argument from reshape()
sstable_directory: Move the .reshape() to distributed loader
sstable_directory: Add helper to retrieve local sstables
sstable_directory: Add io-prio argument to .reshape()
sstable_directory: Move reshape() to distributed_loader.cc
CQL transport server error handling fixes and improvements:
* log failed requests with `DEBUG` level for easier debugging;
* in case of unhandled errors, deliver them to the client as `SERVER_ERROR`s;
* a fix for `protocol_error`s in case of shed big requests;
* explicit tests have been written for the error handling problems above.
Closes #11949
* github.com:scylladb/scylladb:
transport server: fix "request size too large" handling
transport server: log failed requests with debug level
transport server: fix unexpected server errors handling
transport server: log client errors with debug level
We have a cql3::expr::expression::printer wrapper that annotates
an expression with a debug_mode boolean prior to formatting. The
fmt library, however, provides a much simpler alternative: a custom
format specifier. With this, we can write format("{:user}", expr) for
user-oriented prints, or format("{:debug}", expr) for debug-oriented
prints (if nothing is specified, the default remains debug).
This is done by implementing fmt::formatter::parse() for the
expression type, and using expression::printer internally.
Since sometimes we pass expression element types rather than
the expression variant, we also provide a custom formatter for all
ExpressionElement types.
Uses of expression::printer are updated to use the nicer syntax. In
one place we eliminate a temporary that is no longer needed since
ExpressionElement types can be formatted directly.
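Python has a direct analog via `__format__`, which makes the idea easy to see (the real implementation is a fmt::formatter specialization in C++; the reprs below are made up):

```python
class Expression:
    def __init__(self, user_repr: str, debug_repr: str):
        self.user_repr = user_repr
        self.debug_repr = debug_repr

    def __format__(self, spec: str) -> str:
        if spec in ("", "debug"):  # the default remains debug
            return self.debug_repr
        if spec == "user":
            return self.user_repr
        raise ValueError(f"unknown format spec: {spec!r}")

e = Expression("a = 1", "binary_operator(column(a), EQ, constant(1))")
assert f"{e:user}" == "a = 1"
assert f"{e}" == f"{e:debug}"
```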
Closes #12702
Calling _read_buf.close() doesn't imply eof(): some data
may have already been read into kernel or client buffers
and will be returned the next time read() is called.
When the _server._max_request_size limit was exceeded
and the _read_buf was closed, the process_request method
finished and we started processing the next request in
connection::process. The unread data from _read_buf was
treated as the header of the next request frame, resulting
in an "Invalid or unsupported protocol version" error.
The existing test_shed_too_large_request was adjusted.
It was originally written with the assumption that the data
of a large query would simply be dropped from the socket
and the connection could be used to handle the
next requests. This behaviour was changed in scylladb#8800,
now the connection is closed on the Scylla side and
can no longer be used. To check there are no errors
in this case, we use Scylla metrics, getting them
from the Scylla Prometheus API.
If request processing ended with an error, it is worth
sending the error to the client through
make_error/write_response. Previously in this case we
just wrote a message to the log and didn't handle the
client connection in any way. As a result, the only
thing the client got in this case was timeout error.
A new test_batch_with_error is added. It is quite
difficult to reproduce error condition in a test,
so we use error injection instead. Passing injection_key
in the body of the request ensures that the exception
will be thrown only for this test request and
will not affect other requests that
the driver may send in the background.
Closes: scylladb#12104
Now it gets one from this->, but the method is becoming a static one
in distributed_loader, which only has it as an argument. That's no big
deal, as the current IO class is going to be derived from the current
scheduling group, so this extra argument will go away entirely some day.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
`paxos_response_handler::learn_decision` was calling
`cdc_service::augment_mutation_call` concurrently with
`storage_proxy::mutate_internal`. `augment_mutation_call` was selecting
rows from the base table in order to create the preimage, while
`mutate_internal` was writing rows to the table. It was therefore
possible for the preimage to observe the update that it accompanied,
which doesn't make any sense, because the preimage is supposed to show
the state before the update.
Fix this by performing the operations sequentially. We can still perform
the CDC mutation write concurrently with the base mutation write.
`cdc_with_lwt_test` was sometimes failing in debug mode due to this bug
and was marked flaky. Unmark it.
Fixes#12098
Test snapshot transfer by reducing the snapshot threshold on the
initial servers (3 initial and 1 trailing).
The test then creates a table and makes 3 extra schema changes (add
column), triggering at least 2 snapshots.
It then brings a new server into the cluster, which will get the schema
through a snapshot.
Finally, the test stops the initial servers and verifies that the table
schema is up to date on the new server.
Signed-off-by: Alejo Sanchez <alejo.sanchez@scylladb.com>
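A hedged outline of that flow (the ManagerClient helper names are illustrative and may differ from test/pylib):

```python
async def test_raft_snapshot_transfer(manager) -> None:
    servers = await manager.running_servers()   # the initial servers
    cql = manager.cql
    await cql.run_async("CREATE TABLE ks.t (pk int PRIMARY KEY)")
    for i in range(3):                          # trigger at least 2 snapshots
        await cql.run_async(f"ALTER TABLE ks.t ADD c{i} int")
    new_server = await manager.server_add()     # bootstraps from a snapshot
    for s in servers:
        await manager.server_stop(s.server_id)
    # Finally, verify ks.t's schema (including c0..c2) on new_server.
```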
Let the initial range passed to query_partition_key_range
be [1, 2), where 2 is the successor of 1 in terms
of ring_position order and 1 is equal to a vnode boundary.
Then query_ranges_to_vnodes_generator() -> [[1, 1], (1, 2)],
so we get an empty range (1, 2) and would subsequently
make a redundant data request with this empty range in
storage_proxy::query_partition_key_range_concurrent.
The patch adds a check for this condition after
making a split in the main loop in process_one_range.
The patch does not attempt to handle cases where the
original ranges were empty, since this check is the
responsibility of the caller. We only take care
not to add empty ranges to the result as an
unintentional artifact of the algorithm in
query_ranges_to_vnodes_generator.
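In sketch form, with integer tokens standing in for ring positions (illustrative only, not the actual C++):

```python
def split_at_vnode(start, end, vnode_end, successor):
    # Split [start, end) at the vnode boundary into [start, vnode_end]
    # and (vnode_end, end); emit the tail only if it contains a token.
    out = [(start, vnode_end)]
    if successor(vnode_end) < end:
        out.append((vnode_end, end))
    return out

# With end == successor(vnode_end), the open interval (vnode_end, end)
# is empty, so only the head range is produced:
assert split_at_vnode(1, 2, 1, lambda t: t + 1) == [(1, 1)]
```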
A test case is added in test_get_restricted_ranges.
The helper lambda `check` is changed so as not to limit
the number of ranges to the length of the expected
ranges; otherwise this check would pass even without
the change in process_one_range.
Fixes: #12566
Closes #12755