We noticed that old branches of Scylla had problems with looking up a
null value in a local secondary index - hanging or crashing. This patch
includes tests to reproduce these bugs. The tests pass on current
master - apparently this bug has already been fixed, but we didn't
have a regression test for it.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #12570
This PR fixes three problems that prevented/could prevent a successful build in ScyllaDB's Nix development environment.
The first commit adds a missing `abseil-cpp` dependency to Nix devenv, as this dependency is now required after 8635d2442.
The second commit bumps the version of Lua from 5.3 to 5.4, as after 9dd5107919 a 4-argument version of `lua_resume` (only available in Lua 5.4) is used in the ScyllaDB codebase.
The third commit explicitly adds `rustc` to Nix devenv dependencies. This places `rustc` from nixpkgs on the `PATH`, preventing `cargo` from executing `rustc` installed globally on the system (see the commit message for additional reasoning).
After those changes, ScyllaDB can be successfully built in both `nix-shell .` and `nix develop .` environments.
Closes #12568
* github.com:scylladb/scylladb:
build: explicitly add rustc to Nix devenv
build: bump Lua version (5.3 -> 5.4) in Nix devenv
build: add abseil-cpp dependency to Nix devenv
If an endpoint handler throws an exception, the details of the exception
are not returned to the client. Normally this is desirable so that
information is not leaked, but in this test framework we do want to
return the details to the client so it can log a useful error message.
Do it by wrapping every handler into a catch clause that returns
the exception message.
Also modify a bit how HTTPErrors are rendered so it's easier to discern
the actual body of the error from other details (such as the params used
to make the request etc.)
Before:
```
E test.pylib.rest_client.HTTPError: HTTP error 500: 500 Internal Server Error
E
E Server got itself in trouble, params None, json None, uri http+unix://api/cluster/before-test/test_stuff
```
After:
```
E test.pylib.rest_client.HTTPError: HTTP error 500, uri: http+unix://api/cluster/before-test/test_stuff, params: None, json: None, body:
E Failed to start server at host 127.155.129.1.
E Check the log files:
E /home/kbraun/dev/scylladb/testlog/test.py.dev.log
E /home/kbraun/dev/scylladb/testlog/dev/scylla-1.log
```
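The wrapping described above can be sketched as a plain-Python decorator (all names here are hypothetical stand-ins; the real framework's handlers are aiohttp-based):

```python
import functools

def catch_errors(handler):
    """Wrap an endpoint handler so exception details reach the client.

    A production server hides exception details to avoid leaking
    information; in a test framework we return the message in the 500
    body so tests can log a useful error.
    """
    @functools.wraps(handler)
    def wrapped(*args, **kwargs):
        try:
            return 200, handler(*args, **kwargs)
        except Exception as e:
            # Return the exception text as the response body.
            return 500, f"{type(e).__name__}: {e}"
    return wrapped

@catch_errors
def before_test(name):
    raise RuntimeError(f"Failed to start server for {name}")
```

With this, a failing handler produces a 500 whose body carries the exception message instead of a generic "Server got itself in trouble".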
Closes #12563
When we obtained a new cluster for a test case after the previous test
case left a dirty cluster, we would release the old cluster's used IP
addresses (`_before_test` function). However, we would not release the
last cluster's IP after the last test case. We would run out of IPs with
sufficiently many test files or `--repeat` runs. Fix this.
Also reorder the operations a bit: stop the cluster (and release its
IPs) before freeing up space in the cluster pool (i.e. call
`self.cluster.stop()` before `self.clusters.steal()`). This reduces
concurrency a bit - fewer Scyllas running at the same time, which is
good (the pool size gives a limit on the desired max number of
concurrently running clusters). Killing a cluster is quick so it won't
make a significant difference for the next guy waiting on the pool.
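The reordered teardown might look like this minimal asyncio sketch (the class and method names are stand-ins modeled on the ones mentioned above, not the actual framework code):

```python
import asyncio

class Cluster:
    def __init__(self):
        self.running = True

    async def stop(self):
        # Stopping the cluster also releases its IP addresses.
        self.running = False

class ClusterPool:
    def __init__(self):
        self.free_slots = 0

    async def steal(self):
        # Frees a pool slot so the next waiter may start a new cluster.
        self.free_slots += 1

async def release_dirty(cluster: Cluster, pool: ClusterPool) -> None:
    # Stop the cluster (releasing its IPs) *before* freeing the pool
    # slot, so the pool size caps the number of clusters running at once.
    await cluster.stop()
    await pool.steal()
```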
Closes #12564
Before this patch, "cargo" was the only Rust toolchain dependency in the
Nix development environment. Due to the way the "cargo" tool is packaged
in Nix, "cargo" would first try to use "rustc" from PATH (for example
some version already installed globally on the OS). If it didn't find
any, it would fall back to "rustc" from nixpkgs.
There are issues with this approach:
- "rustc" installed globally on the system could be old.
- the goal of having a Nix development environment is that such
environment is separate from the programs installed globally on the
system and the versions of all tools are pinned (via flake.lock).
Fix this problem by adding rustc to nativeBuildInputs in default.nix.
After this patch, "rustc" from nixpkgs is present on the PATH
(potentially overriding "rustc" already installed on the system), so
"cargo" can correctly use it.
You can validate this behavior experimentally by adding a fake failing
rustc before entering the Nix development environment:
mkdir fakerustc
echo '#!/bin/bash' >> fakerustc/rustc
echo 'exit 1' >> fakerustc/rustc
chmod +x fakerustc/rustc
export PATH=$(pwd)/fakerustc:$PATH
nix-shell .
A recent commit (9dd5107919) started using a 4-argument version of
lua_resume, which is only available in Lua 5.4. This caused build
problems when trying to build Scylla in the Nix development environment:
tools/lua_sstable_consumer.cc:1292:19: error: no matching function for call to 'lua_resume'
ret = lua_resume(l, nullptr, nargs, &nresults);
^~~~~~~~~~
/nix/store/wiz3xb19x2pv7j3hf29rbafm4s5zp2kx-lua-5.3.6/include/lua.h:290:15: note: candidate function not viable: requires 3 arguments, but 4 were provided
LUA_API int (lua_resume) (lua_State *L, lua_State *from, int narg);
^
1 error generated.
Fix the problem by bumping the version of Lua from 5.3 to 5.4 in
default.nix. Since "lua54Packages.lua" was added to nixpkgs fairly
recently (NixOS/nixpkgs#207862), flake.lock is updated to get the newest
version of nixpkgs (updated using "nix flake update" command).
The main assumption here is that if is_big is good enough for the
GetBatchItems operation, it should also work well for Scan,
Query and GetRecords. It is also easier to maintain more unified
code.
Additionally, the documentation of 'future<> print' used for streaming
suggests that there is quite a big overhead, so since it seems the
only motivation for streaming was to reduce the contiguous allocation
size below some threshold, we should not stream when this threshold
is not exceeded.
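The resulting decision can be sketched as follows (the threshold value and all names except the is_big predicate are hypothetical):

```python
# Hypothetical threshold; the real value lives in the alternator code.
STREAM_THRESHOLD = 128 * 1024  # bytes

def is_big(estimated_size: int) -> bool:
    """One predicate shared by GetBatchItems, Scan, Query, GetRecords."""
    return estimated_size > STREAM_THRESHOLD

def make_response(payload: bytes):
    # Stream only when the contiguous allocation would be too large;
    # below the threshold the streaming overhead isn't worth paying.
    if is_big(len(payload)):
        return ("streamed", payload)
    return ("buffered", payload)
```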
Closes #12164
This PR is not related to any reported issue in the repo.
I've just discovered a broken link in the university caused by a
missing redirection.
Closes #12567
After commit 8635d2442, the abseil submodule was removed in favor of
using a pre-built abseil distribution. Installation of abseil-cpp was
added to install-dependencies.sh and the dbuild image, but no change was
made to the Nix development environment, which resulted in an error
while executing ./configure.py (while in the Nix devenv):
Package absl_raw_hash_set was not found in the pkg-config search path.
Perhaps you should add the directory containing `absl_raw_hash_set.pc'
to the PKG_CONFIG_PATH environment variable
No package 'absl_raw_hash_set' found
Fix the issue by adding "abseil-cpp" to buildInputs in default.nix.
Recently, commit 0b418fa made the checking for "unset" values more
centralized and more robust, and as the tests added in this patch
show, the situation is now good (and in particular, #10358 is
solved).
The tests in this patch check that the behavior of "unset" values in
the CQL v4 protocol matches Cassandra's behavior and its documentation,
and how it compares to our wishes of how we want unset values to behave.
One of these tests fails on Cassandra (we consider this a Cassandra bug).
One test fails on Scylla because it doesn't yet support arithmetic
expressions (Refs #2693).
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #12534
The CQL protocol and specification call for lists with NULLs in
some places. For example, the statement:
```cql
UPDATE tab
SET x = 3
IF y IN (1, 2, NULL)
WHERE pk = 4
```
has a list `(1, 2, NULL)` that contains NULL. Although the syntax is tuple-like, the value is a list;
consider the same statement as a prepared statement:
```cql
UPDATE tab
SET x = :x
IF y IN :y_values
WHERE pk = :pk
```
`:y_values` must have a list type, since the number of elements is unknown.
Currently, this is done with special paths inside LWT that bypass normal
evaluation, but if we want to unify those paths, we must allow NULLs in
lists (except in storage). This series does that.
Closes #12411
* github.com:scylladb/scylladb:
test: materialized view: add test exercising synthetic empty-type columns
cql3: expr: relax evaluate_list() to allow NULL elements
types: allow lists with NULL
test: relax NULL check test predicate
cql3, types: validate listlike collections (sets, lists) for storage
types: make empty type deserialize to non-null value
following tests are integrated into the scylla executable:
- perf_fast_forward
- perf_row_cache_update
- perf_simple_query
- perf_sstable
before this change
```console
$ size build/release/scylla
text data bss dec hex filename
82284664 288960 335897 82909521 4f11951 build/release/scylla
$ ls -l build/release/scylla
-rwxrwxr-x 1 kefu kefu 1719672112 Jan 19 17:51 build/release/scylla
```
after this change
```console
$ size build/release/scylla
text data bss dec hex filename
84349449 289424 345257 84984130 510c142 build/release/scylla
$ ls -l build/release/scylla
-rwxrwxr-x 1 kefu kefu 1774204800 Jan 19 17:52 build/release/scylla
```
Fixes #12484
Closes #12558
* github.com:scylladb/scylladb:
main: move perf_sstable into scylla
main: move perf_row_cache_update into scylla
test: perf_row_cache_update: add static specifier to local functions
main: move perf_fast_forward into scylla
main: move perf_simple_query into scylla
test: extract debug::the_database out
main: shift the args when checking exec_name
main: extract lookup_main_func() out
If a cluster fails to boot, it saves the exception in
`self.start_exception` variable; the exception will be rethrown when
a test tries to start using this cluster. As explained in `before_test`:
```
def before_test(self, name) -> None:
"""Check that the cluster is ready for a test. If
there was a start error, throw it here - the server is
running when it's added to the pool, which can't be attributed
to any specific test, throwing it here would stop a specific
test."""
```
It's arguable whether we should blame some random test for a failure
that it didn't cause, but nevertheless, there's a problem here: the
`start_exception` will be rethrown and the test will fail, but then the
cluster will be simply returned to the pool and the next test will
attempt to use it... and so on.
Prevent this by marking the cluster as dirty the first time we rethrow
the exception.
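A minimal sketch of the fix, with hypothetical class and attribute names based on the message:

```python
class ServerCluster:
    """Hypothetical model of the test framework's cluster object."""

    def __init__(self, start_exception=None):
        self.start_exception = start_exception
        self.is_dirty = False  # dirty clusters get recycled, not reused

    def before_test(self, name: str) -> None:
        if self.start_exception is not None:
            # Mark the cluster dirty the first time we rethrow, so it is
            # replaced instead of being returned to the pool and handed
            # to the next test, which would fail the same way.
            self.is_dirty = True
            raise self.start_exception
```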
Closes #12560
This decreases the whole alternator::get_table cpu time by 78%
(from 2.8 us to 0.6 us on my cpu).
In perf_simple_query it decreases allocs/op by 1.6% (by removing 4 allocations)
and increases median tps by 3.4%.
Raw results from running:
./build/release/test/perf/perf_simple_query_g --smp 1 \
--alternator forbid --default-log-level error \
--random-seed=1235000092 --duration=180 --write
Before the patch:
median 46903.65 tps (197.2 allocs/op, 12.1 tasks/op, 170886 insns/op, 0 errors)
median absolute deviation: 210.15
maximum: 47354.59
minimum: 42535.63
After the patch:
median 48484.76 tps (194.1 allocs/op, 12.1 tasks/op, 168512 insns/op, 0 errors)
median absolute deviation: 317.32
maximum: 49247.69
minimum: 44656.38
Closes #12445
Commitlog O_DSYNC is intended to make Raft and schema writes durable
in the face of power loss. To make O_DSYNC performant, we preallocate
the commitlog segments, so that the commitlog writes only change file
data and not file metadata (which would require the filesystem to commit
its own log).
However, in tests, this causes each ScyllaDB instance to write 384MB
of commitlog segments. This overloads the disks and slows everything
down.
Fix this by disabling O_DSYNC (and therefore preallocation) during
the tests. They can't survive power loss, and run with
--unsafe-bypass-fsync anyway.
Closes #12542
* configure.py:
- include `test/perf/perf_sstable` and its dependencies in scylla_perfs
* test/perf/perf_sstable.cc: change `main()` to
`perf::scylla_sstable_main()`
* test/perf/entry_point.hh: add
`perf::scylla_sstable_main()`
* main.cc:
- dispatch "perf-sstable" subcommand to
`perf::scylla_sstable_main`
before this change, we have a tool at `test/perf/perf_sstable`
for running performance tests by exercising sstable related operations.
after this change, `test/perf/perf_sstable` is integrated
into `scylla` as a subcommand, so we can run `scylla perf-sstable
[options, ...]` to perform the same tests previously driven by the tool.
Fixes#12484
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
* configure.py:
- include `test/perf/perf_row_cache_update.cc` in scylla_perfs
* main.cc:
- dispatch "perf-row-cache-update" subcommand to
`perf::scylla_row_cache_update_main`
* test/perf/perf_row_cache_update.cc: change `main()` to
`perf::scylla_row_cache_update_main()`
* test/perf/entry_point.hh: add
`perf::scylla_row_cache_update_main()`
before this change, we have a tool at `test/perf/perf_row_cache_update`
for running performance tests by updating row cache.
after this change, `test/perf/perf_row_cache_update` is integrated
into `scylla` as a subcommand, so we can run `scylla perf-row-cache-update
[options, ...]` to perform the same tests previously driven by the tool.
Fixes#12484
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
now that these functions are only used within the same compilation unit,
they don't need external linkage. so let's hide them using `static`.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
* configure.py:
- include `test/perf/perf_fast_forward.cc` in scylla_perfs
* main.cc:
- dispatch "perf-fast-forward" subcommand to
`perf::scylla_fast_forward_main`
* test/perf/perf_fast_forward.cc: change `main()` to
`perf::scylla_fast_forward_main()`
* test/perf/entry_point.hh: add
`perf::scylla_fast_forward_main()`
before this change, we have a tool at `test/perf/perf_fast_forward`
for running performance tests by fast forwarding the reader.
after this change, `test/perf/perf_fast_forward` is integrated
into `scylla` as a subcommand, so we can run `scylla perf-fast-forward
[options, ...]` to perform the same tests previously driven by the tool.
Fixes#12484
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
* configure.py:
- include scylla_perfs in scylla
- move 'test/lib/debug.cc' down scylla_perfs, as the latter uses
`debug::the_database`
- link `scylla` against seastar_testing_libs as well, because we
use the helpers in `test/lib/random_utils.hh` for generating
random numbers / sequences in `perf_simple_query.cc`, and
`random_utils.hh` references `seastar::testing::local_random_engine`
as a local RNG. but `seastar::testing::local_random_engine`
is included in `libseastar_testing.a` or
`libseastar_perf_testing.a`. since we already have the rules for
linking against `libseastar_testing.a`, let's just reuse them,
and link `scylla` against this new dependency.
* main.cc:
- dispatch "perf-simple-query" subcommand to
`perf::scylla_simple_query_main`
* test/perf/perf_simple_query.cc: change `main()` to
`perf::scylla_simple_query_main()`
* test/perf/entry_point.hh: define the main function entries
so `main.cc` can find them. it's quite like how we collect
the entries in `tools/entry_point.hh`
before this change, we have a tool at `test/perf/perf_simple_query`
for running performance tests by sending simple queries to a single-node
cluster.
after this change, `test/perf/perf_simple_query` is integrated
into `scylla` as a subcommand, so we can run `scylla perf-simple-query
[options, ...]` to perform the same tests previously driven by the tool.
Fixes#12484
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
we want to integrate some perf tests into the scylla executable, so we
can run them on a regular basis. but `test/lib/cql_test_env.cc`
shares `debug::the_database` with `main.cc`, so we cannot just
compile them into a single binary without changing them.
before this change, both `test/lib/cql_test_env.cc`
and `main.cc` define `debug::the_database`.
after this change, `debug::the_database` is extracted into
`debug.cc`, so it compiles into a separate compilation unit,
and scylla and the tests using the seastar testing framework are
linked against `debug.cc` via `scylla_core`. this paves the way to
integrating scylla with the tests linking against
`test/lib/cql_test_env.cc`.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Commit 0b418fa improved the error detection of unset values in
inappropriate CQL statements, and some of the unit tests translated
from Cassandra started to pass, so this patch removes their "xfail"
mark.
In a couple of places Scylla's error message is worded differently
from Cassandra, so the test was modified to look for a shorter
string common to both implementations.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes #12553
The reader concurrency semaphore has no mechanism to limit the memory consumption of already admitted reads. Once the collective memory consumption of all the admitted reads is above the limit, all it can do is not admit any more. Sometimes this is not enough, and the memory consumption of the already admitted reads balloons to the point of OOMing the node. This pull request offers a solution: it introduces two more layers of defense, a soft and a hard limit. Both are multipliers applied to the semaphore's normal memory limit.
When the soft limit threshold is surpassed, all readers but one are blocked via a new blocking `request_memory()` call, which is used by the `tracked_file_impl`. The reader allowed to proceed is chosen effectively at random: it is simply the first reader that happens to request memory after the limit is surpassed. This is both very simple and should avoid situations where an algorithm choosing the reader allowed to proceed picks one that will then always time out.
When the hard limit threshold is surpassed, `reader_concurrency_semaphore::consume()` starts throwing `std::bad_alloc`. This again will result in eliminating whichever reader was unlucky enough to request memory at the right moment.
With this, the semaphore is now effectively enforcing an upper bound for memory consumption, defined by the hard limit.
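The two-tier limit can be sketched in Python (a deliberately simplified model with hypothetical names and default multipliers; the real semaphore lets exactly one chosen reader keep making progress past the soft limit and integrates with Seastar futures):

```python
class ReaderSemaphore:
    """Simplified model of the soft/hard memory limits (names hypothetical)."""

    def __init__(self, memory_limit: int,
                 serialize_mult: int = 2, kill_mult: int = 4):
        # Both limits are multipliers on the normal memory limit.
        self.soft_limit = memory_limit * serialize_mult
        self.hard_limit = memory_limit * kill_mult
        self.consumed = 0
        self.blocked = []  # permits queued once the soft limit is hit

    def request_memory(self, permit, amount: int) -> bool:
        """Grant memory, queue the permit, or kill the read."""
        if self.consumed + amount > self.hard_limit:
            # Hard limit: behave like an OOM killer and fail whichever
            # reader was unlucky enough to ask at this moment.
            raise MemoryError("hard memory limit surpassed")
        if self.consumed > self.soft_limit:
            # Soft limit: queue this permit (the real semaphore still
            # lets exactly one reader proceed).
            self.blocked.append(permit)
            return False
        self.consumed += amount
        return True
```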
Refs: https://github.com/scylladb/scylladb/issues/11927
Closes #11955
* github.com:scylladb/scylladb:
test: reader_concurrency_semaphore_test: add tests for semaphore memory limits
reader_permit: expose operator<<(reader_permit::state)
reader_permit: add id() accessor
reader_concurrency_semaphore: add foreach_permit()
reader_concurrency_semaphore: document the new memory limits
reader_concurrency_semaphore: add OOM killer
reader_concurrency_semaphore: make consume() and signal() private
test: stop using reader_concurrency_semaphore::{consume,signal}() directly
reader_concurrency_semaphore: move consume() out-of-line
reader_permit: consume(): make it exception-safe
reader_permit: resource_units::reset(): only call consume() if needed
reader_concurrency_semaphore: tracked_file_impl: use request_memory()
reader_concurrency_semaphore: add request_memory()
reader_concurrency_semaphore: wrap wait list
reader_concurrency_semaphore: add {serialize,kill}_limit_multiplier parameters
test/boost/reader_concurrency_semaphore_test: dummy_file_impl: don't use hardcoded buffer size
reader_permit: add make_new_tracked_temporary_buffer()
reader_permit: add get_state() accessor
reader_permit: resource_units: add constructor for already consumed res
reader_permit: resource_units: remove noexcept qualifier from constructor
db/config: introduce reader_concurrency_semaphore_{serialize,kill}_limit_multiplier
scylla-gdb.py: scylla-memory: extract semaphore stats formatting code
scylla-gdb.py: fix spelling of "graphviz"
`prepare_expression` takes an unprepared CQL expression straight from the parser output and prepares it. Preparation consists of various type checks that are needed to ensure that the expression is correct and to reason about it.
While `prepare_expression` supports a number of different types of expressions, until now it was impossible to prepare a `binary_operator`. Eventually we would like to be able to prepare all kinds of expressions, so this PR adds the missing support for `binary_operator`.
Closes #12550
* github.com:scylladb/scylladb:
expr_test: test preparing binary_operator with NULL RHS
expr_test: test preparing IS NOT NULL binary_operator
expr_test: test preparing binary_operator with LIKE
expr_test: test preparing binary_operator with CONTAINS KEY
expr_test: test preparing binary_operator with CONTAINS
expr_test: test preparing binary_operator with IN
expr_test: test preparing binary_operator with =, !=, <, <=, >, >=
expr_test: use make_*_untyped function in existing tests
expr_test_utils: add utilities to create untyped_constant
expr_test_utils: add make_float_* and make_double_*
cql3: expr: make it possible to prepare binary_operator using prepare_expression
cql3/expr: check that RHS of IS NOT NULL is a null value when preparing binary operators
cql3: expr: pass non-empty keyspace name in prepare_binary_operator
cql3: expr: take reference to schema in prepare_binary_operator
instead of introducing yet another variable for tracking the
status, update the args right away. for better readability.
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
refactor main() to extract lookup_main_func() out, so we find
the main_func in a table instead of using a lengthy if-then-else
clause.
as the list of dispatch candidates grows, the code would become
less structured. so in this change, the code looking
up the main_func is extracted into a dedicated function for
better readability.
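the table-lookup dispatch can be illustrated in Python (the real code is C++ in main.cc; all names below are stand-ins):

```python
def scylla_main(args):
    return "scylla"

def perf_simple_query_main(args):
    return "perf-simple-query"

def perf_fast_forward_main(args):
    return "perf-fast-forward"

# A table lookup replaces a lengthy if-then-else chain.
MAIN_FUNCS = {
    "perf-simple-query": perf_simple_query_main,
    "perf-fast-forward": perf_fast_forward_main,
}

def lookup_main_func(args):
    """Pick the entry point from the first argument, shifting args."""
    if args and args[0] in MAIN_FUNCS:
        return MAIN_FUNCS[args.pop(0)]
    return scylla_main
```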
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
* seastar 8889cbc198...d41af8b592 (14):
> Merge 'Perf stall detector related improvements' from Travis Downs
Ref #8828, #7882, #11582 (may help make progress)
> build: pass HEAPPROF definition to src/core/reactor.cc too
> Limit memory address space per core to 64GB when hwloc is not available
> build: revert use pkg_search_module(.. IMPORTED_TARGET ..) changes
> Fix missing newlines in seastar-addr2line
> Use an integral type for uniform_int_distribution
> Merge 'tls_test: use a dedicated https server for testing' from Kefu Chai
> build: use ${CMAKE_BINARY_DIR} when running 'cmake --build ..'
> build: do not set c-ares_FOUND with PARENT_SCOPE
> reactor: drop unused member function declaration
> sstring: refactor to_sstring() using fmt::format_to()
> http: delay input stream close until responses sent
> build: enable non-library targets using default option value
> Merge 'sstring: specialize uninitialize_string() and use resize_and_overwrite if available' from Kefu Chai
Closes #12509
Add a unit test which checks that preparing binary_operators
which represent IS NOT NULL works as expected
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
Add a unit test which checks that preparing binary_operators
with the LIKE operation works as expected.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
Add a unit test which checks that preparing binary_operators
with the CONTAINS KEY operation works as expected.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
Add a unit test which checks that preparing binary_operators
with the CONTAINS operation works as expected.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
Add a unit test which checks that preparing binary_operators
with basic comparison operations works as expected.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
Use the newly introduced convenience methods that create
untyped_constant in existing tests.
This will make the code more readable by removing
visual clutter that came with the previous overly
verbose code.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
expression tests often need to create instances of untyped_constant.
Creating them by hand is tedious because the required code is overly verbose.
Having convenience functions for it speeds up test writing.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
prepare_expression didn't allow preparing binary_operators,
so this patch implements it.
If prepare_binary_operator is unable to infer
the types it will fail with an exception instead
of returning std::nullopt, but we can live with
that for now.
Preparing binary_operators inside the WHERE
clause is currently more complicated than just
calling prepare_binary_operator. Preparation
of the WHERE clause is done inside statement_restrictions
constructor. It's done by iterating over all binary_operators,
validating them and then preparing. The validation contains
additional checks with custom error messages.
Preparation has to be done after validation,
because otherwise the error messages will change
and some tests will start failing.
Because of that we can't just call prepare_expression
on the WHERE clause yet.
It's still useful to have the ability to prepare
binary_operators using prepare_expression.
In cases where we know that the WHERE clause is valid,
we can just call prepare_expression and be done with it.
Once the grammar is fully relaxed, the artificial constraints
checked by the validation code will be removed and
it will be possible to prepare the whole WHERE clause
using just prepare_expression.
prepare_expression does a bit more than
prepare_binary_operator. In case where
both sides of the binary_operator are known
it will evaluate the whole binary_operator
to a constant value.
Query analysis code is NOT ready
to encounter constant boolean values inside
the WHERE clause, so for the WHERE we still use
prepare_binary_operator which doesn't
evaluate the binary_operator to a
constant value.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
When preparing a binary operator we first prepare the LHS,
which gives us information about its type and allows us
to infer the desired type of the RHS.
Then the RHS is prepared with the expectation that it
is compatible with the inferred type.
This is enough for all types of operations apart
from IS NOT NULL.
For IS NOT NULL we should also check that the RHS value
is actually null; it's not enough to check that
the RHS is of the right type.
Before this change preparing `int_col IS NOT 123`
would end in success, which is wrong.
The missing check doesn't cause any real problems:
it's impossible for the user to produce such input
because the parser will reject it.
Still it's better to have the check because
in the future the grammar might get more relaxed
and the parser could become more generic,
making it possible to write such things.
It would be better to introduce unary_operators,
but that's a bigger change.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
For some reason we passed an empty keyspace name
to prepare_expression when preparing the LHS
of a binary operator.
This doesn't look correct. We have the keyspace
name available from the schema_ptr, so let's use that.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
prepare_binary_operator takes a schema_ptr,
but it would be useful to take a reference to schema instead.
Every schema_ptr can be easily converted to a reference
so there is no loss of functionality.
Signed-off-by: Jan Ciolek <jan.ciolek@scylladb.com>
Several cases were fixed in this patch, all related to the processing of malformed base64 data. The main purpose was to bring the alternator implementation closer to what DynamoDB does. We now:
- Throw error when padding is missing during base64 decoding
- Throw error when base64 data is malformed
- In alternator, when invalid base64 data is fetched from the DB (as opposed to being part of the user's request), we now exclude such rows during filtering
Additionally some small code quality improvements:
- avoid unnecessary type conversions in calls to rjson::from_string functions
- avoid some copy constructions in calls to rjson::from_string functions
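The strict decoding behavior can be sketched in Python (a model of the intent; the actual implementation is in the C++ utils code):

```python
import base64
import binascii

def strict_b64decode(data: str) -> bytes:
    """Decode base64, rejecting missing padding and malformed input,
    mirroring DynamoDB's strict handling."""
    if len(data) % 4 != 0:
        # Valid padded base64 is always a multiple of 4 characters.
        raise ValueError("base64 input missing padding")
    try:
        return base64.b64decode(data, validate=True)
    except binascii.Error as e:
        raise ValueError(f"malformed base64 data: {e}") from e
```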
Fixes https://github.com/scylladb/scylladb/issues/6487
Closes #11944
* github.com:scylladb/scylladb:
alternator: evaluate expressions as false for stored malformed binary data
rjson: avoid copy constructors in from_string calls when possible
alternator: remove unused parameters from describe_items func
utils: throw error on malformed input in base64 decode
utils: throw error on missing padding in base64 decode
Materialized views inject synthetic empty-type columns in some conditions.
Since we just touched empty-type serialization/deserialization, add a
test to exercise it and make sure it still works.
Tests are similarly relaxed. A test is added in lwt_test to show
that insertion of a list with NULL is still rejected, though we
allow NULLs in IF conditions.
One test is changed from a list of longs to a list of ints, to
prevent churn in the test helper library.
Allow transient lists that contain NULL throughout the
evaluation machinery. This makes it possible to evaluate things
like `IF col IN (1, 2, NULL)` without hacks, once LWT conditions
are converted to expressions.
A few tests are relaxed to accommodate the new behavior:
- cql_query_test's test_null_and_unset_in_collections is relaxed
to allow `WHERE col IN ?`, with the variable bound to a list
containing NULL; now it's explicitly allowed
- expr_test's evaluate_bind_variable_validates_no_null_in_list was
checking generic lists for NULLs, and was similarly relaxed (and
renamed)
- expr_test's evaluate_bind_variable_validates_null_in_lists_recursively
was similarly relaxed to allow NULLs.
When we start allowing NULL in lists in some contexts, the exact
location where an error is raised (when it's disallowed) will
change. To prepare for that, relax the exception check to just
ensure the word NULL is there, without caring about the exact
wording.
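The relaxed predicate can be sketched as follows (a hypothetical helper; the actual tests use the project's Python test utilities):

```python
import re

def assert_raises_null_error(fn) -> None:
    """Require only that the error message mentions NULL, not its exact
    wording, since the raise site (and text) may move between versions."""
    try:
        fn()
    except Exception as e:
        assert re.search(r"\bnull\b", str(e), re.IGNORECASE), str(e)
        return
    raise AssertionError("expected an exception mentioning NULL")

def reject_null_list():
    # Stand-in for evaluating a list containing NULL where disallowed.
    raise ValueError("null is not supported inside lists")
```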
Lists allow NULL in some contexts (bind variables for LWT "IN ?"
conditions), but not in most others. Currently, the implementation
just disallows NULLs in list values, and the cases where it is allowed
are hacked around. To reduce the special cases, we'll allow lists
to have NULLs, and just restrict them for storage. This is similar
to how scalar values can be NULL, but not when they are part of a
partition key.
To prepare for the transition, identify the locations where lists
(and sets, which share the same storage) are stored as frozen
values and add a NULL check there. Non-frozen lists already have the
check. Since sets share the same format as lists, apply the same to
them.
No actual checks are done yet, since NULLs are impossible. This
is just a stub.