This fixes a use-after-free bug when parsing clustering keys across pages. It also includes a fix for allocating-section retry, which is potentially unsafe (though not observed in practice yet).

Details of the first problem:

Clustering key index lookup is based on the index file page cache. We do a binary search within the index, which involves parsing the index blocks touched by the algorithm. Index file pages are 4 KB chunks stored in LSA. To parse the first key of a block, we reuse clustering_parser, which is also used when parsing the data file. The parser is stateful and accepts consecutive chunks as temporary_buffers; it is supposed to keep its state across chunks.

In 93482439, the promoted index cursor was optimized to avoid a full page copy when parsing index blocks. Instead, the parser is given a temporary_buffer which is a view on the page. A bit earlier, in b1b5bda, the parser was changed to keep shared fragments of the buffer passed to it in its internal state (across pages) rather than copy the fragments into a new buffer.

This is problematic when buffers come from the page cache, because LSA buffers may be moved around or evicted. The temporary_buffer which is a view on the LSA buffer is therefore valid only for the duration of a single consume() call on the parser. If the blob being parsed (e.g. a variable-length clustering key component) spans pages, the fragments stored in the parser may be invalidated before the component is fully parsed. As a result, the parsed clustering key may have incorrect component values. This never causes parsing errors, because the "length" field is always parsed from the current buffer, which is valid, and component parsing ends at the right place in the next (valid) buffer.

The problematic path for clustering_key parsing is the one which calls primitive_consumer::read_bytes(), used for example for text components. Fixed-size components are not parsed like this; they store intermediate state by copying the data.
This may cause incorrect clustering keys to be parsed when doing a binary search in the index, diverting the search to an incorrect block.

Details of the solution:

We adapt page_view to a temporary_buffer-like API. For this, a new concept called ContiguousSharedBuffer is introduced. We also change the parsers so that they can be templated on the type of buffer they work with (page_view vs temporary_buffer). This way we don't introduce indirection into existing algorithms. We use page_view instead of temporary_buffer in the promoted index parser, which works with page cache buffers. page_view can be safely shared via share() and stored across allocating sections; it keeps hold of the LSA buffer even across allocating sections by means of cached_file::page_ptr.

Fixes #20766
Closes scylladb/scylladb#20837

* github.com:scylladb/scylladb:
  sstables: bsearch_clustered_cursor: Add trace-level logging
  sstables: bsearch_clustered_cursor: Move definitions out of line
  test, sstables: Verify parsing stability when allocating section is retried
  test, sstables: Verify parsing stability when buffers cross page boundary
  sstables: bsearch_clustered_cursor: Switch parsers to work with page_view
  cached_file: Adapt page_view to ContiguousSharedBuffer
  cached_file: Change meaning of page_view::_size to be relative to _offset rather than page start
  sstables, utils: Allow parsers to work with different buffer types
  sstables: promoted_index_block_parser: Make reset() always bring parser to initial state
  sstables: bsearch_clustered_cursor: Switch read_block_offset() to use the read() method
  sstables: bsearch_clustered_cursor: Fix parsing when allocating section is retried
Scylla unit tests using C++ and the Boost test framework
The source files in this directory are Scylla unit tests written in C++ using the Boost.Test framework. These unit tests come in three flavors:
- Some simple tests that check stand-alone C++ functions or classes use Boost's BOOST_AUTO_TEST_CASE.
- Some tests require Seastar features, and need to be declared with Seastar's extensions to Boost.Test, namely SEASTAR_TEST_CASE.
- Even more elaborate tests require not just a functioning Seastar environment but also a complete (or partial) Scylla environment. Those tests use the do_with_cql_env() or do_with_cql_env_thread() function to set up a mostly-functioning environment behaving like a single-node Scylla, in which the test can run.
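As an illustration of the first flavor, a minimal stand-alone Boost.Test case looks like this (the module name, test name, and add() function are made up for the example; real tests in this directory are built through the project's build system rather than header-only):

```cpp
// Minimal stand-alone Boost.Test example (first flavor).
// BOOST_TEST_MODULE must be defined before the include; the header-only
// variant used here needs no separately linked Boost test library.
#define BOOST_TEST_MODULE example_tests
#include <boost/test/included/unit_test.hpp>

// A stand-alone function under test (hypothetical).
static int add(int a, int b) { return a + b; }

BOOST_AUTO_TEST_CASE(test_add) {
    BOOST_REQUIRE_EQUAL(add(2, 2), 4);
}
```

Tests of the second and third flavors look similar in shape, but use SEASTAR_TEST_CASE (returning a future<>) or do_with_cql_env()/do_with_cql_env_thread() respectively, and need the corresponding environment to build and run.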
While we have many tests of the third flavor, writing new tests of this type should be reserved for white-box tests - tests where it is necessary to inspect or control Scylla internals that do not have user-facing APIs such as CQL. In contrast, black-box tests - tests that can be written using only user-facing APIs - should be written in one of the newer test frameworks that we offer, such as test/cql-pytest or test/alternator (in Python, using the CQL or DynamoDB APIs respectively) or test/cql (using textual CQL commands), or - if more than one Scylla node is needed for a test - using the test/topology* framework.
Running tests
Because these are C++ tests, they need to be compiled before running.
To compile a single test executable row_cache_test, use a command like
ninja build/dev/test/boost/row_cache_test
You can also use ninja dev-test to build all C++ tests, or use
ninja dev-build to build the C++ tests and also the full Scylla executable
(however, note that the full Scylla executable isn't needed to run Boost tests).
Replace "dev" by "debug" or "release" in the examples above and below to use the "debug" build mode (which, importantly, compiles the test with ASAN and UBSAN enabled and helps catch difficult-to-catch use-after-free bugs) or the "release" build mode (optimized for run speed).
To run an entire test file row_cache_test, including all its test
functions, use a command like:
build/dev/test/boost/row_cache_test -- -c1 -m1G
To run a single test function test_reproduce_18045() from that test
file, use a command like:
build/dev/test/boost/row_cache_test -t test_reproduce_18045 -- -c1 -m1G
In these command lines, the parameters before the -- are passed to
Boost.Test, while the parameters after the -- are passed to the test code,
and in particular to Seastar. In this example Seastar is asked to run on one
CPU (-c1) and use 1G of memory (-m1G) instead of hogging the entire
machine. The Boost.Test option -t test_reproduce_18045 asks it to run just
this one test function instead of all the test functions in the executable.
Unfortunately, interrupting a running test with control-C doesn't
work. This is a known bug (#5696). Kill a test with SIGKILL (-9) if you
need to kill it while it's running.
Boost tests can also be run using test.py - which is a script that provides a uniform way to run all tests in scylladb.git - C++ tests, Python tests, etc.
Writing tests
Because of the large build time and build size of each separate test executable, it is recommended to put test functions into relatively large source files. But not too large - to keep compilation time of a single source file (during development) at reasonable levels.
When adding new source files in test/boost, don't forget to list the new source file in configure.py and also in CMakeLists.txt. The former is needed by our CI, but the latter is preferred by some developers.