With this patch we use data_consume_rows to try to read the entire
sstable. The patch also adds a test with a particular corruption that
would not be found without parsing the file.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
This makes the tests a bit stricter by also checking the message
returned by the what() function.
This shows that some of the tests are out of sync with the errors
they check for. I will hopefully fix this in another pass.
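For illustration, a stricter check could look like this (a minimal sketch
with hypothetical names, assuming a Boost.Test-based unit test; not the
actual test code):

    // Sketch: verify the message text, not just the exception type.
    #define BOOST_TEST_MODULE broken_sstable_messages
    #include <boost/test/included/unit_test.hpp>
    #include <cstring>
    #include <stdexcept>

    // Stand-in for a parse step that fails on a corrupted sstable.
    static void parse_broken_sstable() {
        throw std::runtime_error("malformed sstable: bad row size");
    }

    BOOST_AUTO_TEST_CASE(broken_sstable_reports_expected_message) {
        BOOST_REQUIRE_EXCEPTION(parse_broken_sstable(), std::runtime_error,
            [] (const std::runtime_error& e) {
                // Fails if the exception type or its message changes.
                return std::strstr(e.what(), "malformed sstable") != nullptr;
            });
    }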
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
sstable_test.cc was already a bit too big, and there is potential for
a lot of tests about broken sstables.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Add a reference to a docker image that contains an "official" toolchain
for building Scylla. In addition, add a script that allows easy usage of
the image, and some documentation.
Message-Id: <20181202120829.21218-1-avi@scylladb.com>
The git, python, and sudo packages are installed by default on a normal
Fedora installation but not in the Docker image, so we need to install
them in this script.
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20181201020834.24961-1-syuu@scylladb.com>
"
This patchset addresses two recently discovered bugs, both triggered by
summary regeneration:
Tests: unit (release)
Validated with a debug build of Scylla (ASAN) that no use-after-free
occurs when re-generating Summary.db.
"
* 'projects/sstables-30/summary-regeneration/v1' of https://github.com/argenet/scylla:
tests: Add test reading SSTables in 'mc' format with missing summary.
sstables: When loading, read statistics before summary.
database: Capture io_priority_class by reference to avoid dangling ref.
If the summary is missing and we attempt to re-generate it, the
statistics must already have been read, since they provide the values
stored in the serialization header that are needed to deserialize
clustering prefixes.
Fixes #3947
Signed-off-by: Vladimir Krivopalov <vladimir@scylladb.com>
The original reference points to a thread-local storage object that is
guaranteed to outlive the continuation, but copying it makes the
subsequent calls point to a local object and introduces a use-after-free
bug.
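A minimal sketch of the difference between the two captures (hypothetical
types and names, not the actual code):

    // The priority class lives in thread-local storage, so it outlives
    // any continuation running on this shard.
    struct io_priority_class { int shares; };

    io_priority_class& default_priority() {
        static thread_local io_priority_class pc{100};
        return pc;
    }

    auto make_continuation_buggy() {
        const io_priority_class& pc = default_priority();
        // Capturing by value copies the object into the closure; anything
        // that later points into that copy dangles once the closure dies.
        return [pc] { return &pc; };
    }

    auto make_continuation_fixed() {
        const io_priority_class& pc = default_priority();
        // Capturing by reference keeps pointing at the thread-local object,
        // which is guaranteed to outlive the continuation.
        return [&pc] { return &pc; };
    }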
Fixes #3948
Signed-off-by: Vladimir Krivopalov <vladimir@scylladb.com>
"
This series adds proper handling of filtering queries with LIMIT.
Previously the limit was erroneously applied before filtering,
which led to truncated results.
To avoid that, paged filtering queries now use an enhanced pager,
which remembers how many rows were dropped and uses that information
to fetch more pages if the limit is not yet reached.
For unpaged filtering queries, paging is done internally, as in the case
of aggregations, to avoid keeping huge results in memory.
Also, previously, all limited queries used a page size computed as
min(page size, limit). That is not good for filtering,
because with LIMIT 1 we would then query for rows one by one.
To avoid that, filtered queries ask for the whole page and the results
are truncated afterwards if need be.
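Roughly, the enhanced pager loops as follows (a simplified sketch with
made-up names such as fetch_filtered_with_limit and fetch_page, not the
actual pager code):

    #include <cstdint>
    #include <functional>
    #include <vector>

    struct row { int value; };

    // Keep asking for whole pages, filter each one while counting dropped
    // rows, and stop only once the post-filter result reaches the limit.
    std::vector<row> fetch_filtered_with_limit(
            std::function<std::vector<row>(uint32_t)> fetch_page, // asks replicas for a page
            std::function<bool(const row&)> matches,              // filtering predicate
            uint32_t page_size, uint64_t limit) {
        std::vector<row> results;
        uint64_t dropped_rows = 0;
        while (results.size() < limit) {
            // Always request a whole page: with LIMIT 1 we would otherwise
            // end up fetching rows one by one.
            auto fetched = fetch_page(page_size);
            if (fetched.empty()) {
                break;                          // no more data
            }
            for (auto& r : fetched) {
                if (matches(r)) {
                    results.push_back(r);
                } else {
                    ++dropped_rows;             // remember what was filtered out
                }
            }
        }
        if (results.size() > limit) {
            results.resize(limit);              // trim the overshoot afterwards
        }
        return results;
    }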
Tests: unit (release)
"
* 'fix_filtering_with_limit_2' of https://github.com/psarna/scylla:
tests: add filtering with LIMIT test
tests: split filtering tests from cql_query_test
cql3: add proper handling of filtering with LIMIT
service/pager: use dropped_rows to adjust how many rows to read
service/pager: virtualize max_rows_to_fetch function
cql3: add counting dropped rows in filtering pager
"
This patchset adds support to scylla_io_setup for
multiple data directories as well as commitlog,
hints, and saved_caches directories.
Refs #2415
Tests: manual testing with scylla-ccm generated scylla.yaml
"
* 'projects/multidev/v3' of https://github.com/bhalevy/scylla:
scylla_io_setup: assume default directories under /var/lib/scylla
scylla_io_setup: add support for commitlog, hints, and saved_caches directory
scylla_io_setup: support multiple data directories
Previously, the limit was erroneously applied before filtering,
which might have resulted in truncated results.
Now, both paged and unpaged queries are filtered first,
and only after that are they properly trimmed, so that only X rows are
returned for LIMIT X.
Fixes #3902
The filtering pager may drop some rows and, as a result, return fewer
rows than were fetched from the replica. To properly account for how
many rows were actually read, a dropped_rows variable is introduced.
Regular pagers use max_rows to figure out how many rows to fetch,
but the filtering pager potentially needs the whole page to be fetched
in order to filter the results.
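A minimal sketch of why the function is virtualized (hypothetical names,
not the actual pager classes):

    #include <algorithm>
    #include <cstdint>

    // The regular pager never needs more rows than the remaining limit,
    // but the filtering pager asks for a full page because some of the
    // fetched rows will be dropped.
    struct query_pager {
        uint32_t page_size;
        uint64_t remaining_limit;

        virtual ~query_pager() = default;
        virtual uint64_t max_rows_to_fetch() const {
            return std::min<uint64_t>(page_size, remaining_limit);
        }
    };

    struct filtering_pager : query_pager {
        uint64_t dropped_rows = 0;   // rows discarded by the filter so far

        uint64_t max_rows_to_fetch() const override {
            // Fetch the whole page; the limit is applied after filtering,
            // using dropped_rows to account for what was really consumed.
            return page_size;
        }
    };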
If a specific directory is not configured in scylla.yaml, scylla assumes
a default location under /var/lib/scylla.
Hard-code these locations in scylla_io_setup until we have a better way
to probe scylla about it.
Be permissive: if the default directories do not exist on disk, silently
ignore them.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
A counter for dropped rows is added to the filtering pager.
This metric can be used later to properly apply LIMIT
to filtering queries.
Dropped rows are returned on visitor::accept_partition_end.
"
This series fixes #3891 by amending the way restrictions
are checked for filtering. The previous implementation, which returned
false from need_filtering() when multi-column restrictions
were present, was incorrect.
Now the error is returned from the restrictions filtering layer,
and once multi-column support is implemented for filtering, it will
require no further changes.
Tests: unit (release)
"
* 'fix_multi_column_filtering_check_3' of https://github.com/psarna/scylla:
tests: add multi-column filtering check
cql3: remove incorrect multi-column check
cql3: check filtering restrictions only if applicable
cql3: add pk/ck_restrictions_need_filtering()
need_filtering() incorrectly returned false if multi-column restrictions
were present. Instead, these restrictions should be allowed to need
filtering.
Fixes #3891
"
This miniseries ensures that system tables are not checked
for having view updates, because they never have any.
What's more, a distributed system table is used in the process,
so it is unsafe to query that table while streaming it.
Tests: unit (release), dtest(update_cluster_layout_tests.py:TestUpdateClusterLayout.simple_decommission_node_2_test)
"
* 'fix_checking_if_system_tables_need_view_updates_3' of https://github.com/psarna/scylla:
streaming: don't check view building of system tables
database: add is_internal_keyspace
streaming: remove unused sstable_is_staging bool class
System tables will never need view building and, what's more,
are actually used in the process of view-build checking.
So, checking whether a system table needs the view update path
is simplified to returning 'false'.
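Schematically, the short-circuit looks like this (hypothetical helper
names and a placeholder check, not the actual code):

    #include <string>
    #include <string_view>

    // System/internal keyspaces never have materialized views, so the
    // check returns early without touching the distributed system table.
    bool is_internal_keyspace(std::string_view name) {
        return name == "system" || name.substr(0, 7) == "system_";
    }

    bool table_needs_view_update_path(const std::string& keyspace_name, bool has_views) {
        if (is_internal_keyspace(keyspace_name)) {
            return false;       // never query view-build state for system tables
        }
        return has_views;       // placeholder for the real view-build check
    }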
"
The series fixes #3565 and #3566
"
* 'gleb/write_failure_fixes' of github.com:scylladb/seastar-dev:
storage_proxy: store hint for CL=ANY if all nodes replied with failure
storage_proxy: complete write request early if all replicas replied with success or failure
storage_proxy: check that write failure response comes from recognized replica
storage_proxy: move code executed on write timeout into separate function
Sync the Debian variants' dependencies with dist/debian/control.mustache
(before merging relocatable) and use the scylla 3rdparty packages.
Since we use the 3rdparty repo in seastar/install-dependencies.sh, drop
the repo setup part from this script.
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20181031122800.11802-1-syuu@scylladb.com>
The current code assumes that a request failed if all replicas replied
with failure, but this is not true for CL=ANY requests. Take this into
account.
Fixes #3565
Currently, if a write request reaches CL and all replicas have replied,
but some replied with failures, the request waits for the timeout before
being retired. Detect this case and retire the request immediately
instead.
Fixes #3566
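A rough sketch of the idea behind both fixes (made-up names, not the
actual storage_proxy code): once every contacted replica has answered
there is nothing left to wait for, and for CL=ANY a stored hint still
counts as a successful write.

    #include <cstddef>

    struct write_handler {
        size_t targets;            // replicas the mutation was sent to
        size_t required;           // acks needed to satisfy the consistency level
        bool cl_is_any;            // CL=ANY can fall back to storing a hint
        size_t acks = 0;
        size_t failures = 0;
        bool done = false;

        void on_reply(bool success) {
            success ? ++acks : ++failures;
            maybe_complete();
        }

        void maybe_complete() {
            if (done) {
                return;
            }
            if (acks >= required) {
                done = true;       // enough successes: complete immediately
            } else if (acks + failures == targets) {
                // All replicas have replied; do not wait for the timeout.
                // For CL=ANY, store a hint and treat the write as successful;
                // otherwise report the failure right away.
                done = true;
            }
        }
    };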
As far as I can tell, the old sstable reading code required reading the
data into a contiguous buffer. The function data_consume_rows_at_once
implemented the old behavior, and code was incrementally moved away
from it.
Right now the only use is in two tests. The sstables used in those
tests are already used in other tests with data_consume_rows.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20181127024319.18732-2-espindola@scylladb.com>
"
Compression is not deterministic, so instead of binary-comparing the
sstable files we just read the data back and make sure everything that
was written is still present.
Tests: unit (release)
"
* 'haaawk/binary-compare-of-compressed-sstables/v3' of github.com:scylladb/seastar-dev:
sstables: Remove compressed parameter from get_write_test_path
sstables: Remove unused sstable test files
sstables: Ensure compare_sstables isn't used for compressed files
sstables: Don't binary compare compressed sstables
sstables: Remove debug printout from test_write_many_partitions
"
consistency_level.hh is rather heavyweight in both its contents and what
it includes. Reduce the number of inclusion sites and split the file to
reduce dependencies.
"
* tag 'cl-header/v2' of https://github.com/avikivity/scylla:
consistency_level: simplify validation API
Split consistency_level.hh header
database: remove unneeded consistency_level.hh include
cql: remove unneeded includes of consistency_level.hh
It has two unrelated users: cql for validation, and storage_proxy for
complicated calculations. Split the simple stuff into a new header to reduce
dependencies.
Not all compaction operations submitted through the compaction manager
set a callback for releasing references to exhausted sstables in the
compaction manager itself. That callback lives in the compaction
descriptor, which is passed to table::compaction().
Let's make the call conditional to avoid bad_function_call exceptions.
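The fix boils down to the usual std::function guard (a sketch with a
hypothetical member name, not the real descriptor layout):

    #include <functional>

    // An unset std::function throws std::bad_function_call when invoked,
    // so only call it if a callback was actually provided.
    struct compaction_descriptor {
        std::function<void()> release_exhausted_sstables;    // optional callback
    };

    void finish_compaction(compaction_descriptor& desc) {
        if (desc.release_exhausted_sstables) {               // set by some submitters only
            desc.release_exhausted_sstables();
        }
    }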
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20181126235616.10452-1-raphaelsc@scylladb.com>
There's no _M_t._M_head_impl any more in the standard library.
We now have a std_unique_ptr wrapper which abstracts this fact away, so
use that.
Message-Id: <20181126174837.11542-1-tgrabiec@scylladb.com>