Store schema_ptr in reader permit instead of storing a const pointer to
schema to ensure that the schema doesn't get changed elsewhere when the
permit is holding on to it. Also update the constructors and all the
relevant callers to pass down schema_ptr instead of a raw pointer.
Fixes #16180
Signed-off-by: Lakshmi Narayanan Sreethar <lakshmi.sreethar@scylladb.com>
Closes scylladb/scylladb#16658
(cherry picked from commit 76f0d5e35b)
Closes scylladb/scylladb#17677
Major compaction already flushes each table to make
sure it considers any mutations that are present in the
memtable for the purpose of tombstone purging.
See 64ec1c6ec6
However, tombstone purging may be inhibited by data
in commitlog segments based on `gc_time_min` in the
`tombstone_gc_state` (See f42eb4d1ce).
Flushing all tables in the database releases
all references to commitlog segments, thereby
maximizing the potential for tombstone purging,
which is typically the reason for running major compaction.
However, flushing all tables too frequently might
result in tiny sstables. Since `nodetool compact` invokes the
`force_keyspace_compaction` api for each keyspace successively,
we need a mechanism to prevent overly frequent flushes
by major compaction.
Hence a `compaction_flush_all_tables_before_major_seconds` interval
configuration option is added (defaults to 24 hours).
In the case that not all tables are flushed prior
to major compaction, we revert to the old behavior of
flushing each table in the keyspace before major-compacting it.
Fixes scylladb/scylladb#15777
Closes scylladb/scylladb#15820
To address the conflict, the following change is also included in this changeset:
tools/scylla-nodetool: implement the cleanup command
The --jobs command-line argument is accepted but ignored, just like the
current nodetool does.
Refs: scylladb/scylladb#15588
Closes scylladb/scylladb#16756
* github.com:scylladb/scylladb:
docs: nodetool: flush: enrich examples
docs: nodetool: compact: fix example
api: add /storage_service/compact
api: add /storage_service/flush
tools/scylla-nodetool: implement the flush command
compaction_manager: flush_all_tables before major compaction
database: add flush_all_tables
api: compaction: add flush_memtables option
test/nodetool: jmx: fix path to scripts/scylla-jmx
scylla-nodetool, docs: improve optional params documentation
tools/scylla-nodetool: extract keyspace/table parsing
tools/scylla-nodetool: implement the cleanup command
test/nodetool: rest_api_mock: add more options for multiple requests
For major compacting all tables in the database.
The advantage of this api is that `commitlog->force_new_active_segment`
happens only once in `database::flush_all_tables` rather than
once per keyspace (when `nodetool compact` translates to
a sequence of `/storage_service/keyspace_compaction` calls).
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit b12b142232)
For flushing all tables in the database.
The advantage of this api is that `commitlog->force_new_active_segment`
happens only once in `database::flush_all_tables` rather than
once per keyspace (when `nodetool flush` translates to
a sequence of `/storage_service/keyspace_flush` calls).
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 1b576f358b)
When flushing is done externally, e.g. by running
`nodetool flush` prior to `nodetool compact`,
flush_memtables=false can be passed to skip flushing
of tables right before they are major-compacted.
This is useful to prevent creation of small sstables
due to excessive memtable flushing.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
(cherry picked from commit 1fd85bd37b)
Having to extract 1 keyspace and N tables from the command-line is
proving to be a common pattern among commands. Extract this into a
method, so the boiler-plate can be shared. Add a forward-looking
overload as well, which will be used in the next patch.
(cherry picked from commit f082cc8273)
Currently, `tool_app_template::run_async()` crashes when invoked with an empty argv (with just `argv[0]` populated). This can happen if the tool app is invoked without any further args, e.g. just `scylla nodetool`. The crash happens because `argv[1]` is dereferenced unconditionally to obtain the current operation.
To fix, add an early-exit for this case, just printing a usage message and exiting with exit code 2.
Fixes: #16451
Closes scylladb/scylladb#16456
* github.com:scylladb/scylladb:
test: add regression tests for invoking tools with no args
tools/utils: tool_app_template: handle the case of no args
tools/utils: tool_app_template: remove "scylla-" prefix from app name
(cherry picked from commit 5866d265c3)
The reader used to read the sstables was not closed. This could
sometimes trigger an abort(), because the reader was destroyed, without
it being closed first.
Why only sometimes? This is due to two factors:
* read_mutation_from_flat_mutation_reader() - the method used to extract
a mutation from the reader, uses consume(), which does not trigger
`set_close_is_required()` (#16520). Due to this, the top-level
combined reader did not complain when destroyed without close.
* The combined reader closes underlying readers that have no more data
for the current range. If the circumstances are just right, all
underlying readers are closed before the combined reader is
destroyed. This appears to be what happens most of the time.
This bug was discovered in SCT testing. After fixing #16520, all
invocations of `scylla-sstable` which use this code would trigger the
abort without this patch. So no further testing is required.
Fixes: #16519
Closes scylladb/scylladb#16521
(cherry picked from commit da033343b7)
Fixes #16153
* jmx 166599f...f45067f (3):
> ColumnFamilyStore: only quote table names if necessary
> APIBuilder: allow quoted scope names
> ColumnFamilyStore: don't fail if there is a table with ":" in its name
* java dfbf3726ee...3764ae94db (1):
> NodeProbe: allow addressing table name with colon in it
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Closes scylladb/scylladb#16294
Update node_exporter to 1.7.0.
The previous version (1.6.1) was flagged by security scanners (such as
Trivy) with HIGH-severity CVE-2023-39325. 1.7.0 release fixed that
problem.
[Botond: regenerate frozen toolchain]
Fixes #16085
Closes scylladb/scylladb#16086
Closes scylladb/scylladb#16090
(cherry picked from commit 321459ec51)
[avi: regenerate frozen toolchain]
* ./tools/jmx 8d15342e...9a03d4fa (1):
> Merge "scylla-apiclient: update several Java dependencies" from Piotr Grabowski
* ./tools/java 3c09ab97...dfbf3726 (1):
> Merge 'build: update several dependencies' from Piotr Grabowski
Update build dependencies which were flagged by security scanners.
Refs: scylladb/scylla-jmx#220
Refs: scylladb/scylla-tools-java#351
Closes scylladb/scylladb#16149
* tools/cqlsh 66ae7eac...426fa0ea (8):
> Updated Scylla Driver[Issue scylladb/scylla-cqlsh#55]
> copyutil: closing the local end of pipes after processes starts
> setup.py: specify Cython language_level explicitly
> setup.py: pass extensions as a list
> setup.py: reindent block in else branch
> setup.py: early return in get_extension()
> reloc: install build==0.10.0
> reloc: add --verbose option to build_reloc.sh
Fixes: https://github.com/scylladb/scylla-cqlsh/issues/37
Closes scylladb/scylladb#15685
Currently, a mutation query on the replica side will not respond with a result unless it has at least one live row. This causes problems when there are many dead rows or partitions before a live row is reached, because the resulting reconcilable_result will be large:
1. Large allocations. Serialization of reconcilable_result causes large allocations for storing result rows in std::deque
2. Reactor stalls. Serialization of reconcilable_result on the replica side and on the coordinator side causes reactor stalls. This impacts not only the query at hand. For 1M dead rows, freezing takes 130ms, unfreezing takes 500ms. Coordinator does multiple freezes and unfreezes. The reactor stall on the coordinator side is >5s
3. Too large repair mutations. If reconciliation works on large pages, repair may fail due to too large mutation size. 1M dead rows is already too much: Refs https://github.com/scylladb/scylladb/issues/9111.
This patch fixes all of the above by making mutation reads respect the memory accounter's limit for the page size, even for dead rows.
This patch also addresses the problem of client-side timeouts during paging. Reconciling queries processing long strings of tombstones will now properly page tombstones, like regular queries do.
My testing shows that this solution even increases efficiency. I tested with a cluster of 2 nodes and a table with RF=2. The data layout was as follows (1 partition):
* Node1: 1 live row, 1M dead rows
* Node2: 1M dead rows, 1 live row
This was designed to trigger reconciliation right from the very start of the query.
Before:
```
Running query (node2, CL=ONE, cold cache)
Query done, duration: 140.0633503ms, pages: 101, result: [Row(pk=0, ck=3000000, v=0)]
Running query (node2, CL=ONE, hot cache)
Query done, duration: 66.7195275ms, pages: 101, result: [Row(pk=0, ck=3000000, v=0)]
Running query (all-nodes, CL=ALL, reconcile, cold-cache)
Query done, duration: 873.5400742ms, pages: 2, result: [Row(pk=0, ck=0, v=0), Row(pk=0, ck=3000000, v=0)]
```
After:
```
Running query (node2, CL=ONE, cold cache)
Query done, duration: 136.9035122ms, pages: 101, result: [Row(pk=0, ck=3000000, v=0)]
Running query (node2, CL=ONE, hot cache)
Query done, duration: 69.5286021ms, pages: 101, result: [Row(pk=0, ck=3000000, v=0)]
Running query (all-nodes, CL=ALL, reconcile, cold-cache)
Query done, duration: 162.6239498ms, pages: 100, result: [Row(pk=0, ck=0, v=0), Row(pk=0, ck=3000000, v=0)]
```
Non-reconciling queries have almost identical duration (a few ms of variation can be observed between runs). Note how in the after case, the reconciling read also produces 100 pages, vs. just 2 pages in the before case, leading to a much lower duration (less than 1/4 of the before-case duration).
Refs https://github.com/scylladb/scylladb/issues/7929
Refs https://github.com/scylladb/scylladb/issues/3672
Refs https://github.com/scylladb/scylladb/issues/7933
Fixes https://github.com/scylladb/scylladb/issues/9111
Closes scylladb/scylladb#15414
* github.com:scylladb/scylladb:
test/topology_custom: add test_read_repair.py
replica/mutation_dump: detect end-of-page in range-scans
tools/scylla-sstable: write: abort parser thread if writing fails
test/pylib: add REST methods to get node exe and workdir paths
test/pylib/rest_client: add load_new_sstables, keyspace_{flush,compaction}
service/storage_proxy: add trace points for the actual read executor type
service/storage_proxy: add trace points for read-repair
storage_proxy: Add more trace-level logging to read-repair
database: Fix accounting of small partitions in mutation query
database, storage_proxy: Reconcile pages with no live rows incrementally
* tools/jmx d107758...8d15342 (2):
> Revert "install-dependencies.sh: do not install weak dependencies"
> install-dependencies.sh: do not install weak dependencies
  Especially for Java, we really do not need the tens of packages and MBs it adds, just because Java apps can be built and use sound and graphics and whatnot.
The descriptor in question is used to parse an sstable's file path and return back the result. The parser, among "relevant" info, also parses the sstable directory and keyspace+table names. However, there is (almost) no code that needs those strings, and the need to construct the descriptor with them makes some places obscurely use empty strings.
The PR removes the sstable's directory, keyspace and table names from the descriptor and, while at it, relaxes the sstables directory code that made a descriptor out of a real sstable object by (!) parsing its Data file path back.
Closes scylladb/scylladb#15617
* github.com:scylladb/scylladb:
sstables: Make descriptor from sstable without parsing
sstables: Do not keep directory, keyspace and table names on descriptor
sstables: Make tuple inside helper parser method
sstables: Do not use ks.cf pair from descriptor
sstables: Return tuple from parse_path() without ks.cf hints
sstables: Rename make_descriptor() to parse_path()
There's only one place that needs the ks.cf pair from the parsed descriptor
-- the sstables loader from tools/. This code already has ks.cf from the
tuple returned after parsing and can use them.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
There are two path parsers. One of them accepts keyspace and table names
and the other one doesn't. The latter is then supposed to parse the
ks.cf pair from the path and put it on the descriptor. This patch makes this
method return ks.cf so that later it will be possible to remove these
strings from the descriptor itself.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
The method really parses the provided path, so the existing name is pretty
confusing. It's extra confusing in table::get_snapshot_details(),
where it's just called and the return value is simply ignored.
Naming it "parse_..." makes it clear what the method is for.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This check is redundant. Originally it was intended to work around
rapidjson using an assert by default to check that the fields have the
expected type. But it turns out we already configure rapidjson to use a
plain exception in utils/rjson.hh, so check_json_type() is not needed
for graceful error handling.
Use operation_option to describe positional options. The structure used
before -- app_template::positional_option -- was not a good fit for
this, as it was designed to store a description that is immediately
passed to the boost::program_options subsystem and then discarded.
As such, it had a raw pointer member, which was expected to be
immediately wrapped by boost::shared_ptr<> by boost::program_options.
This produced memory leaks for tools, for options that ended up not
being used. To avoid this altogether, use operation_option, converting
to the app_template::positional_option at the last moment.
Currently, the tools loosely follow the following convention on
error-codes:
* return 1 if the error is with any of the command-line arguments
* return 2 on other errors
This patch changes the returned error-code on unknown operation/command
to 100 (instead of the previous 1). The intent is to allow any wrapper
script to determine that the tool failed because the operation is
unrecognized and not because of something else. In particular, this
should enable us to write a wrapper script for scylla-nodetool which
dispatches commands still unimplemented in scylla-nodetool to the java
nodetool.
Note that the tool will still print an error message on an unknown
operation, so such a wrapper script would have to make sure not to let
this bleed through when it decides to forward the operation.
Closes scylladb/scylladb#15517
Currently if writing the sstable fails, e.g. because the input data is
out-of-order, the json parser thread hangs because its output is no
longer consumed. This results in the entire application just freezing.
Fix this by aborting the parsing thread explicitly in the
json_mutation_stream_parser destructor. If the parser thread exited
successfully, this will be a no-op, but on the error path, this will
ensure that the parser thread doesn't hang.
* seastar 576ee47d...bab1625c (13):
> build: s/{dpdk_libs}/${dpdk_libs}/
> build: build with dpdk v23.07
> scripts: Fix escaping of regexes in addr2line
> linux-aio: print more specific error when setup_aio fails
> linux-aio: correct the error message raised when io_setup() fails
> build: reenable -Warray-bound compiling option
> build: error out if find_program() fails
> build: enable systemtap only if it is available
> build: check if libucontext is necessary for using ucontext functions
> smp: reference correct variable when fetch_or()
> build: use target_compile_definitions() for adding -D...
> http/client: pass tls_options to tls::connect()
> Merge 'build, process: avoid using stdout or stderr as C++ identifiers' from Kefu Chai
Frozen toolchain regenerated for new Seastar dependencies.
configure.py adjusted for new Seastar arch names.
Closesscylladb/scylladb#15476
in other words, do not create a bpo::value unless transferring it to an
option_description.
`boost::program_options::value()` creates a new typed_value<T> object
without holding it in a shared_ptr. boost::program_options expects the
developer to construct a `bpo::option_description` right away from it,
and `boost::program_options::option_description` takes ownership
of the `typed_value<T>*` raw pointer and manages its life cycle with
a shared_ptr. but before being passed to a `bpo::option_description`,
the pointer created by `boost::program_options::value()` is still
a raw pointer.
before this change, we initialized `operations_with_func` as a global
variable using `boost::program_options::value()`. but unfortunately,
we don't always initialize a `bpo::option_description` from it --
we only do this on demand when the corresponding subcommand is
called.
so, if the corresponding subcommand is not called, the created
`typed_value<T>` objects are leaked. hence LeakSanitizer warns us.
after this change, we create the option map as a static
local variable in a function so it is created on demand as well.
as an alternative, we could initialize the options map as a local
variable where it is used. but to be more consistent with how
`global_option` is specified, and to colocate them in a single
place, let's keep the existing code layout.
this change is quite similar to 374bed8c3d
Fixes https://github.com/scylladb/scylladb/issues/15429
Closes scylladb/scylladb#15430
* github.com:scylladb/scylladb:
tools/scylla-nodetools: reindent
tools/scylla-nodetools: do not create unowned bpo::value