Compare commits

..

14 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
b2c75edccd Update documentation for sstables conversions
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-10 18:25:47 +00:00
copilot-swe-agent[bot]
a5c217aef4 Convert all sstables SCYLLA_ASSERT to scylla_assert (58 conversions across 22 files)
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-10 18:24:02 +00:00
copilot-swe-agent[bot]
3e8c1e47c8 Update documentation for storage_service.cc conversions
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 22:28:11 +00:00
copilot-swe-agent[bot]
a4fc85c915 Convert SCYLLA_ASSERT to scylla_assert in storage_service.cc (28 safe conversions)
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 22:27:06 +00:00
copilot-swe-agent[bot]
ff155a2c32 Update documentation for topology_coordinator.cc conversions
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 22:18:27 +00:00
copilot-swe-agent[bot]
e698e89113 Convert SCYLLA_ASSERT to scylla_assert in topology_coordinator.cc
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 22:15:51 +00:00
copilot-swe-agent[bot]
efe3e73b5c Add comprehensive summary of SCYLLA_ASSERT conversion work
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 21:11:06 +00:00
copilot-swe-agent[bot]
13644ff110 Update documentation with correct conversion counts
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 21:09:43 +00:00
copilot-swe-agent[bot]
307262ca27 Add descriptive error messages to scylla_assert unreachable code paths
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 21:08:19 +00:00
copilot-swe-agent[bot]
f7e1ca23f7 Convert additional safe SCYLLA_ASSERT usages to scylla_assert
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 21:06:35 +00:00
copilot-swe-agent[bot]
254c7e8cc9 Add comprehensive documentation for SCYLLA_ASSERT conversion
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 21:03:14 +00:00
copilot-swe-agent[bot]
f447c4464b Replace SCYLLA_ASSERT with scylla_assert in safe contexts (sample files)
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 21:01:27 +00:00
copilot-swe-agent[bot]
62cda957bc Add scylla_assert() macro based on on_internal_error()
Co-authored-by: tgrabiec <283695+tgrabiec@users.noreply.github.com>
2025-12-05 20:55:52 +00:00
copilot-swe-agent[bot]
d68f071b91 Initial plan 2025-12-05 20:47:55 +00:00
45 changed files with 1267 additions and 240 deletions

View File

@@ -0,0 +1,182 @@
# SCYLLA_ASSERT to scylla_assert() Conversion Summary
## Objective
Replace crash-inducing `SCYLLA_ASSERT` with exception-throwing `scylla_assert()` to prevent cluster-wide crashes and maintain availability.
## What Was Done
### 1. Infrastructure Implementation ✓
Created new `scylla_assert()` macro in `utils/assert.hh`:
- Based on `on_internal_error()` for exception-based error handling
- Supports optional custom error messages via variadic arguments
- Uses `seastar::format()` for string formatting
- Compatible with C++23 standard (uses `__VA_OPT__`)
**Key difference from SCYLLA_ASSERT:**
```cpp
// Old: Crashes the process immediately
SCYLLA_ASSERT(condition);
// New: Throws exception (or aborts based on config)
scylla_assert(condition);
scylla_assert(condition, "custom error message: {}", value);
```
### 2. Comprehensive Analysis ✓
Analyzed entire codebase to identify safe vs unsafe conversion locations:
**Statistics:**
- Total SCYLLA_ASSERT usages: ~1307 (including tests)
- Non-test usages: ~886
- **Unsafe to convert**: 223 usages (25%)
- In noexcept functions: 187 usages across 50 files
- In destructors: 36 usages across 25 files
- **Safe to convert**: ~668 usages (75%)
- **Converted in this PR**: 112 usages (16.8% of safe conversions)
### 3. Documentation ✓
Created comprehensive documentation:
1. **Conversion Guide** (`docs/dev/scylla_assert_conversion.md`)
- Explains safe vs unsafe contexts
- Provides conversion strategy
- Lists all completed conversions
- Includes testing guidance
2. **Unsafe Locations Report** (`docs/dev/unsafe_scylla_assert_locations.md`)
- Detailed listing of 223 unsafe locations
- Organized by file with line numbers
- Separated into noexcept and destructor categories
### 4. Sample Conversions ✓
Converted 112 safe SCYLLA_ASSERT usages across 32 files as demonstration:
| File | Conversions | Context |
|------|------------|---------|
| db/large_data_handler.{cc,hh} | 5 | Future-returning functions |
| db/schema_applier.cc | 1 | Coroutine function |
| db/system_distributed_keyspace.cc | 1 | Regular function |
| db/commitlog/commitlog_replayer.cc | 1 | Coroutine function |
| db/view/row_locking.cc | 2 | Regular function |
| db/size_estimates_virtual_reader.cc | 1 | Lambda in coroutine |
| db/corrupt_data_handler.cc | 2 | Lambdas in future-returning function |
| raft/tracker.cc | 2 | Unreachable code (switch defaults) |
| service/topology_coordinator.cc | 11 | Coroutine functions (topology operations) |
| service/storage_service.cc | 28 | Critical node lifecycle operations |
| sstables/* (22 files) | 58 | SSTable operations (read/write/compress/index) |
All conversions were in **safe contexts** (non-noexcept, non-destructor functions). 3 assertions in storage_service.cc remain as SCYLLA_ASSERT (in noexcept functions).
## Why These Cannot Be Converted
### Unsafe Context #1: noexcept Functions (187 usages)
**Problem**: Throwing from noexcept causes `std::terminate()`, same as crash.
**Example** (from `locator/production_snitch_base.hh`):
```cpp
virtual bool prefer_local() const noexcept override {
SCYLLA_ASSERT(_backreference != nullptr); // Cannot convert!
return _backreference->prefer_local();
}
```
**Solution for these**: Keep as SCYLLA_ASSERT or use `on_fatal_internal_error()`.
### Unsafe Context #2: Destructors (36 usages)
**Problem**: Destructors are implicitly noexcept, throwing causes `std::terminate()`.
**Example** (from `utils/file_lock.cc`):
```cpp
~file_lock() noexcept {
if (_fd.get() != -1) {
SCYLLA_ASSERT(_fd.get() != -1); // Cannot convert!
auto r = ::flock(_fd.get(), LOCK_UN);
SCYLLA_ASSERT(r == 0); // Cannot convert!
}
}
```
**Solution for these**: Keep as SCYLLA_ASSERT.
## Benefits of scylla_assert()
1. **Prevents Cluster-Wide Crashes**
- Exception can be caught and handled gracefully
- Failed node doesn't bring down entire cluster
2. **Maintains Availability**
- Service can continue with degraded functionality
- Better than complete crash
3. **Better Error Reporting**
- Includes backtrace via `on_internal_error()`
- Supports custom error messages
- Configurable abort-on-error for testing
4. **Backward Compatible**
- SCYLLA_ASSERT still exists for unsafe contexts
- Can be gradually adopted
## Testing
- Created manual test in `test/manual/test_scylla_assert.cc`
- Verifies passing and failing assertions
- Tests custom error messages
- Code review passed with improvements made
## Next Steps (Future Work)
1. **Gradual Conversion**
- Convert remaining ~653 safe SCYLLA_ASSERT usages incrementally
- Prioritize high-impact code paths first
2. **Review noexcept Functions**
- Evaluate if some can be made non-noexcept
- Consider using `on_fatal_internal_error()` where appropriate
3. **Integration Testing**
- Run full test suite with conversions
- Monitor for any unexpected behavior
- Validate exception propagation
4. **Automated Analysis Tool**
- Create tool to identify safe conversion candidates
- Generate conversion patches automatically
- Track conversion progress
## Files Modified in This PR
### Core Implementation
- `utils/assert.hh` - Added scylla_assert() macro
### Conversions
- `db/large_data_handler.cc`
- `db/large_data_handler.hh`
- `db/schema_applier.cc`
- `db/system_distributed_keyspace.cc`
- `db/commitlog/commitlog_replayer.cc`
- `db/view/row_locking.cc`
- `db/size_estimates_virtual_reader.cc`
- `db/corrupt_data_handler.cc`
- `raft/tracker.cc`
- `service/topology_coordinator.cc`
- `service/storage_service.cc`
- `sstables/` (22 files across trie/, mx/, and core sstables)
### Documentation
- `docs/dev/scylla_assert_conversion.md`
- `docs/dev/unsafe_scylla_assert_locations.md`
- `test/manual/test_scylla_assert.cc`
## Conclusion
This PR establishes the infrastructure and methodology for replacing SCYLLA_ASSERT with scylla_assert() to improve cluster availability. The sample conversions demonstrate the approach, while comprehensive documentation enables future work.
**Key Achievement**: Provided a safe path forward for converting 75% (~668) of SCYLLA_ASSERT usages to exception-based assertions, while clearly documenting the 25% (~223) that must remain as crash-inducing assertions due to language constraints. Converted 112 usages as demonstration (16.8% of safe conversions), prioritizing critical files like storage_service.cc (node lifecycle) and all sstables files (data persistence), with ~556 remaining.

View File

@@ -1,109 +0,0 @@
# Analysis of unimplemented::cause Enum Values
This document provides an analysis of the `unimplemented::cause` enum values after cleanup.
## Removed Unused Enum Values (20 values removed)
The following enum values had **zero usages** in the codebase and have been removed:
- `LWT` - Lightweight transactions
- `PAGING` - Query result paging
- `AUTH` - Authentication
- `PERMISSIONS` - Permission checking
- `COUNTERS` - Counter columns
- `MIGRATIONS` - Schema migrations
- `GOSSIP` - Gossip protocol
- `TOKEN_RESTRICTION` - Token-based restrictions
- `LEGACY_COMPOSITE_KEYS` - Legacy composite key handling
- `COLLECTION_RANGE_TOMBSTONES` - Collection range tombstones
- `RANGE_DELETES` - Range deletion operations
- `COMPRESSION` - Compression features
- `NONATOMIC` - Non-atomic operations
- `CONSISTENCY` - Consistency level handling
- `WRAP_AROUND` - Token wrap-around handling
- `STORAGE_SERVICE` - Storage service operations
- `SCHEMA_CHANGE` - Schema change operations
- `MIXED_CF` - Mixed column family operations
- `SSTABLE_FORMAT_M` - SSTable format M
## Remaining Enum Values (8 values kept)
### 1. `API` (4 usages)
**Impact**: REST API features that are not fully implemented.
**Usages**:
- `api/column_family.cc:1052` - Fails when `split_output` parameter is used in major compaction
- `api/compaction_manager.cc:100,146,216` - Warns when force_user_defined_compaction or related operations are called
**User Impact**: Some REST API endpoints for compaction management are stubs and will warn or fail.
### 2. `INDEXES` (6 usages)
**Impact**: Secondary index features not fully supported.
**Usages**:
- `api/column_family.cc:433,440,449,456` - Warns about index-related operations
- `cql3/restrictions/statement_restrictions.cc:1158` - Fails when attempting filtering on collection columns without proper indexing
- `cql3/statements/update_statement.cc:149` - Warns about index operations
**User Impact**: Some advanced secondary index features (especially filtering on collections) are not available.
### 3. `TRIGGERS` (2 usages)
**Impact**: Trigger support is not implemented.
**Usages**:
- `db/schema_tables.cc:2017` - Warns when loading trigger metadata from schema tables
- `service/storage_proxy.cc:4166` - Warns when processing trigger-related operations
**User Impact**: Cassandra triggers (stored procedures that execute on data changes) are not supported.
### 4. `METRICS` (1 usage)
**Impact**: Some query processor metrics are not collected.
**Usages**:
- `cql3/query_processor.cc:585` - Warns about missing metrics implementation
**User Impact**: Minor - some internal metrics may not be available.
### 5. `VALIDATION` (4 usages)
**Impact**: Schema validation checks are partially implemented.
**Usages**:
- `cql3/functions/token_fct.hh:38` - Warns about validation in token functions
- `cql3/statements/drop_keyspace_statement.cc:40` - Warns when dropping keyspace
- `cql3/statements/truncate_statement.cc:87` - Warns when truncating table
- `service/migration_manager.cc:750` - Warns during schema migrations
**User Impact**: Some schema validation checks are skipped (with warnings logged).
### 6. `REVERSED` (1 usage)
**Impact**: Reversed type support in CQL protocol.
**Usages**:
- `transport/server.cc:2085` - Fails when trying to use reversed types in CQL protocol
**User Impact**: Reversed types are not supported in the CQL protocol implementation.
### 7. `HINT` (1 usage)
**Impact**: Hint replaying is not implemented.
**Usages**:
- `db/batchlog_manager.cc:251` - Warns when attempting to replay hints
**User Impact**: Cassandra hints (temporary storage of writes when nodes are down) are not supported.
### 8. `SUPER` (2 usages)
**Impact**: Super column families are not supported.
**Usages**:
- `db/legacy_schema_migrator.cc:157` - Fails when encountering super column family in legacy schema
- `db/schema_tables.cc:2288` - Fails when encountering super column family in schema tables
**User Impact**: Super column families (legacy Cassandra feature) will cause errors if encountered in legacy data or schema migrations.
## Summary
- **Removed**: 20 unused enum values (76% reduction)
- **Kept**: 8 actively used enum values (24% remaining)
- **Total lines removed**: ~40 lines from enum definition and switch statement
The remaining enum values represent actual unimplemented features that users may encounter, with varying impacts ranging from warnings (TRIGGERS, METRICS, VALIDATION, HINT) to failures (API split_output, INDEXES on collections, REVERSED types, SUPER tables).

View File

@@ -186,6 +186,10 @@ void alter_table_statement::add_column(const query_options&, const schema& schem
if (!schema.is_compound()) {
throw exceptions::invalid_request_exception("Cannot use non-frozen collections with a non-composite PRIMARY KEY");
}
if (schema.is_super()) {
throw exceptions::invalid_request_exception("Cannot use non-frozen collections with super column families");
}
// If there used to be a non-frozen collection column with the same name (that has been dropped),
// we could still have some data using the old type, and so we can't allow adding a collection

View File

@@ -165,7 +165,7 @@ future<> db::commitlog_replayer::impl::init() {
future<db::commitlog_replayer::impl::stats>
db::commitlog_replayer::impl::recover(const commitlog::descriptor& d, const commitlog::replay_state& rpstate) const {
SCYLLA_ASSERT(_column_mappings.local_is_initialized());
scylla_assert(_column_mappings.local_is_initialized());
replay_position rp{d};
auto gp = min_pos(rp.shard_id());

View File

@@ -10,6 +10,7 @@
#include "reader_concurrency_semaphore.hh"
#include "replica/database.hh"
#include "utils/UUID_gen.hh"
#include "utils/assert.hh"
static logging::logger corrupt_data_logger("corrupt_data");
@@ -75,14 +76,14 @@ future<corrupt_data_handler::entry_id> system_table_corrupt_data_handler::do_rec
auto set_cell_raw = [this, &entry_row, &corrupt_data_schema, timestamp] (const char* cell_name, managed_bytes cell_value) {
auto cdef = corrupt_data_schema->get_column_definition(cell_name);
SCYLLA_ASSERT(cdef);
scylla_assert(cdef);
entry_row.cells().apply(*cdef, atomic_cell::make_live(*cdef->type, timestamp, cell_value, _entry_ttl));
};
auto set_cell = [this, &entry_row, &corrupt_data_schema, timestamp] (const char* cell_name, data_value cell_value) {
auto cdef = corrupt_data_schema->get_column_definition(cell_name);
SCYLLA_ASSERT(cdef);
scylla_assert(cdef);
entry_row.cells().apply(*cdef, atomic_cell::make_live(*cdef->type, timestamp, cell_value.serialize_nonnull(), _entry_ttl));
};

View File

@@ -39,7 +39,7 @@ large_data_handler::large_data_handler(uint64_t partition_threshold_bytes, uint6
}
future<large_data_handler::partition_above_threshold> large_data_handler::maybe_record_large_partitions(const sstables::sstable& sst, const sstables::key& key, uint64_t partition_size, uint64_t rows, uint64_t range_tombstones, uint64_t dead_rows) {
SCYLLA_ASSERT(running());
scylla_assert(running());
partition_above_threshold above_threshold{partition_size > _partition_threshold_bytes, rows > _rows_count_threshold};
static_assert(std::is_same_v<decltype(above_threshold.size), bool>);
_stats.partitions_bigger_than_threshold += above_threshold.size; // increment if true
@@ -83,7 +83,7 @@ sstring large_data_handler::sst_filename(const sstables::sstable& sst) {
}
future<> large_data_handler::maybe_delete_large_data_entries(sstables::shared_sstable sst) {
SCYLLA_ASSERT(running());
scylla_assert(running());
auto schema = sst->get_schema();
auto filename = sst_filename(*sst);
using ldt = sstables::large_data_type;
@@ -247,7 +247,7 @@ future<> cql_table_large_data_handler::record_large_rows(const sstables::sstable
future<> cql_table_large_data_handler::delete_large_data_entries(const schema& s, sstring sstable_name, std::string_view large_table_name) const {
auto sys_ks = _sys_ks.get_permit();
SCYLLA_ASSERT(sys_ks);
scylla_assert(sys_ks);
const sstring req =
seastar::format("DELETE FROM system.{} WHERE keyspace_name = ? AND table_name = ? AND sstable_name = ?",
large_table_name);

View File

@@ -80,7 +80,7 @@ public:
future<bool> maybe_record_large_rows(const sstables::sstable& sst, const sstables::key& partition_key,
const clustering_key_prefix* clustering_key, uint64_t row_size) {
SCYLLA_ASSERT(running());
scylla_assert(running());
if (row_size > _row_threshold_bytes) [[unlikely]] {
return with_sem([&sst, &partition_key, clustering_key, row_size, this] {
return record_large_rows(sst, partition_key, clustering_key, row_size);
@@ -100,7 +100,7 @@ public:
future<bool> maybe_record_large_cells(const sstables::sstable& sst, const sstables::key& partition_key,
const clustering_key_prefix* clustering_key, const column_definition& cdef, uint64_t cell_size, uint64_t collection_elements) {
SCYLLA_ASSERT(running());
scylla_assert(running());
if (cell_size > _cell_threshold_bytes || collection_elements > _collection_elements_count_threshold) [[unlikely]] {
return with_sem([&sst, &partition_key, clustering_key, &cdef, cell_size, collection_elements, this] {
return record_large_cells(sst, partition_key, clustering_key, cdef, cell_size, collection_elements);

View File

@@ -152,8 +152,8 @@ public:
builder.with_version(sm.digest());
auto type_str = td.get_or("type", sstring("standard"));
if (type_str == "Super") {
cf_type cf = sstring_to_cf_type(td.get_or("type", sstring("standard")));
if (cf == cf_type::super) {
fail(unimplemented::cause::SUPER);
}
@@ -284,8 +284,13 @@ public:
if (kind_str == "compact_value") {
continue;
}
if (kind == column_kind::clustering_key && !is_compound) {
continue;
if (kind == column_kind::clustering_key) {
if (cf == cf_type::super && component_index != 0) {
continue;
}
if (cf != cf_type::super && !is_compound) {
continue;
}
}
}

View File

@@ -1121,7 +1121,7 @@ future<> schema_applier::commit() {
// Run func first on shard 0
// to allow "seeding" of the effective_replication_map
// with a new e_r_m instance.
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
commit_on_shard(sharded_db.local());
co_await sharded_db.invoke_on_others([this] (replica::database& db) {
commit_on_shard(db);

View File

@@ -143,7 +143,7 @@ static computed_columns_map get_computed_columns(const schema_mutations& sm);
static std::vector<column_definition> create_columns_from_column_rows(
const query::result_set& rows, const sstring& keyspace,
const sstring& table, column_view_virtual is_view_virtual, const computed_columns_map& computed_columns,
const sstring& table, bool is_super, column_view_virtual is_view_virtual, const computed_columns_map& computed_columns,
const data_dictionary::user_types_storage& user_types);
@@ -1804,6 +1804,9 @@ static schema_mutations make_table_mutations(schema_ptr table, api::timestamp_ty
auto scylla_tables_mutation = make_scylla_tables_mutation(table, timestamp);
list_type_impl::native_type flags;
if (table->is_super()) {
flags.emplace_back("super");
}
if (table->is_dense()) {
flags.emplace_back("dense");
}
@@ -2277,6 +2280,7 @@ schema_ptr create_table_from_mutations(const schema_ctxt& ctxt, schema_mutations
auto id = table_id(table_row.get_nonnull<utils::UUID>("id"));
schema_builder builder{ks_name, cf_name, id};
auto cf = cf_type::standard;
auto is_dense = false;
auto is_counter = false;
auto is_compound = false;
@@ -2285,6 +2289,7 @@ schema_ptr create_table_from_mutations(const schema_ctxt& ctxt, schema_mutations
if (flags) {
for (auto& s : *flags) {
if (s == "super") {
// cf = cf_type::super;
fail(unimplemented::cause::SUPER);
} else if (s == "dense") {
is_dense = true;
@@ -2300,7 +2305,9 @@ schema_ptr create_table_from_mutations(const schema_ctxt& ctxt, schema_mutations
std::vector<column_definition> column_defs = create_columns_from_column_rows(
query::result_set(sm.columns_mutation()),
ks_name,
cf_name,
cf_name,/*,
fullRawComparator, */
cf == cf_type::super,
column_view_virtual::no,
computed_columns,
user_types);
@@ -2479,7 +2486,9 @@ static computed_columns_map get_computed_columns(const schema_mutations& sm) {
static std::vector<column_definition> create_columns_from_column_rows(
const query::result_set& rows,
const sstring& keyspace,
const sstring& table,
const sstring& table, /*,
AbstractType<?> rawComparator, */
bool is_super,
column_view_virtual is_view_virtual,
const computed_columns_map& computed_columns,
const data_dictionary::user_types_storage& user_types)
@@ -2556,12 +2565,12 @@ static schema_builder prepare_view_schema_builder_from_mutations(const schema_ct
}
auto computed_columns = get_computed_columns(sm);
auto column_defs = create_columns_from_column_rows(query::result_set(sm.columns_mutation()), ks_name, cf_name, column_view_virtual::no, computed_columns, user_types);
auto column_defs = create_columns_from_column_rows(query::result_set(sm.columns_mutation()), ks_name, cf_name, false, column_view_virtual::no, computed_columns, user_types);
for (auto&& cdef : column_defs) {
builder.with_column_ordered(cdef);
}
if (sm.view_virtual_columns_mutation()) {
column_defs = create_columns_from_column_rows(query::result_set(*sm.view_virtual_columns_mutation()), ks_name, cf_name, column_view_virtual::yes, computed_columns, user_types);
column_defs = create_columns_from_column_rows(query::result_set(*sm.view_virtual_columns_mutation()), ks_name, cf_name, false, column_view_virtual::yes, computed_columns, user_types);
for (auto&& cdef : column_defs) {
builder.with_column_ordered(cdef);
}

View File

@@ -187,7 +187,7 @@ static future<std::vector<token_range>> get_local_ranges(replica::database& db,
auto ranges = db.get_token_metadata().get_primary_ranges_for(std::move(tokens));
std::vector<token_range> local_ranges;
auto to_bytes = [](const std::optional<dht::token_range::bound>& b) {
SCYLLA_ASSERT(b);
scylla_assert(b);
return utf8_type->decompose(b->value().to_sstring());
};
// We merge the ranges to be compatible with how Cassandra shows it's size estimates table.

View File

@@ -231,7 +231,7 @@ static schema_ptr get_current_service_levels(data_dictionary::database db) {
}
static schema_ptr get_updated_service_levels(data_dictionary::database db, bool workload_prioritization_enabled) {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
auto schema = get_current_service_levels(db);
schema_builder b(schema);
for (const auto& col : new_service_levels_columns(workload_prioritization_enabled)) {

View File

@@ -153,14 +153,14 @@ row_locker::unlock(const dht::decorated_key* pk, bool partition_exclusive,
mylog.error("column_family::local_base_lock_holder::~local_base_lock_holder() can't find lock for partition", *pk);
return;
}
SCYLLA_ASSERT(&pli->first == pk);
scylla_assert(&pli->first == pk);
if (cpk) {
auto rli = pli->second._row_locks.find(*cpk);
if (rli == pli->second._row_locks.end()) {
mylog.error("column_family::local_base_lock_holder::~local_base_lock_holder() can't find lock for row", *cpk);
return;
}
SCYLLA_ASSERT(&rli->first == cpk);
scylla_assert(&rli->first == cpk);
mylog.debug("releasing {} lock for row {} in partition {}", (row_exclusive ? "exclusive" : "shared"), *cpk, *pk);
auto& lock = rli->second;
if (row_exclusive) {

View File

@@ -0,0 +1,198 @@
# SCYLLA_ASSERT to scylla_assert() Conversion Guide
## Overview
This document tracks the conversion of `SCYLLA_ASSERT` to the new `scylla_assert()` macro based on `on_internal_error()`. The new macro throws exceptions instead of crashing the process, preventing cluster-wide crashes and loss of availability.
## Status Summary
- **Total SCYLLA_ASSERT usages**: ~1307 (including tests)
- **Non-test usages**: ~886
- **Unsafe conversions (noexcept)**: ~187
- **Unsafe conversions (destructors)**: ~36
- **Safe conversions possible**: ~668
- **Converted so far**: 112
## Safe vs Unsafe Contexts
### Safe to Convert ✓
- Regular functions (non-noexcept)
- Coroutine functions (returning `future<T>`)
- Member functions without noexcept specifier
- Functions where exception propagation is acceptable
### Unsafe to Convert ✗
1. **noexcept functions** - throwing exceptions from noexcept causes `std::terminate()`
2. **Destructors** - destructors are implicitly noexcept
3. **noexcept lambdas and callbacks**
4. **Code with explicit exception-safety requirements** that cannot handle exceptions
## Files with Unsafe Conversions
### Files with SCYLLA_ASSERT in noexcept contexts (examples)
1. **reader_concurrency_semaphore.cc**
- Lines with noexcept functions containing SCYLLA_ASSERT
- Must remain as SCYLLA_ASSERT
2. **db/large_data_handler.cc**
- Line 86: `maybe_delete_large_data_entries()` - marked noexcept but contains SCYLLA_ASSERT
- Analysis shows this is actually safe (not truly noexcept)
3. **db/row_cache.cc**
- Multiple SCYLLA_ASSERT usages in noexcept member functions
4. **db/schema_tables.cc**
- SCYLLA_ASSERT in noexcept contexts
5. **raft/server.cc**
- Multiple noexcept functions with SCYLLA_ASSERT
### Files with SCYLLA_ASSERT in destructors
1. **reader_concurrency_semaphore.cc**
- Line 1116: SCYLLA_ASSERT in destructor
2. **api/column_family.cc**
- Line 102: SCYLLA_ASSERT in destructor
3. **utils/logalloc.cc**
- Line 1991: SCYLLA_ASSERT in destructor
4. **utils/file_lock.cc**
- Lines 34, 36: SCYLLA_ASSERT in destructor
5. **utils/disk_space_monitor.cc**
- Line 66: SCYLLA_ASSERT in destructor
## Conversion Strategy
### Phase 1: Infrastructure (Completed)
- Created `scylla_assert()` macro in `utils/assert.hh`
- Uses `on_internal_error()` for exception-based error handling
- Supports optional message parameters
### Phase 2: Safe Conversions
Convert SCYLLA_ASSERT to scylla_assert in contexts where:
- Function is not noexcept
- Not in a destructor
- Exception propagation is safe
### Phase 3: Document Remaining Uses
For contexts that cannot be converted:
- Add comments explaining why SCYLLA_ASSERT must remain
- Consider alternative approaches (e.g., using `on_fatal_internal_error()` in noexcept)
## Converted Files
### Completed Conversions
1. **db/large_data_handler.cc** (3 conversions)
- Line 42: `maybe_record_large_partitions()`
- Line 86: `maybe_delete_large_data_entries()`
- Line 250: `delete_large_data_entries()`
2. **db/large_data_handler.hh** (2 conversions)
- Line 83: `maybe_record_large_rows()`
- Line 103: `maybe_record_large_cells()`
3. **db/schema_applier.cc** (1 conversion)
- Line 1124: `commit()` coroutine
4. **db/system_distributed_keyspace.cc** (1 conversion)
- Line 234: `get_updated_service_levels()`
5. **db/commitlog/commitlog_replayer.cc** (1 conversion)
- Line 168: `recover()` coroutine
6. **db/view/row_locking.cc** (2 conversions)
- Line 156: `unlock()` - partition lock check
- Line 163: `unlock()` - row lock check
7. **db/size_estimates_virtual_reader.cc** (1 conversion)
- Line 190: Lambda in `get_local_ranges()`
8. **db/corrupt_data_handler.cc** (2 conversions)
- Line 78: `set_cell_raw` lambda
- Line 85: `set_cell` lambda
9. **raft/tracker.cc** (2 conversions)
- Line 49: Switch default case with descriptive error
- Line 90: Switch default case with descriptive error
10. **service/topology_coordinator.cc** (11 conversions)
- Line 363: Node lookup assertion in `retake_node()`
- Line 2313: Bootstrapping state ring check
- Line 2362: Replacing state ring check
- Line 2365: Normal nodes lookup assertion
- Line 2366: Node ring and state validation
- Line 3025: Join request ring check
- Line 3036: Leave request ring check
- Line 3049: Remove request ring check
- Line 3061: Replace request ring check
- Line 3166: Transition nodes empty check
- Line 4016: Barrier validation in `stop()`
11. **service/storage_service.cc** (28 conversions, 3 unsafe kept as SCYLLA_ASSERT)
- Lines 603, 691, 857, 901, 969: Core service operations
- Lines 1523, 1575, 1844, 2086, 2170, 2195: Bootstrap and join operations
- Lines 2319, 2352, 2354: Replacement operations
- Lines 3003, 3028, 3228: Cluster join and drain operations
- Lines 3995, 4047, 4353: Decommission and removenode operations
- Lines 4473, 5787, 5834, 5958: CDC and topology change operations
- Lines 6490, 6491: Tablet streaming operations
- Line 7512: Join node response handler
- **Unsafe (kept as SCYLLA_ASSERT)**: Lines 3398, 5760, 5775 (noexcept functions)
12. **sstables/** (58 conversions across 22 files)
- **sstables/trie/bti_node_reader.cc** (6): Node reading operations
- **sstables/mx/writer.cc** (6): MX format writing
- **sstables/sstable_set.cc** (5): SSTable set management
- **sstables/compressor.cc** (5): Compression/decompression
- **sstables/trie/trie_writer.hh** (4): Trie writing
- **sstables/downsampling.hh** (4): Downsampling operations
- **sstables/storage.{cc,hh}** (6): Storage operations
- **sstables/sstables_manager.{cc,hh}** (6): SSTable lifecycle management
- **sstables/trie/writer_node.{hh,impl.hh}** (4): Trie node writing
- **sstables/trie/bti_key_translation.cc** (2): Key translation
- **sstables/sstable_directory.cc** (2): Directory management
- **sstables/trie/trie_writer.cc** (1): Trie writer implementation
- **sstables/trie/trie_traversal.hh** (1): Trie traversal
- **sstables/sstables.cc** (1): Core SSTable operations
- **sstables/partition_index_cache.hh** (1): Index caching
- **sstables/generation_type.hh** (1): Generation management
- **sstables/compress.{cc,hh}** (2): Compression utilities
- **sstables/exceptions.hh** (1): Comment update
## Testing
### Manual Testing
Created `test/manual/test_scylla_assert.cc` to verify:
- Passing assertions succeed
- Failing assertions throw exceptions
- Custom messages are properly formatted
### Integration Testing
- Run existing test suite with converted assertions
- Verify no regressions in error handling
- Confirm exception propagation works correctly
## Future Work
1. **Automated Analysis Tool**
- Create tool to identify safe vs unsafe conversion contexts
- Generate reports of remaining conversions
2. **Gradual Conversion**
- Convert additional safe usages incrementally
- Monitor for any unexpected issues
3. **noexcept Review**
- Review functions marked noexcept that contain SCYLLA_ASSERT
- Consider if they should use `on_fatal_internal_error()` instead
## References
- `utils/assert.hh` - Implementation of both SCYLLA_ASSERT and scylla_assert
- `utils/on_internal_error.hh` - Exception-based error handling infrastructure
- GitHub Issue: [Link to original issue tracking this work]

View File

@@ -0,0 +1,614 @@
# Unsafe SCYLLA_ASSERT Locations
This document lists specific locations where SCYLLA_ASSERT cannot be safely converted to scylla_assert().
## Summary
- Files with noexcept SCYLLA_ASSERT: 50
- Files with destructor SCYLLA_ASSERT: 25
- Total unsafe SCYLLA_ASSERT in noexcept: 187
- Total unsafe SCYLLA_ASSERT in destructors: 36
## SCYLLA_ASSERT in noexcept Functions
### auth/cache.cc
- Line 118: `SCYLLA_ASSERT(this_shard_id() == 0);`
Total: 1 usages
### db/cache_mutation_reader.hh
- Line 309: `SCYLLA_ASSERT(sr->is_static_row());`
Total: 1 usages
### db/commitlog/commitlog.cc
- Line 531: `SCYLLA_ASSERT(!*this);`
- Line 544: `SCYLLA_ASSERT(!*this);`
- Line 662: `SCYLLA_ASSERT(_iter != _end);`
- Line 1462: `SCYLLA_ASSERT(i->second >= count);`
Total: 4 usages
### db/hints/manager.hh
- Line 167: `SCYLLA_ASSERT(_ep_managers.empty());`
Total: 1 usages
### db/partition_snapshot_row_cursor.hh
- Line 384: `SCYLLA_ASSERT(_latest_it);`
Total: 1 usages
### db/row_cache.cc
- Line 1365: `SCYLLA_ASSERT(it->is_last_dummy());`
Total: 1 usages
### db/schema_tables.cc
- Line 774: `SCYLLA_ASSERT(this_shard_id() == 0);`
Total: 1 usages
### db/view/view.cc
- Line 3623: `SCYLLA_ASSERT(thread::running_in_thread());`
Total: 1 usages
### gms/gossiper.cc
- Line 876: `SCYLLA_ASSERT(ptr->pid == _permit_id);`
Total: 1 usages
### locator/production_snitch_base.hh
- Line 77: `SCYLLA_ASSERT(_backreference != nullptr);`
- Line 82: `SCYLLA_ASSERT(_backreference != nullptr);`
- Line 87: `SCYLLA_ASSERT(_backreference != nullptr);`
Total: 3 usages
### locator/topology.cc
- Line 135: `SCYLLA_ASSERT(_shard == this_shard_id());`
Total: 1 usages
### mutation/counters.hh
- Line 314: `SCYLLA_ASSERT(_cell.is_live());`
- Line 315: `SCYLLA_ASSERT(!_cell.is_counter_update());`
Total: 2 usages
### mutation/mutation_partition_v2.hh
- Line 271: `SCYLLA_ASSERT(s.version() == _schema_version);`
Total: 1 usages
### mutation/partition_version.cc
- Line 364: `SCYLLA_ASSERT(!_snapshot->is_locked());`
- Line 701: `SCYLLA_ASSERT(!rows.empty());`
- Line 703: `SCYLLA_ASSERT(last_dummy.is_last_dummy());`
- Line 746: `SCYLLA_ASSERT(!_snapshot->is_locked());`
- Line 770: `SCYLLA_ASSERT(at_latest_version());`
- Line 777: `SCYLLA_ASSERT(at_latest_version());`
Total: 6 usages
### mutation/partition_version.hh
- Line 211: `SCYLLA_ASSERT(_schema);`
- Line 217: `SCYLLA_ASSERT(_schema);`
- Line 254: `SCYLLA_ASSERT(!_version->_backref);`
- Line 282: `SCYLLA_ASSERT(_version);`
- Line 286: `SCYLLA_ASSERT(_version);`
- Line 290: `SCYLLA_ASSERT(_version);`
- Line 294: `SCYLLA_ASSERT(_version);`
Total: 7 usages
### mutation/partition_version_list.hh
- Line 36: `SCYLLA_ASSERT(!_head->is_referenced_from_entry());`
- Line 42: `SCYLLA_ASSERT(!_tail->is_referenced_from_entry());`
- Line 70: `SCYLLA_ASSERT(!_head->is_referenced_from_entry());`
Total: 3 usages
### mutation/range_tombstone_list.cc
- Line 412: `SCYLLA_ASSERT (it != rt_list.end());`
- Line 422: `SCYLLA_ASSERT (it != rt_list.end());`
Total: 2 usages
### raft/server.cc
- Line 1720: `SCYLLA_ASSERT(_non_joint_conf_commit_promise);`
Total: 1 usages
### reader_concurrency_semaphore.cc
- Line 109: `SCYLLA_ASSERT(_permit == o._permit);`
- Line 432: `SCYLLA_ASSERT(_need_cpu_branches);`
- Line 455: `SCYLLA_ASSERT(_awaits_branches);`
- Line 1257: `SCYLLA_ASSERT(!_stopped);`
- Line 1585: `SCYLLA_ASSERT(_stats.need_cpu_permits);`
- Line 1587: `SCYLLA_ASSERT(_stats.need_cpu_permits >= _stats.awaits_permits);`
- Line 1593: `SCYLLA_ASSERT(_stats.need_cpu_permits >= _stats.awaits_permits);`
- Line 1598: `SCYLLA_ASSERT(_stats.awaits_permits);`
Total: 8 usages
### readers/multishard.cc
- Line 296: `SCYLLA_ASSERT(!_irh);`
Total: 1 usages
### repair/repair.cc
- Line 1073: `SCYLLA_ASSERT(table_names().size() == table_ids.size());`
Total: 1 usages
### replica/database.cc
- Line 3299: `SCYLLA_ASSERT(!_cf_lock.try_write_lock()); // lock should be acquired before the`
- Line 3304: `SCYLLA_ASSERT(!_cf_lock.try_write_lock()); // lock should be acquired before the`
Total: 2 usages
### replica/database.hh
- Line 1971: `SCYLLA_ASSERT(_user_sstables_manager);`
- Line 1976: `SCYLLA_ASSERT(_system_sstables_manager);`
Total: 2 usages
### replica/dirty_memory_manager.cc
- Line 67: `SCYLLA_ASSERT(!child->_heap_handle);`
Total: 1 usages
### replica/dirty_memory_manager.hh
- Line 261: `SCYLLA_ASSERT(_shutdown_requested);`
Total: 1 usages
### replica/memtable.cc
- Line 563: `SCYLLA_ASSERT(_mt._flushed_memory <= static_cast<int64_t>(_mt.occupancy().total_`
- Line 860: `SCYLLA_ASSERT(!reclaiming_enabled());`
Total: 2 usages
### replica/table.cc
- Line 2829: `SCYLLA_ASSERT(!trange.start()->is_inclusive() && trange.end()->is_inclusive());`
Total: 1 usages
### schema/schema.hh
- Line 1022: `SCYLLA_ASSERT(_schema->is_view());`
Total: 1 usages
### schema/schema_registry.cc
- Line 257: `SCYLLA_ASSERT(_state >= state::LOADED);`
- Line 262: `SCYLLA_ASSERT(_state >= state::LOADED);`
- Line 329: `SCYLLA_ASSERT(o._cpu_of_origin == current);`
Total: 3 usages
### service/direct_failure_detector/failure_detector.cc
- Line 628: `SCYLLA_ASSERT(alive != endpoint_liveness.marked_alive);`
Total: 1 usages
### service/storage_service.cc
- Line 3398: `SCYLLA_ASSERT(this_shard_id() == 0);`
- Line 5760: `SCYLLA_ASSERT(this_shard_id() == 0);`
- Line 5775: `SCYLLA_ASSERT(this_shard_id() == 0);`
- Line 5787: `SCYLLA_ASSERT(this_shard_id() == 0);`
Total: 4 usages
### sstables/generation_type.hh
- Line 132: `SCYLLA_ASSERT(bool(gen));`
Total: 1 usages
### sstables/partition_index_cache.hh
- Line 62: `SCYLLA_ASSERT(!ready());`
Total: 1 usages
### sstables/sstables_manager.hh
- Line 244: `SCYLLA_ASSERT(_sstables_registry && "sstables_registry is not plugged");`
Total: 1 usages
### sstables/storage.hh
- Line 86: `SCYLLA_ASSERT(false && "Changing directory not implemented");`
- Line 89: `SCYLLA_ASSERT(false && "Direct links creation not implemented");`
- Line 92: `SCYLLA_ASSERT(false && "Direct move not implemented");`
Total: 3 usages
### sstables_loader.cc
- Line 735: `SCYLLA_ASSERT(p);`
Total: 1 usages
### tasks/task_manager.cc
- Line 56: `SCYLLA_ASSERT(inserted);`
- Line 76: `SCYLLA_ASSERT(child->get_status().progress_units == progress_units);`
- Line 454: `SCYLLA_ASSERT(this_shard_id() == 0);`
Total: 3 usages
### tools/schema_loader.cc
- Line 281: `SCYLLA_ASSERT(p);`
Total: 1 usages
### utils/UUID.hh
- Line 59: `SCYLLA_ASSERT(is_timestamp());`
Total: 1 usages
### utils/bptree.hh
- Line 289: `SCYLLA_ASSERT(n.is_leftmost());`
- Line 301: `SCYLLA_ASSERT(n.is_rightmost());`
- Line 343: `SCYLLA_ASSERT(leaf->is_leaf());`
- Line 434: `SCYLLA_ASSERT(d->attached());`
- Line 453: `SCYLLA_ASSERT(n._num_keys > 0);`
- Line 505: `SCYLLA_ASSERT(n->is_leftmost());`
- Line 511: `SCYLLA_ASSERT(n->is_rightmost());`
- Line 517: `SCYLLA_ASSERT(n->is_root());`
- Line 557: `SCYLLA_ASSERT(!is_end());`
- Line 566: `SCYLLA_ASSERT(!is_end());`
- Line 613: `SCYLLA_ASSERT(n->_num_keys > 0);`
- Line 833: `SCYLLA_ASSERT(_left->_num_keys > 0);`
- Line 926: `SCYLLA_ASSERT(rl == rb);`
- Line 927: `SCYLLA_ASSERT(rl <= nr);`
- Line 1037: `SCYLLA_ASSERT(is_leaf());`
- Line 1042: `SCYLLA_ASSERT(is_leaf());`
- Line 1047: `SCYLLA_ASSERT(is_leaf());`
- Line 1052: `SCYLLA_ASSERT(is_leaf());`
- Line 1062: `SCYLLA_ASSERT(t->_right == this);`
- Line 1083: `SCYLLA_ASSERT(t->_left == this);`
- Line 1091: `SCYLLA_ASSERT(t->_right == this);`
- Line 1103: `SCYLLA_ASSERT(false);`
- Line 1153: `SCYLLA_ASSERT(i <= _num_keys);`
- Line 1212: `SCYLLA_ASSERT(off <= _num_keys);`
- Line 1236: `SCYLLA_ASSERT(from._num_keys > 0);`
- Line 1389: `SCYLLA_ASSERT(!is_root());`
- Line 1450: `SCYLLA_ASSERT(_num_keys == NodeSize);`
- Line 1563: `SCYLLA_ASSERT(_num_keys < NodeSize);`
- Line 1577: `SCYLLA_ASSERT(i != 0 || left_kid_sorted(k, less));`
- Line 1647: `SCYLLA_ASSERT(nodes.empty());`
- Line 1684: `SCYLLA_ASSERT(_num_keys > 0);`
- Line 1686: `SCYLLA_ASSERT(p._kids[i].n == this);`
- Line 1788: `SCYLLA_ASSERT(_num_keys == 0);`
- Line 1789: `SCYLLA_ASSERT(is_root() || !is_leaf() || (get_prev() == this && get_next() == th`
- Line 1821: `SCYLLA_ASSERT(_parent->_kids[i].n == &other);`
- Line 1841: `SCYLLA_ASSERT(i <= _num_keys);`
- Line 1856: `SCYLLA_ASSERT(!_nodes.empty());`
- Line 1938: `SCYLLA_ASSERT(!attached());`
- Line 1943: `SCYLLA_ASSERT(attached());`
Total: 39 usages
### utils/cached_file.hh
- Line 104: `SCYLLA_ASSERT(!_use_count);`
Total: 1 usages
### utils/compact-radix-tree.hh
- Line 1026: `SCYLLA_ASSERT(check_capacity(head, ni));`
- Line 1027: `SCYLLA_ASSERT(!_data.has(ni));`
- Line 1083: `SCYLLA_ASSERT(next_cap > head._capacity);`
- Line 1149: `SCYLLA_ASSERT(capacity != 0);`
- Line 1239: `SCYLLA_ASSERT(i < Size);`
- Line 1240: `SCYLLA_ASSERT(_idx[i] == unused_node_index);`
- Line 1470: `SCYLLA_ASSERT(kid != nullptr);`
- Line 1541: `SCYLLA_ASSERT(ret.first != nullptr);`
- Line 1555: `SCYLLA_ASSERT(leaf_depth >= depth);`
- Line 1614: `SCYLLA_ASSERT(n->check_prefix(key, depth));`
- Line 1850: `SCYLLA_ASSERT(_root.is(nil_root));`
Total: 11 usages
### utils/cross-shard-barrier.hh
- Line 134: `SCYLLA_ASSERT(w.has_value());`
Total: 1 usages
### utils/double-decker.hh
- Line 200: `SCYLLA_ASSERT(!hint.match);`
- Line 366: `SCYLLA_ASSERT(nb == end._bucket);`
Total: 2 usages
### utils/intrusive-array.hh
- Line 217: `SCYLLA_ASSERT(!is_single_element());`
- Line 218: `SCYLLA_ASSERT(pos < max_len);`
- Line 225: `SCYLLA_ASSERT(pos > 0);`
- Line 238: `SCYLLA_ASSERT(train_len < max_len);`
- Line 329: `SCYLLA_ASSERT(idx < max_len); // may the force be with us...`
Total: 5 usages
### utils/intrusive_btree.hh
- Line 148: `SCYLLA_ASSERT(to.num_keys == 0);`
- Line 157: `SCYLLA_ASSERT(!attached());`
- Line 227: `SCYLLA_ASSERT(n->is_inline());`
- Line 232: `SCYLLA_ASSERT(n->is_inline());`
- Line 288: `SCYLLA_ASSERT(n.is_root());`
- Line 294: `SCYLLA_ASSERT(n.is_leftmost());`
- Line 302: `SCYLLA_ASSERT(n.is_rightmost());`
- Line 368: `SCYLLA_ASSERT(_root->is_leaf());`
- Line 371: `SCYLLA_ASSERT(_inline.empty());`
- Line 601: `SCYLLA_ASSERT(n->is_leaf());`
- Line 673: `SCYLLA_ASSERT(!is_end());`
- Line 674: `SCYLLA_ASSERT(h->attached());`
- Line 677: `SCYLLA_ASSERT(_idx < cur.n->_base.num_keys);`
- Line 679: `SCYLLA_ASSERT(_hook->attached());`
- Line 690: `SCYLLA_ASSERT(!is_end());`
- Line 764: `SCYLLA_ASSERT(n->num_keys > 0);`
- Line 994: `SCYLLA_ASSERT(!_it.is_end());`
- Line 1178: `SCYLLA_ASSERT(is_leaf());`
- Line 1183: `SCYLLA_ASSERT(is_root());`
- Line 1261: `SCYLLA_ASSERT(!is_root());`
- Line 1268: `SCYLLA_ASSERT(p->_base.num_keys > 0 && p->_kids[0] == this);`
- Line 1275: `SCYLLA_ASSERT(p->_base.num_keys > 0 && p->_kids[p->_base.num_keys] == this);`
- Line 1286: `SCYLLA_ASSERT(false);`
- Line 1291: `SCYLLA_ASSERT(!nb->is_inline());`
- Line 1296: `SCYLLA_ASSERT(!nb->is_inline());`
- Line 1338: `SCYLLA_ASSERT(_base.num_keys == 0);`
- Line 1373: `SCYLLA_ASSERT(!(is_leftmost() || is_rightmost()));`
- Line 1378: `SCYLLA_ASSERT(p->_kids[i] != this);`
- Line 1396: `SCYLLA_ASSERT(!is_leaf());`
- Line 1537: `SCYLLA_ASSERT(src != _base.num_keys); // need more keys for the next leaf`
- Line 1995: `SCYLLA_ASSERT(_parent.n->_base.num_keys > 0);`
- Line 2135: `SCYLLA_ASSERT(is_leaf());`
- Line 2144: `SCYLLA_ASSERT(_base.num_keys != 0);`
- Line 2160: `SCYLLA_ASSERT(_base.num_keys != 0);`
- Line 2172: `SCYLLA_ASSERT(!empty());`
- Line 2198: `SCYLLA_ASSERT(leaf == ret->is_leaf());`
Total: 36 usages
### utils/loading_shared_values.hh
- Line 203: `SCYLLA_ASSERT(!_set.size());`
Total: 1 usages
### utils/logalloc.cc
- Line 544: `SCYLLA_ASSERT(!_background_reclaimer);`
- Line 926: `SCYLLA_ASSERT(idx < _segments.size());`
- Line 933: `SCYLLA_ASSERT(idx < _segments.size());`
- Line 957: `SCYLLA_ASSERT(i != _segments.end());`
- Line 1323: `SCYLLA_ASSERT(_lsa_owned_segments_bitmap.test(idx_from_segment(seg)));`
- Line 1366: `SCYLLA_ASSERT(desc._region);`
- Line 1885: `SCYLLA_ASSERT(desc._buf_pointers.empty());`
- Line 1911: `SCYLLA_ASSERT(&desc == old_ptr->_desc);`
- Line 2105: `SCYLLA_ASSERT(seg);`
- Line 2116: `SCYLLA_ASSERT(seg);`
- Line 2341: `SCYLLA_ASSERT(pool.current_emergency_reserve_goal() >= n_segments);`
Total: 11 usages
### utils/logalloc.hh
- Line 307: `SCYLLA_ASSERT(this_shard_id() == _cpu);`
Total: 1 usages
### utils/reusable_buffer.hh
- Line 60: `SCYLLA_ASSERT(_refcount == 0);`
Total: 1 usages
## SCYLLA_ASSERT in Destructors
### api/column_family.cc
- Line 102: `SCYLLA_ASSERT(this_shard_id() == 0);`
Total: 1 usages
### cdc/generation.cc
- Line 846: `SCYLLA_ASSERT(_stopped);`
Total: 1 usages
### cdc/log.cc
- Line 173: `SCYLLA_ASSERT(_stopped);`
Total: 1 usages
### compaction/compaction_manager.cc
- Line 1074: `SCYLLA_ASSERT(_state == state::none || _state == state::stopped);`
Total: 1 usages
### db/hints/internal/hint_endpoint_manager.cc
- Line 188: `SCYLLA_ASSERT(stopped());`
Total: 1 usages
### mutation/partition_version.cc
- Line 347: `SCYLLA_ASSERT(!_snapshot->is_locked());`
Total: 1 usages
### reader_concurrency_semaphore.cc
- Line 1116: `SCYLLA_ASSERT(!_stats.waiters);`
- Line 1125: `SCYLLA_ASSERT(_inactive_reads.empty() && !_close_readers_gate.get_count() && !_p`
Total: 2 usages
### repair/row_level.cc
- Line 3647: `SCYLLA_ASSERT(_state == state::none || _state == state::stopped);`
Total: 1 usages
### replica/cell_locking.hh
- Line 371: `SCYLLA_ASSERT(_partitions.empty());`
Total: 1 usages
### replica/distributed_loader.cc
- Line 305: `SCYLLA_ASSERT(_sstable_directories.empty());`
Total: 1 usages
### schema/schema_registry.cc
- Line 45: `SCYLLA_ASSERT(!_schema);`
Total: 1 usages
### service/direct_failure_detector/failure_detector.cc
- Line 378: `SCYLLA_ASSERT(_ping_fiber.available());`
- Line 379: `SCYLLA_ASSERT(_notify_fiber.available());`
- Line 701: `SCYLLA_ASSERT(_shard_workers.empty());`
- Line 702: `SCYLLA_ASSERT(_destroy_subscriptions.available());`
- Line 703: `SCYLLA_ASSERT(_update_endpoint_fiber.available());`
- Line 707: `SCYLLA_ASSERT(!_impl);`
Total: 6 usages
### service/load_broadcaster.hh
- Line 37: `SCYLLA_ASSERT(_stopped);`
Total: 1 usages
### service/paxos/paxos_state.cc
- Line 323: `SCYLLA_ASSERT(_stopped);`
Total: 1 usages
### service/storage_proxy.cc
- Line 281: `SCYLLA_ASSERT(_stopped);`
- Line 3207: `SCYLLA_ASSERT(!_remote);`
Total: 2 usages
### service/tablet_allocator.cc
- Line 3288: `SCYLLA_ASSERT(_stopped);`
Total: 1 usages
### sstables/compressor.cc
- Line 1271: `SCYLLA_ASSERT(thread::running_in_thread());`
Total: 1 usages
### sstables/sstables_manager.cc
- Line 58: `SCYLLA_ASSERT(_closing);`
- Line 59: `SCYLLA_ASSERT(_active.empty());`
- Line 60: `SCYLLA_ASSERT(_undergoing_close.empty());`
Total: 3 usages
### sstables/sstables_manager.hh
- Line 188: `SCYLLA_ASSERT(_storage != nullptr);`
Total: 1 usages
### utils/cached_file.hh
- Line 477: `SCYLLA_ASSERT(_cache.empty());`
Total: 1 usages
### utils/disk_space_monitor.cc
- Line 66: `SCYLLA_ASSERT(_poller_fut.available());`
Total: 1 usages
### utils/file_lock.cc
- Line 34: `SCYLLA_ASSERT(_fd.get() != -1);`
- Line 36: `SCYLLA_ASSERT(r == 0);`
Total: 2 usages
### utils/logalloc.cc
- Line 1991: `SCYLLA_ASSERT(desc.is_empty());`
- Line 1996: `SCYLLA_ASSERT(segment_pool().descriptor(_active).is_empty());`
Total: 2 usages
### utils/lru.hh
- Line 41: `SCYLLA_ASSERT(!_lru_link.is_linked());`
Total: 1 usages
### utils/replicator.hh
- Line 221: `SCYLLA_ASSERT(_stopped);`
Total: 1 usages

View File

@@ -46,7 +46,7 @@ bool follower_progress::is_stray_reject(const append_reply::rejected& rejected)
// any reject during snapshot transfer is stray one
return true;
default:
SCYLLA_ASSERT(false);
scylla_assert(false, "invalid follower_progress state: {}", static_cast<int>(state));
}
return false;
}
@@ -87,7 +87,7 @@ bool follower_progress::can_send_to() {
// before starting to sync the log.
return false;
}
SCYLLA_ASSERT(false);
scylla_assert(false, "invalid follower_progress state in can_send_to: {}", static_cast<int>(state));
return false;
}

View File

@@ -134,11 +134,14 @@ bool is_compatible(column_kind k1, column_kind k2);
enum class cf_type : uint8_t {
standard,
super,
};
inline sstring cf_type_to_sstring(cf_type t) {
if (t == cf_type::standard) {
return "Standard";
} else if (t == cf_type::super) {
return "Super";
}
throw std::invalid_argument(format("unknown type: {:d}\n", uint8_t(t)));
}
@@ -146,6 +149,8 @@ inline sstring cf_type_to_sstring(cf_type t) {
inline cf_type sstring_to_cf_type(sstring name) {
if (name == "Standard") {
return cf_type::standard;
} else if (name == "Super") {
return cf_type::super;
}
throw std::invalid_argument(format("unknown type: {}\n", name));
}
@@ -683,13 +688,13 @@ public:
}
bool is_cql3_table() const {
return !is_dense() && is_compound();
return !is_super() && !is_dense() && is_compound();
}
bool is_compact_table() const {
return !is_cql3_table();
}
bool is_static_compact_table() const {
return !is_dense() && !is_compound();
return !is_super() && !is_dense() && !is_compound();
}
const table_id& id() const {
@@ -706,6 +711,10 @@ public:
return _raw._type;
}
bool is_super() const {
return _raw._type == cf_type::super;
}
gc_clock::duration gc_grace_seconds() const {
auto seconds = std::chrono::seconds(_raw._props.gc_grace_seconds);
return std::chrono::duration_cast<gc_clock::duration>(seconds);

View File

@@ -600,7 +600,7 @@ future<storage_service::nodes_to_notify_after_sync> storage_service::sync_raft_t
co_await update_topology_change_info(tmptr, ::format("{} {}/{}", rs.state, id, ip));
break;
case node_state::replacing: {
SCYLLA_ASSERT(_topology_state_machine._topology.req_param.contains(id));
scylla_assert(_topology_state_machine._topology.req_param.contains(id));
auto replaced_id = std::get<replace_param>(_topology_state_machine._topology.req_param[id]).replaced_id;
auto existing_ip = _address_map.find(locator::host_id{replaced_id.uuid()});
const auto replaced_host_id = locator::host_id(replaced_id.uuid());
@@ -688,7 +688,7 @@ future<> storage_service::notify_nodes_after_sync(nodes_to_notify_after_sync&& n
future<> storage_service::topology_state_load(state_change_hint hint) {
#ifdef SEASTAR_DEBUG
static bool running = false;
SCYLLA_ASSERT(!running); // The function is not re-entrant
scylla_assert(!running); // The function is not re-entrant
auto d = defer([] {
running = false;
});
@@ -854,7 +854,7 @@ future<> storage_service::topology_state_load(state_change_hint hint) {
}
future<> storage_service::topology_transition(state_change_hint hint) {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
co_await topology_state_load(std::move(hint)); // reload new state
_topology_state_machine.event.broadcast();
@@ -898,7 +898,7 @@ future<> storage_service::view_building_state_load() {
}
future<> storage_service::view_building_transition() {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
co_await view_building_state_load();
_view_building_state_machine.event.broadcast();
@@ -966,7 +966,7 @@ future<> storage_service::merge_topology_snapshot(raft_snapshot snp) {
}
future<> storage_service::update_service_levels_cache(qos::update_both_cache_levels update_only_effective_cache, qos::query_context ctx) {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
if (_sl_controller.local().is_v2()) {
// Skip cache update unless the topology upgrade is done
co_await _sl_controller.local().update_cache(update_only_effective_cache, ctx);
@@ -1520,7 +1520,7 @@ future<> storage_service::update_topology_with_local_metadata(raft::server& raft
}
future<> storage_service::start_upgrade_to_raft_topology() {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
if (_topology_state_machine._topology.upgrade_state != topology::upgrade_state_type::not_upgraded) {
co_return;
@@ -1572,7 +1572,7 @@ future<> storage_service::start_upgrade_to_raft_topology() {
}
topology::upgrade_state_type storage_service::get_topology_upgrade_state() const {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
return _topology_state_machine._topology.upgrade_state;
}
@@ -1841,7 +1841,7 @@ future<> storage_service::join_topology(sharded<service::storage_proxy>& proxy,
slogger.info("Nodes {} are alive", get_sync_nodes());
}
SCYLLA_ASSERT(_group0);
scylla_assert(_group0);
join_node_request_params join_params {
.host_id = _group0->load_my_id(),
@@ -2083,7 +2083,7 @@ future<> storage_service::join_topology(sharded<service::storage_proxy>& proxy,
if (!_sys_ks.local().bootstrap_complete()) {
// If we're not bootstrapping then we shouldn't have chosen a CDC streams timestamp yet.
SCYLLA_ASSERT(should_bootstrap() || !cdc_gen_id);
scylla_assert(should_bootstrap() || !cdc_gen_id);
// Don't try rewriting CDC stream description tables.
// See cdc.md design notes, `Streams description table V1 and rewriting` section, for explanation.
@@ -2167,7 +2167,7 @@ future<> storage_service::join_topology(sharded<service::storage_proxy>& proxy,
throw std::runtime_error(err);
}
SCYLLA_ASSERT(_group0);
scylla_assert(_group0);
co_await _group0->finish_setup_after_join(*this, _qp, _migration_manager.local(), false);
co_await _cdc_gens.local().after_join(std::move(cdc_gen_id));
@@ -2192,7 +2192,7 @@ future<> storage_service::join_topology(sharded<service::storage_proxy>& proxy,
}
future<> storage_service::track_upgrade_progress_to_topology_coordinator(sharded<service::storage_proxy>& proxy) {
SCYLLA_ASSERT(_group0);
scylla_assert(_group0);
while (true) {
_group0_as.check();
@@ -2316,7 +2316,7 @@ future<> storage_service::bootstrap(std::unordered_set<token>& bootstrap_tokens,
// After we pick a generation timestamp, we start gossiping it, and we stick with it.
// We don't do any other generation switches (unless we crash before complecting bootstrap).
SCYLLA_ASSERT(!cdc_gen_id);
scylla_assert(!cdc_gen_id);
cdc_gen_id = _cdc_gens.local().legacy_make_new_generation(bootstrap_tokens, !is_first_node()).get();
@@ -2349,9 +2349,9 @@ future<> storage_service::bootstrap(std::unordered_set<token>& bootstrap_tokens,
slogger.debug("Removing replaced endpoint {} from system.peers", replace_addr);
_sys_ks.local().remove_endpoint(replace_addr).get();
SCYLLA_ASSERT(replaced_host_id);
scylla_assert(replaced_host_id);
auto raft_id = raft::server_id{replaced_host_id.uuid()};
SCYLLA_ASSERT(_group0);
scylla_assert(_group0);
bool raft_available = _group0->wait_for_raft().get();
if (raft_available) {
slogger.info("Replace: removing {}/{} from group 0...", replace_addr, raft_id);
@@ -3000,7 +3000,7 @@ future<> storage_service::stop_transport() {
}
future<> storage_service::drain_on_shutdown() {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
return (_operation_mode == mode::DRAINING || _operation_mode == mode::DRAINED) ?
_drain_finished.get_future() : do_drain();
}
@@ -3025,7 +3025,7 @@ bool storage_service::is_topology_coordinator_enabled() const {
future<> storage_service::join_cluster(sharded<service::storage_proxy>& proxy,
start_hint_manager start_hm, gms::generation_type new_generation) {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
if (_sys_ks.local().was_decommissioned()) {
auto msg = sstring("This node was decommissioned and will not rejoin the ring unless "
@@ -3225,7 +3225,7 @@ future<> storage_service::join_cluster(sharded<service::storage_proxy>& proxy,
}
future<token_metadata_change> storage_service::prepare_token_metadata_change(mutable_token_metadata_ptr tmptr, const schema_getter& schema_getter) {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
std::exception_ptr ex;
token_metadata_change change;
@@ -3992,7 +3992,7 @@ future<> storage_service::decommission() {
slogger.info("DECOMMISSIONING: starts");
ctl.req.leaving_nodes = std::list<gms::inet_address>{endpoint};
SCYLLA_ASSERT(ss._group0);
scylla_assert(ss._group0);
bool raft_available = ss._group0->wait_for_raft().get();
try {
@@ -4044,7 +4044,7 @@ future<> storage_service::decommission() {
if (raft_available && left_token_ring) {
slogger.info("decommission[{}]: leaving Raft group 0", uuid);
SCYLLA_ASSERT(ss._group0);
scylla_assert(ss._group0);
ss._group0->leave_group0().get();
slogger.info("decommission[{}]: left Raft group 0", uuid);
}
@@ -4350,7 +4350,7 @@ future<> storage_service::removenode(locator::host_id host_id, locator::host_id_
auto stop_ctl = deferred_stop(ctl);
auto uuid = ctl.uuid();
const auto& tmptr = ctl.tmptr;
SCYLLA_ASSERT(ss._group0);
scylla_assert(ss._group0);
auto raft_id = raft::server_id{host_id.uuid()};
bool raft_available = ss._group0->wait_for_raft().get();
bool is_group0_member = raft_available && ss._group0->is_member(raft_id, false);
@@ -4470,7 +4470,7 @@ future<> storage_service::removenode(locator::host_id host_id, locator::host_id_
}
future<> storage_service::check_and_repair_cdc_streams() {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
if (!_cdc_gens.local_is_initialized()) {
return make_exception_future<>(std::runtime_error("CDC generation service not initialized yet"));
@@ -5784,7 +5784,7 @@ future<> storage_service::mutate_token_metadata(std::function<future<> (mutable_
}
future<> storage_service::update_topology_change_info(mutable_token_metadata_ptr tmptr, sstring reason) {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
try {
locator::dc_rack_fn get_dc_rack_by_host_id([this, &tm = *tmptr] (locator::host_id host_id) -> std::optional<locator::endpoint_dc_rack> {
@@ -5831,7 +5831,7 @@ future<> storage_service::keyspace_changed(const sstring& ks_name) {
}
future<locator::mutable_token_metadata_ptr> storage_service::prepare_tablet_metadata(const locator::tablet_metadata_change_hint& hint, mutable_token_metadata_ptr pending_token_metadata) {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
if (hint) {
co_await replica::update_tablet_metadata(_db.local(), _qp, pending_token_metadata->tablets(), hint);
} else {
@@ -5955,7 +5955,7 @@ void storage_service::start_tablet_split_monitor() {
}
future<> storage_service::snitch_reconfigured() {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
auto& snitch = _snitch.local();
co_await mutate_token_metadata([&snitch] (mutable_token_metadata_ptr tmptr) -> future<> {
// re-read local rack and DC info
@@ -6487,8 +6487,8 @@ future<> storage_service::stream_tablet(locator::global_tablet_id tablet) {
co_await utils::get_local_injector().inject("block_tablet_streaming", [this, &tablet] (auto& handler) -> future<> {
const auto keyspace = handler.get("keyspace");
const auto table = handler.get("table");
SCYLLA_ASSERT(keyspace);
SCYLLA_ASSERT(table);
scylla_assert(keyspace);
scylla_assert(table);
auto s = _db.local().find_column_family(tablet.table).schema();
bool should_block = s->ks_name() == *keyspace && s->cf_name() == *table;
while (should_block && !handler.poll_for_message() && !_async_gate.is_closed()) {
@@ -7509,7 +7509,7 @@ future<join_node_request_result> storage_service::join_node_request_handler(join
}
future<join_node_response_result> storage_service::join_node_response_handler(join_node_response_params params) {
SCYLLA_ASSERT(this_shard_id() == 0);
scylla_assert(this_shard_id() == 0);
// Usually this handler will only run once, but there are some cases where we might get more than one RPC,
// possibly happening at the same time, e.g.:

View File

@@ -360,7 +360,7 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
auto& topo = _topo_sm._topology;
auto it = topo.find(id);
SCYLLA_ASSERT(it);
scylla_assert(it);
std::optional<topology_request> req;
auto rit = topo.requests.find(id);
@@ -2310,7 +2310,7 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
switch (node.rs->state) {
case node_state::bootstrapping: {
SCYLLA_ASSERT(!node.rs->ring);
scylla_assert(!node.rs->ring);
auto num_tokens = std::get<join_param>(node.req_param.value()).num_tokens;
auto tokens_string = std::get<join_param>(node.req_param.value()).tokens_string;
@@ -2359,11 +2359,11 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
}
break;
case node_state::replacing: {
SCYLLA_ASSERT(!node.rs->ring);
scylla_assert(!node.rs->ring);
auto replaced_id = std::get<replace_param>(node.req_param.value()).replaced_id;
auto it = _topo_sm._topology.normal_nodes.find(replaced_id);
SCYLLA_ASSERT(it != _topo_sm._topology.normal_nodes.end());
SCYLLA_ASSERT(it->second.ring && it->second.state == node_state::normal);
scylla_assert(it != _topo_sm._topology.normal_nodes.end());
scylla_assert(it->second.ring && it->second.state == node_state::normal);
topology_mutation_builder builder(node.guard.write_timestamp());
@@ -3022,7 +3022,7 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
rtbuilder.set("start_time", db_clock::now());
switch (node.request.value()) {
case topology_request::join: {
SCYLLA_ASSERT(!node.rs->ring);
scylla_assert(!node.rs->ring);
// Write chosen tokens through raft.
builder.set_transition_state(topology::transition_state::join_group0)
.with_node(node.id)
@@ -3033,7 +3033,7 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
break;
}
case topology_request::leave:
SCYLLA_ASSERT(node.rs->ring);
scylla_assert(node.rs->ring);
// start decommission and put tokens of decommissioning nodes into write_both_read_old state
// meaning that reads will go to the replica being decommissioned
// but writes will go to new owner as well
@@ -3046,7 +3046,7 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
"start decommission");
break;
case topology_request::remove: {
SCYLLA_ASSERT(node.rs->ring);
scylla_assert(node.rs->ring);
builder.set_transition_state(topology::transition_state::tablet_draining)
.set_version(_topo_sm._topology.version + 1)
@@ -3058,7 +3058,7 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
break;
}
case topology_request::replace: {
SCYLLA_ASSERT(!node.rs->ring);
scylla_assert(!node.rs->ring);
builder.set_transition_state(topology::transition_state::join_group0)
.with_node(node.id)
@@ -3163,7 +3163,7 @@ class topology_coordinator : public endpoint_lifecycle_subscriber {
auto id = node.id;
SCYLLA_ASSERT(!_topo_sm._topology.transition_nodes.empty());
scylla_assert(!_topo_sm._topology.transition_nodes.empty());
release_node(std::move(node));
@@ -4013,7 +4013,7 @@ future<> topology_coordinator::stop() {
// but let's check all of them because we never reset these holders
// once they are added as barriers
for (auto& [stage, barrier]: tablet_state.barriers) {
SCYLLA_ASSERT(barrier.has_value());
scylla_assert(barrier.has_value());
co_await stop_background_action(barrier, gid, [stage] { return format("at stage {}", tablet_transition_stage_to_string(stage)); });
}

View File

@@ -251,7 +251,7 @@ void compression::discard_hidden_options() {
}
compressor& compression::get_compressor() const {
SCYLLA_ASSERT(_compressor);
scylla_assert(_compressor);
return *_compressor.get();
}

View File

@@ -170,7 +170,7 @@ struct compression {
const_iterator(const const_iterator& other) = default;
const_iterator& operator=(const const_iterator& other) {
SCYLLA_ASSERT(&_offsets == &other._offsets);
scylla_assert(&_offsets == &other._offsets);
_index = other._index;
return *this;
}

View File

@@ -24,6 +24,7 @@
#include "sstables/sstable_compressor_factory.hh"
#include "compressor.hh"
#include "exceptions/exceptions.hh"
#include "utils/assert.hh"
#include "utils/config_file_impl.hh"
#include "utils/class_registrator.hh"
#include "gms/feature_service.hh"
@@ -295,7 +296,7 @@ size_t zstd_processor::uncompress(const char* input, size_t input_len, char* out
if (_ddict) {
return ZSTD_decompress_usingDDict(dctx, output, output_len, input, input_len, _ddict->dict());
} else {
SCYLLA_ASSERT(!_cdict && "Write-only compressor used for reading");
scylla_assert(!_cdict && "Write-only compressor used for reading");
return ZSTD_decompressDCtx(dctx, output, output_len, input, input_len);
}
});
@@ -310,7 +311,7 @@ size_t zstd_processor::compress(const char* input, size_t input_len, char* outpu
if (_cdict) {
return ZSTD_compress_usingCDict(cctx, output, output_len, input, input_len, _cdict->dict());
} else {
SCYLLA_ASSERT(!_ddict && "Read-only compressor used for writing");
scylla_assert(!_ddict && "Read-only compressor used for writing");
return ZSTD_compressCCtx(cctx, output, output_len, input, input_len, _compression_level);
}
});
@@ -627,7 +628,7 @@ size_t lz4_processor::uncompress(const char* input, size_t input_len,
if (_ddict) {
ret = LZ4_decompress_safe_usingDict(input, output, input_len, output_len, reinterpret_cast<const char*>(_ddict->raw().data()), _ddict->raw().size());
} else {
SCYLLA_ASSERT(!_cdict && "Write-only compressor used for reading");
scylla_assert(!_cdict && "Write-only compressor used for reading");
ret = LZ4_decompress_safe(input, output, input_len, output_len);
}
if (ret < 0) {
@@ -657,7 +658,7 @@ size_t lz4_processor::compress(const char* input, size_t input_len,
LZ4_resetStream_fast(ctx);
}
} else {
SCYLLA_ASSERT(!_ddict && "Read-only compressor used for writing");
scylla_assert(!_ddict && "Read-only compressor used for writing");
ret = LZ4_compress_default(input, output + 4, input_len, LZ4_compressBound(input_len));
}
if (ret == 0) {
@@ -1268,7 +1269,7 @@ lz4_cdict::~lz4_cdict() {
}
std::unique_ptr<sstable_compressor_factory> make_sstable_compressor_factory_for_tests_in_thread() {
SCYLLA_ASSERT(thread::running_in_thread());
scylla_assert(thread::running_in_thread());
struct wrapper : sstable_compressor_factory {
using impl = default_sstable_compressor_factory;
sharded<impl> _impl;

View File

@@ -44,14 +44,14 @@ public:
* @return A list of `sampling_level` unique indices between 0 and `sampling_level`
*/
static const std::vector<int>& get_sampling_pattern(int sampling_level) {
SCYLLA_ASSERT(sampling_level > 0 && sampling_level <= BASE_SAMPLING_LEVEL);
scylla_assert(sampling_level > 0 && sampling_level <= BASE_SAMPLING_LEVEL);
auto& entry = _sample_pattern_cache[sampling_level-1];
if (!entry.empty()) {
return entry;
}
if (sampling_level <= 1) {
SCYLLA_ASSERT(_sample_pattern_cache[0].empty());
scylla_assert(_sample_pattern_cache[0].empty());
_sample_pattern_cache[0].push_back(0);
return _sample_pattern_cache[0];
}
@@ -96,7 +96,7 @@ public:
* @return a list of original indexes for current summary entries
*/
static const std::vector<int>& get_original_indexes(int sampling_level) {
SCYLLA_ASSERT(sampling_level > 0 && sampling_level <= BASE_SAMPLING_LEVEL);
scylla_assert(sampling_level > 0 && sampling_level <= BASE_SAMPLING_LEVEL);
auto& entry = _original_index_cache[sampling_level-1];
if (!entry.empty()) {
return entry;
@@ -128,7 +128,7 @@ public:
* @return the number of partitions before the next index summary entry, inclusive on one end
*/
static int get_effective_index_interval_after_index(int index, int sampling_level, int min_index_interval) {
SCYLLA_ASSERT(index >= -1);
scylla_assert(index >= -1);
const std::vector<int>& original_indexes = get_original_indexes(sampling_level);
if (index == -1) {
return original_indexes[0] * min_index_interval;

View File

@@ -31,7 +31,7 @@ public:
[[noreturn]] void on_parse_error(sstring message, std::optional<component_name> filename);
[[noreturn, gnu::noinline]] void on_bti_parse_error(uint64_t pos);
// Use this instead of SCYLLA_ASSERT() or assert() in code that is used while parsing SSTables.
// Use this instead of scylla_assert() or assert() in code that is used while parsing SSTables.
// SSTables can be corrupted either by ScyllaDB itself or by a freak accident like cosmic background
// radiation hitting the disk the wrong way. Either way a corrupt SSTable should not bring down the
// whole server. This method will call on_internal_error() if the condition is false.

View File

@@ -129,7 +129,7 @@ public:
/// way to determine that is overlapping its partition-ranges with the shard's
/// owned ranges.
static bool maybe_owned_by_this_shard(const sstables::generation_type& gen) {
SCYLLA_ASSERT(bool(gen));
scylla_assert(bool(gen));
int64_t hint = 0;
if (gen.is_uuid_based()) {
hint = std::hash<utils::UUID>{}(gen.as_uuid());

View File

@@ -91,7 +91,7 @@ public:
{}
void increment() {
SCYLLA_ASSERT(_range);
scylla_assert(_range);
if (!_range->next()) {
_range = nullptr;
}
@@ -102,7 +102,7 @@ public:
}
const ValueType dereference() const {
SCYLLA_ASSERT(_range);
scylla_assert(_range);
return _range->get_value();
}
@@ -153,7 +153,7 @@ public:
auto limit = std::min(_serialization_limit_size, _offset + clustering_block::max_block_size);
_current_block = {};
SCYLLA_ASSERT (_offset % clustering_block::max_block_size == 0);
scylla_assert (_offset % clustering_block::max_block_size == 0);
while (_offset < limit) {
auto shift = _offset % clustering_block::max_block_size;
if (_offset < _prefix.size(_schema)) {
@@ -280,7 +280,7 @@ public:
++_current_index;
}
} else {
SCYLLA_ASSERT(_mode == encoding_mode::large_encode_missing);
scylla_assert(_mode == encoding_mode::large_encode_missing);
while (_current_index < total_size) {
auto cell = _row.find_cell(_columns[_current_index].get().id);
if (!cell) {
@@ -1180,7 +1180,7 @@ void writer::write_cell(bytes_ostream& writer, const clustering_key_prefix* clus
if (cdef.is_counter()) {
if (!is_deleted) {
SCYLLA_ASSERT(!cell.is_counter_update());
scylla_assert(!cell.is_counter_update());
auto ccv = counter_cell_view(cell);
write_counter_value(ccv, writer, _sst.get_version(), [] (bytes_ostream& out, uint32_t value) {
return write_vint(out, value);
@@ -1489,7 +1489,7 @@ template <typename W>
requires Writer<W>
static void write_clustering_prefix(sstable_version_types v, W& writer, bound_kind_m kind,
const schema& s, const clustering_key_prefix& clustering) {
SCYLLA_ASSERT(kind != bound_kind_m::static_clustering);
scylla_assert(kind != bound_kind_m::static_clustering);
write(v, writer, kind);
auto is_ephemerally_full = ephemerally_full_prefix{s.is_compact_table()};
if (kind != bound_kind_m::clustering) {

View File

@@ -59,7 +59,7 @@ private:
// Live entry_ptr should keep the entry alive, except when the entry failed on loading.
// In that case, entry_ptr holders are not supposed to use the pointer, so it's safe
// to nullify those entry_ptrs.
SCYLLA_ASSERT(!ready());
scylla_assert(!ready());
}
}

View File

@@ -496,7 +496,7 @@ sstable_directory::move_foreign_sstables(sharded<sstable_directory>& source_dire
return make_ready_future<>();
}
// Should be empty, since an SSTable that belongs to this shard is not remote.
SCYLLA_ASSERT(shard_id != this_shard_id());
scylla_assert(shard_id != this_shard_id());
dirlog.debug("Moving {} unshared SSTables of {}.{} to shard {} ", info_vec.size(), _schema->ks_name(), _schema->cf_name(), shard_id);
return source_directory.invoke_on(shard_id, &sstables::sstable_directory::load_foreign_sstables, std::move(info_vec));
});
@@ -540,7 +540,7 @@ sstable_directory::collect_output_unshared_sstables(std::vector<sstables::shared
dirlog.debug("Collecting {} output SSTables (remote={})", resharded_sstables.size(), remote_ok);
return parallel_for_each(std::move(resharded_sstables), [this, remote_ok] (sstables::shared_sstable sst) {
auto shards = sst->get_shards_for_this_sstable();
SCYLLA_ASSERT(shards.size() == 1);
scylla_assert(shards.size() == 1);
auto shard = shards[0];
if (shard == this_shard_id()) {

View File

@@ -283,7 +283,7 @@ bool partitioned_sstable_set::store_as_unleveled(const shared_sstable& sst) cons
}
sstlog.info("SSTable {}, as_unleveled={}, expect_unleveled={}, sst_tr={}, overlap_ratio={}",
sst->generation(), as_unleveled, expect_unleveled, sst_tr, dht::overlap_ratio(_token_range, sst_tr));
SCYLLA_ASSERT(as_unleveled == expect_unleveled);
scylla_assert(as_unleveled == expect_unleveled);
});
return as_unleveled;
@@ -712,8 +712,8 @@ public:
// by !empty(bound) and `_it` invariant:
// _it != _end, _it->first <= bound, and filter(*_it->second) == true
SCYLLA_ASSERT(_cmp(_it->first, bound) <= 0);
// we don't SCYLLA_ASSERT(filter(*_it->second)) due to the requirement that `filter` is called at most once for each sstable
scylla_assert(_cmp(_it->first, bound) <= 0);
// we don't scylla_assert(filter(*_it->second)) due to the requirement that `filter` is called at most once for each sstable
// Find all sstables with the same position as `_it` (they form a contiguous range in the container).
auto next = std::find_if(std::next(_it), _end, [this] (const value_t& v) { return _cmp(v.first, _it->first) != 0; });
@@ -1301,7 +1301,7 @@ sstable_set::create_single_key_sstable_reader(
mutation_reader::forwarding fwd_mr,
const sstable_predicate& predicate,
sstables::integrity_check integrity) const {
SCYLLA_ASSERT(pr.is_singular() && pr.start()->value().has_key());
scylla_assert(pr.is_singular() && pr.start()->value().has_key());
return _impl->create_single_key_sstable_reader(cf, std::move(schema),
std::move(permit), sstable_histogram, pr, slice, std::move(trace_state), fwd, fwd_mr, predicate, integrity);
}
@@ -1408,7 +1408,7 @@ sstable_set::make_local_shard_sstable_reader(
{
auto reader_factory_fn = [s, permit, &slice, trace_state, fwd, fwd_mr, &monitor_generator, &predicate, integrity]
(shared_sstable& sst, const dht::partition_range& pr) mutable {
SCYLLA_ASSERT(!sst->is_shared());
scylla_assert(!sst->is_shared());
if (!predicate(*sst)) {
return make_empty_mutation_reader(s, permit);
}

View File

@@ -36,6 +36,7 @@
#include "utils/error_injection.hh"
#include "utils/to_string.hh"
#include "utils/assert.hh"
#include "data_dictionary/storage_options.hh"
#include "dht/sharder.hh"
#include "writer.hh"
@@ -4161,7 +4162,7 @@ future<data_sink> file_io_extension::wrap_sink(const sstable& sst, component_typ
}
future<data_source> file_io_extension::wrap_source(const sstable& sst, component_type c, data_source) {
SCYLLA_ASSERT(0 && "You are not supposed to get here, file_io_extension::wrap_source() is not implemented");
scylla_assert(0 && "You are not supposed to get here, file_io_extension::wrap_source() is not implemented");
}
namespace trie {

View File

@@ -55,9 +55,9 @@ sstables_manager::sstables_manager(
}
sstables_manager::~sstables_manager() {
SCYLLA_ASSERT(_closing);
SCYLLA_ASSERT(_active.empty());
SCYLLA_ASSERT(_undergoing_close.empty());
scylla_assert(_closing);
scylla_assert(_active.empty());
scylla_assert(_undergoing_close.empty());
}
void sstables_manager::subscribe(sstables_manager_event_handler& handler) {

View File

@@ -185,12 +185,12 @@ public:
size_t buffer_size = default_sstable_buffer_size);
shared_ptr<object_storage_client> get_endpoint_client(sstring endpoint) const {
SCYLLA_ASSERT(_storage != nullptr);
scylla_assert(_storage != nullptr);
return _storage->get_endpoint_client(std::move(endpoint));
}
bool is_known_endpoint(sstring endpoint) const {
SCYLLA_ASSERT(_storage != nullptr);
scylla_assert(_storage != nullptr);
return _storage->is_known_endpoint(std::move(endpoint));
}
@@ -241,7 +241,7 @@ public:
// Only for sstable::storage usage
sstables::sstables_registry& sstables_registry() const noexcept {
SCYLLA_ASSERT(_sstables_registry && "sstables_registry is not plugged");
scylla_assert(_sstables_registry && "sstables_registry is not plugged");
return *_sstables_registry;
}

View File

@@ -109,7 +109,7 @@ future<data_sink> filesystem_storage::make_data_or_index_sink(sstable& sst, comp
options.buffer_size = sst.sstable_buffer_size;
options.write_behind = 10;
SCYLLA_ASSERT(
scylla_assert(
type == component_type::Data
|| type == component_type::Index
|| type == component_type::Rows
@@ -129,7 +129,7 @@ future<data_sink> filesystem_storage::make_data_or_index_sink(sstable& sst, comp
}
future<data_source> filesystem_storage::make_data_or_index_source(sstable&, component_type type, file f, uint64_t offset, uint64_t len, file_input_stream_options opt) const {
SCYLLA_ASSERT(type == component_type::Data || type == component_type::Index);
scylla_assert(type == component_type::Data || type == component_type::Index);
co_return make_file_data_source(std::move(f), offset, len, std::move(opt));
}
@@ -717,7 +717,7 @@ static future<data_source> maybe_wrap_source(const sstable& sst, component_type
}
future<data_sink> object_storage_base::make_data_or_index_sink(sstable& sst, component_type type) {
SCYLLA_ASSERT(
scylla_assert(
type == component_type::Data
|| type == component_type::Index
|| type == component_type::Rows

View File

@@ -83,13 +83,13 @@ class storage {
// Internal, but can also be used by tests
virtual future<> change_dir_for_test(sstring nd) {
SCYLLA_ASSERT(false && "Changing directory not implemented");
scylla_assert(false && "Changing directory not implemented");
}
virtual future<> create_links(const sstable& sst, const std::filesystem::path& dir) const {
SCYLLA_ASSERT(false && "Direct links creation not implemented");
scylla_assert(false && "Direct links creation not implemented");
}
virtual future<> move(const sstable& sst, sstring new_dir, generation_type generation, delayed_commit_changes* delay) {
SCYLLA_ASSERT(false && "Direct move not implemented");
scylla_assert(false && "Direct move not implemented");
}
public:

View File

@@ -8,6 +8,7 @@
#include "bti_key_translation.hh"
#include "sstables/mx/types.hh"
#include "utils/assert.hh"
namespace sstables::trie {
@@ -56,7 +57,7 @@ void lazy_comparable_bytes_from_ring_position::init_first_fragment(dht::token dh
}
void lazy_comparable_bytes_from_ring_position::trim(const size_t n) {
SCYLLA_ASSERT(n <= _size);
scylla_assert(n <= _size);
_size = n;
}
@@ -127,7 +128,7 @@ lazy_comparable_bytes_from_clustering_position::lazy_comparable_bytes_from_clust
{}
void lazy_comparable_bytes_from_clustering_position::trim(unsigned n) {
SCYLLA_ASSERT(n <= _size);
scylla_assert(n <= _size);
_size = n;
}

View File

@@ -8,6 +8,7 @@
#include "bti_node_reader.hh"
#include "bti_node_type.hh"
#include "utils/assert.hh"
namespace sstables::trie {
@@ -448,37 +449,37 @@ seastar::future<> bti_node_reader::load(int64_t pos, const reader_permit& permit
}
trie::load_final_node_result bti_node_reader::read_node(int64_t pos) {
SCYLLA_ASSERT(cached(pos));
scylla_assert(cached(pos));
auto sp = _cached_page->get_view().subspan(pos % cached_file::page_size);
return bti_read_node(pos, sp);
}
trie::node_traverse_result bti_node_reader::walk_down_along_key(int64_t pos, const_bytes key) {
SCYLLA_ASSERT(cached(pos));
scylla_assert(cached(pos));
auto sp = _cached_page->get_view().subspan(pos % cached_file::page_size);
return bti_walk_down_along_key(pos, sp, key);
}
trie::node_traverse_sidemost_result bti_node_reader::walk_down_leftmost_path(int64_t pos) {
SCYLLA_ASSERT(cached(pos));
scylla_assert(cached(pos));
auto sp = _cached_page->get_view().subspan(pos % cached_file::page_size);
return bti_walk_down_leftmost_path(pos, sp);
}
trie::node_traverse_sidemost_result bti_node_reader::walk_down_rightmost_path(int64_t pos) {
SCYLLA_ASSERT(cached(pos));
scylla_assert(cached(pos));
auto sp = _cached_page->get_view().subspan(pos % cached_file::page_size);
return bti_walk_down_rightmost_path(pos, sp);
}
trie::get_child_result bti_node_reader::get_child(int64_t pos, int child_idx, bool forward) const {
SCYLLA_ASSERT(cached(pos));
scylla_assert(cached(pos));
auto sp = _cached_page->get_view().subspan(pos % cached_file::page_size);
return bti_get_child(pos, sp, child_idx, forward);
}
const_bytes bti_node_reader::get_payload(int64_t pos) const {
SCYLLA_ASSERT(cached(pos));
scylla_assert(cached(pos));
auto sp = _cached_page->get_view().subspan(pos % cached_file::page_size);
return bti_get_payload(pos, sp);
}

View File

@@ -204,7 +204,7 @@ inline void descend_leftmost_single_page(
next_pos = -1;
trail.back().child_idx = -1;
} else {
SCYLLA_ASSERT(traverse_one.n_children >= 1);
scylla_assert(traverse_one.n_children >= 1);
next_pos = traverse_one.body_pos - traverse_one.child_offset;
}
}

View File

@@ -9,6 +9,7 @@
#include <seastar/util/log.hh>
#include "writer_node.hh"
#include "common.hh"
#include "utils/assert.hh"
seastar::logger trie_logger("trie");
@@ -27,7 +28,7 @@ auto writer_node::create(const_bytes b, bump_allocator& alctr) -> ptr<writer_nod
}
auto writer_node::add_child(const_bytes b, bump_allocator& alctr) -> ptr<writer_node> {
SCYLLA_ASSERT(get_children().empty() || b[0] > get_children().back()->_transition[0]);
scylla_assert(get_children().empty() || b[0] > get_children().back()->_transition[0]);
reserve_children(get_children().size() + 1, alctr);
auto new_child = create(b, alctr);
push_child(new_child, alctr);

View File

@@ -406,7 +406,7 @@ inline void trie_writer<Output>::complete_until_depth(size_t depth) {
template <trie_writer_sink Output>
inline void trie_writer<Output>::add(size_t depth, const_bytes key_tail, const trie_payload& p) {
SCYLLA_ASSERT(p._payload_bits);
scylla_assert(p._payload_bits);
add_partial(depth, key_tail);
_stack.back()->set_payload(p);
}
@@ -416,10 +416,10 @@ template <trie_writer_sink Output>
inline void trie_writer<Output>::add_partial(size_t depth, const_bytes key_frag) {
expensive_log("writer_node::add_partial: end, stack={}, depth={}, _current_depth={} tail={}", _stack.size(), depth, _current_depth, fmt_hex(key_frag));
expensive_assert(_stack.size() >= 1);
SCYLLA_ASSERT(_current_depth >= depth);
scylla_assert(_current_depth >= depth);
// There is only one case where a zero-length tail is legal:
// when inserting the empty key.
SCYLLA_ASSERT(!key_frag.empty() || depth == 0);
scylla_assert(!key_frag.empty() || depth == 0);
complete_until_depth(depth);
if (key_frag.size()) {
@@ -444,7 +444,7 @@ inline sink_pos trie_writer<Output>::finish() {
if (!try_write(_stack[0])) {
_out.pad_to_page_boundary();
bool ok = try_write(_stack[0]);
SCYLLA_ASSERT(ok);
scylla_assert(ok);
}
auto root_pos = _stack[0]->_pos;

View File

@@ -203,7 +203,7 @@ private:
[[nodiscard]] ptr<T> alloc_impl(size_t n) {
using value_type = ptr<T>::value_type;
expensive_assert(n < _segment_size / sizeof(value_type));
SCYLLA_ASSERT(n > 0);
scylla_assert(n > 0);
auto sz = n * sizeof(value_type);
_remaining -= _remaining % alignof(value_type);
if (sz > _remaining) [[unlikely]] {
@@ -230,7 +230,7 @@ private:
public:
bump_allocator(size_t segment_size) : _segment_size(segment_size) {
SCYLLA_ASSERT(_segment_size % alignof(max_align_t) == 0);
scylla_assert(_segment_size % alignof(max_align_t) == 0);
}
// Total memory usage by this allocator.

View File

@@ -9,6 +9,7 @@
#pragma once
#include "writer_node.hh"
#include "utils/assert.hh"
#include "utils/small_vector.hh"
namespace sstables::trie {
@@ -111,9 +112,9 @@ void writer_node::write(ptr<writer_node> self, Output& out, bool guaranteed_fit)
fmt::ptr(node.get()), out.pos().value, node->get_children().size(), node->_node_size.value, node->_transition_length);
if (guaranteed_fit) {
SCYLLA_ASSERT(out.pos() - startpos == node->_branch_size);
scylla_assert(out.pos() - startpos == node->_branch_size);
node->_pos = sink_pos(out.write(*node, sink_pos(out.pos())));
SCYLLA_ASSERT(out.pos() - startpos == node->_branch_size + node->_node_size);
scylla_assert(out.pos() - startpos == node->_branch_size + node->_node_size);
} else {
if (uint64_t(out.serialized_size(*node, sink_pos(out.pos())).value) > out.bytes_left_in_page()) {
out.pad_to_page_boundary();

View File

@@ -0,0 +1,49 @@
/*
* Copyright (C) 2024-present ScyllaDB
*/
/*
* SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
*/
// Simple manual test for scylla_assert macro
#include "utils/assert.hh"
#include <iostream>
#include <exception>
void test_passing_assertion() {
scylla_assert(true);
scylla_assert(1 + 1 == 2);
scylla_assert(1 + 1 == 2, "basic math should work");
std::cout << "✓ All passing assertions succeeded\n";
}
void test_failing_assertion_without_message() {
try {
scylla_assert(false);
std::cout << "✗ Expected exception was not thrown\n";
} catch (const std::exception& e) {
std::cout << "✓ Caught expected exception: " << e.what() << "\n";
}
}
void test_failing_assertion_with_message() {
try {
scylla_assert(1 + 1 == 3, "this should fail");
std::cout << "✗ Expected exception was not thrown\n";
} catch (const std::exception& e) {
std::cout << "✓ Caught expected exception with message: " << e.what() << "\n";
}
}
int main() {
std::cout << "Testing scylla_assert macro...\n\n";
test_passing_assertion();
test_failing_assertion_without_message();
test_failing_assertion_with_message();
std::cout << "\n✓ All tests completed successfully\n";
return 0;
}

View File

@@ -21,14 +21,33 @@ static logging::logger ulogger("unimplemented");
std::string_view format_as(cause c) {
switch (c) {
case cause::API: return "API";
case cause::INDEXES: return "INDEXES";
case cause::LWT: return "LWT";
case cause::PAGING: return "PAGING";
case cause::AUTH: return "AUTH";
case cause::PERMISSIONS: return "PERMISSIONS";
case cause::TRIGGERS: return "TRIGGERS";
case cause::COUNTERS: return "COUNTERS";
case cause::METRICS: return "METRICS";
case cause::MIGRATIONS: return "MIGRATIONS";
case cause::GOSSIP: return "GOSSIP";
case cause::TOKEN_RESTRICTION: return "TOKEN_RESTRICTION";
case cause::LEGACY_COMPOSITE_KEYS: return "LEGACY_COMPOSITE_KEYS";
case cause::COLLECTION_RANGE_TOMBSTONES: return "COLLECTION_RANGE_TOMBSTONES";
case cause::RANGE_DELETES: return "RANGE_DELETES";
case cause::VALIDATION: return "VALIDATION";
case cause::REVERSED: return "REVERSED";
case cause::COMPRESSION: return "COMPRESSION";
case cause::NONATOMIC: return "NONATOMIC";
case cause::CONSISTENCY: return "CONSISTENCY";
case cause::HINT: return "HINT";
case cause::SUPER: return "SUPER";
case cause::WRAP_AROUND: return "WRAP_AROUND";
case cause::STORAGE_SERVICE: return "STORAGE_SERVICE";
case cause::API: return "API";
case cause::SCHEMA_CHANGE: return "SCHEMA_CHANGE";
case cause::MIXED_CF: return "MIXED_CF";
case cause::SSTABLE_FORMAT_M: return "SSTABLE_FORMAT_M";
}
abort();
}

View File

@@ -15,14 +15,33 @@
namespace unimplemented {
enum class cause {
API, // REST API features not implemented (force_user_defined_compaction, split_output in major compaction)
INDEXES, // Secondary index features (filtering on collections, clustering columns)
TRIGGERS, // Trigger support in schema tables and storage proxy
METRICS, // Query processor metrics
VALIDATION, // Schema validation in DDL statements (drop keyspace, truncate, token functions)
REVERSED, // Reversed types in CQL protocol
HINT, // Hint replaying in batchlog manager
SUPER, // Super column families (legacy Cassandra feature, never supported)
API,
INDEXES,
LWT,
PAGING,
AUTH,
PERMISSIONS,
TRIGGERS,
COUNTERS,
METRICS,
MIGRATIONS,
GOSSIP,
TOKEN_RESTRICTION,
LEGACY_COMPOSITE_KEYS,
COLLECTION_RANGE_TOMBSTONES,
RANGE_DELETES,
VALIDATION,
REVERSED,
COMPRESSION,
NONATOMIC,
CONSISTENCY,
HINT,
SUPER,
WRAP_AROUND, // Support for handling wrap around ranges in queries on database level and below
STORAGE_SERVICE,
SCHEMA_CHANGE,
MIXED_CF,
SSTABLE_FORMAT_M,
};
[[noreturn]] void fail(cause what);

View File

@@ -4,6 +4,27 @@
#pragma once
#include <cassert>
#include <seastar/core/format.hh>
#include "utils/on_internal_error.hh"
/// Like assert(), but independent of NDEBUG. Active in all build modes.
#define SCYLLA_ASSERT(x) do { if (!(x)) { __assert_fail(#x, __FILE__, __LINE__, __PRETTY_FUNCTION__); } } while (0)
/// Exception-throwing assertion based on on_internal_error()
///
/// Unlike SCYLLA_ASSERT which crashes the process, scylla_assert() throws an
/// exception (or aborts depending on configuration). This prevents cluster-wide
/// crashes and loss of availability by allowing graceful error handling.
///
/// Use this instead of SCYLLA_ASSERT in contexts where throwing exceptions is safe.
/// DO NOT use in:
/// - noexcept functions
/// - destructors
/// - contexts with special exception-safety requirements
#define scylla_assert(condition, ...) \
do { \
if (!(condition)) [[unlikely]] { \
::utils::on_internal_error(::seastar::format("Assertion failed: {} at {}:{} in {}" __VA_OPT__(": {}"), \
#condition, __FILE__, __LINE__, __PRETTY_FUNCTION__ __VA_OPT__(,) __VA_ARGS__)); \
} \
} while (0)