* Also dump diagnostics when a read times out while active (not queued).
* Add the "Trigger permit" line, containing the details of the permit which caused the diagnostics dump (by e.g. timing out).
* Add the "Identified bottleneck(s)" line, containing the identified bottlenecks which lead to permits being queued. This line is missing if no such bottleneck can be identified.
* Document the new features, as well as the stat dump, which was added some time ago.

Example of the new dump format:
```
INFO 2024-09-12 08:09:48,046 [shard 0:main] reader_concurrency_semaphore - Semaphore reader_concurrency_semaphore_dump_reader_diganostics with 8/10 count and 106192275/32768 memory resources: timed out, dumping permit diagnostics:
Trigger permit: count=0, memory=0, table=ks.tbl0, operation=mutation-query, state=waiting_for_admission
Identified bottleneck(s): memory

permits count   memory  table/operation/state
3       2       26M     *.*/push-view-updates-2/active
3       2       16M     ks.tbl1/push-view-updates-1/active
1       1       15M     ks.tbl2/push-view-updates-1/active
1       0       13M     ks.tbl1/multishard-mutation-query/active
1       0       12M     ks.tbl0/push-view-updates-1/active
1       1       10M     ks.tbl3/push-view-updates-2/active
1       1       6060K   ks.tbl3/multishard-mutation-query/active
2       1       1930K   ks.tbl0/push-view-updates-2/active
1       0       1216K   ks.tbl0/multishard-mutation-query/active
6       0       0B      ks.tbl1/shard-reader/waiting_for_admission
3       0       0B      *.*/data-query/waiting_for_admission
9       0       0B      ks.tbl0/mutation-query/waiting_for_admission
2       0       0B      ks.tbl2/shard-reader/waiting_for_admission
4       0       0B      ks.tbl0/shard-reader/waiting_for_admission
9       0       0B      ks.tbl0/data-query/waiting_for_admission
7       0       0B      ks.tbl3/mutation-query/waiting_for_admission
5       0       0B      ks.tbl1/mutation-query/waiting_for_admission
2       0       0B      ks.tbl2/mutation-query/waiting_for_admission
8       0       0B      ks.tbl1/data-query/waiting_for_admission
1       0       0B      *.*/mutation-query/waiting_for_admission

26      0       0B      permits omitted for brevity

96      8       101M    total

Stats:
permit_based_evictions: 0
time_based_evictions: 0
inactive_reads: 0
total_successful_reads: 0
total_failed_reads: 0
total_reads_shed_due_to_overload: 0
total_reads_killed_due_to_kill_limit: 0
reads_admitted: 1
reads_enqueued_for_admission: 82
reads_enqueued_for_memory: 0
reads_admitted_immediately: 1
reads_queued_because_ready_list: 0
reads_queued_because_need_cpu_permits: 82
reads_queued_because_memory_resources: 0
reads_queued_because_count_resources: 0
reads_queued_with_eviction: 0
total_permits: 97
current_permits: 96
need_cpu_permits: 0
awaits_permits: 0
disk_reads: 0
sstables_read: 0
```

Fixes: https://github.com/scylladb/scylladb/issues/19535

Improvement, no backport needed.

Closes scylladb/scylladb#20545

* github.com:scylladb/scylladb:
  docs/dev/reader-concurrency-semaphore.md: update the documentation on diagnostics dumps
  test/boost/reader_concurrency_semaphore_test: test the new diagnostics functionality
  reader_concurrency_semaphore: add bottleneck self-diagnosis to diagnosis dump
  reader_concurrency_semaphore: include trigger permit in diagnostic dump
  reader_concurrency_semaphore: propagate permit to do_dump_reader_permit_diagnostics()
  reader_concurrency_semaphore: use consistent exception type for timeout
  reader_concurrency_semaphore: dump diagnostics when non-waiting reader times out
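As an aside, the bottleneck table of such a dump can be pulled out of a log with a few lines of shell. This is only an illustrative sketch: the log file name `scylla.log` and the sample contents below are stand-ins, not part of the patch; only the marker lines ("Identified bottleneck(s):" and "Stats:") come from the dump format above.

```shell
# Create a small stand-in log with the dump markers from the example above.
cat > scylla.log <<'EOF'
Trigger permit: count=0, memory=0, table=ks.tbl0, operation=mutation-query, state=waiting_for_admission
Identified bottleneck(s): memory
permits count   memory  table/operation/state
3       2       26M     *.*/push-view-updates-2/active
96      8       101M    total
Stats:
permit_based_evictions: 0
EOF
# Print only the bottleneck table: start at the "Identified bottleneck(s):"
# marker, stop before the "Stats:" section.
awk '/^Identified bottleneck\(s\):/{p=1} /^Stats:/{p=0} p' scylla.log
```

Running this prints the bottleneck header and table rows while omitting the trigger-permit line and the stats section.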
Scylla in-source tests.
For details on how to run the tests, see docs/dev/testing.md
Shared C++ utilities and libraries are in lib/; shared Python code is in pylib/.
alternator - Python tests which connect to a single server and use the DynamoDB API
unit, boost, raft - unit tests in C++
cql-pytest - Python tests which connect to a single server and use CQL
topology* - tests that set up clusters and add/remove nodes
cql - approval tests that use CQL and pre-recorded output
rest_api - tests for the Scylla REST API (port 9000)
scylla-gdb - tests for the scylla-gdb.py helper script
nodetool - tests for the C++ implementation of nodetool
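A suite from the list above is typically run through the top-level test.py runner described in docs/dev/testing.md. The sketch below is a hedged illustration, not authoritative: the exact flags and suite names may differ between versions, so check docs/dev/testing.md for the current invocation.

```shell
# Hypothetical convenience wrapper around the test.py runner.
# Usage (illustrative): run_suite dev boost
run_suite() {
    mode=$1   # build mode, e.g. dev or release (assumed flag name)
    suite=$2  # one of the suite directories listed above
    ./test.py --mode "$mode" "$suite"
}
```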
If an existing folder fits, consider adding your test to it. Create a new folder only for a large new category/subsystem, or when the test environment differs significantly from every existing suite: for example, you plan to start scylladb with a different configuration, and you intend to add many tests and would like them to reuse an existing Scylla cluster (clusters can be reused by tests within the same folder).
To add a new folder, create the directory, then copy suite.ini from an existing suite and edit it.
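The steps above can be sketched as shell commands. Everything here is a hypothetical stand-in: the suite names and the suite.ini contents are invented for the sketch, and a real new suite would start from whichever existing suite is closest to it.

```shell
# Work in a throwaway directory so the sketch is self-contained.
cd "$(mktemp -d)"

# Stand-in for an existing suite whose configuration we start from
# (both the directory name and the ini contents are hypothetical).
mkdir -p test/existing_suite
printf '[suite]\ntype = Python\n' > test/existing_suite/suite.ini

# Adding a new folder: create the directory...
mkdir -p test/my_new_suite
# ...then copy suite.ini from an existing suite and edit it.
cp test/existing_suite/suite.ini test/my_new_suite/suite.ini
```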