scylladb/docs/design-notes/system_keyspace.md
Botond Dénes 64f658aea4 db/system_keyspace: add snapshots virtual table
Lists the equivalent of the `nodetool listsnapshots` command.
2021-11-05 15:42:41 +02:00


System keyspace layout

This section describes the layout and usage of the system.* tables.

The system.large_* tables

Scylla performs better if partitions, rows, or cells are not too large. To help diagnose cases where these grow too large, Scylla keeps three tables that record large partitions, rows, and cells, respectively.

The meaning of an entry in each of these tables is similar: it records that a particular sstable contains a large partition, row, or cell. In particular, this implies that:

  • There is no entry until compaction aggregates enough data in a single sstable.
  • The entry stays around until the sstable is deleted.

In addition, entries have a TTL of 30 days.

system.large_partitions

The large partitions table can be used to track the largest partitions in a cluster.

Schema:

CREATE TABLE system.large_partitions (
    keyspace_name text,
    table_name text,
    sstable_name text,
    partition_size bigint,
    partition_key text,
    compaction_time timestamp,
    PRIMARY KEY ((keyspace_name, table_name), sstable_name, partition_size, partition_key)
) WITH CLUSTERING ORDER BY (sstable_name ASC, partition_size DESC, partition_key ASC);

Example usage

Extracting large partitions info

SELECT * FROM system.large_partitions;

Extracting large partitions info for a single table

SELECT * FROM system.large_partitions WHERE keyspace_name = 'ks1' AND table_name = 'standard1';

system.large_rows

The large rows table can be used to track large clustering and static rows in a cluster.

This table is currently only used with the MC format (issue #4868).

Schema:

CREATE TABLE system.large_rows (
    keyspace_name text,
    table_name text,
    sstable_name text,
    row_size bigint,
    partition_key text,
    clustering_key text,
    compaction_time timestamp,
    PRIMARY KEY ((keyspace_name, table_name), sstable_name, row_size, partition_key, clustering_key)
) WITH CLUSTERING ORDER BY (sstable_name ASC, row_size DESC, partition_key ASC, clustering_key ASC);

Example usage

Extracting large rows info

SELECT * FROM system.large_rows;

Extracting large rows info for a single table

SELECT * FROM system.large_rows WHERE keyspace_name = 'ks1' AND table_name = 'standard1';

system.large_cells

The large cells table can be used to track large cells in a cluster.

This table is currently only used with the MC format (issue #4868).

Schema:

CREATE TABLE system.large_cells (
    keyspace_name text,
    table_name text,
    sstable_name text,
    cell_size bigint,
    partition_key text,
    clustering_key text,
    column_name text,
    compaction_time timestamp,
    PRIMARY KEY ((keyspace_name, table_name), sstable_name, cell_size, partition_key, clustering_key, column_name)
) WITH CLUSTERING ORDER BY (sstable_name ASC, cell_size DESC, partition_key ASC, clustering_key ASC, column_name ASC);

Note that a collection is just one cell. There is no information about the size of each collection element.

Example usage

Extracting large cells info

SELECT * FROM system.large_cells;

Extracting large cells info for a single table

SELECT * FROM system.large_cells WHERE keyspace_name = 'ks1' AND table_name = 'standard1';

system.truncated

Holds truncation replay positions per table and shard.

Schema:

CREATE TABLE system.truncated (
    table_uuid uuid,    -- id of truncated table
    shard int,          -- shard
    position int,       -- replay position
    segment_id bigint,  -- replay segment
    truncated_at timestamp static,  -- truncation time
    PRIMARY KEY (table_uuid, shard)
) WITH CLUSTERING ORDER BY (shard ASC);

When a table is truncated, its sstables are removed and the current replay position for each shard (the last mutation to be committed to either an sstable or a memtable) is collected. These positions are then inserted into the above table, using the shard as the clustering key.

When doing commitlog replay (in case of a crash), the data is read from the above table and mutations are filtered based on the replay positions to ensure truncated data is not resurrected.
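As an illustration, the stored replay positions for a single table can be inspected with a query along these lines (the UUID below is a placeholder; the real table id can be looked up in the id column of system_schema.tables):

```sql
-- Placeholder UUID: substitute the id of the truncated table,
-- as found in system_schema.tables.
SELECT shard, position, segment_id, truncated_at
FROM system.truncated
WHERE table_uuid = 00000000-0000-0000-0000-000000000000;
```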

Note that before the above table was added, truncation records were kept in the truncated_at map column of the system.local table. When booting up, Scylla merges the data in the legacy store with the data in the system.truncated table. Until the whole cluster agrees on the TRUNCATION_TABLE feature, truncation writes both new and legacy records. Once the feature is agreed upon, the legacy map is removed.

Virtual tables in the system keyspace

Virtual tables behave just like regular tables from the user's point of view. The difference between them and regular tables comes down to how they are implemented. While regular tables have memtables/commitlog/sstables and all you would expect from CQL tables, virtual tables translate some in-memory structure to CQL result format. For more details see docs/guides/virtual-tables.md.

Below is a list of the virtual tables, sorted in alphabetical order (please keep it so when modifying!).

system.cluster_status

Contains information about the status of each endpoint in the cluster. Equivalent of the nodetool status command.

Schema:

CREATE TABLE system.cluster_status (
    peer inet PRIMARY KEY,
    dc text,
    host_id uuid,
    load text,
    owns float,
    status text,
    tokens int,
    up boolean
)

Implemented by cluster_status_table in db/system_keyspace.cc.
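Following the usage pattern of the tables above, the status of all nodes can be listed with a query like:

```sql
-- List every endpoint known to this node, with its datacenter,
-- liveness, and token/ownership information.
SELECT peer, dc, status, up, tokens, owns
FROM system.cluster_status;
```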

system.size_estimates

Size estimates for individual token-ranges of each keyspace/table.

Schema:

CREATE TABLE system.size_estimates (
    keyspace_name text,
    table_name text,
    range_start text,
    range_end text,
    mean_partition_size bigint,
    partitions_count bigint,
    PRIMARY KEY (keyspace_name, table_name, range_start, range_end)
)

Implemented by size_estimates_mutation_reader in db/size_estimates_virtual_reader.{hh,cc}.
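For example, the per-token-range estimates of a single table can be retrieved with a query like the following (the keyspace and table names are examples):

```sql
-- Per-range estimates for one table; summing partitions_count over
-- all returned rows gives a rough total partition count.
SELECT range_start, range_end, mean_partition_size, partitions_count
FROM system.size_estimates
WHERE keyspace_name = 'ks1' AND table_name = 'standard1';
```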

system.snapshots

The list of snapshots on the node. Equivalent to the nodetool listsnapshots command.

Schema:

CREATE TABLE system.snapshots (
    keyspace_name text,
    table_name text,
    snapshot_name text,
    live bigint,
    total bigint,
    PRIMARY KEY (keyspace_name, table_name, snapshot_name)
)

Implemented by snapshots_table in db/system_keyspace.cc.
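For example, the snapshots of a single keyspace can be listed with a query like the following ('ks1' is an example name):

```sql
-- live/total are the snapshot's live and total sizes in bytes.
SELECT table_name, snapshot_name, live, total
FROM system.snapshots
WHERE keyspace_name = 'ks1';
```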

system.token_ring

The ring description for each keyspace. Equivalent of the nodetool describe_ring $KEYSPACE command (when filtered with WHERE keyspace_name = '$KEYSPACE'). Overlaps with the output of nodetool ring.

Schema:

CREATE TABLE system.token_ring (
    keyspace_name text,
    start_token text,
    endpoint inet,
    dc text,
    end_token text,
    rack text,
    PRIMARY KEY (keyspace_name, start_token, endpoint)
)

Implemented by token_ring_table in db/system_keyspace.cc.
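For example, the ring description for a single keyspace can be obtained with a query like the following ('ks1' is an example name):

```sql
-- One row per (token range, replica endpoint) pair for the keyspace.
SELECT start_token, end_token, endpoint, dc, rack
FROM system.token_ring
WHERE keyspace_name = 'ks1';
```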