Files
scylladb/test/cql-pytest/test_ttl.py
Nadav Har'El 59fe6a402c test/cql-pytest: use unique keys instead of random keys
Some of the tests in test/cql-pytest share the same table but use
different keys to ensure they don't collide. Before this patch we used a
random key, which was usually fine, but we recently noticed that the
pytest-randomly plugin may cause different tests to run through the *same*
sequence of random numbers and ruin our intent that different tests use
different keys.

So instead of using a *random* key, let's use a *unique* key. We can
achieve this uniqueness trivially - using a counter variable - because
the uniqueness is only needed inside a single temporary table, which
is different in every run.
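The counter-based approach described here might look like the following sketch. (The real `unique_key_int()` helper lives in cql-pytest's util.py; this is an illustrative assumption, not necessarily the exact implementation.)

```python
import itertools

# Hypothetical sketch of a counter-backed unique-key helper.
# A process-wide counter; each call returns the next integer.
# Uniqueness only needs to hold within one test run, since every
# run creates its own temporary tables.
_unique_key_counter = itertools.count(1)

def unique_key_int():
    # Unlike a random key, this can never repeat within a run,
    # even if a plugin like pytest-randomly reseeds the RNG so
    # that two tests see the same random-number sequence.
    return next(_unique_key_counter)
```

Because `itertools.count` never repeats, two tests sharing a table can never collide on a key, no matter how the random seed is set.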

Another benefit is that it will now be clearer that the tests are
deterministic and not random - the intent of a random_string() key
was never to randomly walk the entire key space (random_string()
anyway had a pretty narrow idea of what a random string looks like) -
it was just to get a unique key.

Refs #9988 (fixes it for cql-pytest, but not for test/alternator)

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2022-01-31 09:01:23 +02:00


# Copyright 2021-present ScyllaDB
#
# SPDX-License-Identifier: AGPL-3.0-or-later
#############################################################################
# Various tests for Scylla's ttl feature - USING TTL and DEFAULT_TIME_TO_LIVE
#############################################################################
from util import new_test_table, unique_key_int
import pytest
import time
# Fixture with a table with a default TTL set to 1, so new data would be
# inserted by default with TTL of 1.
@pytest.fixture(scope="module")
def table_ttl_1(cql, test_keyspace):
    with new_test_table(cql, test_keyspace, 'p int primary key, v int', 'with default_time_to_live = 1') as table:
        yield table

@pytest.fixture(scope="module")
def table_ttl_100(cql, test_keyspace):
    with new_test_table(cql, test_keyspace, 'p int primary key, v int', 'with default_time_to_live = 100') as table:
        yield table
# Basic test that data inserted *without* an explicit ttl into a table with
# default TTL inherits this TTL, but an explicit TTL overrides it.
def test_basic_default_ttl(cql, table_ttl_1):
    p1 = unique_key_int()
    p2 = unique_key_int()
    cql.execute(f'INSERT INTO {table_ttl_1} (p, v) VALUES ({p1}, 1) USING TTL 1000')
    cql.execute(f'INSERT INTO {table_ttl_1} (p, v) VALUES ({p2}, 1)')
    # The p2 item should expire in *less* than one second (it will expire
    # at the next whole second).
    start = time.time()
    while len(list(cql.execute(f'SELECT * from {table_ttl_1} where p={p2}'))):
        assert time.time() < start + 2
        time.sleep(0.1)
    # p1 should not have expired yet. By the way, its current ttl(v) would
    # normally be exactly 999 now, but theoretically could be a bit lower in
    # case of delays in the test.
    assert len(list(cql.execute(f'SELECT * from {table_ttl_1} where p={p1}'))) == 1
# Above we tested that explicitly setting "using ttl" overrides the default
# ttl set for the table. Here we check that the special case of
# "using ttl 0" (which means the item should never expire) also overrides
# the default TTL. Reproduces issue #9842.
@pytest.mark.xfail(reason="Issue #9842")
def test_default_ttl_0_override(cql, table_ttl_100):
    p = unique_key_int()
    cql.execute(f'INSERT INTO {table_ttl_100} (p, v) VALUES ({p}, 1) USING TTL 0')
    # We can immediately check that this item's TTL is "null", meaning it
    # will never expire. There's no need to have any sleeps.
    assert list(cql.execute(f'SELECT ttl(v) from {table_ttl_100} where p={p}')) == [(None,)]