Ensure that the size of the cache never exceeds "max_size".
Before this patch the size of the cache could grow arbitrarily larger than
the requested value during the refresh time period, which is clearly undesirable
behaviour.
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Sometimes we don't want the cached values to be periodically reloaded.
This patch adds the ability to control this with a ReloadEnabled template parameter.
When reloading is not needed, the "loading" function is not given to the constructor
but rather to the get_ptr(key, loader) method (currently the only method in use; we may add
a corresponding get(key, loader) method in the future if needed).
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
The current get(...) interface restricts the cache to copy-constructible
values (it returns future<Tp>).
To make it work with non-copyable values we need to introduce an interface that
returns something like a reference to the cached value (as regular containers do).
We can't return future<Tp&> since the caller would have to somehow ensure that the
underlying value is still alive. A much safer and easier-to-use approach is to return
a shared_ptr-like pointer to that value.
Luckily for us, the value we actually store in the cache is already wrapped in an
lw_shared_ptr, so we may simply return an object that presents itself as a
smart_pointer<Tp> while keeping a "reference" to the object stored in the cache.
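A minimal sketch of that wrapper idea, using std::shared_ptr as a stand-in for
lw_shared_ptr (the real class name and internals may differ):

    #include <memory>
    #include <utility>

    // Presents itself as a smart pointer to Tp while sharing ownership of
    // the value stored in the cache, so the caller never sees a dangling
    // reference.
    template <typename Tp>
    class value_ptr {
        std::shared_ptr<Tp> _value;  // stand-in for lw_shared_ptr<Tp>
    public:
        explicit value_ptr(std::shared_ptr<Tp> v) noexcept : _value(std::move(v)) {}
        Tp& operator*() const noexcept { return *_value; }
        Tp* operator->() const noexcept { return _value.get(); }
    };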
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Allow a variable entry size parameter.
Provide an EntrySize functor that returns the size of a
specific entry.
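For example, an EntrySize functor might look like this (names here are
illustrative, not the actual ones):

    #include <cstddef>
    #include <string>

    // Charges each entry by its heap footprint rather than a flat count.
    struct string_entry_size {
        std::size_t operator()(const std::string& v) const noexcept {
            return sizeof(v) + v.capacity();
        }
    };

    // A trivial policy (one unit per entry) preserves the old fixed-size
    // accounting.
    struct simple_entry_size {
        template <typename T>
        std::size_t operator()(const T&) const noexcept { return 1; }
    };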
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Get rid of the "proprietary" solution for asynchronous on-demand loading of values.
Use utils::loading_shared_values instead.
We still need to maintain an intrusive set and list for efficient shrink and invalidate
operations, but their entry no longer contains the actual key and value;
it holds a loading_shared_values::entry_ptr instead, which is essentially a shared
pointer to a key-value pair.
In general, this adds another level of dereferencing in order to reach the key, but since
we use bi::store_hash<true> in the hook and bi::compare_hash<true> in the bi::unordered_set,
this should not translate into additional set lookup latency.
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Arm the timer with a period that is no greater than either permissions_validity_in_ms
or permissions_update_interval_in_ms, in order to ensure that we are never stuck with
values older than permissions_validity_in_ms.
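In other words (a sketch; the parameter names come straight from the config
options above):

    #include <algorithm>
    #include <chrono>

    // Pick a timer period bounded by both settings, so cached entries can
    // never outlive permissions_validity_in_ms.
    std::chrono::milliseconds
    refresh_period(std::chrono::milliseconds permissions_validity_in_ms,
                   std::chrono::milliseconds permissions_update_interval_in_ms) {
        return std::min(permissions_validity_in_ms,
                        permissions_update_interval_in_ms);
    }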
Fixes #2590
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
"_free_segments_in_zones is not adjusted by
segment_pool::reclaim_segments() for empty zones on reclaim under some
conditions. For instance when some zone becomes empty due to regular
free() and then reclaiming is called from the std allocator, and it is
satisfied from a zone after the one which is empty. This would result
in free memory in such zone to appear as being leaked due to corrupted
free segment count, which may cause a later reclaim to fail. This
could result in bad_allocs.
The fix is to always collect such zones.
Fixes#3129
Refs #3119
Refs #3120"
* 'tgrabiec/fix-free_segments_in_zones-leak' of github.com:scylladb/seastar-dev:
tests: lsa: Test _free_segments_in_zones is kept correct on reclaim
lsa: Expose max_zone_segments for tests
lsa: Expose tracker::non_lsa_used_space()
lsa: Fix memory leak on zone reclaim
(cherry picked from commit 4ad212dc01)
Fixes #2938.
* 'tgrabiec/fix-range-tombstone-list-exception-safety-v1' of github.com:scylladb/seastar-dev:
tests: range_tombstone_list: Add test for exception safety of apply()
tests: Introduce range_tombstone_list assertions
cache: Make range tombstone merging exception-safe
range_tombstone_list: Introduce apply_monotonically()
range_tombstone_list: Make reverter::erase() exception-safe
range_tombstone_list: Fix memory leaks in case of bad_alloc
mutation_partition: Fix abort in case range tombstone copying fails
managed_bytes: Declare copy constructor as allocation point
Integrate with allocation failure injection framework
(cherry picked from commit 5a4b46f555)
"The main problem fixed is slow processing of application state changes.
This may lead to a bootstrapping node not having up to date view on the
ring, and serve incorrect data.
Fixes #2855."
* tag 'tgrabiec/gossip-performance-v3' of github.com:scylladb/seastar-dev:
gms/gossiper: Remove periodic replication of endpoint state map
gossiper: Check for features in the change listener
gms/gossiper: Replicate changes incrementally to other shards
gms/gossiper: Document validity of endpoint_state properties
storage_service: Update token_metadata after changing endpoint_state
gms/gossiper: Process endpoints in parallel
gms/gossiper: Serialize state changes and notifications for given node
utils/loading_shared_values: Allow Loader to return non-future result
gms/gossiper: Encapsulate lookup of endpoint_state
storage_service: Batch token metadata and endpoint state replication
utils/serialized_action: Introduce trigger_later()
gossiper: Add and improve logging
gms/gossiper: Don't fire change listeners when there is no change
gms/gossiper: Allow parallel apply_state_locally()
gms/gossiper: Avoid copies in endpoint_state::add_application_state()
gms/failure_detector: Ignore short update intervals
(cherry picked from commit 044b8deae4)
This class implements a key-value container that is populated
using the provided asynchronous callback.
A value stays loaded as long as there are active references to it for the given key.
The container ensures that at most one load is in progress per key at any given time.
The returned value is an lw_shared_ptr to the actual value.
The value for a specific key is evicted immediately once there are no
more references to it.
The container is based on boost::intrusive::unordered_set and is rehashed (grown) if needed
every time a new value is added (asynchronously loaded).
The container has a rehash() method that grows or shrinks it as needed
in order to bring the load factor into the [0.25, 0.75] range.
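An interface sketch of the container described above (method and type names
here are assumptions, not necessarily the real utils:: API; std::future stands
in for seastar's future):

    #include <functional>
    #include <future>  // stand-in for seastar futures

    template <typename Key, typename Tp>
    class loading_shared_values {
    public:
        // Shared-pointer-like handle; the entry stays in the set for as
        // long as at least one entry_ptr for its key is alive.
        class entry_ptr;

        // Invokes loader(key) unless the value is present or a load for the
        // same key is already in flight; at most one load runs per key.
        std::future<entry_ptr>
        get_or_load(const Key& key,
                    std::function<std::future<Tp>(const Key&)> loader);

        // Grows or shrinks the bucket array to bring the load factor back
        // into the [0.25, 0.75] range.
        void rehash();
    };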
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
(cherry picked from commit ec3fed5c4d)
The timer is armed inside the section guarded by the _timer_reads_gate,
therefore it has to be canceled after the gate is closed.
Otherwise we may end up with an armed timer after the stop() method has
returned a ready future.
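A sketch of the resulting shutdown order (member names follow this message;
the include paths assume a recent Seastar layout):

    #include <seastar/core/future.hh>
    #include <seastar/core/gate.hh>
    #include <seastar/core/lowres_clock.hh>
    #include <seastar/core/timer.hh>

    class cache_sketch {
        seastar::timer<seastar::lowres_clock> _timer;
        seastar::gate _timer_reads_gate;
    public:
        seastar::future<> stop() {
            // Close the gate first: the timer callback runs inside this
            // gate and may re-arm the timer, so cancel() is only safe once
            // close() has resolved.
            return _timer_reads_gate.close().then([this] {
                _timer.cancel();
            });
        }
    };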
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1501603059-32515-1-git-send-email-vladz@scylladb.com>
(cherry picked from commit 4b28ea216d)
"This series reduce that effect in two ways:
1. Remove the latency counters from the system keyspaces
2. Reduce the histogram size by limiting the maximum number of buckets and
stop the last bucket."
Fixes#2650.
* 'amnon/remove_cf_latency_v2' of github.com:cloudius-systems/seastar-dev:
database: remove latency from the system table
estimated histogram: return a smaller histogram
(cherry picked from commit 3fe6731436)
loading_cache runs a timer that may issue asynchronous operations
(queries) that end up writing to its internal fields.
We have to ensure that these operations are over before we can destroy
the loading_cache object.
Fixes #2624
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1501208345-3687-1-git-send-email-vladz@scylladb.com>
The region comparator, used by both compact_on_idle() and eviction, calls
region_impl::min_occupancy(), which calls log_histogram::largest(). The
latter is O(N) in the number of segments, and is supposed to be used only
in tests. We should call one_of_largest() instead, which is O(1).
This caused compact_on_idle() to take more CPU as the number of
segments grew (even when there was nothing to compact). Eviction
would see the same kind of slowdown as well.
Introduced in 11b5076b3c.
Message-Id: <1498641973-20054-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit 3489c68a68)
Use the boost::intrusive containers in order to achieve O(1) complexity
for the "LRU list" update and to minimize the memory overhead of linking a
hash table item to an "LRU list" item:
- Make the timestamped_val be both a bi::list and a bi::unordered_set
item.
- Make a bi::unordered_set the cache backend instead of the
std::unordered_map.
As a result, dropping k LRU items becomes an O(k) operation instead of
O(N log N), where N is the total number of cached items:
- Every time a value is read - move it to the front of the "LRU list"
(O(1)).
- When we need to remove k LRU items:
- Repeat k times:
- Take an element from the back of the "LRU list". (O(1)).
- Remove it from the bi::unordered_set and dispose. (O(1)).
We use an auto-unlink configuration for bi::list, therefore
disposing an item automatically unlinks it from the list.
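A structural sketch of the entry and container types this describes
(simplified to an int key; the real timestamped_val also carries the value,
timestamps, etc.):

    #include <boost/intrusive/list.hpp>
    #include <boost/intrusive/unordered_set.hpp>
    #include <cstddef>
    #include <functional>

    namespace bi = boost::intrusive;

    struct timestamped_val
        : public bi::unordered_set_base_hook<bi::store_hash<true>>
        , public bi::list_base_hook<bi::link_mode<bi::auto_unlink>> {
        int key;
        explicit timestamped_val(int k) noexcept : key(k) {}
        friend bool operator==(const timestamped_val& a,
                               const timestamped_val& b) noexcept {
            return a.key == b.key;
        }
        friend std::size_t hash_value(const timestamped_val& v) noexcept {
            return std::hash<int>()(v.key);
        }
    };

    // auto_unlink hooks require constant_time_size<false>; erasing an
    // element from the set and disposing it then unlinks it from this
    // list automatically.
    using lru_list = bi::list<timestamped_val, bi::constant_time_size<false>>;
    using cache_set = bi::unordered_set<timestamped_val,
                                        bi::power_2_buckets<true>,
                                        bi::compare_hash<true>>;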
* 'permissions_cache_move_to_intrusive-v1' of github.com:scylladb/seastar-dev:
utils::loading_cache: cleanup
utils/loading_cache.hh: use intrusive list to store the lru entry
utils::loading_cache: implement automatic rehashing
utils::loading_cache: make the underlying map to be an intrusive unordered_set
"Enforces commutativity of addition:
m1 + m2 == m2 + m1
and consistency of difference and addition with equality:
m1 + (m2 - m1) == m1 + m2"
* tag 'tgrabiec/fix-range-tombstone-commutativity-v2' of github.com:cloudius-systems/seastar-dev:
mutation: Make compare_*_for_merge() consistent with equals()
tests: mutation: Improve assertion failure message
tests: Use default equality in test_mutation_diff_with_random_generator
mutation: Make counter cell difference consistent with apply
tests: range_tombstone_list_test: Improve error message
tests: range_tombstone_list: Check adjacent range merging
range_tombstone_list: Merge adjacent range tombstones in apply()
tests: mutation: Check commutativity of mutation addition
range_tombstone_list: Avoid violating set invariant
range_tombstone_list: Make tombstone merging commutative
range_tombstone_list: Add erase() operation to the reverter
range_tombstone_list: Make all undo operations ordered relative to each other
utils: Extract to_boost_visitor() to a separate header
allocating_strategy: Introduce alloc_strategy_unique_ptr<>
Fix the shrink() O(n log n) complexity issue by moving the corresponding intrusive
list entry to the head of the list every time a value is read.
This keeps the list ordered by last read time, from the most recently read
to the least recently read entry.
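A sketch of the resulting O(1) read-side "touch" (entry type reduced to just
the list hook):

    #include <boost/intrusive/list.hpp>

    namespace bi = boost::intrusive;

    struct entry : bi::list_base_hook<bi::link_mode<bi::auto_unlink>> {};
    using lru_list = bi::list<entry, bi::constant_time_size<false>>;

    // Called on every read: O(1), so shrink() never needs to sort; popping
    // from the back of the list always yields the least recently read entry.
    void touch(lru_list& lru, entry& e) {
        if (e.is_linked()) {
            e.unlink();  // O(1) thanks to the auto-unlink hook
        }
        lru.push_front(e);
    }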
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
- Start the cache with 256 buckets - the minimum number of buckets.
- Limit the maximum number of buckets to 1M.
- Keep the load factor between 0.25 and 1.0 as long as the number of buckets is
between the minimum and maximum values mentioned above.
- Grow or shrink the hash every "refresh" period if needed.
- Enable the bi::power_2_buckets and bi::compare_hash options of the bi::unordered_set.
- Enable the bi::unordered_set_base_hook's bi::store_hash option.
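A sketch of the resulting bucket-count policy (constants from the list above;
the helper name is illustrative):

    #include <algorithm>
    #include <cstddef>

    constexpr std::size_t min_buckets = 256;      // starting / minimum count
    constexpr std::size_t max_buckets = 1 << 20;  // 1M bucket cap

    // Evaluated every "refresh" period; always returns a power of two
    // (matching bi::power_2_buckets) clamped to [min_buckets, max_buckets].
    std::size_t desired_buckets(std::size_t buckets, std::size_t entries) {
        double load = double(entries) / buckets;
        if (load > 1.0) {
            buckets *= 2;  // grow: keep the load factor <= 1.0
        } else if (load < 0.25) {
            buckets /= 2;  // shrink: keep the load factor >= 0.25
        }
        return std::clamp(buckets, min_buckets, max_buckets);
    }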
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Make the underlying map a boost::intrusive::unordered_set<timestamped_val>
instead of an std::unordered_map<Key, timestamped_val>.
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
- introduced a "seastarx.hh" header, which does a "using namespace seastar";
- the 'net' namespace conflicts with seastar::net, renamed to 'netw';
- the 'transport' namespace conflicts with seastar::transport, renamed to
cql_transport;
- the "logger" global variables now conflict with the logger global type, renamed
to xlogger;
- other minor changes.
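The header plausibly amounts to just this (contents assumed from the
description above):

    // seastarx.hh: pulls the seastar namespace into scope for the whole tree.
    #pragma once

    namespace seastar {}  // ensure the namespace exists before the using-directive

    using namespace seastar;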
According to the description of permissions_validity_in_ms, the permissions_cache is enabled if this
value is set to a non-zero value; otherwise the permissions_cache is disabled.
According to the description of permissions_update_interval_in_ms, it must have a non-zero value if the
permissions_cache is enabled.
The permissions_cache_max_entries description doesn't explicitly state it, but it makes no sense to allow
it to be zero if the permissions_cache is enabled.
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
This patch changes the way a loading_cache works.
Before this patch:
1) If a permissions key is not in the cache, it's loaded in the foreground and the original
query is blocked until the permissions are loaded.
2) Every _period the timer does the following:
   1) If a value was loaded more than _expiry time ago, it is removed from the cache.
   2) If the cache is too big, the least recently loaded values are removed until the cache
   fits the requested size.
After this patch:
1) If a permissions key is not in the cache, it's loaded in the foreground and the original
query is blocked until the permissions are loaded.
2) Every _period the timer does the following:
   1) If a value in the cache was last loaded or read more than _expiry time ago, it's removed from the cache.
   2) If the cache is too big, the least recently read values are removed until the cache fits the requested size.
   3) The values that were loaded more than _refresh time ago are re-read in the background.
The new implementation reduces the foreground reads for a frequently used value to a single
event (when the value is loaded for the first time).
It also ensures we do not reload values we no longer need.
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
The standard serialization API (e.g. in data_value) includes the following methods:
size_t serialized_size() const;
void serialize(bytes::iterator& it) const;
bytes serialize() const;
Align the utils::UUID API with the pattern above.
The only addition is that we make the output iterator parameter of the second method above
a template, so that we may serialize into different output targets.
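The aligned utils::UUID surface would then look roughly like this (a sketch;
`bytes` is Scylla's byte-string type, approximated here):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    using bytes = std::vector<int8_t>;  // stand-in for Scylla's bytes type

    class UUID {
        uint64_t _msb = 0, _lsb = 0;
    public:
        std::size_t serialized_size() const noexcept { return 16; }

        // The iterator parameter is a template, so the same method can
        // write into bytes, ostream adaptors, raw buffers, etc.
        template <typename OutputIterator>
        void serialize(OutputIterator& out) const;

        bytes serialize() const;  // convenience form returning a fresh buffer
    };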
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Use the same templated implementation for all the different serialize_XXX(...) functions.
The chosen implementation is based on std::copy_n(char*, size, OutputIterator),
which is heavily optimized and will use memcpy/memmove where possible.
This patch also removes the unneeded specializations that accept signed integer
values, since we were casting them to unsigned values anyway.
The std::ostream based specializations are also removed since they are not used
anywhere except test-serialization.cc, and adapting the ostream to the iterator
is a one-liner.
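A sketch of the shared implementation (the helper name is assumed; byte-order
handling is elided):

    #include <algorithm>
    #include <type_traits>

    // One template instead of per-width serialize_XXX() overloads;
    // std::copy_n over char* typically lowers to memcpy/memmove.
    template <typename T, typename OutputIterator>
    void serialize_integral(T val, OutputIterator& out) {
        static_assert(std::is_unsigned<T>::value,
                      "signed specializations were dropped; cast at the call site");
        // A byte-order conversion (host to big-endian) would normally go here.
        out = std::copy_n(reinterpret_cast<const char*>(&val), sizeof(val), out);
    }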
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
While blob_storage is marked as an unaligned type, the back references also
point to an unaligned type (a pointer to blob_storage), since a back
reference can live in a blob_storage. This triggers errors from zapcc/clang 4.
Fix by creating a type for the reference, which is marked as unaligned.
Message-Id: <20170502071404.507-1-avi@scylladb.com>
Attempting to create huge zones may introduce significant latency. This
patch introduces a maximum allowed zone size so that the time spent
trying to allocate and initialise a zone is bounded.
Fixes #2335.
Message-Id: <20170428145916.28093-1-pdziepak@scylladb.com>
This patch extracts the relational operators in struct tombstone
into a class capable of generating them from a tri-compare function.
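The generated-operators idea can be sketched as a CRTP base (the actual class
in the patch may be shaped differently):

    // Derived classes provide `int compare(const T&) const` returning a
    // negative, zero, or positive value; the base derives all six
    // relational operators from it.
    template <typename T>
    class with_relational_operators {
    public:
        friend bool operator<(const T& a, const T& b)  { return a.compare(b) < 0; }
        friend bool operator<=(const T& a, const T& b) { return a.compare(b) <= 0; }
        friend bool operator>(const T& a, const T& b)  { return a.compare(b) > 0; }
        friend bool operator>=(const T& a, const T& b) { return a.compare(b) >= 0; }
        friend bool operator==(const T& a, const T& b) { return a.compare(b) == 0; }
        friend bool operator!=(const T& a, const T& b) { return a.compare(b) != 0; }
    };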
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
"This series fixes some more errors found by clang, with the aim of enabling
clang/zapcc as a supported compiler. A single issue remains, but it's
probably in std::experimental::optional::swap(); not in our code."
* tag 'clang/2/v1' of https://github.com/avikivity/scylla:
sstable_test: avoid passing negative non-type template arguments to unsigned parameters
UUID: add more comparison operators
sstable_datafile_test: avoid string_view user-defined literal conversion operator
mutation_source_test: avoid template function without template keyword
cql_query_test: define static variable
cql_query_test: add braces for single-item collection initializers
storage_service: don't use typeid(temporary)
logalloc: remove unused max_occupancy_for_compaction
storage_proxy: drop overzealous use of __int128_t in recently-modified-no-read-repair logic
storage_proxy: drop unused member access from return value
storage_proxy: fix reference bound to temporary in data_read_resolver::less_compare
read_repair_decision: fix operator<<(std::ostream&, ...)
Every lsa-allocated object is prefixed by a header that contains information
needed to free or migrate it. This includes its size (for freeing) and
an 8-byte migrator (for migrating). Together with some flags, the overhead
is 14 bytes (16 bytes if the default alignment is used).
This patch reduces the header size to 1 byte (8 bytes if the default alignment
is used). It uses the following techniques:
- ULEB128-like encoding (actually more like ULEB64) so a live object's header
can typically be stored using 1 byte (see the sketch after this list)
- indirection, so that migrators can be encoded in a small index pointing
to a migrator table, rather than using an 8-byte pointer; this exploits
the fact that only a small number of types are stored in LSA
- moving the responsibility for determining an object's size to its
migrator, rather than storing it in the header; this exploits the fact
that the migrator stores type information, and object size is in fact
information about the type
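A minimal sketch of the ULEB-style encoding mentioned in the first bullet
(the real descriptor format also packs flags and may differ):

    #include <cstddef>
    #include <cstdint>

    // Encodes 7 bits per byte, MSB set on continuation bytes, so the small
    // sizes typical of live objects need just one header byte.
    std::size_t uleb_encode(uint8_t* out, uint64_t v) {
        std::size_t n = 0;
        do {
            uint8_t byte = v & 0x7f;
            v >>= 7;
            if (v) {
                byte |= 0x80;  // more bytes follow
            }
            out[n++] = byte;
        } while (v);
        return n;  // 1 for any value < 128
    }

    uint64_t uleb_decode(const uint8_t* in, std::size_t& consumed) {
        uint64_t v = 0;
        unsigned shift = 0;
        consumed = 0;
        uint8_t byte;
        do {
            byte = in[consumed++];
            v |= uint64_t(byte & 0x7f) << shift;
            shift += 7;
        } while (byte & 0x80);
        return v;
    }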
The patch improves the results of memory_footprint_test as follows:
Before:
- in cache: 976
- in memtable: 947
After:
mutation footprint:
- in cache: 880
- in memtable: 858
A reduction of about 10%. Further reductions are possible by reducing the
alignment of lsa objects.
logalloc_test was adjusted to free more objects, since with the lower
footprint, rounding errors (to full segments) are different and caused
false errors to be detected.
Missing: adjustments to scylla-gdb.py; will be done after we agree on the
new descriptor's format.
Currently eviction is performed until the occupancy of the whole region
drops below the 85% threshold. This may take a while if the region had
high occupancy and is large. We can improve the situation by only
evicting until the occupancy of the sparsest segment drops below the
threshold, which is what this change does.
I tested this using a c-s read workload in which the condition
triggers in the cache region, with 1G per shard:
lsa-timing - Reclamation cycle took 12.934 us.
lsa-timing - Reclamation cycle took 47.771 us.
lsa-timing - Reclamation cycle took 125.946 us.
lsa-timing - Reclamation cycle took 144356 us.
lsa-timing - Reclamation cycle took 655.765 us.
lsa-timing - Reclamation cycle took 693.418 us.
lsa-timing - Reclamation cycle took 509.869 us.
lsa-timing - Reclamation cycle took 1139.15 us.
The 144ms pause is when large eviction is necessary.
Statistics for reclamation pauses (in microseconds) for a read workload
over a larger-than-memory data set:
Before:
avg = 865.796362
stdev = 10253.498038
min = 93.891000
max = 264078.000000
sum = 574022.988000
samples = 663
After:
avg = 513.685650
stdev = 275.270157
min = 212.286000
max = 1089.670000
sum = 340573.586000
samples = 663
Refs #1634.
Message-Id: <1484730859-11969-1-git-send-email-tgrabiec@scylladb.com>
We will want to reuse the min_size mechanism for the whole compaction
threshold, including the occupancy threshold. That threshold is close
to the segment size and we cannot pick a power of two which would be
close enough to what we need.
Therefore, change log_histogram to support an arbitrary minimum base.
bucket_of() was moved into log_histogram_options so that it can be used
in number_of_buckets(), which makes for a simple and much less
error-prone implementation.