Commit Graph

63 Commits

Tomasz Grabiec
237819c31f logalloc: Exclude zones' free segments from lsa/bytes-non_lsa_used_space
Historically, the purpose of this metric has been to show how much
memory is in standard allocations. After zones were introduced, it
would also include free space in LSA zones, which is almost all memory,
so the metric lost its original meaning. This change restores that
meaning.

Message-Id: <1452865125-4033-1-git-send-email-tgrabiec@scylladb.com>
2016-01-18 10:48:14 +02:00
Avi Kivity
c8b09a69a9 lsa: disable constant_time_size in binomial_heap implementation
Corrupts heap on boost < 1.60, and not needed.

Fixes #698.
2015-12-29 12:59:00 +01:00
Vlad Zolotarov
33552829b2 core: use steady_clock where monotonic clock is required
Use steady_clock instead of high_resolution_clock where a monotonic
clock is required. high_resolution_clock is essentially a system_clock
(wall clock) and therefore must not be assumed monotonic, since the
wall clock may move backwards due to time/date adjustments.

Fixes issue #638

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2015-12-27 18:07:53 +02:00
Lucas Meneghel Rodrigues
2167173251 utils/logalloc.cc - Declare member minimum_size of segment_zone struct
This fixes compile error:

In function `logalloc::segment_zone::segment_zone()':
/home/lmr/Code/scylla/utils/logalloc.cc:412: undefined reference to `logalloc::segment_zone::minimum_size'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.

Signed-off-by: Lucas Meneghel Rodrigues <lmr@scylladb.com>
2015-12-10 12:54:34 +02:00
Paweł Dziepak
0d66300d43 lsa: add more counters
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Paweł Dziepak
83b004b2fb lsa: avoid fragmenting memory
Originally, lsa allocated each segment independently, which could
result in high memory fragmentation. As a result, many compaction and
eviction passes might be needed to release a sufficiently big
contiguous memory block.

These problems are solved by introducing segment zones: contiguous
groups of segments. All segments are allocated from zones, and the
algorithm tries to keep the number of zones to a minimum. Moreover,
segments can be migrated between zones or inside a zone in order to
deal with fragmentation inside a zone.

Segment zones can be shrunk but cannot grow. The segment pool keeps a
tree containing all zones ordered by their base addresses. This tree is
used only by the memory reclaimer. There is also a list of zones that
have at least one free segment, which is used during allocation.

Segment allocation has no preference for which segment (and zone) to
choose. Each zone contains a free list of unused segments. If there
are no zones with free segments, a new one is created.

Segment reclamation migrates segments from the zones higher in memory
to the ones at lower addresses. The remaining zones are shrunk until the
requested number of segments is reclaimed.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Paweł Dziepak
40dda261f2 lsa: maintain segment to region mapping
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Paweł Dziepak
2e94086a2c lsa: use bi::list to implement segment_stack
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Avi Kivity
0c2fba7e0b lsa: advertise our preferred maximum allocation size
Let managed_bytes know that allocating below a tenth of the segment size is
the right thing to do.
2015-12-08 15:17:09 +02:00
Paweł Dziepak
89f7f746cb lsa: fix printing object_descriptor::_alignment
object_descriptor::_alignment is of type uint8_t, which is actually an
unsigned char, so streaming it prints a character instead of a number.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 20:13:29 +01:00
Paweł Dziepak
65875124b7 lsa: guarantee that segment_heap doesn't throw
boost::heap::binomial_heap allocates a helper object in push() and,
therefore, may throw an exception. This shouldn't happen during
compaction.

The solution is to reserve space for this helper object in
segment_descriptor and use a custom allocator with
boost::heap::binomial_heap.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 19:51:22 +01:00
Paweł Dziepak
273b8daeeb lsa: add no-op default constructor for segment
Zero-initializing segment::data when a segment is value-initialized
is undesirable.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 16:37:37 +01:00
Paweł Dziepak
e6cf3e915f lsa: add counters for memory used by large objects
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 16:36:27 +01:00
Paweł Dziepak
6b113a9a7a lsa: fix eviction of large blobs
The LSA memory reclaimer logic assumes that the amount of memory used
by LSA equals segments_in_use * segment_size. However, LSA is also
responsible for eviction of large objects, which do not affect the used
segment count; e.g. a region with no used segments may still use a lot
of memory for large objects. The solution is to switch from measuring
memory in used segments to a used-bytes count that also includes large
objects.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 16:29:09 +01:00
Paweł Dziepak
c37afcfdee lsa: account for size of objects too big for LSA
While objects above max_managed_object_size aren't stored in LSA
segments, they are still considered to belong to the LSA region and
are evictable using that region's evictor.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-16 12:22:12 +01:00
Paweł Dziepak
64f1c2866c lsa: free segment in trim_emergency_reserve_to_max()
_emergency_reserve is an intrusive container and it doesn't manage
segment lifetime.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-05 18:04:38 +02:00
Avi Kivity
16006949d0 logalloc: make migrator an object, not a function pointer
The migrator tells lsa how to move an object when it is compacted.
Currently it is a function pointer, which means we must know how to move
the object at compile time.  Making it an object allows us to build the
migration function at runtime, making it suitable for runtime-defined types
(such as tuples and user-defined types).

In the future, we may also store the size there for fixed-size types,
reducing lsa overhead.

C++ variable templates would have made this patch smaller, but unfortunately
they are only supported on gcc 5+.
2015-10-21 11:24:56 +02:00
Tomasz Grabiec
67d0f9c7df lsa: Restore heap invariant before calling _segments.erase()
This is certainly the right thing to do and seems to fix #403.
However, I didn't manage to convince myself why skipping this would
cause problems for binomial_heap, given that binomial_heap::erase()
calls siftup() anyway:

    void erase(handle_type handle)
    {
        node_pointer n = handle.node_;
        siftup(n, force_inf());
        top_element = n;
        pop();
    }

    void increase (handle_type handle)
    {
        node_pointer n = handle.node_;
        siftup(n, *this);

        update_top_element();
        sanity_check();
    }
2015-10-20 15:18:05 +03:00
Avi Kivity
9c5a36efd0 logalloc: fix segment free in debug mode
Must match allocation function.
2015-09-30 09:45:25 +02:00
Avi Kivity
d5cf0fb2b1 Add license notices 2015-09-20 10:43:39 +03:00
Tomasz Grabiec
53caf5ecca lsa: Fix segment heap corruption
The segment heap is a max-heap, with sparser segments on top. When we
free from a segment, its occupancy decreases but its position in the
heap increases.

This bug caused us to pick segments for compaction in the wrong order.
In extreme cases this can lead to a livelock; in other cases it may
just increase compaction latency.
2015-09-10 17:20:04 +03:00
Avi Kivity
6d0a2b5075 logalloc: don't invalidate merged region
A region being merged can still be in use; but after merging, compaction_lock
and the reclaim counter will no longer work.  This can lead to
use-after-compact-without-re-lookup errors.

Fix by making the source region be the same as the target region; they
will share compaction locks and reclaim counters, so lookup avoidance
will still work correctly.

Fixes #286.
2015-09-08 08:55:44 +02:00
Tomasz Grabiec
fecc87e601 lsa: stub allocation_section with default allocator
memory::stats() always returns 0 as free memory which confuses
guard::enter().
2015-09-07 17:23:02 +02:00
Paweł Dziepak
03f5827570 logalloc: add missing methods to DEFAULT_ALLOCATOR version
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
2015-09-07 16:59:27 +02:00
Tomasz Grabiec
3b441416fa lsa: Make segment size publicly accessible
Some tests depend on segment size.
2015-09-06 21:25:44 +02:00
Tomasz Grabiec
c82325a76c lsa: Make region evictor signal forward progress
In some cases a region may be in a state where it is not empty and yet
nothing can be evicted from it. For example, when creating the first
entry, the reclaimer may get invoked during creation, before the entry
gets linked. We therefore can't rely on emptiness as a stop condition
for reclamation; the eviction function shall signal whether it made
forward progress.
2015-09-06 21:25:44 +02:00
Tomasz Grabiec
94f0db933f lsa: Fix typo in the word 'emergency' 2015-09-06 21:24:59 +02:00
Tomasz Grabiec
200562abe7 lsa: Reclaim over-max segments from segment pool reserve 2015-09-06 21:24:59 +02:00
Tomasz Grabiec
d022a1a4a3 lsa: Introduce allocating_section
Related to #259. In some cases we need to allocate memory and hold the
reclaim lock at the same time. If that region holds most of the
reclaimable memory, allocations inside that code section may fail.
allocating_section works around the problem: it learns how big the
reserves should be from past executions of the critical section and
tries to ensure proper reserves before entering the section.
2015-09-06 21:24:59 +02:00
Tomasz Grabiec
3caad2294b lsa: Tolerate empty segments when region is destroyed
Sometimes we may close an empty active segment, if all data in it was
evicted. Normally segments are removed as soon as the last object in
them is freed, but if a segment is already empty when closed, no one is
going to call free on it. Such segments would be quickly reclaimed
during compaction, but it's possible that we will destroy the region
before they're reclaimed. Previously we would fail on an assertion
which checks that there are no segments. This change fixes the problem
by handling empty closed segments when the region is destroyed.
2015-09-06 21:24:59 +02:00
Tomasz Grabiec
c37aa73051 lsa: Drop alignment requirement from segment 2015-09-06 21:24:59 +02:00
Tomasz Grabiec
2c1536b5a7 lsa: Make free() path noexcept
Memory releasing is invoked from destructors, so it should not throw.
As a consequence it should not allocate memory either, so the emergency
segment pool was switched from std::deque<> to an alloc-free intrusive
stack.
2015-09-06 21:24:59 +02:00
Tomasz Grabiec
fa8d530cc2 lsa: Add ability to trace reclaiming latency 2015-09-06 21:24:58 +02:00
Tomasz Grabiec
870e9e5729 lsa: Replace compaction_lock with broader reclaim_lock
Disabling compaction of a region is currently done in order to keep
references valid. But disabling only compaction is not enough; we also
need to disable eviction, as it invalidates references too. Rather than
introducing another type of lock, compaction and eviction are
controlled together, generalized as "reclaiming" (hence the
reclaim_lock).
2015-09-01 17:29:04 +03:00
Tomasz Grabiec
48569651ea lsa: Fix calculation of bytes.non_lsa_used_space 2015-09-01 17:29:03 +03:00
Tomasz Grabiec
d20fae96a2 lsa: Make reclaimer run synchronously with allocations
The goal is to make allocation less likely to fail. With an async
reclaimer there is an implicit bound on the amount of memory that can
be allocated between deferring points. This bound is difficult to
enforce, though. A sync reclaimer lifts this limitation.

Also, allocations which previously could not be satisfied because of
fragmentation will now have a higher chance of succeeding, although,
depending on how much memory is fragmented, that could involve
evicting a lot of segments from cache, so we should still avoid them.

The downside of sync reclaiming is that references into regions may now
be invalidated not only across deferring points but at any allocation
site. compaction_lock can be used to pin data, preferably just
temporarily.
2015-08-31 21:50:18 +02:00
Tomasz Grabiec
42dce17c82 lsa: Fix documentation for eviction functions 2015-08-31 21:50:17 +02:00
Avi Kivity
203b349722 Merge seastar upstream
* seastar 5176352...68fee6c (1):
  > Merge "Memory reclamation infrastructure follow-up" from Tomasz

Adjusted logalloc::tracker's reclaimer to fit new API
2015-08-31 20:01:07 +03:00
Tomasz Grabiec
110a55886c lsa: Introduce region::compaction_counter() 2015-08-31 13:58:42 +02:00
Tomasz Grabiec
9ad3dbe592 lsa: Add region::compaction_enabled() 2015-08-31 13:58:42 +02:00
Tomasz Grabiec
048387782a lsa: Rename region::set_compactible() to set_compaction_enabled()
To avoid confusion with region_impl::is_compactible() when the getter
is added.
2015-08-31 13:58:42 +02:00
Avi Kivity
0617aecb62 lsa: downgrade "no compactible pool" warning to trace
It's a fairly standard condition.
2015-08-24 17:26:48 +02:00
Avi Kivity
77b3212c88 lsa: provide a fallback during normal allocation
Instead of failing normal allocations when the seastar allocator cannot
allocate a segment, provide a generous reserve.  An allocation failure
will now be satisified from the reserve, but it will still trigger a
reclaim.  This allows hiding low-memory conditions from the user.
2015-08-23 16:38:04 +03:00
Avi Kivity
f531f36a44 lsa: fix types in logs 2015-08-20 15:29:08 +03:00
Avi Kivity
9012f991bf logalloc: really allow dipping into the emergency pool during reclaim
The RAII wrapper for the emergency pool was invoked without an object,
and so had no effect.
2015-08-20 12:10:03 +03:00
Avi Kivity
9ed2bbb25c lsa: introduce region_group
A region_group is a nestable group of regions, for cumulative statistics
purposes.
2015-08-19 19:36:40 +03:00
Avi Kivity
71aad57ca8 lsa: make region::impl a top-level class
Makes using forward declarations possible.
2015-08-19 14:43:17 +03:00
Avi Kivity
932ddc328c logalloc: optimize current_allocation_strategy()
This heavily used function shows up in many places in the profile (as part
of other functions), so it's worth optimizing by eliminating the special
case for the standard allocator.  Use a statically allocated object instead.

(a non-thread-local object is fine since it has no data members).
2015-08-17 16:51:10 +03:00
Avi Kivity
5a061fe66e lsa: increase segment size
While #152 is still open, we need to allow for moderately sized allocations
to succeed.  Extend the segment size to 256k, which allows for threads to
be allocated.

Fixes #151.
2015-08-16 19:26:59 +03:00
Avi Kivity
ecc3ccc716 lsa: emergency segment reserve for compaction
To free memory, we need to allocate memory.  In lsa compaction, we convert
N segments with average occupancy of (N-1)/N into N-1 new segments.  However,
to do that, we need to allocate segments, which we may not be able to do
due to the low memory condition which caused us to compact anyway.

Fix by introducing a segment reserve, which we normally try to ensure is
full.  During low memory conditions, we temporarily allow allocating from
the emergency reserve.
2015-08-12 11:29:09 +03:00