When this tool was written, we were still using /var/lib/cassandra as a default
location. We should update it.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
* seastar 5dc22fa...c5e595b (3):
> memory: be less strict about NUMA bindings
> reactor: let the resource code specify the default memory reserve
> resource: reserve even more memory when hwloc is compiled in
Fixes #642
"This series attempts to make LSA more friendly for large (i.e. bigger
than LSA segment) allocations. It is achieved by introducing segment
zones – large, contiguous areas of segments – and using them to allocate
segments instead of calling malloc() directly.
Zones can be shrunk when needed to reclaim memory, and segments can be
migrated either to reduce the number of zones or to defragment a zone so
that it can be shrunk. LSA tries to keep all segments at lower addresses
and reclaims memory starting from the zones in the highest parts of the
address space."
Also:
[PATCH scylla v1 0/7] gossip mark node down fix + cleanup
[PATCH scylla v1 0/2] Refuse decommissioned node to rejoin
[PATCH scylla] storage_service: Fix added node not showing up in nodetool in status joining
When replacing a node, we might ignore the tokens, so the token set is
empty. In this case, we will have
std::unordered_map<inet_address, std::unordered_set<token>> = {ip, {}}
passed to token_metadata::update_normal_tokens(std::unordered_map<inet_address,
std::unordered_set<token>>& endpoint_tokens)
and hit the assert
assert(!tokens.empty());
1) Start node 1, node 2, node 3
2) Stop node 3
3) Start node 4 to replace node 3
4) Kill node 4 (removal of node 3 in system.peers is not flushed to disk)
5) Start node 4 (will load node 3's token and host_id info in bootup)
This causes
"Token .* changing ownership from 127.0.0.3 to 127.0.0.4"
messages to be printed again in step 5), which is not expected and fails
the dtest:
FAIL: replace_first_boot_test (replace_address_test.TestReplaceAddress)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scylla-dtest/replace_address_test.py",
line 220, in replace_first_boot_test
self.assertEqual(len(movedTokensList), numNodes)
AssertionError: 512 != 256
Commit 56df32ba56 (gossip: Mark node as dead even if already left)
missed a node liveness check. Fix it up.
Before: (mark a node down multiple times)
[Tue Dec 8 12:16:33 2015] INFO [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec 8 12:16:33 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead
[Tue Dec 8 12:16:34 2015] INFO [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec 8 12:16:34 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead
[Tue Dec 8 12:16:35 2015] INFO [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec 8 12:16:35 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead
[Tue Dec 8 12:16:36 2015] INFO [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec 8 12:16:36 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead
After: (mark a node down only one time)
[Tue Dec 8 12:28:36 2015] INFO [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec 8 12:28:36 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead
The only reason we needed the default constructor is to make
_application_state[key] = value
work.
With the current default constructor, we needlessly increase the version
number. To fix this, and to be safe, remove the default constructor
completely.
Backport: CASSANDRA-8801
a53a6ce Decommissioned nodes will not rejoin the cluster.
Tested with:
topology_test.py:TestTopology.decommissioned_node_cant_rejoin_test
The get_token_endpoint API should return a map of tokens to endpoints,
including the bootstrapping ones.
Use get_local_storage_service().get_token_to_endpoint_map() for it.
$ nodetool -p 7100 status
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 12645 256 ? eac5b6cf-5fda-4447-8104-a7bf3b773aba rack1
UN 127.0.0.2 12635 256 ? 2ad1b7df-c8ad-4cbc-b1f1-059121d2f0c7 rack1
UN 127.0.0.3 12624 256 ? 61f82ea7-637d-4083-acc9-567e0c01b490 rack1
UJ 127.0.0.4 ? 256 ? ced2725e-a5a4-4ac3-86de-e1c66cecfb8d rack1
Fixes #617
Originally, LSA allocated each segment independently, which could result
in high memory fragmentation. As a result, many compaction and eviction
passes might be needed to release a sufficiently big contiguous memory
block.
These problems are solved by the introduction of segment zones:
contiguous groups of segments. All segments are allocated from zones,
and the algorithm tries to keep the number of zones to a minimum.
Moreover, segments can be migrated between zones, or within a zone, in
order to deal with fragmentation inside a zone.
Segment zones can be shrunk but cannot grow. The segment pool keeps a
tree containing all zones, ordered by their base addresses. This tree is
used only by the memory reclaimer. There is also a list of zones that
have at least one free segment, which is used during allocation.
Segment allocation has no preference for which segment (and zone) to
choose. Each zone contains a free list of unused segments. If there are
no zones with free segments, a new one is created.
Segment reclamation migrates segments from the zones higher in memory
to the ones at lower addresses. The remaining zones are shrunk until the
requested number of segments is reclaimed.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
A dynamic bitset implementation that provides functions to search for
both set and cleared bits in both directions.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Currently, the test case "Testing reading when memory can't be reclaimed."
assumes that, upon entry, the allocation section used by the row cache
will require more free memory than is available (including evictable
memory). However, the reserves used by the allocation section are
adjusted dynamically and depend solely on previous events. In other
words, there is no guarantee that the reserve will be increased so much
that the allocation fails.
The problem is solved by adding another allocation that is guaranteed
to be bigger than all evictable and free memory.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Scattering of blobs from Avi:
This patchset converts the stack to scatter managed_bytes in lsa memory,
allowing large blobs (and collections) to be stored in memtable and cache.
Outside memtable/cache, they are still stored sequentially, but it is assumed
that the number of transient objects is bounded.
The approach taken here is to scatter managed_bytes data in multiple
blob_storage objects, but to linearize them back when accessing (for
example, to merge cells). This allows simple access through the normal
bytes_view. It causes an extra two copies, but copying a megabyte twice
is cheap compared to accessing a megabyte's worth of small cells, so
per-byte throughput is increased.
Testing shows that lsa large object space is kept at zero, but throughput
is bad because Scylla easily overwhelms the disk with large blobs; we'll
need Glauber's throttling patches or a really fast disk to see good
throughput with this.
Add linearize() and unlinearize() methods that allow making an
atomic_cell_or_collection object temporarily contiguous, so we can examine
it as a bytes_view.
Instead of allocating a single blob_storage, chain multiple blob_storage
objects in a list, each limited not to exceed the allocation_strategy's
max_preferred_allocation_size. This allows lsa to allocate each blob_storage
object as an lsa managed object that can be migrated in memory.
Also provide linearize()/scatter() methods that can be used to temporarily
consolidate the storage into a single blob_storage. This makes the data
contiguous, so we can use a regular bytes_view to examine it.
This adds the implementation of index_summary_off_heap_memory for a
single column family and for all column families.
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Similar to Origin's off-heap memory accounting, memory_footprint is the
size of the queues multiplied by the structure size.
memory_footprint is used by the API to report the memory taken by the
summary.
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
If there is no snapshot directory for the specific column family,
get_snapshot_details should return an empty map.
This patch checks that the directory exists before trying to iterate
over it.
Fixes #619
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Scylla changes:
sstable.cc: Remove file_exists() function which conflicts with seastar's
Amnon Heiman (2):
reactor: Add file_exists method
Add a wrapper for file_exists
Avi Kivity (2):
Merge "Introduce shared_future" from Tomasz
Merge "scripts: a few fixes in posix_net_conf.sh" from Vlad
Gleb Natapov (3):
rpc: not stop client in error state
avoid allocation in parallel_for_each if there is nothing to do
memory: fix size_to_idx calculation
Nadav Har'El (1):
test: fix use-after-free in timertest
Paweł Dziepak (1):
memory: use size instead of old_size to shrink memory block
Tomasz Grabiec (7):
file: Mark move constructor as noexcept
core: future: Add static asserts about type's noexcept guarantees
core: future: Drop now redundant move_noexcept flag
core: future_state: Make state getters non-destructive for non-rvalue-refs
core: future: Make get_available_state() noexcept
core: Introduce shared_future
Make json_return_type movable
Vlad Zolotarov (8):
scripts: posix_net_conf.sh: ban NIC IRQs from being moved by irqbalance
scripts: posix_net_conf.sh: exclude CPU0 siblings from RPS
scripts: posix_net_conf.sh: Configure XPS
scripts: posix_net_conf.sh: Add a new mode for MQ NICs
scripts: posix_net_conf.sh: increase some backlog sizes
core: to_sstring(): cleanup
core: to_sstring_strintf(): always use %g(or %lg) format for floating point values
core: prevent explicit calls for to_sstring_sprintf()
In a recent discussion with the XFS developers, Dave Chinner recommended
that we *not* use discard, but rather issue fstrims explicitly. On machines
like Amazon's c3 class, the situation is made worse by the fact that discard
is not supported by the disk. Contrary to my intuition, adding the discard
mount option in such a situation is *not* a no-op and will just create load
for no reason.
Signed-off-by: Glauber Costa <glommer@scylladb.com>
Objects extending json_base are not movable, so we won't be able to
pass them via future<>, which asserts that types are nothrow move
constructible.
This problem only affects httpd::utils_json::histogram, which is used
in map-reduce. This patch changes the aggregation to work on the domain
value (utils::ihistogram) instead of json objects.