Use seastar::cache_line_size for cache line alignment instead of a hard-coded value (64) - this value is
not always correct, e.g. on the PPC64 platform the cache line size is 128B.
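A minimal sketch of the intended usage (the struct and field names are illustrative only; the constant itself comes from seastar's cacheline header, whose include path may differ between seastar versions):

    #include <seastar/core/cacheline.hh>
    #include <cstdint>

    // Pad a per-shard counter to a whole cache line to avoid false sharing,
    // using the platform-dependent constant instead of a hard-coded 64.
    struct alignas(seastar::cache_line_size) padded_counter {
        uint64_t value = 0;
    };

    static_assert(sizeof(padded_counter) % seastar::cache_line_size == 0,
                  "counter should occupy whole cache lines");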
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
"This series implements the missing API to terminate all repairs.
For example:
$ curl -X POST --header "Accept: application/json"
"http://127.0.0.1:10000/storage_service/force_terminate_repair"
With the new stream_plan::abort() API we can now abort the stream
session associated with the repair as well.
On top of this, we can support termination of a single repair job instead of all
jobs.
Fixes #2105"
* tag 'asisas/repair_abort_v4' of github.com:scylladb/seastar-dev:
repair: Support termination of repair jobs
repair: Track repair_info
repair: Introduce repair id to repair_info map
api: Add force_terminate_repair API
streaming: Add abort to stream_plan
streaming: Add abort_all_stream_sessions for stream_coordinator
streaming: Introduce streaming::abort()
streaming: Make stream_manager and coordinator message debug level
streaming: Check if _stream_result is valid
streaming: Log peer address in on_error
streaming: Introduce received_failed_complete_message
This patch implements the missing API to terminate all repairs.
For example:
$ curl -X POST --header "Accept: application/json"
"http://127.0.0.2:10000/storage_service/force_terminate_repair"
With the new stream_plan::abort() API we can now abort the stream
session associated with the repair as well.
Fixes #2105
The maps are stored in a vector. The vector has smp::count elements, and each
element is accessed by only one shard.
The add_repair_info, remove_repair_info and get_repair_info helpers
are added.
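A minimal sketch of the layout described above (repair_info and the helper names come from the patch; the map value type and the explicit shard parameter are simplifications for illustration):

    #include <unordered_map>
    #include <vector>

    struct repair_info;  // the per-repair state tracked by the patch

    // One map per shard; element i is only ever touched by shard i,
    // so no locking or cross-shard messaging is needed.
    static std::vector<std::unordered_map<int, repair_info*>> repair_tracker;

    void init_tracker(unsigned smp_count) { repair_tracker.resize(smp_count); }

    void add_repair_info(unsigned shard, int id, repair_info* ri) {
        repair_tracker[shard].emplace(id, ri);
    }

    repair_info* get_repair_info(unsigned shard, int id) {
        auto it = repair_tracker[shard].find(id);
        return it == repair_tracker[shard].end() ? nullptr : it->second;
    }

    void remove_repair_info(unsigned shard, int id) {
        repair_tracker[shard].erase(id);
    }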
The following backtrace was reported by a user when running repair while repeatedly restarting the node at the same time.
#0 0x00007eff077281d7 in raise () from /lib64/libc.so.6
#1 0x00007eff07729a08 in abort () from /lib64/libc.so.6
#2 0x00007eff07721146 in __assert_fail_base () from /lib64/libc.so.6
#3 0x00007eff077211f2 in __assert_fail () from /lib64/libc.so.6
#4 0x00000000010ef2c2 in locator::token_metadata::first_token_index (this=0x641000214e98, start=...) at locator/token_metadata.cc:133
#5 0x00000000010ef2d9 in locator::token_metadata::first_token (this=0x641000214e98, start=...) at locator/token_metadata.cc:143
#6 0x00000000010e329d in locator::abstract_replication_strategy::get_natural_endpoints (this=0x641000494000, search_token=...)
at locator/abstract_replication_strategy.cc:66
#7 0x0000000001481186 in get_neighbors (hosts=std::vector of length 0, capacity 0, data_centers=std::vector of length 0, capacity 0,
range=<error reading variable: access outside bounds of object referenced via synthetic pointer>, ksname=..., db=...) at repair/repair.cc:196
#8 repair_range<nonwrapping_range<dht::token> > (range=..., ri=...) at repair/repair.cc:781
#9 <lambda(auto:99&)>::<lambda(auto:100&&)>::<lambda(auto:101&)>::<lambda()>::operator() (__closure=0x7efec07f7460) at repair/repair.cc:1005
#10 futurize<future<bool_class<stop_iteration_tag> > >::apply<repair_ranges(repair_info)::<lambda(auto:99&)>::
It is reproduced with
1) while true; do curl -X POST --header "Content-Type: application/json" --header "Accept: application/json" "http://127.0.0.1:10000/storage_service/repair_async/ks3"; done
2) start node 127.0.0.1, stop node 127.0.0.1 in a loop
The problem is that, during boot up, the token_metadata is not replicated to all shards until
the node goes into NORMAL status.
To fix, check that the node is in NORMAL status before allowing repair.
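A hedged sketch of the added guard (the helper name and the way the operation mode is obtained are assumptions for illustration, not the patch's actual code):

    #include <seastar/core/future.hh>
    #include <stdexcept>
    #include <string>

    // Reject repair requests until the node has finished joining and is NORMAL,
    // because token_metadata is only replicated to all shards at that point.
    seastar::future<> check_node_is_normal(const std::string& operation_mode) {
        if (operation_mode != "NORMAL") {
            return seastar::make_exception_future<>(std::runtime_error(
                "Repair is not allowed: node is not in NORMAL status"));
        }
        return seastar::make_ready_future<>();
    }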
Fixes #2723
With this change, we ask all the shards to handle the ranges provided by the
user, and we use selective_token_range_sharder to split the ranges and
ignore the ranges that do not belong to the current shard.
Repair today has a semaphore limiting the number of ongoing checksum
comparisons running in parallel (on one shard) to 100. We needed this
number to be fairly high, because a "checksum comparison" can involve
high latency operations - namely, sending an RPC request to another node
in a remote DC and waiting for it to calculate a checksum there, and while
waiting for a response we need to proceed calculating checksums in parallel.
But as a consequence, in the current code, we can end up with as many as
100 fibers all at the same stage of reading partitions to checksum from
sstables. This requires tons of memory, to hold at least 128K of buffer
(even more with read-ahead) for each of these fibers, plus partition data
for each. But doing 100 reads in parallel is pointless - one (or very few)
should be enough.
So this patch adds another semaphore to limit the number of checksum
*calculations* (including the read and checksum calculation) on each shard
to just 2. There may still be 100 ongoing checksum *comparisons*, in
other stages of the comparison (sending the checksum requests to other nodes
and waiting for them to return), but only 2 will ever be in the stage of
reading from disk and checksumming them.
The limit of 2 checksum calculations (per shard) applies to the repair
slave as well, not just the master: the slave may receive many checksum
requests in parallel, but will only actually work on 2 at a time.
Because the parallelism=100 now rate-limits operations which use very little
memory, in the future we can safely increase it even more, to support
situations where the disk is very fast but the link between nodes has
very high latency.
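A minimal sketch of the two-level limiting described above, using seastar::semaphore; the function names and the placeholder bodies are assumptions, while the counts are the ones mentioned in this message:

    #include <seastar/core/future.hh>
    #include <seastar/core/semaphore.hh>
    #include <cstdint>

    // Up to 100 comparisons in flight (mostly waiting on remote RPCs)...
    static seastar::semaphore parallelism_semaphore(100);
    // ...but only 2 of them may be reading from disk and checksumming at once.
    static seastar::semaphore checksum_calculation_semaphore(2);

    // Placeholder stand-ins for the real work.
    seastar::future<uint64_t> checksum_local_range() {
        return seastar::make_ready_future<uint64_t>(0);
    }
    seastar::future<> compare_with_peers(uint64_t) {
        return seastar::make_ready_future<>();
    }

    seastar::future<> compare_one_range() {
        return seastar::with_semaphore(parallelism_semaphore, 1, [] {
            // Only the expensive read + checksum is under the inner semaphore;
            // the RPC round-trips to peers happen outside it.
            return seastar::with_semaphore(checksum_calculation_semaphore, 1, [] {
                return checksum_local_range();
            }).then([] (uint64_t local_checksum) {
                return compare_with_peers(local_checksum);
            });
        });
    }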
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20170703151329.25716-1-nyh@scylladb.com>
When peer nodes have the same partition data, i.e., with the same
checksum, we currently choose to stream from any of them randomly.
To improve streaming performance, select the peer within the same DC.
This patch is expected to improve repair performance with multiple DCs.
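A conceptual sketch of the selection (the types and names here are illustrative; the real patch works with endpoint addresses and the snitch's DC information):

    #include <string>
    #include <vector>

    struct peer { std::string address; std::string dc; };

    // Among peers that hold identical data (same checksum), prefer one in the
    // local DC so the repair stream stays inside the data center; otherwise
    // fall back to the first candidate. Assumes the list is non-empty.
    const peer& pick_streaming_peer(const std::vector<peer>& same_checksum_peers,
                                    const std::string& local_dc) {
        for (const auto& p : same_checksum_peers) {
            if (p.dc == local_dc) {
                return p;
            }
        }
        return same_checksum_peers.front();
    }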
Message-Id: <c6a345b6e8ed2b59f485e53c865241e463b44507.1498490831.git.asias@scylladb.com>
Currently, shard zero is the coordinator of the repair. All the work of
checksumming the local node and sending the repair checksum rpc
verb is done on shard zero only. This causes the other shards to be
underutilized.
With this patch, we split the ranges that need to be repaired into at least
smp::count ranges, so ranges.size() / smp::count ranges will be assigned to
each shard. For example, if we have 8 shards and 256 ranges, each shard
will repair 32 ranges. Each shard will repair its 32 ranges
sequentially. There will be at most 8 (smp::count) ranges of repair in
parallel.
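A rough sketch of the split-count arithmetic implied above (the helper name is illustrative; the real code splits dht token ranges rather than counting integers):

    #include <cstddef>

    // Ensure there is enough work for every shard: if the user supplied fewer
    // ranges than shards, each range is split further so the total number of
    // ranges is at least smp::count.
    size_t splits_per_range(size_t user_ranges, size_t smp_count) {
        if (user_ranges >= smp_count) {
            return 1;   // already enough ranges to keep every shard busy
        }
        // round up so that user_ranges * splits >= smp_count
        return (smp_count + user_ranges - 1) / user_ranges;
    }
    // Example: 8 shards and 256 ranges -> no extra splitting, 32 ranges per shard.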
In "repair: Use more stream_plan" (commit 2043ffc064), we
switched to do stream while doing checksum instead of do stream only
after checksum pahse is completed. We take a parallelism_semaphore
before we do checksum, if there are more than sub_ranges_to_stream
(1024) ranges, we start a stream_plan and wait for the streaming to
complete (still under the parallelism_semaphore). So at most
parallelism_semaphore (100) stream_plans can be in parallel.
The parallelism_semaphore limits the parallelism of both the checksum and the
streaming plan. However, it is not necessary to have the same
parallelism for both checksum and streaming, because 1) a streaming
operation itself runs in parallel (handling ranges on all shards in
parallel, sending mutations in parallel), and 2) more streaming plans
(in the worst case 100) mean we can write to 100 memtables at the same time
and flush 100 memtables to disk at the same time, which can take a lot of
memory.
With this patch, we only allow one stream plan in flight.
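A minimal sketch of how a single in-flight stream plan can be enforced with a unit semaphore (names other than stream_plan are placeholders for illustration):

    #include <seastar/core/future.hh>
    #include <seastar/core/semaphore.hh>

    // Checksums may still run with high parallelism, but only one stream plan
    // may be built and executed at a time on this shard.
    static seastar::semaphore stream_plan_semaphore(1);

    seastar::future<> stream_differing_ranges() {     // placeholder body
        return seastar::make_ready_future<>();
    }

    seastar::future<> flush_one_stream_plan() {
        return seastar::with_semaphore(stream_plan_semaphore, 1, [] {
            return stream_differing_ranges();
        });
    }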
We currently repair all the ranges in parallel.
1) All the ranges will contend for the parallelism_semaphore. Instead of
processing multiple ranges in parallel and calculating the sub ranges
(which takes memory) for each range in parallel, we can handle the ranges
one by one (see the sketch after this list).
We still have enough parallelism because the checksums are calculated on
all the shards.
2) If for some reason the repair fails, and we handle the ranges one by one, we
can log which ranges were repaired successfully. Next time, we can skip
them. If we start all ranges in parallel, there is a high chance that no single
range is completed because all the ranges are ongoing.
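A sketch of the sequential iteration this implies, using seastar::do_for_each (the range type and repair_one_range are placeholders; include paths vary across seastar versions):

    #include <seastar/core/do_with.hh>
    #include <seastar/core/future-util.hh>
    #include <vector>

    struct token_range_placeholder {};                      // illustrative type

    seastar::future<> repair_one_range(const token_range_placeholder&) {
        return seastar::make_ready_future<>();              // placeholder body
    }

    // Ranges are handled one by one; parallelism still comes from the fact that
    // each range's checksums are computed on all shards.
    seastar::future<> repair_ranges(std::vector<token_range_placeholder> ranges) {
        return seastar::do_with(std::move(ranges), [] (auto& ranges) {
            return seastar::do_for_each(ranges, [] (auto& range) {
                return repair_one_range(range);
            });
        });
    }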
Refs #1912
- Count n out of m ranges the repair is running for (a kind of progress report)
- Make the 'Found differing range' log debug level because there can be millions
of such entries
- Print the failed ranges
In the very beginning, we used a stream_plan for each checksum range.
Later, we changed to use a single stream_plan for all the checksum
ranges. This pushes memory pressure onto streaming, e.g., millions of ranges
in a vector to send over RPC.
To fix, we do checksum and streaming in parallel, and limit the number of
checksum ranges stored in memory.
Fixes #2430
When starting repair, we divided the large token ranges (vnodes) into small
subranges of a desired length (around 100 partitions), and built a huge list
of those subranges - to iterate over them later and compare checksums of
those chunks.
However, building this list up-front is completely unnecessary, and wastes
a lot of memory: In a test with 1 TB of data, as much as 3 gigabytes was
spent on this list. Instead, what we do in this patch is to find the next
chunk in a DFS-like splitting algorithm, using only the token range
midpoint() function (as before). The amount of memory needed for this is
O(logN), instead of O(N) in the previous implementation.
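A simplified sketch of the O(logN) splitting idea (the range type, its midpoint() and the chunk-size check are placeholders; the real code operates on dht token ranges):

    #include <optional>
    #include <vector>

    // Stand-in for a token range; the real code uses the token range's
    // midpoint() function, as before.
    struct range_placeholder {
        long start, end;
        long midpoint() const { return start + (end - start) / 2; }
        bool small_enough() const { return end - start <= 100; } // ~100 partitions
    };

    // DFS over the implicit splitting tree: only the pending right halves along
    // the current path are kept (O(logN) memory) instead of the full O(N) list.
    class subrange_generator {
        std::vector<range_placeholder> _stack;
    public:
        explicit subrange_generator(range_placeholder whole) { _stack.push_back(whole); }
        std::optional<range_placeholder> next() {
            while (!_stack.empty()) {
                auto r = _stack.back();
                _stack.pop_back();
                if (r.small_enough()) {
                    return r;                        // the next chunk to checksum
                }
                auto mid = r.midpoint();
                _stack.push_back({mid, r.end});      // right half, visited later
                _stack.push_back({r.start, mid});    // left half, visited first
            }
            return std::nullopt;                     // whole range exhausted
        }
    };

Each call to next() yields the next small chunk in order, so the caller never needs to materialize the whole list of subranges up front.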
Refs #2430.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
- introduced a "seastarx.hh" header, which does a "using namespace seastar";
- 'net' namespace conflicts with seastar::net, renamed to 'netw'.
- 'transport' namespace conflicts with seastar::transport, renamed to
cql_transport.
- "logger" global variables now conflict with logger global type, renamed
to xlogger.
- other minor changes
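The header amounts to little more than this (a sketch of the idea, not necessarily the exact file contents):

    // seastarx.hh (sketch)
    #pragma once

    namespace seastar {}

    // Make all seastar names visible unqualified, so existing code that relied
    // on the pre-namespaced seastar keeps compiling.
    using namespace seastar;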
We estimate the number of partitions for a given range of a column family
and split the range into sub ranges containing fewer partitions as a
checksum unit.
The estimation is wrong, because we need to count the partitions on all
the shards, instead of only counting the local shard.
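A sketch of summing the per-shard estimates with map_reduce0 (the per-shard estimator shown here is a hypothetical stand-in; the real code consults the column family's sstables on each shard):

    #include <seastar/core/distributed.hh>
    #include <cstdint>
    #include <functional>

    // Hypothetical per-shard estimator.
    inline uint64_t estimate_partitions_on_this_shard() {
        return 0;   // stand-in: the real code asks the shard's sstables
    }

    // Sum the estimates from every shard instead of using only the local one,
    // so the sub-range size calculation reflects the whole node's data.
    template <typename Database>
    seastar::future<uint64_t> estimate_partitions(seastar::distributed<Database>& db) {
        return db.map_reduce0(
            [] (Database&) { return estimate_partitions_on_this_shard(); },
            uint64_t(0),
            std::plus<uint64_t>());
    }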
Fixes #2299
Message-Id: <7876285bd26cfaf65563d6e03ec541626814118a.1493817339.git.asias@scylladb.com>
We have:
auto halves = range.split(midpoint, dht::token_comparator());
We saw a case where midpoint == range.start; as a result, range.split
will assert because range.start is marked non-inclusive, so the
midpoint doesn't appear to be contain()ed in the range - hence the
assertion failure.
Fixes #2148
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Signed-off-by: Asias He <asias@scylladb.com>
Message-Id: <93af2697637c28fbca261ddfb8375a790824df65.1489023933.git.asias@scylladb.com>
"In 7c873f0d (repair: Reduce unnecessary streaming traffic), we optimize
in cases when 1) all the remote nodes has the same checksum and 2) local node
has zero checksum.
In this series, we make the optimization more generec and cover more cases."
* tag 'asias/repair/node_reducer/v3' of github.com:cloudius-systems/seastar-dev:
repair: Reduce unnecessary streaming traffic even more
repair: Add hash specialization for partition_checksum
In 7c873f0d (repair: Reduce unnecessary streaming traffic), we optimized
the cases where 1) all the remote nodes have the same checksum and 2) the local node
has a zero checksum.
In this patch, we make the optimization more generic and cover more cases.
1) With RF = 3 in a 3-node cluster, rm data on node3, then run repair on node2
Before:
INFO 2016-12-09 16:24:31,961 [shard 0] repair - Found differing range (-4091524285777924069, -4086237930244473115]
on nodes {127.0.0.3, 127.0.0.1}, in = {127.0.0.3, 127.0.0.1}, out = {127.0.0.3, 127.0.0.1}
INFO 2016-12-09 16:24:31,963 [shard 0] repair - Found differing range (-609511120964672970, -605253169726090861]
on nodes {127.0.0.1, 127.0.0.3}, in = {127.0.0.1, 127.0.0.3}, out = {127.0.0.1, 127.0.0.3}
INFO 2016-12-09 16:24:31,964 [shard 0] repair - Found differing range (-7655412157560911259, -7652234653747163387]
on nodes {127.0.0.3, 127.0.0.1}, in = {127.0.0.3, 127.0.0.1}, out = {127.0.0.3, 127.0.0.1}
INFO 2016-12-09 16:24:31,965 [shard 0] repair - Found differing range (-4133815130045531703, -4128528774512080749]
on nodes {127.0.0.3, 127.0.0.1}, in = {127.0.0.3, 127.0.0.1}, out = {127.0.0.3, 127.0.0.1}
INFO 2016-12-09 16:24:31,967 [shard 0] repair - Found differing range (-605253169726090861, -600995218487508751]
on nodes {127.0.0.1, 127.0.0.3}, in = {127.0.0.1, 127.0.0.3}, out = {127.0.0.1, 127.0.0.3}
INFO 2016-12-09 16:24:31,968 [shard 0] repair - Found differing range (438510347741343837, 441475345714861354]
on nodes {127.0.0.1, 127.0.0.3}, in = {127.0.0.1, 127.0.0.3}, out = {127.0.0.1, 127.0.0.3}
After:
INFO 2016-12-09 16:30:29,204 [shard 0] repair - Found differing range (-660606535827658284, -656348584589076175]
on nodes {127.0.0.1, 127.0.0.3}, in = {}, out = {127.0.0.3}
INFO 2016-12-09 16:30:29,204 [shard 0] repair - Found differing range (-4234255885181099833, -4228969529647648879]
on nodes {127.0.0.3, 127.0.0.1}, in = {}, out = {127.0.0.3}
INFO 2016-12-09 16:30:29,204 [shard 0] repair - Found differing range (-4228969529647648879, -4223683174114197925]
on nodes {127.0.0.3, 127.0.0.1}, in = {}, out = {127.0.0.3}
INFO 2016-12-09 16:30:29,204 [shard 0] repair - Found differing range (-4223683174114197925, -4218396818580746971]
on nodes {127.0.0.3, 127.0.0.1}, in = {}, out = {127.0.0.3}
INFO 2016-12-09 16:30:29,204 [shard 0] repair - Found differing range (-7728494745277112315, -7725317241463364443]
on nodes {127.0.0.3, 127.0.0.1}, in = {}, out = {127.0.0.3}
INFO 2016-12-09 16:30:29,204 [shard 0] repair - Found differing range (-720217853167807818, -715959901929225709]
on nodes {127.0.0.1, 127.0.0.3}, in = {}, out = {127.0.0.3}
Before, we needed to fetch data from both node 1 and node 3 and send data back to node 1 and node 3, i.e., 2 IN, 2 OUT.
After, we only need to send data to node 3, i.e., 0 IN, 1 OUT.
We save 3X the traffic; with a higher RF, we can save even more.
2) With RF = 3 in a 3-node cluster, rm data on node3, then run repair on node3
Before:
INFO 2016-12-09 16:20:11,448 [shard 0] repair - Found differing range (-8533861887892628919, -8052600134279395253]
on nodes {127.0.0.1, 127.0.0.2}, in = {127.0.0.1}, out = {}
INFO 2016-12-09 16:20:11,465 [shard 0] repair - Found differing range (7190719703944308372, 7692358524564683543]
on nodes {127.0.0.1, 127.0.0.2}, in = {127.0.0.1}, out = {}
INFO 2016-12-09 16:20:11,486 [shard 0] repair - Found differing range (-3305328316052774469, -2671876682129336880]
on nodes {127.0.0.1, 127.0.0.2}, in = {127.0.0.1}, out = {}
INFO 2016-12-09 16:20:11,494 [shard 0] repair - Found differing range (-2190610927722759275, -1305178847032904465]
on nodes {127.0.0.2, 127.0.0.1}, in = {127.0.0.2}, out = {}
INFO 2016-12-09 16:20:11,518 [shard 0] repair - Found differing range (-4747032371925842389, -4070378863644120252]
on nodes {127.0.0.2, 127.0.0.1}, in = {127.0.0.2}, out = {}
INFO 2016-12-09 16:20:11,519 [shard 0] repair - Found differing range (-1137497074548854552, -592479316010344531]
on nodes {127.0.0.1, 127.0.0.2}, in = {127.0.0.1}, out = {}
After:
INFO 2016-12-09 16:29:22,433 [shard 0] repair - Found differing range (67885601051654285, 447405341661896387]
on nodes {127.0.0.2, 127.0.0.1}, in = {127.0.0.2}, out = {}
INFO 2016-12-09 16:29:22,454 [shard 0] repair - Found differing range (-2190610927722759275, -1305178847032904465]
on nodes {127.0.0.2, 127.0.0.1}, in = {127.0.0.2}, out = {}
INFO 2016-12-09 16:29:22,473 [shard 0] repair - Found differing range (2523396860109747637, 3083778975065200884]
on nodes {127.0.0.2, 127.0.0.1}, in = {127.0.0.2}, out = {}
INFO 2016-12-09 16:29:22,474 [shard 0] repair - Found differing range (-3305328316052774469, -2671876682129336880]
on nodes {127.0.0.1, 127.0.0.2}, in = {127.0.0.1}, out = {}
INFO 2016-12-09 16:29:22,487 [shard 0] repair - Found differing range (-4747032371925842389, -4070378863644120252]
on nodes {127.0.0.2, 127.0.0.1}, in = {127.0.0.2}, out = {}
INFO 2016-12-09 16:29:22,493 [shard 0] repair - Found differing range (-1137497074548854552, -592479316010344531]
on nodes {127.0.0.1, 127.0.0.2}, in = {127.0.0.1}, out = {}
This shows that the new, more generic method covers the optimization we had before as well.
A range now alternates between different shards: the first part of the
range goes to shard X, the next to shard X+1, but after a while we go
back to shard X. So we can't do a simple loop between shard_begin and
shard_end.
Fix by using the newly introduced dht::split_range_to_shards.
Use the cf.make_streaming_reader with ranges to simplify the code a bit.
A range is divided into N sub ranges so that each sub range contains 100
partitions. So N depends on the number of partitions in that range. N
can grow unbounded, and the memory usage of the vector holding these sub
ranges can grow unbounded as well.
Limit the max number of sub ranges a range can be divided into.
The downside is that limiting the sub ranges makes us include more
partitions in each checksum.
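A sketch of the clamping (the constant and function names, and the cap value shown, are illustrative only):

    #include <algorithm>
    #include <cstdint>

    // Cap how many sub ranges one vnode range may be split into, so the vector
    // of sub ranges cannot grow without bound; the trade-off is that each
    // checksum then covers more than the desired ~100 partitions.
    constexpr uint64_t max_sub_ranges_per_range = 1024;   // illustrative value

    uint64_t sub_range_count(uint64_t estimated_partitions,
                             uint64_t desired_partitions_per_sub_range) {
        uint64_t n = std::max<uint64_t>(1,
            estimated_partitions / desired_partitions_per_sub_range);
        return std::min(n, max_sub_ranges_per_range);
    }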
Fixes #1917