Recursion takes up space on the stack, which in turn takes up space in
caches, leaving less room for useful data.
In addition, a limit on iteration count can be set higher than a limit
on recursion depth, because iteration is not constrained by stack size.
Recursion also makes flame-graphs hard to analyze, because keep_doing()
frames appear at different levels of nesting in the profile, producing
many short "towers" instead of one big tower.
This change reuses the same counter for limiting iterations that is
used to limit the number of tasks executed by the reactor before
polling. A run-time parameter was added for controlling the task quota.
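A minimal model of the resulting loop (illustrative only, not the
actual reactor code; the queue type and quota value are assumptions):

#include <deque>
#include <functional>

struct reactor_model {
    std::deque<std::function<void()>> _tasks;
    unsigned _task_quota = 256;   // the run-time parameter (value illustrative)

    void run_batch() {
        unsigned quota = _task_quota;
        // Iterative, so the stack stays flat: one frame in the
        // flame-graph and no recursion-depth limit beyond the quota.
        while (!_tasks.empty() && quota--) {
            auto task = std::move(_tasks.front());
            _tasks.pop_front();
            task();
        }
        // ... poll I/O here, then run the next batch ...
    }
};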
Set the RTO (retransmission timeout) according to RFC 6298. We now have
a dynamic RTO instead of the hard-coded 3 seconds, and an exponential
back-off timer for retransmission.
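A sketch of the RFC 6298 arithmetic (illustrative, not this patch's
code; alpha = 1/8, beta = 1/4, K = 4 as the RFC specifies):

#include <algorithm>
#include <chrono>

struct rto_estimator {
    using ms = std::chrono::milliseconds;
    bool first = true;
    ms srtt{0}, rttvar{0}, rto{1000};

    void rtt_sample(ms r) {
        if (first) {                          // RFC 6298 section 2.2
            srtt = r;
            rttvar = r / 2;
            first = false;
        } else {                              // section 2.3
            auto err = srtt > r ? srtt - r : r - srtt;
            rttvar = (3 * rttvar + err) / 4;  // (1-beta)*rttvar + beta*|srtt-r|
            srtt = (7 * srtt + r) / 8;        // (1-alpha)*srtt + alpha*r
        }
        // K * rttvar, with the RFC's 1-second lower bound on the RTO
        rto = std::max(srtt + std::max(ms(1), 4 * rttvar), ms(1000));
    }

    void timed_out() {
        rto = std::min(rto * 2, ms(60000));   // exponential back-off, capped
    }
};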
Tell the host to interrupt less often. This is useful for tx queue
completion, since we do not care much exactly when a tx completes.
Passed testing with memcached and tcp_server.
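One way virtio lets a guest do this is the event-index feature; a
conceptual model of it (per the virtio spec, with the layout simplified
for illustration): the guest publishes the used-ring index at which it
next wants an interrupt, so pushing that index further ahead batches
tx-completion interrupts.

#include <cstdint>

struct avail_ring {
    uint16_t flags;
    uint16_t idx;
    uint16_t ring[256];    // queue size fixed at 256 for the example
    uint16_t used_event;   // host interrupts when used idx passes this
};

// Ask for an interrupt only after 'batch' more completions:
void defer_tx_interrupt(avail_ring& avail, uint16_t last_used_idx,
                        uint16_t batch) {
    avail.used_event = last_used_idx + batch - 1;
}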
When doing tcp rx testing, I saw a lot of retransmissions caused by
delayed ACKs. Our current delayed-ACK algorithm does not comply with
what RFC 1122 suggests.
As described in RFC 1122, a host may delay sending an ACK response by
up to 500 ms, and in a stream of full-sized incoming segments an ACK
must be sent for at least every second segment.
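A sketch of the RFC 1122 rule (illustrative, not this patch's code):
ACK immediately on every second full-sized segment; otherwise arm a
delayed-ACK timer of at most 500 ms.

#include <chrono>

struct delayed_ack {
    static constexpr std::chrono::milliseconds max_delay{500};
    unsigned full_segments_unacked = 0;

    // True if an ACK must go out now; otherwise the caller should
    // (re)arm a timer no longer than max_delay.
    bool on_segment(bool full_sized) {
        if (full_sized && ++full_segments_unacked >= 2) {
            full_segments_unacked = 0;
            return true;    // ACK every second full-sized segment
        }
        return false;       // delay the ACK, up to 500 ms
    }
};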
=== Before ===
[asias@hjpc pingpong]$ go run client-rxrx.go
Bytes Sent(MiB): 100
Total Time(Secs): 322.620879376
Bandwidth(MiB/Sec): 0.30996133974160595
78 2.412385 192.168.66.100 -> 192.168.66.123 TCP 32174 37672 > 10000
[ACK] Seq=2149425323 Ack=1000001 Win=229 Len=32120
79 2.612985 192.168.66.100 -> 192.168.66.123 TCP 1514 [TCP Retransmission]
37672 > 10000 [ACK] Seq=2149425323 Ack=1000001 Win=229 Len=1460
80 2.613131 192.168.66.123 -> 192.168.66.100 TCP 54 10000 > 37672
[ACK] Seq=1000001 Ack=2149457443 Win=29200 Len=0
=== After ===
[asias@hjpc pingpong]$ go run client-rxrx.go
Bytes Sent(MiB): 100
Total Time(Secs): 0.244951095
Bandwidth(MiB/Sec): 408.2447559583271
No retransmission is seen.
Assuming the output_stream buffer size is set to 8K, a sequence of
writes of lengths 128B, 8K and 128B would yield three fragments of
exactly those sizes. This is suboptimal, since the data fits in just
two fragments of up to 8K each. This change makes output_stream yield
an 8K fragment and a 256B fragment in this case.
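An illustrative model of the new packing (not the actual output_stream
code; std::string stands in for the real buffers):

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct packing_stream {
    static constexpr size_t fragment_size = 8192;
    std::vector<std::string> fragments;

    void write(const char* data, size_t len) {
        while (len) {
            if (fragments.empty() ||
                fragments.back().size() == fragment_size) {
                fragments.emplace_back();
            }
            auto& frag = fragments.back();
            auto n = std::min(len, fragment_size - frag.size());
            frag.append(data, n);   // top up the current fragment first
            data += n;
            len -= n;
        }
    }
};

// 128B + 8K + 128B now yields fragments of 8192 and 256 bytes.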
output_stream can be used by only one fiber at a time, so from a
correctness point of view it does not matter whether we set _end before
or after put(); setting it before, however, lets us create one future
less, which is a win.
Commit 405f3ea8c3 changed the reactor so that _network_stack is no
longer default-initialized to POSIX but to nullptr. This caused tests
to segfault, because they do not use the application template which
takes care of configuration.
The fix is to call configure() so that the network stack is set to
POSIX.
We store spans in freelist i if the span's size >= 2^i. However, when
picking a span to satisfy an allocation, we must use the next larger list
if the size is not a power of two, so that we can be sure that all spans on
that list can satisfy that request.
The current code doesn't do that, so it under-allocates, leading to memory
corruption.
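The index math, for illustration (helper names invented, n >= 1
assumed): a span of size n is stored on list floor(log2(n)), but an
allocation of size n must search list ceil(log2(n)), where every span
is guaranteed to be >= n.

#include <cstddef>

unsigned index_of(size_t n) {              // floor(log2(n)): storage list
    return 63 - __builtin_clzll(n);
}

unsigned index_for_allocation(size_t n) {  // ceil(log2(n)): search list
    auto i = index_of(n);
    return (n & (n - 1)) ? i + 1 : i;      // bump unless n is a power of 2
}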
Now that our reactor supports non-file-descriptor notification
mechanisms, switch to using one instead of eventfd when notifying
of virtio interrupts.
This will allow us to change the OSv enable_interrupt() code to
run the handler directly, not in a separate thread, because it
no longer needs to do a sleepable write() to an eventfd file
descriptor.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Improves memaslap UDP posix throughput on my laptop by 40% (from 73k to 105k).
When an item is created, we cache the flags and size part of the
response so that there is no need to call expensive string formatting
in get(). The downside is that this pollutes the "item" object with a
protocol-specific field, but since ASCII is the only protocol supported
now, and it's not like we can't fix it later, I think it's fine.
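The shape of the optimization, for illustration (field names are
invented, not the actual memcached code):

#include <cstdint>
#include <string>

struct item {
    std::string key;
    std::string data;
    std::string ascii_prefix;   // " <flags> <size>\r\n", cached once

    item(std::string k, std::string d, uint32_t flags)
        : key(std::move(k)), data(std::move(d))
        , ascii_prefix(" " + std::to_string(flags) + " "
                       + std::to_string(data.size()) + "\r\n") {}
};

// get() can now emit "VALUE " + key + ascii_prefix + data without any
// number formatting on the hot path.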
It concatenates multiple string-like entities in one go and gives back
an sstring. It does at most one allocation for the final sstring and
one copy per input string. It works with heterogeneous arguments; both
sstrings and constant strings are supported, and string_views are
planned.
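A sketch of the technique (illustrative; plain std::string stands in
for sstring): sum the lengths first, allocate once, then copy each
piece.

#include <string>
#include <string_view>

template <typename... Parts>
std::string concat(const Parts&... parts) {
    std::string result;
    result.reserve((std::string_view(parts).size() + ...));  // one allocation
    (result.append(std::string_view(parts)), ...);           // one copy each
    return result;
}

// e.g. concat("VALUE ", key, "\r\n") builds the result in a single
// allocation.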
The reactor is currently designed around the concept of file descriptors
and polling them. Every source of events is a file descriptor, and those
which are not, like timers, signals and inter-thread notifications, are
"converted" to file-descriptor events using timerfd, signalfd and eventfd
respectively.
But for running OSv with a directly assigned virtio device, we don't
want to use file descriptors for notifications: having each interrupt
signal an eventfd is slow, and also problematic because file
descriptors involve locks, so we can't signal an eventfd at interrupt
time; the existing code works around this with an extra thread.
So this patch refactors the reactor to allow the main loop to be based
not just on file descriptors, but on a different type of abstraction.
We now have a reactor_backend (with epoll and OSv implementations), to
which we add not "file descriptors" but more abstract notions like a
timer, a signal or a "notifier" (similar to an eventfd). The Linux epoll
implementation indeed uses file descriptors internally (with timer
using a timerfd, signal using signalfd and notifier using eventfd)
but the OSv implementation does not use file descriptors.
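A hypothetical shape for that abstraction (names are illustrative, not
the actual Seastar interface):

#include <cstdint>
#include <functional>

class reactor_backend {
public:
    virtual ~reactor_backend() = default;
    virtual void arm_timer(uint64_t when_ns, std::function<void()> cb) = 0;
    virtual void handle_signal(int signo, std::function<void()> cb) = 0;
    virtual void make_notifier(std::function<void()> cb) = 0; // eventfd-like
    virtual void wait_and_dispatch() = 0; // epoll_wait(), or OSv's native wait
};

// The epoll backend implements these with timerfd, signalfd and
// eventfd; the OSv backend needs no file descriptors at all.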
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
From Tomasz:
"There is now a separate DB per core, each serving a subset of the key
space (sharding). From the outside it appears to behave as one DB."
item_key type was changed to include the hash so that we calculate the
hash only once. The same hash is used for sharding and hashing. No
need for store_hash<> option on unordered_set<> any more.
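For illustration (not the actual definitions): the hash is computed
once, stored in the key, and reused both to pick the owning core and
inside that core's hash table.

#include <functional>
#include <string>

struct item_key {
    std::string key;
    size_t hash;
    explicit item_key(std::string k)
        : key(std::move(k)), hash(std::hash<std::string>()(key)) {}
};

unsigned owner_shard(const item_key& k, unsigned smp_count) {
    return k.hash % smp_count;  // sharding and hashing share one value
}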
Some seastar-specific and hashtable-specific stats were moved from the
general "stats" command into "stats hash", which shows per-core
statistics.
Use like this:
engine.at_exit([] {
    std::cout << "so long!\n";
    return make_ready_future<>();
});
All lambdas will be executed when the reactor is stopped, in order, on
the same CPU on which they were registered.
The POSIX stack does not allow one to bind more than one socket to a
given port. The native stack, on the other hand, does. The way services
are set up depends on this: for instance, on the native stack one might
want to start the service on all cores, but on the POSIX stack only on
one of them.
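A hypothetical sketch of the resulting setup choice (the predicate and
function are invented for illustration; the real API may differ):

void start_service(bool stack_allows_multiple_binds, unsigned smp_count) {
    // Native stack: one listener per core; POSIX stack: a single one.
    unsigned listeners = stack_allows_multiple_binds ? smp_count : 1;
    for (unsigned core = 0; core < listeners; ++core) {
        // ... start a listening socket on this core ...
    }
}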
Fixes assert failure during ^C:
#0 0x0000003e134348c7 in raise () from /lib64/libc.so.6
#1 0x0000003e1343652a in abort () from /lib64/libc.so.6
#2 0x0000003e1342d46d in __assert_fail_base () from /lib64/libc.so.6
#3 0x0000003e1342d522 in __assert_fail () from /lib64/libc.so.6
#4 0x0000000000409a7c in boost::intrusive::list_impl<boost::intrusive::mhtraits<timer, boost::intrusive::list_
at /usr/include/boost/intrusive/list.hpp:1263
#5 0x00000000004881cc in iterator_to (this=<optimized out>, value=...) at core/timer-set.hh:71
#6 reactor::del_timer (this=<optimized out>, tmr=tmr@entry=0x60000005cda8) at core/reactor.cc:287
#7 0x00000000004682a5 in ~timer (this=0x60000005cda8, __in_chrg=<optimized out>) at ./core/reactor.hh:974
#8 ~resolution (this=0x60000005cd90, __in_chrg=<optimized out>) at net/arp.hh:86
#9 ~pair (this=0x60000005cd88, __in_chrg=<optimized out>) at /usr/include/c++/4.9.2/bits/stl_pair.h:96
Say which prerequisites to install on Ubuntu 12.04, and how to set up
gcc 4.9 side-by-side with the existing gcc 4.8 (without harming the
existing gcc 4.8 installation).
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Currently a semaphore is used to keep track of free space in the smp
queue, but our semaphore does not guarantee that the order in which
tasks call wait() is the order in which they get access to the
resource. This may cause packet reordering in smp, which is undesirable
for TCP performance. This patch replaces the semaphore with a simple
counter and another queue that holds items which cannot be placed into
the smp queue due to lack of space.
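A minimal model of the replacement (illustrative; std containers stand
in for the real inter-core queue):

#include <cstddef>
#include <deque>
#include <queue>

template <typename T>
struct bounded_sender {
    size_t free_slots;
    std::deque<T> overflow;   // items that didn't fit, in arrival order
    std::queue<T> smp_queue;  // stand-in for the inter-core queue

    explicit bounded_sender(size_t capacity) : free_slots(capacity) {}

    void send(T item) {
        // Going through overflow whenever it is non-empty preserves
        // FIFO order, unlike semaphore::wait().
        if (free_slots == 0 || !overflow.empty()) {
            overflow.push_back(std::move(item));
            return;
        }
        --free_slots;
        smp_queue.push(std::move(item));
    }

    void on_slot_freed() {    // the peer consumed an item
        ++free_slots;
        if (!overflow.empty()) {
            --free_slots;
            smp_queue.push(std::move(overflow.front()));
            overflow.pop_front();
        }
    }
};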
The current code (this will change soon with my reactor patches)
constructs a default (Posix) network stack before reactor::configure()
reassigns it to the requested network stack.
It turns out there is one place where we use the network stack before
calling reactor::configure(), which ends up using the Posix stack even
though we want the native stack - this is both silly and plainly
doesn't work on the OSv setup.
The problem is that app_template.hh tries to configure scollectd before
the engine is started. This calls scollectd::impl::start(), which calls
engine.net().make_udp_channel(). When this happens this early, it
creates a Posix socket...
This patch moves the scollectd configuration to after the engine is
started. This makes sense to me: as far as I understand, scollectd is
all about sending (diagnostic) packets, and it's kind of silly to start
sending packets before starting the machinery that allows us to send
packets.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
[avi: use customary indentation, remove unneeded make_ready_future()]