1. Replace the completion promise<> with a custom deleter class; this
is lighter weight, and we don't really need the destructor to be
executed by the scheduler.
2. Add lots of constructors for composing packets from existing packets,
by appending or prepending packets
3. Over-allocate in some cases to accommodate the common practice of
prepending protocol headers.
The build fails for me like this:
/tmp/ccOUUuiH.ltrans0.ltrans.o: In function `reactor::reactor()':
/home/tgrabiec/src/seastar/build/release/../../reactor.cc:41: undefined reference to `io_setup'
/tmp/ccOUUuiH.ltrans1.ltrans.o: In function `reactor::process_io()':
/home/tgrabiec/src/seastar/build/release/../../reactor.cc:133: undefined reference to `io_getevents'
collect2: error: ld returned 1 exit status
../../build.mk:27: recipe for target 'seastar' failed
The workaround was taken from
https://bugs.launchpad.net/ubuntu/+source/gcc-defaults/+bug/1228201
Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
[avi: move to separate line with comment to justify the ugliness]
Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
Supported:
- basic virtio ring pump
- basic virtio-net driver (most features missing)
- vhost interface (running as a privileged user process, not guest)
Typically one side (read or write) of the eventfd is used within the
framework, and the other side is used by an external process, so two
classes are provided, depending on which side is used in the framework.
The new classes are used with the thread pool.
- If a .then() clause, whose chained function returns a future, throws an
exception, then the returned future will contain the exception. A chain
of .then() clauses will propagate the exception until the end.
- Add a .rescue() clause that can be used to catch the exception
Some operations cannot be performed in a non-blocking or asynchronous way,
so add a thread pool to execute them in.
Currently the thread pool has just one thread.
Otherwise, all of the writes are submitted at once, consuming tons of memory,
and preventing reads from happening in parallel to writes.
Add a semaphore to limit the amount of parallel I/O.
- launch 10,000 concurrent writes
- when any one of these complete, launch a read for the same offset
- compare read/write data
- when all reads complete, terminate
Add integration with the Linux libaio framework:
- an I/O context is initialized
- all asynchronous I/Os request that the kernel notify an eventfd
- a semaphore is used to guard access to the I/O context, so as not to
exceed the maximum parallelism
- an internal function submit_io() is used to submit I/O to the kernel,
returning a future representing completion
- an internal loop running process_io() is used to consume completions
when the eventfd is signalled
This removes the need to create a structure if a future has multiple
return values, and has the nice side effect of removing the specialization
future<void> (replacing it with future<>).
In many cases, we can guess the result of an epoll_wait() before it happens:
- if a read() or write() consumes the entire buffer, a following call
  will likely succeed (and if it consumes only part, a following call
  likely won't)
- after an accept() completes, a write() will likely succeed
Speculatively add these events to events_known; if we mispredict and
fail with EAGAIN, all we need to do is retry.
Reduce calls to epoll_ctl() by allowing epoll events that are not
requested by the user, but have not yet been satisfied by the system,
to remain installed in epoll. We may get a spurious wakeup later, but if
we do, we remember it so that when the user does want the event, it will be
ready without a syscall.