Our queue<T> is a convenient mechanism for passing data between a producer
fiber (a set of consecutive continuations) and a consumer fiber.
However, queue<T> is difficult to use *correctly*. The biggest problem is
how to handle premature stopping: What if one of the two fibers (the reader
or the writer) stops prematurely, and will never read or write any more?
When queue<T> is used naively, the other fiber will just hang indefinitely
while it waits to read from the empty queue, or write to the full queue.
The solution proposed in this patch is a new pipe mechanism, implemented
internally over a queue. pipe<T>() returns two separate objects - a pipe
reader, and a pipe writer. Typically each object is std::move()ed into a
different fiber. When a fiber stops and its captured variables are destroyed,
one end of the pipe is destroyed, and that causes the other end's operations
to return immediately (if the other end was already blocked, it will resume
immediately, and return an exception). This behavior is analogous to
Unix's EOF or broken-pipe behavior.
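The close-detection idea can be sketched with plain standard-library types. This is an illustration of the semantics only, not the Seastar implementation, and all names below are hypothetical: both ends share one state object, so a use count of one tells an end that its peer has been destroyed.

```cpp
#include <memory>
#include <queue>
#include <stdexcept>
#include <utility>

// Hypothetical sketch: shared state between the two pipe ends.
template <typename T>
struct pipe_state {
    std::queue<T> q;
};

template <typename T>
class pipe_writer {
    std::shared_ptr<pipe_state<T>> _s;
public:
    explicit pipe_writer(std::shared_ptr<pipe_state<T>> s) : _s(std::move(s)) {}
    void write(T v) {
        if (_s.use_count() == 1) {          // reader end was destroyed
            throw std::runtime_error("broken pipe");
        }
        _s->q.push(std::move(v));
    }
};

template <typename T>
class pipe_reader {
    std::shared_ptr<pipe_state<T>> _s;
public:
    explicit pipe_reader(std::shared_ptr<pipe_state<T>> s) : _s(std::move(s)) {}
    // Returns false on "EOF": writer destroyed and queue drained.
    bool read(T& out) {
        if (_s->q.empty()) {
            if (_s.use_count() == 1) {
                return false;               // writer gone: EOF, not a hang
            }
            throw std::runtime_error("would block"); // real code suspends here
        }
        out = std::move(_s->q.front());
        _s->q.pop();
        return true;
    }
};

template <typename T>
std::pair<pipe_reader<T>, pipe_writer<T>> make_pipe() {
    auto s = std::make_shared<pipe_state<T>>();
    return { pipe_reader<T>(s), pipe_writer<T>(s) };
}
```

In the real, asynchronous version the "would block" case suspends the fiber instead of throwing, and destroying either end wakes the blocked peer.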
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Otherwise the tester may crash if the _instances destructor is called when
the thread responsible for the allocation (which the tester spawned to run
seastar in) is no longer running.
There are many situations in which we would like to make sure a directory
exists. We can do that by creating the directory we want, and just ignoring
the relevant error.
It is a lot of code though, and I believe it is an idiom common enough to exist
on its own.
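The idiom being factored out looks roughly like this in plain POSIX terms (a sketch, not the Seastar version, which is asynchronous; the function name here is illustrative): create the directory and treat "already exists" as success.

```cpp
#include <cerrno>
#include <string>
#include <sys/stat.h>
#include <system_error>

// Illustrative sketch: ensure a directory exists by creating it and
// ignoring only the "already exists" error; anything else is fatal.
void touch_directory(const std::string& name) {
    if (::mkdir(name.c_str(), 0755) == -1 && errno != EEXIST) {
        throw std::system_error(errno, std::generic_category(), name);
    }
}
```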
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
wait() with a timeout takes a reference to an entry, but when the
circular_buffer is resized the entry may be moved, so a freed instance
will be accessed on timeout. Fix by using std::list instead.
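The difference can be sketched in plain C++ (illustrative only): std::list guarantees that insertion never relocates existing elements, while a contiguous growable buffer such as circular_buffer may reallocate and move them, leaving any held reference dangling.

```cpp
#include <list>

// Sketch of why std::list is safe here: a reference held across
// later insertions (as wait() holds one until the timeout fires)
// stays valid, because list nodes are never moved by growth.
int stable_reference_demo() {
    std::list<int> waiters;
    waiters.push_back(42);            // the blocked waiter's entry
    int& entry = waiters.front();     // wait() keeps a reference to it
    for (int i = 0; i < 1000; ++i) {
        waiters.push_back(i);         // growth never relocates list nodes
    }
    return entry;                     // still valid; a resized contiguous
                                      // buffer could have moved it away
}
```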
The thread switching code assumed that we will always switch out of a
thread due to being blocked on an unavailable future. This allows
the code to store the blocked thread's context in the synthetic
continuation chained to that future (which switches back to that thread).
That assumption failed in one place: when we create a thread from within a
thread. In that case we switch to the new thread immediately, but forget
all about the old thread. We never come back to the old thread, and anything
that depends on it hangs.
Fix by creating a linked list of active thread contexts. These are all
threads that have been "preempted" by the act of creating a new thread,
terminated by the main, unthreaded, reactor context. This gives us a place
to store those threads, so we can come back to them and continue where we
left off.
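The bookkeeping can be sketched as a minimal linked list (all names hypothetical; the real contexts also carry stacks and jump buffers):

```cpp
// Sketch: each thread context records the context it preempted,
// forming a chain that terminates at the main reactor context.
struct context {
    context* link = nullptr;   // the context this one preempted
};

context reactor_ctx;           // the main, unthreaded reactor context
context* current = &reactor_ctx;

// Creating a thread switches to it immediately, but first records
// the preempted context so we can come back to it later.
void switch_to_new_thread(context& c) {
    c.link = current;
    current = &c;
}

// When a thread finishes (or blocks), resume whoever it preempted.
void thread_done() {
    current = current->link;
}
```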
Reported by Pekka.
Instead of issuing a system call for every aio, wait for them to accumulate,
and issue them all at once. This reduces syscall count, and allows the kernel
to batch requests (by plugging the I/O queues during the call). A poller is
added so that requests are not delayed too much.
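The accumulate-then-flush pattern, stripped of the actual aio machinery (a sketch, with a counter standing in for the io_submit() call):

```cpp
#include <vector>

// Sketch of the batching pattern: submit() only queues a request;
// poll(), run periodically by the reactor's poller, flushes the whole
// batch with what would be a single io_submit() system call.
struct aio_batcher {
    std::vector<int> pending;   // queued requests (stand-in type)
    int syscalls = 0;           // counts batched submissions

    void submit(int req) {
        pending.push_back(req); // no system call here
    }

    void poll() {
        if (!pending.empty()) {
            ++syscalls;         // one "io_submit()" for the whole batch
            pending.clear();
        }
    }
};
```

The poller bounds the added latency: a queued request waits at most until the next poll, instead of being delayed indefinitely.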
Reviewed-by: Pekka Enberg <penberg@cloudius-systems.com>
For some reason, I added an fsync call when the file underlying the
stream gets truncated. That happens when flushing a file whose
size isn't aligned to the requested DMA buffer size.
Instead, fsync should only be called when closing the stream, so this
patch changes the code to do that.
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
When a promise that still tracks its future is destroyed, but after
future::get() or future::then() was called, the promise thinks it was
destroyed before generating a value, and fails an assertion.
Fix by detaching the promise from the future as soon as get() or then()
removes the value.
Following std::async(), seastar::async(func) causes func to be executed
in a seastar thread, where it can block using future<>::get(). Whatever
func returns is converted to a future, and returned as async()'s return
value.
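Since the contract deliberately mirrors std::async, a plain standard-library example shows the same shape (this is not Seastar code, just an illustration of the calling convention):

```cpp
#include <future>

// Like seastar::async: the callable runs in a context where it may
// block freely, and whatever it returns surfaces through a future.
int run_blocking_task() {
    auto f = std::async(std::launch::async, [] {
        return 6 * 7;          // free to block inside here
    });
    return f.get();            // the value comes back via the future
}
```

In Seastar the callable would block with future<>::get() inside a seastar thread rather than an OS thread, but the shape of the API is the same.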
Sometimes remote data has to be copied to the local cpu, but if the data
is already local the copy can be avoided. Introduce a helper function that
moves or copies the data depending on the origin cpu.