This patch enables xen event channels. It also creates the placeholder for the
kernel evtchns for when we move to OSv.
The main problem with this patch is that evtchn::pending() can return more than
one evtchn, so what I am doing here is technically wrong. We should probably
call keep_doing() in pending() itself, and have it store references to futures
corresponding to the possible event channels, which would then be made ready.
I am, however, having a hard time coding this, since it's still unclear how,
once the future is consumed, we would generate the next one.
Please note: all of this is moot if we disable "split event channels", which
can be done by masking that feature off, in case it is even available. In that
case, only one event channel is notified, and when it is ready we process both
tx and rx. This is yet another reason why I haven't insisted on fixing this
properly.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
This patch creates a seastar-enabled version of the xen gntalloc device.
TODO: grow the table dynamically, and fix the index selection algorithm; it
shouldn't just always bump the index by 1.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
This patch enables interaction with xenstore. Since OSv now fakes the
presence of libxenstore, the code is the same for userspace and the kernel.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Rewrite http connection termination management to support the
various cases dictated by the HTTP spec: client-side connection
close, server-side connection close, and header-specified connection
close.
This is required for instantiating an sstring with the constructor
basic_sstring(initialized_later, size_t size).
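A minimal sketch of the pattern (toy type and names are illustrative, not Seastar's actual implementation): the tag type lets a caller allocate a string of known size without value-initializing contents it is about to overwrite anyway.

```cpp
#include <cstddef>
#include <cstring>
#include <memory>

// Toy sketch of the initialized_later idiom: the tag selects a constructor
// that allocates storage of the requested size but leaves it uninitialized,
// on the expectation that the caller fills it in immediately afterwards.
struct initialized_later {};

class toy_sstring {
    std::unique_ptr<char[]> _buf;
    std::size_t _len;
public:
    toy_sstring(initialized_later, std::size_t size)
        : _buf(new char[size]), _len(size) {}   // contents deliberately uninitialized
    char* data() { return _buf.get(); }
    std::size_t size() const { return _len; }
};
```

The caller then fills the buffer directly, e.g. with a memcpy() from a packet or file buffer, avoiding a redundant zero-fill pass.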
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
when_all(f1, f2) returns a future that becomes ready when all input futures
are ready. The return value is a tuple with all input futures, so the values
and exceptions can be accessed.
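A blocking analogue built on std::future can illustrate the return shape (Seastar's when_all is non-blocking; this toy only shows why returning a tuple of futures, rather than unwrapped values, keeps exceptions accessible per input):

```cpp
#include <future>
#include <stdexcept>
#include <tuple>
#include <utility>

// Toy, blocking analogue of when_all(f1, f2): wait for both inputs, then
// return them as a tuple of futures so each can be inspected individually
// for either a value or an exception.
template <typename T1, typename T2>
std::tuple<std::future<T1>, std::future<T2>>
when_all_blocking(std::future<T1> f1, std::future<T2> f2) {
    f1.wait();
    f2.wait();
    return {std::move(f1), std::move(f2)};
}
```

If one input fails, the other's value is still retrievable from the tuple, which is the point of not unwrapping.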
Unlike future::then(), which unwraps the value, then_wrapped() keeps it
wrapped in a future<>, so if it is exceptional, it can still be accessed.
This is similar to the proposed std::future::then(), so we should later
rename it to match (and rename the existing future::then() to future::next()).
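The difference can be sketched with a blocking toy on std::future (Seastar's real then_wrapped() is non-blocking): the continuation receives the ready future itself, not its unwrapped value, so an exceptional result can be examined instead of propagating.

```cpp
#include <future>
#include <stdexcept>
#include <utility>

// Toy, blocking analogue of then_wrapped(): wait for the future, then hand
// the future itself to the continuation, whether it holds a value or an
// exception. A then()-style API would instead call f.get() and rethrow.
template <typename T, typename Func>
auto then_wrapped_blocking(std::future<T> f, Func func) {
    f.wait();                   // make the future ready
    return func(std::move(f));  // continuation decides how to consume it
}
```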
It did not properly handle the case where the target promise's future dies
without a callback having been installed, or where the future was never
obtained at all. The mishandling of the former case was causing httpd to
abort on SMP.
Summary:
distributed<my_service> dist;
dist.start() - constructs my_service on all cpus
dist.stop() - destroys previously constructed instances
dist.invoke(cpu, &my_service::method, args...) - run method on one cpu
dist.invoke_on_all(&my_service::method, args...) - run method on all cpus
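The API shape above can be sketched as a single-threaded toy, with "cpus" reduced to vector slots (names mirror the summary; the real distributed<> constructs one instance per cpu and is asynchronous):

```cpp
#include <utility>
#include <vector>

// Toy, single-threaded sketch of the distributed<> interface summarized
// above. Each "cpu" is just an element of a vector of service instances.
template <typename Service>
class toy_distributed {
    std::vector<Service> _instances;
public:
    void start(unsigned n_cpus) {               // construct one instance per cpu
        _instances.resize(n_cpus);
    }
    void stop() {                               // destroy all instances
        _instances.clear();
    }
    template <typename Ret, typename... MArgs, typename... Args>
    Ret invoke(unsigned cpu, Ret (Service::*method)(MArgs...), Args&&... args) {
        return (_instances[cpu].*method)(std::forward<Args>(args)...);
    }
    template <typename Ret, typename... MArgs, typename... Args>
    void invoke_on_all(Ret (Service::*method)(MArgs...), Args&&... args) {
        for (auto& inst : _instances) {
            (inst.*method)(args...);            // args copied per instance
        }
    }
};
```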
"Often, a function we wish to execute on another cpu will not be able to
complete immediately. This patchset allows it to return a future; the caller
will not be resumed until that future is ready."
We can avoid extra allocation and chaining by linking the current
future's promise with the target promise's future, as if the target
promise was moved into the current future's promise.
keep_doing() keeps chaining futures until it stops, usually never, resulting
in a de-facto memory leak (even though all the memory is still reachable).
Fix by avoiding the chaining and re-using the same promise over and over again.
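A rough synchronous analogue of the fix (not the actual Seastar code): the repeated work becomes a loop over one reused piece of state, rather than a chain that allocates a fresh continuation link per iteration.

```cpp
#include <functional>

// Toy illustration: keep_doing as a loop. One stack frame and one state
// object are reused on every pass, so nothing accumulates per iteration --
// unlike naive future chaining, which adds a reachable node each time.
inline void keep_doing_sync(const std::function<bool()>& action) {
    while (action()) {
        // intentionally empty: all work happens in action()
    }
}
```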
Because memcpy() is declared by gcc with the nonnull attribute, gcc
assumes that ptr != null once it is passed into memcpy() (even with a size
of zero). As a result it elides the null pointer check in ::free(), and
calls memory::free() directly, which does not expect a null pointer.
Fix by only calling memcpy() when ptr is non-null.
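The shape of the fix can be sketched like this (function and parameter names are hypothetical, not the actual tree's code):

```cpp
#include <algorithm>
#include <cstdlib>
#include <cstring>

// Sketch of the fix: guard the memcpy() call. An unconditional call, even
// with size 0, lets gcc use memcpy's nonnull attribute to infer ptr != null
// and then delete a later null check on ptr.
void* grow_alloc(void* ptr, std::size_t old_size, std::size_t new_size) {
    void* p = std::malloc(new_size);
    if (ptr) {                                            // the guard is the fix
        std::memcpy(p, ptr, std::min(old_size, new_size));
        std::free(ptr);
    }
    return p;
}
```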
Allow functions run on another cpu to return a future. In that case, the
result (or exception) is only returned when the future is resolved.
Note this has the potential for livelocks:
A: smp::submit_to(B, [] {
       ...
       return smp::submit_to(A, [] {
           ...
       });
   });
If this and its mirror image (B sending to A) happen concurrently, and if
both the A->B and B->A queue become full, the system will not be able to
make forward progress. This can be fixed by making the inner submit_to()
use a different queue.