The result is not used for anything, and I am not sure what it could be
used for, as it carries little (write) to no (flush) information.
So I went ahead and simplified it to future<>, which is easier to
return from places that expect future<>.
Allow the memory manager to call us back requesting a reclaim. Push
the task to the front of the queue, so we don't run out of memory waiting
for it to fire.
Allow memory users to declare methods of reclaiming memory (reclaimers),
and allow the main loop to declare a safe point for calling these reclaimers.
The memory manager will then schedule calls to reclaimers when memory runs
low.
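The registration mechanism might be sketched as follows; the names (memory_manager, register_reclaimer, reclaim) are illustrative, not Seastar's actual API:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch of the reclaimer mechanism described above.
class memory_manager {
    // Each reclaimer returns the number of bytes it managed to free.
    std::vector<std::function<std::size_t()>> _reclaimers;
public:
    // Memory users declare a method of reclaiming memory.
    void register_reclaimer(std::function<std::size_t()> r) {
        _reclaimers.push_back(std::move(r));
    }
    // Called from a safe point in the main loop when memory runs low.
    std::size_t reclaim() {
        std::size_t freed = 0;
        for (auto& r : _reclaimers) {
            freed += r();
        }
        return freed;
    }
};
```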
The idea is that only one thread opens the listen socket and runs accept().
Other threads emulate listen()/accept() by waiting for a connected
socket from the main thread, which distributes connected sockets
in a round-robin pattern. The patch introduces new specializations
of server_socket_impl and network_stack: posix_ap_server_socket_impl
and posix_ap_network_stack, respectively. _ap_ stands for auxiliary processor.
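The distribution policy itself is simple; a sketch of the selector the main thread might use (the name round_robin_distributor is hypothetical, not the patch's actual code):

```cpp
#include <cassert>
#include <cstddef>

// Illustrative round-robin selection: the main thread picks the target
// cpu for each accepted connection in turn.
class round_robin_distributor {
    std::size_t _next = 0;
    std::size_t _cpu_count;
public:
    explicit round_robin_distributor(std::size_t cpu_count)
        : _cpu_count(cpu_count) {}
    // Returns the cpu that should receive the next connected socket.
    std::size_t next_cpu() {
        return _next++ % _cpu_count;
    }
};
```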
Sometimes we need to know if we are running on the main thread during engine
initialization, before engine._id is available. This function will be
used for that.
Basically, wrap stat() in _thread_pool, as it might block
waiting for metadata to be read from the underlying device.
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Add a compile-time option, DEFAULT_ALLOCATOR, to use the existing
memory allocator (malloc() and friends) instead of redefining it.
This option is a workaround needed to run Seastar on OSv.
Without this workaround, what seems to happen is that some code compiled
into the kernel (notably, libboost_program_options.a) uses the standard
malloc(), while inline code compiled into Seastar uses the Seastar free()
to try and free that memory, resulting in a spectacular crash.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
With N3778, the compiler can provide us with the size of the object,
so we can avoid looking it up in the page array. Unfortunately, this is
only implemented in clang at the moment.
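The mechanism can be demonstrated with a class-scope sized operator delete, which receives the object size from the compiler at the delete site; N3778 extends this to the global operator delete. This is a sketch for illustration (last_deleted_size is just for demonstration), not the allocator's actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Records the size the compiler passed to the last sized delete.
static std::size_t last_deleted_size = 0;

struct widget {
    long a, b;
    static void* operator new(std::size_t n) {
        return ::operator new(n);
    }
    // Sized form: the compiler supplies n at the delete site, so an
    // allocator need not look the size up in its own metadata.
    static void operator delete(void* p, std::size_t n) {
        last_deleted_size = n;
        ::operator delete(p);
    }
};
```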
Instead of rounding up to a power-of-two, have four equally spaced
regions between powers of two. For example:
1024
1280 (+256)
1536 (+256)
1792 (+256)
2048 (+256)
2560 (+512)
3072 (+512)
3584 (+512)
4096 (+512)
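Rounding a request up to the nearest class under this scheme can be sketched as below; the real allocator's minimum size and exact constants may differ:

```cpp
#include <cassert>
#include <cstddef>

// Round n up to the nearest size class: four equally spaced classes
// between consecutive powers of two.
static std::size_t object_size(std::size_t n) {
    std::size_t p = 1;
    while (p * 2 < n) {   // p = largest power of two below n
        p *= 2;
    }
    std::size_t step = p / 4;
    if (step == 0) {
        step = 1;         // degenerate case for tiny sizes
    }
    // Round (n - p) up to a multiple of step.
    return p + step * ((n - p + step - 1) / step);
}
```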
Allocate small objects within spans, minimizing waste.
Each object size class has its own pool and its own freelist. On overflow,
free objects are pushed into their spans; if a span becomes completely free,
it is returned to the main free list.
Currently, completion processing starts during object creation, but since
all objects are created by the main thread, they all run on the same CPU,
which is incorrect. This patch starts completion processing on the correct CPU.
Since we have lots of queues, we need an efficient queue structure,
especially for movable types. libstdc++'s std::deque is quite hairy,
and boost's circular_buffer_space_optimized uses assignments instead of
constructors; assignments are both slower and less widely available
than constructors.
This patch implements a growable circular buffer for these needs.
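A minimal sketch of the idea: a circular buffer whose capacity is a power of two (so index wrapping is a mask), growing by move-constructing elements into new storage and destroying the sources, rather than assigning. Simplified (not exception-safe, not copyable) and not the actual Seastar circular_buffer:

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <utility>

template <typename T>
class ring_buffer {
    T* _data;
    std::size_t _cap;          // always a power of two
    std::size_t _begin = 0;    // monotonically increasing; masked on access
    std::size_t _end = 0;
    T& slot(std::size_t i) { return _data[i & (_cap - 1)]; }
    void grow() {
        std::size_t ncap = _cap * 2;
        T* ndata = static_cast<T*>(::operator new(ncap * sizeof(T)));
        std::size_t n = 0;
        for (std::size_t i = _begin; i != _end; ++i, ++n) {
            new (&ndata[n]) T(std::move(slot(i)));  // construct, don't assign
            slot(i).~T();                           // destroy the source
        }
        ::operator delete(_data);
        _data = ndata;
        _cap = ncap;
        _begin = 0;
        _end = n;
    }
public:
    ring_buffer()
        : _data(static_cast<T*>(::operator new(4 * sizeof(T)))), _cap(4) {}
    ring_buffer(const ring_buffer&) = delete;
    ~ring_buffer() {
        for (std::size_t i = _begin; i != _end; ++i) {
            slot(i).~T();
        }
        ::operator delete(_data);
    }
    void push_back(T v) {
        if (_end - _begin == _cap) {
            grow();
        }
        new (&slot(_end)) T(std::move(v));
        ++_end;
    }
    T pop_front() {
        T v = std::move(slot(_begin));
        slot(_begin).~T();
        ++_begin;
        return v;
    }
    std::size_t size() const { return _end - _begin; }
};
```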
Here, transferring is defined as moving an object to a new location
(either via a move or copy constructor) and destroying the source. This
is useful when implementing containers.
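Transferring in that sense can be sketched as a two-step helper (the name transfer() here is illustrative): move-construct the object at the destination, then destroy the source.

```cpp
#include <cassert>
#include <new>
#include <utility>

// Transfer an object from one raw-storage location to another:
// construct at the destination via the move constructor, then
// destroy the source.
template <typename T>
void transfer(T* from, T* to) {
    new (to) T(std::move(*from));  // construct at the new location
    from->~T();                    // destroy the source
}
```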
Test case (added in the next patch):
    promise<> p1;
    promise<> p2;
    auto f = p1.get_future().then([f = std::move(p2.get_future())] () mutable {
        return std::move(f); // this future will fail
    }).then([] {
        // never reached, that's ok
    });
    p1.set_value();
    p2.set_exception(std::runtime_error("boom"));
    // f should get resolved eventually with error, but was not
The callback passed to it will be executed when the future is resolved,
successfully or not. The returned future will mimic the state of the
target future.