Each "poller" registers a non-blocking callback which is then called in
every iteration of a reactor's main loop.
Each "poller"'s callback returns a boolean: if TRUE then a main loop is allowed to block
(e.g. in epoll()).
If any of registered "pollers" returns FALSE then reactor's main loop is forbidded to block
in the current iteration.
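Schematically, the contract looks like this (an illustrative sketch, not the actual reactor code):

    #include <functional>
    #include <vector>

    class reactor {
        // Each poller callback returns true if the loop may block.
        std::vector<std::function<bool()>> _pollers;
    public:
        void register_poller(std::function<bool()> cb) {
            _pollers.push_back(std::move(cb));
        }
        void run_once() {
            bool may_block = true;
            for (auto& poll : _pollers) {
                may_block &= poll();  // a single false forbids blocking
            }
            if (may_block) {
                // safe to block, e.g. in epoll(), until the next event
            }
            // otherwise proceed immediately to the next iteration
        }
    };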
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
The local variable used to read the ports won't be valid after we return from
the function. Moving it to be an instance member is not ideal, but it works if
we don't unmask the ports until we're ready to signal them all.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
If we don't have split channels, we need to delete the relevant property.
Because xs_rm() returns true even if the property does not exist, deleting
all of them unconditionally won't affect the transaction. Therefore we don't
need to do any conditional test.
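For illustration (the xenstore paths here are made-up examples; xs_rm() and the transaction calls are the real libxenstore API):

    #include <xenstore.h>

    // Remove the split-channel keys unconditionally: xs_rm() succeeds
    // even when the path is absent, so no conditional test is needed.
    xs_transaction_t t = xs_transaction_start(h);
    xs_rm(h, t, "device/vif/0/event-channel-tx");  // ok even if missing
    xs_rm(h, t, "device/vif/0/event-channel-rx");  // ok even if missing
    xs_transaction_end(h, t, false);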
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
If there is some error opening the xenstore - for instance, if we run
without privileges - we should bail out, or we will segfault later.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
We are adding everything we read into the features array. Because the
destructor removes everything in the features list, we end up removing more
than we should: things like the mac address, handle, etc. should never be
deleted.
This is not a problem for OSv because usually, after the destructor is called,
the whole guest is down. But for userspace, the network card is left there,
and will cease to work if we delete too much.
After we use the _features array for its original intent, it becomes
redundant with the features nack.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
This is not required for OSv, but is required for userspace operation.
It won't work without it.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Glauber says:
"This patch yields a small performance boost. It is not complete, since the rest
of the performance work is still missing since half of that is in OSv.
But more importantly, it now works on AWS."
When the backend advertises "feature-rx-copy", the frontend should register for
"request-rx-copy". The local hypervisor seems to be forgiving about this, but
the one in AWS is not, doubly so.
First, it doesn't recognize these as the same. And second, it refuses to
connect the backend if this feature is not advertised by the frontend.
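A sketch of the handshake (the paths and helpers like frontend_path() are assumptions, not this patch's code; xs_read()/xs_write() are the real libxenstore calls):

    // If the backend advertises feature-rx-copy, write request-rx-copy
    // into the frontend's own xenstore directory before connecting.
    unsigned int len;
    char* v = static_cast<char*>(xs_read(h, t, backend_path("feature-rx-copy"), &len));
    if (v && v[0] == '1') {
        xs_write(h, t, frontend_path("request-rx-copy"), "1", 1);
    }
    free(v);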
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
The ring processing is almost the same for both rx and tx, with the exception
of the core of the action. We can actually unify them nicely with some use of
template programming.
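Something along these lines (an illustrative sketch; types and names are assumptions):

    // The shared ring walk is written once; the rx/tx-specific core of
    // the action is passed in as a callable.
    template <typename Ring, typename Handler>
    void process_ring(Ring& ring, Handler&& handle_one) {
        auto prod = ring.sring->rsp_prod;  // snapshot the producer index
        for (auto i = ring.rsp_cons; i != prod; ++i) {
            handle_one(ring[i]);           // rx- or tx-specific work
        }
        ring.rsp_cons = prod;
    }

    // usage: process_ring(_rx_ring, [&] (auto& rsp) { deliver(rsp); });
    //        process_ring(_tx_ring, [&] (auto& rsp) { complete(rsp); });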
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
There are two things we can do that will lead to fewer interrupts being sent.
The first is to re-read the new rsp_cons value at the end of every iteration.
If the backend produces more frames in the meantime, we'll be able to process
them in the same round, without getting another interrupt.
The other is to set rsp_event only after all the frames are processed.
As a matter of fact, the tx and rx rings each did one of these, but not the
same one. The next patch will unify the ring code to avoid problems like that
in the future.
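The pattern both rings should end up with looks roughly like this (field names follow the usual Xen shared-ring layout; the sketch is illustrative):

    for (;;) {
        auto prod = ring.sring->rsp_prod;      // re-read on every pass, so
        while (ring.rsp_cons != prod) {        // frames produced meanwhile
            process(ring[ring.rsp_cons++]);    // are handled this round
        }
        // Only arm the event once everything seen so far is processed.
        ring.sring->rsp_event = ring.rsp_cons + 1;
        if (ring.sring->rsp_prod == ring.rsp_cons) {
            break;  // nothing raced in after we armed rsp_event
        }
    }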
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
A network device has to be available when the network stack is created, but
sometimes network device creation has to wait for device initialization
by another cpu. This patch makes it possible to delay network stack
creation until the network device is available.
Configure all smp queues before calling engine.configure() so that
engine.configure() may use the submit_to() API. Note that messages will
still be processed only after engine.run() is executed.
From Raphael:
"Flashcache is basically an extension of memcache where a flash device is used to achieve a considerably higher cache hit ratio (~130x better).
Flashcache major additions:
-----
* The flashcache device is divided evenly among the CPUs, and each portion is then assigned to a per-cpu cache.
* Let me readily mention that whole items aren't stored on disk, only the data from items. Keys always remain stored in memory.
* Each item now has a state field that describes its status.
* Each item can be in any of the following states:
- MEM (Item is stored only in memory)
- TO_MEM_DISK (Transition from MEM to MEM_DISK state)
- MEM_DISK (Item is stored both in memory and on disk)
- DISK (Item is stored only on disk)
- ERASED (Item was invalidated)
* Algorithm added to balance items between MEM and MEM_DISK state.
* Three LRU lists were added to keep track of MEM, MEM_DISK and DISK items.
* When an item is ERASED, it shouldn't be in any of the lists above.
* When the working set fits in memory, items should only be stored in the MEM and MEM_DISK lists.
* Upon a SET request, the ratio of MEM_DISK items (MEM_DISK / (MEM + MEM_DISK)) is taken into account to decide whether or not an LRU item should be moved to the MEM_DISK state (which consists of scheduling an LRU item to be stored on disk, with its data field remaining intact).
* Before an item is scheduled to be moved to the MEM_DISK state, it's set to the transition state TO_MEM_DISK. Why? Basically to handle client requests on transitioning items. Example: for GET requests, we only provide the data given that the data remains intact.
* Upon memory pressure, a specialized reclaiming function is called to do the following: get an LRU item from the MEM_DISK list that has no readers (i.e. refcount is zero); remove it from the MEM_DISK list; erase the in-memory data; set its state to DISK. These steps are executed repeatedly until the requested amount of memory has been reclaimed (see the sketch after this list).
* Upon a GET request on a DISK item, a per-item semaphore is used to guarantee that the first request proceeds with loading the data from the flash device, while the others wait for the process to complete.
* The ERASED state is used to inform flashcache that an item was invalidated and thus shouldn't be moved to any list, e.g. an invalidation request could arrive while an item's data is being loaded from disk.
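A sketch of the states and the reclaim loop described above (all names are assumptions, not the patch's actual code):

    enum class item_state { MEM, TO_MEM_DISK, MEM_DISK, DISK, ERASED };

    // Called under memory pressure: evict in-memory copies of items that
    // are already on disk until 'goal' bytes have been reclaimed.
    size_t reclaim(size_t goal) {
        size_t freed = 0;
        while (freed < goal) {
            item* it = mem_disk_lru.pop_lru_with_no_readers();  // refcount == 0
            if (!it) {
                break;                         // nothing evictable right now
            }
            freed += it->erase_memory_data();  // keep the on-disk copy intact
            it->state = item_state::DISK;
        }
        return freed;
    }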
Result:
-----
Performance is worse (unfortunate, but also expected, because of the time spent waiting for items to be loaded), but the hit ratio is considerably better, as also expected. I'm thinking of adding a new item state called LOADED: when an item's data is loaded from disk, mark the item as LOADED, insert it into the MEM list, and schedule an item from the MEM list to be moved to the MEM_DISK list. That may bring a good performance benefit, though I have no data to back up the claim. For the time being, a loaded item is directly moved to the MEM_DISK list (as its data is already stored on disk), where it could then be quickly evicted upon memory pressure.
$ sudo ./memcached --stats --device /dev/sdb --mem 600M (POSIX stack)
* MEMCACHE - TCP:
$ memaslap -T 4 -s 127.0.0.1 -t 60s -c 256
servers : 127.0.0.1
threads count: 4
concurrency: 256
run time: 60s
windows size: 10k
set proportion: set_prop=0.10
get proportion: get_prop=0.90
cmd_get: 6310281
cmd_set: 701266
get_misses: 1783262
written_bytes: 1216572735
read_bytes: 5039263122
object_bytes: 762977408
Run time: 60.0s Ops: 7011547 TPS: 116837 Net_rate: 99.4M/s
* FLASHCACHE - TCP:
$ memaslap -T 4 -s 127.0.0.1 -t 60s -c 256
servers : 127.0.0.1
threads count: 4
concurrency: 256
run time: 60s
windows size: 10k
set proportion: set_prop=0.10
get proportion: get_prop=0.90
cmd_get: 3067576
cmd_set: 340959
get_misses: 13576
written_bytes: 591452430
read_bytes: 3392472330
object_bytes: 370963392
Run time: 60.0s Ops: 3408535 TPS: 56804 Net_rate: 63.3M/s"
See RFC5681: 3.2. Fast Retransmit/Fast Recovery for more details.
"""
In addition, a TCP receiver SHOULD send an immediate ACK when the
incoming segment fills in all or part of a gap in the sequence space.
"""
From Glauber:
"Before those patches, Xen was not surviving a full round of wrk. Now it
survives a 20min one. That doesn't mean it is devoid of bugs: I am still seeing
some warnings being generated, so there is definitely more work to do. But at
least it doesn't crash and is stable.
Performance-wise, Xen+OSv fares at 32k req/sec on my laptop, where lwan does
45k"
The xen protocol works by filling positions in a circular ring. The
indexes become free to be used again when they are processed by the other side.
There is a problem, however: those indexes must be sequential, because all the
two sides share is a produced/consumed index. But there are situations in which
we call get_index() - which produces an index X - but the .then() clause
schedules some other caller of send() to run in our place. That one, in turn,
can call get_index(), then create a packet with index X + 1 that will be put in
the ring before the packet with index X.
If the other end processes that packet very fast, it will respond saying "I
have processed packets up to X + 1". We will act on that by marking X as
processed as well - since it comes before X + 1 - and when X is really
processed, chaos will ensue.
The solution is to have the semaphore count how many free slots we
have in the ring. Once we guarantee that the current caller has space, we then
compute get_index() inside the .then() clause. This works well because the
indexes are all sequential anyway.
For the same reason, we are actually able to remove the queue and resort to a
simple counter. Once we know there is room, we just take the next index,
whatever it may be.
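In sketch form (names like _slots and push_to_ring are illustrative, not this patch's code):

    // _slots counts free ring entries; it is signaled when the backend
    // consumes entries. The index is taken only inside the continuation,
    // which runs without interleaving, so indexes enter the ring in order.
    future<> send(packet p) {
        return _slots.wait(1).then([this, p = std::move(p)] () mutable {
            auto idx = get_index();      // sequential, used immediately
            push_to_ring(idx, std::move(p));
            notify_backend();
        });
    }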
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
We can't reach this place with a negative ref id, so let's assert to make sure
we're fine. This helps catch some bugs.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
The index in the ring and the packet id tend to be the same, but they don't
have to be. There are some situations where the backend and the frontend get
out of sync on this, and that is totally valid.
One example is when the backend skb already has enough room to hold all of the
data being transmitted (netback.c, line 1611 @3.16). The netback will respond
immediately, even though there are other pending packets that are not yet fully
processed.
The ring index, then, must come from the rsp value, not from the req/rsp id.
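In other words (a sketch; the field names follow the usual Xen ring layout):

    // Walk the ring by the consumer index; rsp.id only names the packet.
    for (auto i = _tx.rsp_cons; i != _tx.sring->rsp_prod; ++i) {
        auto& rsp = RING_GET_RESPONSE(&_tx, i);  // the slot comes from i
        complete_packet(rsp.id);                 // the packet comes from id
    }
    _tx.rsp_cons = _tx.sring->rsp_prod;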
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
When a bulk of data is passed in from the user application, the TCP layer calls
output only once to send the data. This slows TX a lot, because output will
send at most an MSS worth of data while we might have far more than an MSS to
send, and we will send again only after the remote acks the data we just sent.
This slowness is easy to see with tso turned off.
To fix this, we should send as much as we are allowed to. This patch boosts
TX bandwidth from a fraction of a MiB/sec to hundreds of MiB/sec.
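The shape of the fix, as a sketch (all names are assumptions):

    // Keep transmitting while the send window and the send buffer allow,
    // instead of stopping after a single MSS-sized segment.
    void tcp_output(connection& c) {
        while (c.bytes_unsent() > 0 && c.send_window() > 0) {
            auto len = std::min({c.bytes_unsent(), c.send_window(), c.mss()});
            c.transmit_segment(len);  // each pass advances snd_nxt by len
        }
    }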
Before:
[asias@hjpc pingpong]$ go run client-txtx.go
Server: 192.168.66.123:10000
Connections: 1
Bytes Received(MiB): 10
Total Time(Secs): 76.217338072
Bandwidth(MiB/Sec): 0.13120374252054473
After:
[asias@hjpc pingpong]$ go run client-txtx.go
Server: 192.168.66.123:10000
Connections: 1
Bytes Received(MiB): 100
Total Time(Secs): 0.5105951040000001
Bandwidth(MiB/Sec): 195.84989988466475
This patch adds congestion control to our TCP according to RFC5681.
The four algorithms - slow start, congestion avoidance, fast
retransmit, and fast recovery - are added.
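For reference, the cwnd bookkeeping RFC 5681 prescribes looks roughly like this (a sketch with assumed names, not this patch's code):

    #include <algorithm>
    #include <cstdint>

    struct tcb {
        uint32_t cwnd, ssthresh, smss;
        uint32_t dupacks = 0;

        void on_ack(uint32_t acked) {
            dupacks = 0;
            if (cwnd < ssthresh) {
                cwnd += std::min(acked, smss);                      // slow start
            } else {
                cwnd += std::max<uint32_t>(1, smss * smss / cwnd);  // congestion avoidance
            }
        }

        void on_dupack(uint32_t flight_size) {
            if (++dupacks == 3) {  // fast retransmit
                ssthresh = std::max(flight_size / 2, 2 * smss);
                // ...retransmit the first unacknowledged segment here...
                cwnd = ssthresh + 3 * smss;  // enter fast recovery
            } else if (dupacks > 3) {
                cwnd += smss;  // inflate cwnd for each further dup ack
            }
        }
    };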
Reviewed-by: Pekka Enberg <penberg@cloudius-systems.com>