"Deleter objects are relatively heavyweight since they need to remember
which destructor to call. However, raw memory needs no destructor, and
we can exploit this fact."
In the case of a single buffer, use an ordinary free deleter instead of
allocating a vector to store the buffers to be destroyed.
This doesn't currently help much because the packet is share()d later on,
but we may be able to eliminate the sharing one day.
Add packet(Iterator, Iterator, deleter).
(Unfortunately we have both a template version with a template parameter
named Deleter and a non-template version with a parameter called deleter;
the naming needs to be sorted out.)
In many cases, a deleter is used to protect raw memory (e.g. a char array,
not something with a destructor). In that case we can simply free() it,
so the deleter need not remember which destructor to call. It does need
to remember whether it holds a raw object or not, so we take over the
least significant bit of the pointer, use it as a marker, and store the
pointer to the object in the deleter itself, instead of using a proxy
impl object to control the actual deletion.
If the deleter is subsequently share()d, we have to convert it back to
the standard form, since the reference count lives in the impl object.
Given a string, return the corresponding Ethernet address. This is
especially useful for Xen, where we read the MAC address from the xenstore.
Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
Transmission of data is not atomic, so we cannot replace an item's fields
in place; doing so can lead to inconsistencies in "get" responses. We
should instead allocate a new item object to replace the one which is in
the cache.
Most of the tests are fast and cover most of the functionality; the slow
minority takes significantly more time to run. Developers should run
tests frequently in order to get feedback on the correctness of their
changes. The test runner now distinguishes between fast and slow tests:
when given the '--fast' switch, it skips tests marked as slow.
$ time ./test.py
[8/8] PASSED tests/memcache/test.py --mode release
OK.
real 0m33.084s
user 0m0.501s
sys 0m0.271s
$ time ./test.py --fast
[8/8] PASSED tests/memcache/test.py --mode release --fast
OK.
real 0m1.012s
user 0m0.464s
sys 0m0.247s
This is mostly to make memcapable from libmemcached happy.
When a client issues a bad command, we should discard all data we have
so that the next command on that connection can succeed.
It doesn't support more than one CPU yet. The symptom is that TCP
connections have a chance of hanging when they are routed to a CPU
on which memcache isn't running.
Initial version of memcached on seastar from Tomasz:
"Supports subset of ASCII protocol, commands: get, set and delete.
Both UDP and TCP are supported."