Note: the serialization format of the blob differs from origin due to
Scylla's different internal architecture, i.e. we query actual rows.
But drivers etc. ignore the content of the blob; it is opaque to them.
Since the introduction of sets::element_discarder, sets::discarder is
always given a set, never a single value.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Currently sets::discarder is used by both the set-difference operation
and the removal of a single element. To distinguish between them, the
discarder checks whether the provided value is a set or something else;
however, this won't work if a set of frozen sets is created.
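A minimal sketch of the ambiguity (hypothetical types, not Scylla's actual value representation): once set elements can themselves be frozen sets, "is the operand a set?" no longer distinguishes a single-element removal from a set difference.

```cpp
#include <variant>
#include <vector>

// Hypothetical stand-ins for deserialized CQL values.
struct single_value { int v; };
using set_value = std::vector<single_value>;
using cql_value = std::variant<single_value, set_value>;

// The old heuristic: a set operand means set difference, anything else
// means removal of a single element.
inline bool looks_like_set_difference(const cql_value& operand) {
    return std::holds_alternative<set_value>(operand);
}
```

For a set<int>, removing one element passes a single value and the heuristic works; for a set of frozen sets, removing one element passes a set too, so the heuristic misclassifies the operation as a set difference.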
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Error handling in column_family::try_flush_memtable_to_sstable() is
misplaced. It happens after update_cache(), so writing the sstable may
have succeeded while moving the memtable into the cache failed.
update_cache() destroys the memtable even if it fails, but the error
handler is not aware of that (it does not even distinguish whether the
error happened during sstable creation or while moving into the cache),
and when it tells the caller to retry, it retries with an
already-destroyed memtable. Fix it by ignoring errors from moving into
the cache.
This reverts commit fff37d15cd.
As Tomek says (and the comment in the code):
"update_cache() must be called before unlinking the memtable because cache + memtable at any time is supposed to be authoritative source of data for contained partitions. If there is a cache hit in cache, sstables won't be checked. If we unlink the memtable before cache is updated, it's possible that a query will miss data which was in that unlinked memtable, if it hits in the cache (with an old value)."
Error handling in column_family::try_flush_memtable_to_sstable() is
misplaced. It happens after update_cache(), so writing the sstable may
have succeeded while moving the memtable into the cache failed.
update_cache() destroys the memtable even if it fails, but the error
handler is not aware of that (it does not even distinguish whether the
error happened during sstable creation or while moving into the cache),
and when it tells the caller to retry, it retries with an
already-destroyed memtable. Fix it by ignoring errors from moving into
the cache.
nodetool decommission hangs forever due to a recursive lock:

decommission()
  with api lock
    shutdown_client_servers()
      with api lock
        stop_rpc_server()
      with api lock
        stop_native_transport()

Fix it by calling helpers for stop_rpc_server and stop_native_transport
without the lock.
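The pattern of the fix can be sketched as follows (hypothetical names, not the actual Scylla API): the public entry points take the api lock, the _impl helpers do the work without touching it, and a composite operation already holding the lock calls the helpers directly.

```cpp
#include <mutex>
#include <string>
#include <vector>

class api_server {
    std::mutex _api_lock;  // non-recursive, like the api lock
    std::vector<std::string> _log;

    // Lock-free helpers: safe to call with _api_lock already held.
    void stop_rpc_server_impl() { _log.push_back("rpc stopped"); }
    void stop_native_transport_impl() { _log.push_back("cql stopped"); }

public:
    void stop_rpc_server() {
        std::lock_guard<std::mutex> g(_api_lock);
        stop_rpc_server_impl();
    }
    void stop_native_transport() {
        std::lock_guard<std::mutex> g(_api_lock);
        stop_native_transport_impl();
    }
    void decommission() {
        std::lock_guard<std::mutex> g(_api_lock);
        // Calling stop_rpc_server()/stop_native_transport() here would
        // try to take _api_lock again and hang forever; calling the
        // helpers avoids the recursive acquisition.
        stop_rpc_server_impl();
        stop_native_transport_impl();
    }
    const std::vector<std::string>& log() const { return _log; }
};
```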
std::set_difference requires the input ranges to be sorted, which is
not the case here; use remove_if instead.
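A generic sketch of the remove_if approach (not the original Scylla code): the erase/remove_if idiom computes the difference correctly regardless of ordering, whereas std::set_difference on unsorted input yields garbage.

```cpp
#include <algorithm>
#include <vector>

// Returns v minus to_remove; neither input needs to be sorted.
std::vector<int> difference_unsorted(std::vector<int> v,
                                     const std::vector<int>& to_remove) {
    v.erase(std::remove_if(v.begin(), v.end(),
                           [&](int x) {
                               return std::find(to_remove.begin(),
                                                to_remove.end(), x)
                                      != to_remove.end();
                           }),
            v.end());
    return v;
}
```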
Do not use assert; throw instead so that we can recover from this
error.
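The idea in miniature (hypothetical validation function): assert() aborts the whole process in debug builds and compiles to nothing with NDEBUG, while a thrown exception lets the caller catch the error and recover.

```cpp
#include <stdexcept>

// Rejects non-positive input by throwing instead of asserting, so the
// caller can handle the error and keep running.
inline int parse_positive(int x) {
    if (x <= 0) {
        throw std::invalid_argument("value must be positive");
    }
    return x;
}
```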
Currently the error-handling code is attached to the future returned
by when_all(), which is never exceptional itself, but may hold an
exceptional future as its first element. Instead, move the error
handling close to where the error it tries to catch is generated.
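The shape of the problem can be shown with std::future as a stand-in for Seastar futures (an analogy, not Seastar code): the "gathered" result resolves successfully even when an element failed, so error handling attached to it never fires; each element must be checked where its error is produced.

```cpp
#include <future>
#include <stdexcept>
#include <vector>

// Gathers two deferred computations. Returning the vector succeeds even
// though the first computation will throw when its result is extracted,
// just as when_all()'s future is never exceptional itself.
std::vector<std::future<int>> gather() {
    std::vector<std::future<int>> fs;
    fs.push_back(std::async(std::launch::deferred,
                            []() -> int { throw std::runtime_error("boom"); }));
    fs.push_back(std::async(std::launch::deferred, [] { return 42; }));
    return fs;
}
```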
Let's move the code that prints that a compaction succeeded to after
the code that catches exceptions in the read and write fibers. Let's
also get rid of done and use repeat instead in the read fiber.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
If a write timeout and the last acknowledgement needed for CL happen
simultaneously, _ready can be set to exceptional by the timeout
handler, but since removal of the response handler happens in a
continuation it may be reordered with the processing of the last ack,
where _ready will be set again, triggering an assert. Fix it by
removing the handler immediately; there is no need to wait for the
continuation. It makes the code simpler too.