atomic_cell will soon become type-aware, so add helpers to class operation
that can supply the type, as it is available in operation::column.type.
(the type will be used in following patches)
schema_tables manages some boolean columns stored in system tables; it
dynamically creates them from C++ values. But since we lacked a bool->data_value
conversion, the C++ value was converted to an int32_type. Somehow this didn't
cause any problems, but with some pending patches I have, it does.
Add a bool->data_value converting constructor to fix this.
Since bytes is a very generic value that is returned from many calls,
it is easy to pass it by mistake to a function expecting a data_value,
and to get a wrong result. It is impossible for the data_value constructor
to know whether the argument is a genuine bytes value, a serialized
data_value of another type, or some other serialized data.
To prevent misuse, make the data_value(bytes) constructor
(and the complementary data_value(optional<bytes>) constructor) explicit.
* seastar 5c10d3e...20bf03b (5):
> do not re-throw exception to get to an exception pointer
> Adding timeout counter to the rpc
> configure.py: support for pkg-config before release 0.28
> future: don't forget to warn about ignored exception
> tutorial: continue network API section
Found by debug build
==10190==ERROR: AddressSanitizer: new-delete-type-mismatch on 0x602000084430 in thread T0:
object passed to delete has wrong type:
size of the allocated type: 16 bytes;
size of the deallocated type: 8 bytes.
#0 0x7fe244add512 in operator delete(void*, unsigned long) (/lib64/libasan.so.2+0x9a512)
#1 0x3c674fe in std::default_delete<dht::range_streamer::i_source_filter>::operator()(dht::range_streamer::i_source_filter*)
const /usr/include/c++/5.1.1/bits/unique_ptr.h:76
#2 0x3c60584 in std::unique_ptr<dht::range_streamer::i_source_filter, std::default_delete<dht::range_streamer::i_source_filter> >::~unique_ptr()
/usr/include/c++/5.1.1/bits/unique_ptr.h:236
#3 0x3c7ac22 in void __gnu_cxx::new_allocator<std::unique_ptr<dht::range_streamer::i_source_filter,
std::default_delete<dht::range_streamer::i_source_filter> > >::destroy<std::unique_ptr<dht::range_streamer::i_source_filter,
std::default_delete<dht::range_streamer::i_source_filter> > >(std::unique_ptr<dht::range_streamer::i_source_filter,
std::default_delete<dht::range_streamer::i_source_filter> >*) /usr/include/c++/5.1.1/ext/new_allocator.h:124
...
Fixes #549.
Being clinically absent-minded, I left aggregate query support (i.e.
count(...)) out of the "paging" change set.
This adds repeated paged querying to do aggregate queries (similar to
origin). Uses "batched" paging.
For compatibility reasons, compaction_strategy should accept both the bare
strategy class name and the full class name that includes the package name.
In origin the returned name depends on the configuration; we cannot mimic
that, as we are using an enum for the type.
So currently the returned class name remains the class itself; we can
consider changing it in the future.
If the name is org.apache.cassandra.db.compaction.Name, it will be
compared as Name.
The error message was modified to report the name it was given.
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Fixes #545
"Slight file format change for commitlog segments, now including
a scylla "marker". Allows for fast-fail if trying to load an
Origin segment.
WARNING: This changes the file format, and there is no good way for me to
check if a CL is "old" scylla, or Origin (since "version" is the same). So
either "old" scylla files also fail, or we never fail (until later, and
worse). Thus, if upgrading from an older version to this patch, make sure
to have cleaned out all commit logs first."
Fixes #355
"Implements query paging similar to origin. If the driver sets a "page size" in
a query, and we cannot know that we will not exceed this limit in a single
query, the query is performed using a "pager" object, which, using modified
partition ranges and query limits, keeps track of returned rows to "page"
through the results.
Implementation structure sort of mimics the origin design, even though it
is maybe a little bit overkill for us (currently). On the other hand, it
does not really hurt.
This implementation is tested using the "paging_test" subset in dtest.
It passes all tests except:
* test_paging_using_secondary_indexes
* test_paging_using_secondary_indexes_with_static_cols
* test_failure_threshold_deletions
The first two fail because we don't have secondary indexes yet, the last
because the test depends on "tombstone_failure_threshold" in origin.
Potential todo: Currently the pager object does not shortcut result
building fully when page limit is exceeded. Could save a little work
here, but probably not very significant."
Allows us to fail fast if someone tries to replay an Origin commit log.
WARNING: This changes the file format, and there is no good way for me to
check if a CL is "old" scylla, or Origin (since "version" is the same). So
either "old" scylla files also fail, or we never fail (until later, and
worse). Thus, if upgrading from an older version to this patch, likewise
make sure to have cleaned out all commit logs first.
* Static query method to determine if paging might be required
(very conservative - almost all queries will be paged, methinks).
* Static factory method for pager
* Actual pager implementation
Pager object uses three variables to keep track of paging state:
1.) Last partition key - partition key of last partition processed
-> next partition to start process
2.) Last clustering key, i.e. row offset within last key partition,
i.e. how far we got last time
3.) Max remaining - max rows to process further, i.e. initial limit -
processed so far
Partition ranges are modified/removed so that we begin with "Last key",
if present. (Or end with, in the case of reversed processing)
A counting visitor then keeps count of rows to include in processing.
Basic interface for paging control objects.
We probably do not need virtual behaviour for paging, but on the other
hand it does not really cost much, and it keeps a nice symmetry with
origin.
Allows for having more than one clustering row range set, depending on
PK queried (although right now limited to one - which happens to be exactly
the number that multiplexing paging needs... What a coincidence...)
Encapsulates the row_ranges member in a query function, and if needed holds
ranges outside the default one in an extra object.
Query result::builder::add_partition now fetches the correct row range for
the partition, and this is the range used in subsequent iteration.
Note: the serial format blob is different compared to origin, due to Scylla's
different internal architecture. I.e. we query actual rows.
But drivers etc ignore the content of the blob, it is opaque.
Currently, there are multiple places where we can close a session, which makes
the close code path hard to follow. Remove the call to maybe_completed
in follower_start_sent to simplify closing a bit.
- stream_session::follower_start_sent -> maybe_completed()
- stream_session::receive_task_completed -> maybe_completed()
- stream_session::transfer_task_completed -> maybe_completed()
- on receive of the COMPLETE_MESSAGE -> complete()
After running nodetool decommission on node 127.0.0.2, I saw the following on node 127.0.0.1:
DEBUG [shard 0] gossip - failure_detector: Forcing conviction of 127.0.0.1
TRACE [shard 0] gossip - convict ep=127.0.0.1, phi=8, is_alive=1, is_dead_state=0
TRACE [shard 0] gossip - marking as down 127.0.0.1
INFO [shard 0] gossip - inet_address 127.0.0.1 is now DOWN
DEBUG [shard 0] storage_service - on_dead endpoint=127.0.0.1
This is wrong since the argument for send_gossip_shutdown should be the
node being shutdown instead of the live node.
Since the introduction of sets::element_discarder, sets::discarder is
always given a set, never a single value.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Currently sets::discarder is used by both the set difference and
single-element removal operations. To distinguish between them, the
discarder checks whether the provided value is a set or something else;
however, this won't work if a set of frozen sets is created.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>