Raphael recently caught this test failing. I can't really reproduce it,
but it seems to me that it is a timing issue: we execute two different
statements, each of which should time out after 10ms. After 20ms, we make
sure that they both timed out.
They don't (on his system), which is explained by the fact that we are
no longer using high resolution clocks for the timeouts. Expirations for
lowres clocks only happen every 10ms, and in the worst case we will miss
two.
So the fix I am proposing here is to just account for potential
inaccuracies in the clocks and calculations by waiting a bit longer.
Ideally, we would use the manual clock for this. But in this case, this
would mean adding template parameters to pretty much all of the
mutation_reader path.
Currently, not only does the test fail, it also triggers a use-after-free
SIGSEGV. That happens because we give up on the reader while the
timeout is still pending.
It is the caller's responsibility to ensure that the lifetime of the reader
is correct. Dealing with that cleanly would require a cancellation mechanism
that we don't have, so we'll just add an assertion that will fail more
gracefully than the SIGSEGV.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
We are not propagating timeouts to fast_forward_to in the
mutation_reader_test. This is not currently causing any issue, but I
noticed it while chasing one - so let's fix it.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
This will make boost UTF abort execution on SIGABRT rather than trying
to continue running other test cases. Continuing doesn't work well with
seastar integration: the suite will hang.
Message-Id: <1516205469-16378-1-git-send-email-tgrabiec@scylladb.com>
Since we sometimes recommend that the user update to a newer kernel,
it's good to compile in support for the features that newer kernels provide.
Rather than play games with build-time dependencies, just #define
those features in. It's ugly, but better than depending on third-party
repositories and handling package conflicts.
Message-Id: <20180115143129.22190-1-avi@scylladb.com>
"The previous series handled passing a copy of the client_state from process_request(...)
to process_request_one(...). However, the modified copy of the client_state is returned by
process_request_one(...) back to process_request(...), and handling of this direction was
missing in the previous series.
This series completes the #2351 fix."
* 'fix-round-robin-cont-v2' of https://github.com/vladzcloudius/scylla:
transport::cql_server::process_request_one: return only the required information instead of the whole client_state object
service::client_state: move auth_state from cql_server::connection to service::client_state
transport::cql_server: don't cache sasl_challenge object in the cql_server::connection
service::client_state::merge(): remove not needed timestamp merge
"_free_segments_in_zones is not adjusted by
segment_pool::reclaim_segments() for empty zones on reclaim under some
conditions: for instance, when some zone becomes empty due to a regular
free(), reclaiming is then called from the std allocator, and the reclaim
is satisfied from a zone after the empty one. This would cause free
memory in such a zone to appear leaked due to a corrupted free segment
count, which may cause a later reclaim to fail. This could result in
bad_allocs.
The fix is to always collect such zones.
Fixes #3129
Refs #3119
Refs #3120"
* 'tgrabiec/fix-free_segments_in_zones-leak' of github.com:scylladb/seastar-dev:
tests: lsa: Test _free_segments_in_zones is kept correct on reclaim
lsa: Expose max_zone_segments for tests
lsa: Expose tracker::non_lsa_used_space()
lsa: Fix memory leak on zone reclaim
_free_segments_in_zones is not adjusted by
segment_pool::reclaim_segments() for empty zones on reclaim under some
conditions: for instance, when some zone becomes empty due to a regular
free(), reclaiming is then called from the std allocator, and the reclaim
is satisfied from a zone after the empty one. This would cause free
memory in such a zone to appear leaked due to a corrupted free segment
count, which may cause a later reclaim to fail. This could result in
bad_allocs.
The fix is to always collect such zones.
Fixes #3129
Refs #3119
Refs #3120
The call chain is:
storage_service::on_change() -> storage_service::handle_state_removing()
-> storage_service::restore_replica_count() -> streamer->stream_async()
Listeners run as part of gossip message processing, which is serialized.
This means we won't be processing any gossip messages until streaming
completes.
In fact, there is no need to wait for restore_replica_count to complete
which can take a long time, since when it completes, this node will send
notification to tell the removal_coordinator that the restore process is
finished on this node. This node will be removed from _replicating_nodes
on the removal_coordinator.
Tested with update_cluster_layout_tests.py
Fixes #2886
Message-Id: <8b4fe637dfea6c56167ddde3ca86fefb8438ce96.1516088237.git.asias@scylladb.com>
Commit 2d5fb9d109 (gms/gossiper: Replicate changes incrementally to
other shards) changed the way we replicate _token_metadata and
endpoint_state_map. Before, they were replicated at the same time; after,
they no longer are. As a result, a shard in NORMAL status can still have
an empty _token_metadata.
We saw errors:
[shard 12] token_metadata - sorted_tokens is empty in first_token_index!
during CorruptThenRepairNemesis.
Fix by setting the gossip status to NORMAL after replication of
_token_metadata, so that once a node is in NORMAL, we can do repair. The
commit 69c81bcc87 (repair: Do not allow repair until node is in NORMAL
status) prevents the early repair operation by checking if a node is in
NORMAL status.
Fixes #3121
Message-Id: <af6a223733d2e11351f1fa35f59eacfa7d65dd30.1516065564.git.asias@scylladb.com>
This is required after fa5a26f12d, because sstable write fails when sharding
metadata is empty due to lack of keys that belong to the current shard.
make_local_key* were moved to a header to avoid compiling sstable_utils.cc into
all the tests that rely on simple_schema.hh, which is a lot of them.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20180116052052.7819-1-raphaelsc@scylladb.com>
client_state used in process_request_one(...) contains all sorts of information irrelevant
to the caller (process_request(...)), e.g. tracing state. Therefore, instead of returning
the whole client_state object (which becomes an even bigger problem if process_request(...)
and process_request_one(...) are executed on different shards), we will return only the
pieces of information we really need.
To do that we introduce a new class, processing_result, which is cross-shard-access-ready
to begin with. We are going to return an instance of this new class from process_request_one(...).
Fixes #2351
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Move the requests-handling-related state into the client_state. This is needed to properly
define the interface between the process_request(...) and process_request_one(...).
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
The benefit of such caching is rather limited because the object is likely to be used
exactly once and then destroyed anyway (in the case of a successful authentication).
If authentication failed, no harm is done if we create this object again when
needed.
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Since connection::_client_state is now the only generator of new timestamps,
there is no need for this merge.
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
"After this patchset it's only possible to create a mutation_source with a function that produces flat_mutation_reader."
* 'haaawk/mutation_source_v1' of ssh://github.com/scylladb/seastar-dev:
Merge flat_mutation_reader_mutation_source into mutation_source
Remove unused mutation_reader_mutation_source
Remove unused mutation_source constructor.
Migrate make_source to flat reader
Migrate run_conversion_to_mutation_reader_tests to flat reader
flat_mutation_reader_from_mutations: add support for slicing
Remove unused mutation_source constructor.
Migrate partition_counting_reader to flat reader
Migrate throttled_mutation_source to flat reader
Extract delegating_reader from make_delegating_reader
row_cache_test: call row_cache::make_flat_reader in mutation_sources
Remove unused friend declaration in flat_mutation_reader::impl
Migrate make_source_with to flat reader
Migrate make_empty_mutation_source to flat reader
Remove unused mutation_source constructor
Migrate test_multi_range_reader to flat reader
Remove unused mutation_source constructors
* seastar a7a3e6f...d03896d (11):
> Update dpdk submodule
> Merge "C++17 aligned allocations" from Avi
> Prometheus should check that the iterator is valid before using it
> future-util: failure to allocate internal state is unrecoverable
> Merge "Introduce simple microbenchmarking framework" from Paweł
> tutorial: document debuging ignored exceptions
> Revert "Merge "Introduce simple microbenchmarking framework" from Paweł"
> Merge "Introduce simple microbenchmarking framework" from Paweł
> tests/futures: add more tests for parallel_for_each()
> Add a prometheus.md file
> prometheus: Support metric family name parameter
"Added support for min/max functions over date/timestamp/timeuuid.
There was one issue with Scylla's type system internals: no C++ type
was mapped to these types. So special "native_types" were added for them.
It required some changes to native functions because these types don't support
the same operations as their real native counterparts.
Fixes #3104."
* 'danfiala/3104-v1' of https://github.com/hagrid-the-developer/scylla:
tests: Tests for min/max aggregate functions over date/timestamp and timeuuid.
functions: Added min/max functions for date/timestamp/timeuuid.
types: Added native types for timestamp and timeuuid.
Advertise compatibility with CQL Version 3.3.2, since CAST functions are supported.
Fixes argument misquoting in the $SRPM_OPTS expansion for the mock commands
and makes the --jobs argument work as intended.
Signed-off-by: Mika Eloranta <mel@aiven.io>
Message-Id: <20180113212904.85907-1-mel@aiven.io>
The test fails after fa5a26f12d because the generated sstable doesn't contain data for the
shard it was created on, so sharding metadata is empty, resulting in the exception
added in the aforementioned commit. That's fixed by using the new make_local_key()
to generate data that belongs to the current shard.
make_local_keys(), on top of which make_local_key() is built, will be useful
to make the sstable test work again with any smp count.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20180114032025.26739-1-raphaelsc@scylladb.com>
Timeouts are a global property. However, for tables in keyspaces like
the system keyspace, we don't want to uphold that timeout--in fact, we
want no timeout there at all.
We already apply such configuration for requests waiting in the sstable
read queue: system keyspace requests won't be removed. However, the
storage proxy will insert its own timeouts in those requests, causing
them to fail.
This patch changes the storage proxy read layer so that the timeout is
applied based on the column family configuration, which is in turn
inherited from the keyspace configuration. This matches our usual
way of passing db parameters down.
In terms of implementation, we can either move the timeout inside the
abstract read executor or keep it external. The former is a bit cleaner,
but the latter has the nice property that all executors generated will
share the exact same timeout point. In this patch, we chose the latter.
We are also careful to propagate the timeout information to the replica.
So even if we are talking about the local replica, when we add the
request to the concurrency queue, we will do it in accordance with the
timeout specified by the storage proxy layer.
After this patch, Scylla is able to start just fine with very low
timeouts--since read timeouts in the system keyspace are now ignored.
Fixes #2462
* git@github.com:glommer/scylla.git timeouts-v8.1:
database: delete unused function
consolidate timeout_clock
mutation_query: add a timeout to the mutation query path
flat_mutation_reader: pass timeout down to consume()
add a timeout to fill_buffer
add a timeout to fast forward to
restricted_mutation_reader: don't pass timeouts through the config
structure
allow request-specific read timeouts in storage proxy reads
Timeouts are a global property. However, for tables in keyspaces like
the system keyspace, we don't want to uphold that timeout--in fact, we
want no timeout there at all.
We already apply such configuration for requests waiting in the sstable
read queue: system keyspace requests won't be removed. However, the
storage proxy will insert its own timeouts in those requests, causing
them to fail.
This patch changes the storage proxy read layer so that the timeout is
applied based on the column family configuration, which is in turn
inherited from the keyspace configuration. This matches our usual
way of passing db parameters down.
In terms of implementation, we can either move the timeout inside the
abstract read executor or keep it external. The former is a bit cleaner,
but the latter has the nice property that all executors generated will
share the exact same timeout point. In this patch, we chose the latter.
We are also careful to propagate the timeout information to the replica.
So even if we are talking about the local replica, when we add the
request to the concurrency queue, we will do it in accordance with the
timeout specified by the storage proxy layer.
After this patch, Scylla is able to start just fine with very low
timeouts--since read timeouts in the system keyspace are now ignored.
Fixes #2462
Implementation notes, and general comments about the open discussion in 2462:
* Because we are not bypassing the timeout, just setting it high enough,
I consider the concerns about the batchlog moot: if we fail for any
other reason, that will be propagated. As a last resort, because the
timeout is per-CF, we could do what we do for the dirty memory manager
and move the batchlog alone to use a different timeout setting.
* The storage proxy likes specifying its timeouts as a time_point, whereas
by the time we get low enough to deal with the read_concurrency_config,
we are talking about deltas. So at some point we need to convert
time_points to durations. We do that in the database query functions.
v2:
- use per-request instead of per-table timeouts.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
This patch enables passing a timeout to the restricted_mutation_reader
through the read path interface -- using fill_buffer and friends. This
will serve as a basis for having per-request timeouts.
The config structure still has a timeout, but that is so far only used
to actually pass the value to the query interface. Once that value starts
coming from the storage proxy layer (next patch), we will remove it.
The query callers are patched so that we pass the timeout down. We patch
the callers in database.cc, but leave the streaming ones alone. That can
be safely done because the default for the query path is now no_timeout,
which is what the streaming code wants. So there is no need to
complicate the interface to allow passing a timeout that we intend
to disable.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
In the last patch, as part of enabling per-request timeouts, we enabled
timeouts in fill_buffer. There are many places, though, in which we
fast_forward_to before we fill_buffer, so to make that
effective we need to propagate the timeouts to fast_forward_to as well.
In the same way as with fill_buffer, we make the argument optional wherever
possible in the high level callers, making it mandatory in the
implementations.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
As part of the work to enable per-request timeouts, we enable timeouts
in fill_buffer.
The argument is made optional at the main classes, but mandatory in all
the ::impl versions. This way we'll make sure we didn't forget anything.
At this point we're still mostly passing that information around and
don't have any entity that will act on those timeouts. In the next patch
we will wire that up.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
We pass the timeout that we received from data_query/mutation_query
down to consume, which is responsible for actually reading the data.
To make those timeouts actionable, though, we'll have to patch
fill_buffer(). This will happen in the next patch.
Signed-off-by: Glauber Costa <glauber@scylladb.com>