"This patch series ensures we don't count dead partitions (i.e.,
partitions with no live rows) towards the partition_limit. We also
enforce the partition limit at the storage_proxy level, so that
limits with smp > 1 work correctly."
(cherry picked from commit 5f11a727c9)
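The first point above can be sketched as follows. This is an illustrative Python model, not Scylla's actual C++ code; the names (`apply_partition_limit`, `live_rows`) are hypothetical:

```python
def apply_partition_limit(partitions, partition_limit):
    """Return at most `partition_limit` live partitions.

    A partition is "dead" when it has no live rows (e.g. all of its
    rows are shadowed by tombstones); dead partitions must not
    consume the limit.
    """
    result = []
    for p in partitions:
        if p["live_rows"] == 0:   # dead partition: skip, don't count
            continue
        result.append(p)
        if len(result) == partition_limit:
            break
    return result
```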
A trace_state_ptr is needed at the storage_proxy level in order to trace code at this level.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Store the trace state in the abstract_write_response_handler.
Instrument the send_mutation RPC to receive an additional
rpc::optional parameter that will contain an optional<trace_info>
value.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
From now on, trace_state::trace() is able to receive a sprint-ready
format string with arguments that will be applied only during
the flush event.
This patch also optimizes the way the source address is evaluated:
it is now done only once instead of twice when tracing is requested.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
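The deferred-formatting idea can be modeled in Python (the class and method names are illustrative, and Python's str.format stands in for the sprint-style formatting):

```python
class TraceState:
    def __init__(self):
        self._pending = []   # (fmt, args) pairs, not yet formatted

    def trace(self, fmt, *args):
        # Cheap on the hot path: only remember the format string
        # and its arguments.
        self._pending.append((fmt, args))

    def flush(self):
        # The expensive formatting happens once, at flush time.
        messages = [fmt.format(*args) for fmt, args in self._pending]
        self._pending.clear()
        return messages
```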
In names of functions and variables:
s/flush_/write_/
s/store_/write_/
In i_tracing_backend_helper:
s/flush()/kick()/
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
If mutations are fragmented during streaming, special care must be
taken so that isolation guarantees are not broken.
Mutations received with the "fragmented" flag set are applied to a
memtable that is used only by that particular streaming task, and the
sstables created by flushing such memtables are not made visible until
the task is complete. Moreover, if the streaming fails, all data is
dropped.
This means that fragmented mutations cannot benefit from coalescing of
writes from multiple streaming plans, hence the separate way of handling
them, so that there is no loss of performance for small partitions.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
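The isolation scheme described above can be sketched in Python. All names here are illustrative; the real implementation works with per-plan memtables and sstable lists:

```python
class StreamStaging:
    def __init__(self):
        self.visible = []        # data readable by queries
        self._per_plan = {}      # plan_id -> privately staged fragments

    def receive(self, plan_id, fragment, fragmented):
        if fragmented:
            # Private staging area: invisible until the plan completes.
            self._per_plan.setdefault(plan_id, []).append(fragment)
        else:
            # Small mutations keep the fast path and coalesce as before.
            self.visible.append(fragment)

    def complete(self, plan_id):
        # All fragments of the plan become visible at the same time.
        self.visible.extend(self._per_plan.pop(plan_id, []))

    def fail(self, plan_id):
        # A failed plan drops everything it staged.
        self._per_plan.pop(plan_id, None)
```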
plan_id is needed to keep track of the origin of mutations so that if
they are fragmented all fragments are made visible at the same time,
when that particular streaming plan_id completes.
Basically, each streaming plan that sends big (fragmented) mutations is
going to have its own memtables and a list of sstables which will get
flushed and made visible when that plan completes (or dropped if it
fails).
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
This patch adds a per-partition row limit. It ensures that both local
queries and the reconciliation logic abide by this limit.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
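The interaction between a per-partition limit and a global row limit can be sketched as follows. This is a simplified Python model with hypothetical names, not the actual query-path code:

```python
def apply_row_limits(partitions, per_partition_limit, global_row_limit):
    """Trim each partition to per_partition_limit rows, then stop once
    global_row_limit rows have been emitted in total."""
    out = []
    emitted = 0
    for pk, rows in partitions:
        # Take no more than the per-partition cap, and never exceed
        # what remains of the global budget.
        take = min(per_partition_limit, global_row_limit - emitted)
        if take <= 0:
            break
        out.append((pk, rows[:take]))
        emitted += len(rows[:take])
    return out
```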
This patch changes the way we fetch each replica's last row to
determine if we got incomplete information from any of them. Instead
of fetching the last rows up front, we fetch them on demand only if we
actually trigger the code that needs them. We now get the last row from
the versions vector of vectors.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This patch extracts to a function the code that actually determines
the last row of a partition based on the direction of the query.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
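The extracted helper can be modeled like this (an illustrative sketch; the function name and row representation are hypothetical):

```python
def last_row(rows, reversed_query):
    """Return the partition's last row relative to the query direction.

    Rows are stored in forward clustering order; a reversed query
    consumes them back-to-front, so its "last" row is the first
    stored one.
    """
    if not rows:
        return None
    return rows[0] if reversed_query else rows[-1]
```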
The user may specify a time after which a speculative retry should
happen, instead of relying on cf statistics. Use the provided value in
the speculative executor.
Message-Id: <20160616104422.GH5961@scylladb.com>
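The decision logic can be sketched in Python. The names below are illustrative (not Scylla's configuration keys), and the p99 fallback is an assumption standing in for "cf statistics":

```python
def speculative_retry_delay_ms(user_ms, cf_p99_latency_ms):
    # A user-provided delay wins over the column family's observed
    # latency statistics.
    return user_ms if user_ms is not None else cf_p99_latency_ms

def should_speculate(elapsed_ms, user_ms, cf_p99_latency_ms):
    # Fire the extra (speculative) request once the first replica has
    # been silent longer than the chosen delay.
    return elapsed_ms >= speculative_retry_delay_ms(user_ms, cf_p99_latency_ms)
```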
When read repair writes diffs back to replicas, it is enough to wait
for the requested CL to guarantee read monotonicity. This patch makes
the read repair write reuse the regular mutate functionality, which
already tracks CL status. This is done by changing the write response
handler to not hold the mutation directly, but instead hold a container
that, depending on whether this is a read repair write or a regular one,
can provide a different mutation per destination.
Message-Id: <20160613124727.GL1096@scylladb.com>
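The container idea can be sketched with two interchangeable "mutation source" types. This is an illustrative Python model with hypothetical names; the real code is the write response handler in C++:

```python
class SharedMutation:
    """A regular write: the same mutation goes to every replica."""
    def __init__(self, mutation):
        self._m = mutation

    def get_mutation_for(self, destination):
        return self._m

class PerDestinationMutation:
    """A read repair write: each replica gets its own diff."""
    def __init__(self, diffs):          # dict: replica -> diff mutation
        self._diffs = diffs

    def get_mutation_for(self, destination):
        return self._diffs[destination]
```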
This patch changes the type of the mutation partition's row_tombstones
to be a range_tombstone_list, so that they are now represented as a
set of disjoint ranges. All of its usages are updated accordingly.
Fixes #1155
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
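Maintaining a set of disjoint ranges on insertion can be sketched as classic interval merging. This is a simplified Python model over closed integer intervals, not the actual range_tombstone_list code:

```python
def insert_range(ranges, new_start, new_end):
    """Insert [new_start, new_end] into a sorted list of disjoint
    (start, end) intervals, merging any overlapping ones."""
    out = []
    s, e = new_start, new_end
    placed = False
    for rs, re in ranges:
        if re < s:               # entirely before the new range
            out.append((rs, re))
        elif rs > e:             # entirely after: emit merged range first
            if not placed:
                out.append((s, e))
                placed = True
            out.append((rs, re))
        else:                    # overlap: widen the merged range
            s, e = min(s, rs), max(e, re)
    if not placed:
        out.append((s, e))
    return out
```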
This is a demo instrumentation:
- Check if tracing info is present in the read_command.
- If so, create a tracing session with the given tracing
  session ID.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
writes_attempts is supposed to count how many times data was sent out,
but currently it also counts those replicas in other DCs that get the
data through a coordinator. Fix it by counting only when data is
actually sent.
Message-Id: <20160601153124.GB9939@scylladb.com>
When a read and a write to a partition happen in parallel, the reader
may detect a digest mismatch that can trigger a cross-DC read repair
attempt, but the repair is not really needed, so the added latency is
not justified.
This patch tries to prevent such parallel access from causing a heavy
cross-DC repair operation by checking the timestamp of the most recent
modification. If the modification happened less than "write timeout"
seconds ago, the patch assumes that the read operation raced with a
write and cancels the cross-DC repair, but only if the CL is LOCAL_*.
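The decision can be sketched as a small predicate (illustrative names; the real check happens in the read path):

```python
def should_do_cross_dc_repair(now, last_modified, write_timeout, cl_is_local):
    # A modification within the write timeout suggests the digest
    # mismatch is just a read racing a concurrent write; with a
    # LOCAL_* consistency level, skip the expensive cross-DC repair.
    recently_written = (now - last_modified) < write_timeout
    return not (recently_written and cl_is_local)
```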
timed_rate_moving_average_and_histogram
As part of moving the derived statistics into scylla, this replaces the
histogram object in the storage_proxy with a
timed_rate_moving_average_and_histogram, and the read, write, and range
counters are replaced by rate_moving_average.
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Add split (local Nodes, external Nodes aggregated per Nodes' DCs) counters
for the following read categories:
- data reads
- digest reads
- mutation data reads
Each category gets attempts, completions, and errors metrics.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
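The local/remote split from the two metrics patches above can be modeled like this (an illustrative Python sketch; the real counters are seastar metrics):

```python
class SplitCounters:
    def __init__(self):
        self.local = 0       # operations targeting the local Node
        self.per_dc = {}     # remote datacenter -> operation count

    def increment(self, node_dc, local_dc):
        # Operations against the local Node are counted separately;
        # external Nodes are aggregated per their DC.
        if node_dc == local_dc:
            self.local += 1
        else:
            self.per_dc[node_dc] = self.per_dc.get(node_dc, 0) + 1
```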
Added split metrics for operations on a local Node and on external
Nodes aggregated per Nodes' DCs.
Added separate split counters for:
- total write attempts/errors
- read repair write attempts (there is no easy way to separate errors
at the moment)
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Since commit c1cffd06 the logger catches errors internally, so there is
no need to catch most of them at the top level. Only those that can
happen during parameter evaluation can reach here. Change the
parameters to not throw, too.
This is a kind of sorting, so it belongs there, but it also fixes a bug
in storage_proxy::get_read_executor(), which assumes that
filter_for_query() does not change the order of nodes in all_nodes when
an extra replica is chosen. Otherwise, if the coordinator IP happens to
be last in all_nodes, it will be chosen as the extra replica and will
be queried twice.
Message-Id: <1460549369-29523-1-git-send-email-gleb@scylladb.com>
Hints are currently unimplemented but there is code depending on the
fact that hint_to_dead_endpoints() doesn't throw.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
live_row_count is summed several times in the same function. Do it only
once.
--
v1->v2:
- call get() on std::reference_wrapper<std::vector<partition>> to get
  a reference for moving out of it.
Message-Id: <20160411123829.GE21479@scylladb.com>