"
This series introduces materialized view statistics, as stated in issue #3385:
- updates pushed
- updates failed
- row lock stats
It also addresses issue #3416 by decoupling user write stats from view
update stats.
"
* 'materialized_view_metrics_9' of https://github.com/psarna/scylla:
view: adapt view_stats to act as write stats
storage_proxy: decouple write_stats from stats
db: add row locking metrics
view: add view metrics
"
This was sent before as two separate patchsets. It is now unified
because the two share a lot of common infrastructure.
In this patchset I am aiming at two goals:
1) Provide a minimum amount of shares for user-initiated operations like
nodetool compact and nodetool cleanup
2) Be more robust with exceptions in the backlog tracker
For the first, the main difference is that the compaction controller is
now part of the compaction manager. It then becomes easy to consult the
compaction controller for the correct amount of shares those operations
should have.
In compaction_strategy.cc, the major_compaction_strategy object was
actually already unused before. So instead of making use of it, which
would require some form of information flow downwards about the backlog
we need to export, I am creating a user-initiated backlog type inside
the compaction manager.
With the two changes described above everything is very well
self-contained within the compaction manager and the implementation
becomes trivial.
For the second, I am now handling exceptions in two places:
1) the backlog computation. Those are const functions, so if we just have
a transient exception when computing the backlog, all we need to do is
return some fixed amount of shares and try again in the next adjustment
window.
2) the process of adding / removing SSTables. Those are harder, since if
we fail to manipulate the list we'll be left in an inconsistent state.
The best approach is then to disable the backlog tracker and return a
fixed amount of shares globally.
Tests: unit (release)
"
* 'backlog-improvements-v3' of github.com:glommer/scylla:
compaction_manager: disable backlog tracker if we see an exception
backlog tracker: protect against exceptions in backlog calculation.
STCS_backlog: protect against negative backlog
STCS_backlog: remove unused attribute
compaction strategy: move size tiered backlog to a header
compaction_strategy: delete major_compaction_strategy class
compaction: make sure that user-initiated compactions always have a minimum priority
backlog_controller: add constants to represent a globally disabled controller
backlog_controller: move compaction controller to the compaction manager
backlog_controller: allow users to compute inverse function of shares
This commit adapts the view_stats structure so it can be passed
to storage_proxy as write stats. Thanks to that, mv replica updates
will not interfere with user write metrics. As a side effect it also
provides more stats to replica view updates.
Closes #3385
Closes #3416
This commit extracts metrics related to writes from stats structure,
so it can be easily replaced later, e.g. for materialized view metrics.
References #3385
References #3416
This commit adds statistics to the row_locker class. Metrics are
independently counted for all lock types: row<->partition and
exclusive<->shared.
Metrics gathered:
- total acquisitions
- operations that wait on the lock
- histogram of the time spent on waiting on this type of lock
References #3385
References #3416
This commit introduces view statistics:
- updates pushed to local/remote replicas
- updates failed to be pushed to local/remote replicas
Metrics are kept on per-table basis, i.e. updates_pushed_remote
shows the number of total updates (mutations) pushed to all paired
mv replicas that this particular table has.
Every single update is taken into consideration, so if a view update
requires removing a row from one view and adding a row to another,
it will be counted as 2 updates.
References #3385
References #3416
Compactor collects all currently active memtables and later replaces
them with the merged result. The problem is that the active memtable
belongs to the input set during compaction, and as a result mutations
applied concurrently with compaction could be lost once compaction
replaces the memtables. The fix is to open a new active memtable when
compaction starts.
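The fix can be distilled into a toy model (invented types; not Scylla's actual memtable classes): open a fresh active memtable before snapshotting the compaction inputs, so concurrent writes land outside the input set and survive the replacement.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

using memtable = std::map<std::string, std::string>;

struct table {
    std::vector<std::shared_ptr<memtable>> memtables;  // back() is the active one

    table() { memtables.push_back(std::make_shared<memtable>()); }

    void apply(const std::string& k, const std::string& v) {
        (*memtables.back())[k] = v;
    }

    // Open a new active memtable first, then snapshot the now-immutable inputs.
    std::vector<std::shared_ptr<memtable>> start_compaction() {
        memtables.push_back(std::make_shared<memtable>());
        return {memtables.begin(), memtables.end() - 1};
    }

    // Replace the inputs with the merged result; the new active memtable,
    // holding writes applied during compaction, is kept.
    void finish_compaction(size_t n_inputs, std::shared_ptr<memtable> merged) {
        memtables.erase(memtables.begin(), memtables.begin() + n_inputs);
        memtables.insert(memtables.begin(), std::move(merged));
    }

    bool contains(const std::string& k) const {
        for (auto& mt : memtables) {
            if (mt->count(k)) return true;
        }
        return false;
    }
};
```

Without the push in start_compaction(), a write applied during compaction would go to a memtable that is about to be replaced, and would be lost.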
Caused sporadic failures of row_cache_test.cc:test_continuity_is_populated_when_read_overlaps_with_older_version()
Message-Id: <1526997724-13037-1-git-send-email-tgrabiec@scylladb.com>
If we see an exception when adding or removing SSTables from the backlog
tracker, the backlog tracker can be inconsistent forever. It would be
best if we act before that happens and disable the backlog tracker. Once
the backlog tracker is disabled it will default to returning a fixed
number of shares.
We can either disable the backlog tracker or remove it. But if we remove
it, we can end up with a backlog of zero if it is the only tracker with
a backlog. So we keep it registered but mark it as disabled. This also
leaves room for recovery in some situations: we can recover the backlog
by a doing a schema change in the column family that had the backlog
disabled, for instance.
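The disable-instead-of-remove idea can be sketched as follows (names and the fallback value are invented, not Scylla's actual API):

```cpp
#include <cassert>

// On an exception while manipulating the tracked SSTable set, the tracker
// marks itself disabled instead of being unregistered. It stays in the
// registry, but reports a fixed backlog, so the aggregate backlog cannot
// silently drop to zero.
struct backlog_tracker {
    static constexpr double fixed_backlog = 1.0;  // hypothetical fallback value
    bool disabled = false;
    double tracked_bytes = 0;

    void add_sstable(double bytes) {
        try {
            tracked_bytes += bytes;  // imagine this set manipulation could throw
        } catch (...) {
            // The tracked set may now be inconsistent forever; stop trusting it.
            disabled = true;
        }
    }

    double backlog() const {
        return disabled ? fixed_backlog : tracked_bytes;
    }
};
```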
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Backlog calculations should be exception free, but there are cases in
which I can see them happening. One example is if some backlog tracker
that uses temporary objects fails an allocation.
Memory shortages can be especially pernicious: if we leave the
responsibility of catching those to the individual backlog trackers, we
will keep trying to make more allocations in the other backlog trackers
if we have many column families. By handling it here we can stop that.
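The central handling can be sketched like this (function names and the fallback value are illustrative, not Scylla's actual code):

```cpp
#include <cassert>
#include <functional>
#include <new>

// The controller catches exceptions from the (const) backlog computation in
// one place and substitutes a fixed number of shares until the next
// adjustment window.
constexpr float fixed_shares_on_error = 100;  // hypothetical fallback

float adjust(const std::function<float()>& compute_shares) {
    try {
        return compute_shares();
    } catch (...) {
        // e.g. a transient std::bad_alloc from a tracker's temporary objects
        return fixed_shares_on_error;
    }
}
```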
Signed-off-by: Glauber Costa <glauber@scylladb.com>
A negative backlog can be interpreted as a very large backlog.
Part of that is because we keep the total_size as an unsigned type,
which is what we expect. But if there is an issue, such as an
exception that causes some SSTable not to be tracked, this size
can become negative. Returning a zero backlog is better than allowing
it to be interpreted as a giant number.
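The clamp can be sketched as follows (a toy function with invented names, not the actual STCS tracker):

```cpp
#include <cassert>
#include <cstdint>

// total_size is unsigned, so if an exception left some SSTable untracked,
// the "remaining bytes" subtraction can wrap around to a giant number.
// Clamping to zero is the safer interpretation.
double stcs_backlog(uint64_t total_size, uint64_t compacted_size) {
    if (compacted_size >= total_size) {
        return 0.0;  // accounting glitch: report no backlog, not a huge one
    }
    return double(total_size - compacted_size);
}
```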
Signed-off-by: Glauber Costa <glauber@scylladb.com>
This attribute ended up being unused in the final version.
Spotted now while reading the code for other purposes.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
It's very common for other strategies to include a SizeTiered
step somewhere inside their algorithms: LCS will do SizeTiered on
L0, TWCS will do SizeTiered within a window, etc.
To make it easier for those strategies to consume the SizeTiered
backlog tracker, we will move that to its own file.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
It was already unused before this series. In an earlier version I had
used it to provide an ad-hoc backlog for major compactions. But now that
this is done by the compaction manager, this class really isn't being
used.
And it is likely it won't be: major compaction is not a compaction
strategy a user can choose, unlike the others, which need to be built
through make_compaction_strategy.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
We have observed the following behavior with user initiated compactions,
like major compactions:
- if there are no writes, the backlog doesn't increase.
- as compaction progresses the backlog decreases.
- at some point, the backlog is so low that compaction barely makes any
progress.
Going forward, we should allow one to read from the generated partial
SSTables, in which case this doesn't matter that much. But for
user-initiated compactions we would like to guarantee a minimum baseline.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
There are situations in which we want the controllers to stop working
altogether. Usually that's when we have an unimplemented controller or
some exception.
We want to return fixed shares in this case, but this is a very
different situation from when we want fixed shares for *one* backlog
tracker: we want to return fixed shares, yes, but if we disable 200
backlog trackers (because they all failed, for instance), we don't want
that fixed number x 200 to be our backlog.
So the mechanism to globally disable the controller is still needed,
and infinity is a good way to represent that. It's a float that the
controller can easily test against. But actually using infinity in the
code is confusing: people reading it may interpret it the other way
around, as just meaning "a very large backlog".
Let's turn that into a constant instead. It will help us convey meaning.
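A minimal sketch of the idea (the constant and helper names are invented for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Naming the sentinel conveys intent: infinity here means "controller
// disabled", not "a very large backlog".
constexpr float disable_backlog = std::numeric_limits<float>::infinity();

bool controller_disabled(float backlog) {
    return std::isinf(backlog);
}
```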
Signed-off-by: Glauber Costa <glauber@scylladb.com>
There was recently an attempt to add minimum shares to major compactions,
which ended up being harder than it should be due to all the plumbing
necessary to call the compaction controller from inside the compaction
manager, since the controller is currently a database object. We had this
problem again when trying to return fixed shares in case of an exception.
Taking a step back, all of those problems stem from the fact that the
compaction controller really shouldn't be a part of the database: as it
deals with compactions and its consequences it is a lot more natural to
have it inside the compaction manager to begin with.
Once we do that, all the aforementioned problems go away. So let's move
it there, where it belongs.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Fixes #3446
Previously, only shutdown-synced objects were actually closed,
which is wrong.
This introduces yet another queue, processed together with the
deletion objects, which ensures we explicitly close all objects
that have been discarded.
Message-Id: <20180521140456.32100-1-calle@scylladb.com>
"This series leverages hinted handoff for failed view replica
updates."
* 'materialized_view_updates_with_hh_5' of https://github.com/psarna/scylla:
storage_proxy: enable hinted handoff for materialized views
storage_proxy: make view updates use consistency_level::ANY
row::find_cell() may be called for cells that do not exist in that row.
In such a case nullptr shall be returned; this patch makes sure that
it is not dereferenced.
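The pattern can be distilled into a toy model (invented types; Scylla's real row/cell classes differ):

```cpp
#include <cassert>
#include <map>
#include <string>

struct cell { std::string value; };

struct row {
    std::map<int, cell> cells;

    // Returns nullptr for absent cells, mirroring the behavior described above.
    const cell* find_cell(int id) const {
        auto it = cells.find(id);
        return it == cells.end() ? nullptr : &it->second;
    }
};

// Fixed caller: check the result instead of dereferencing unconditionally.
std::string cell_value_or(const row& r, int id, const std::string& dflt) {
    const cell* c = r.find_cell(id);
    return c ? c->value : dflt;
}
```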
Message-Id: <20180522091726.24396-1-pdziepak@scylladb.com>
There are some situations in which we want to force a specific amount of
shares but don't have a backlog. We can provide a function to get that
from the controller.
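A toy linear controller illustrates the inverse (invented names; the real controller is more elaborate): if shares grow linearly with backlog, the inverse answers "what backlog would yield N shares?", letting callers force a specific amount of shares without a real backlog.

```cpp
#include <cassert>

struct linear_controller {
    float min_shares;
    float slope;

    float shares(float backlog) const { return min_shares + slope * backlog; }
    // Inverse of shares(): the backlog that would produce the given shares.
    float backlog_of(float shares) const { return (shares - min_shares) / slope; }
};
```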
Signed-off-by: Glauber Costa <glauber@scylladb.com>
* seastar a6cb005...5da5d4e (6):
> append_challenged_posix_file_impl: Ensure continuation uses non-stale object
> utils: make make_visitor() public
> tcp: Adjust receive window
> tcp: Fix allowed sending size calculation in can_send
> tcp: Fix assert in tcp::tcb::output_one
> be more descriptive with failed syscalls for filesystem operations
Contains alternative fix for #3446 (will also be fixed directly).
This commit initializes and enables hinted handoff for materialized
views, even if HH is not explicitly turned on in config.
User writes still use hinted handoff only if it is explicitly enabled,
while materialized views are allowed to use it unconditionally
in order to store failed replica updates somewhere.
Fixes #3383
This commit makes view replica updates internally use consistency
level ANY, so in case an update fails it will fall back to hinted
handoff.
References #3383
install(1) creates missing directories on recent Fedora, but not
on CentOS 7. This causes the RPM build (which installs to a pristine
tree, without an existing /etc) to fail.
Fix by setting up /etc.
Tests: rpm (Fedora, CentOS)
Message-Id: <20180520124937.20466-1-avi@scylladb.com>
There is no need to call dht::split_ranges_to_shards to split the token
range into a <shard> : <a lot of small ranges> mapping and create a flat
mutation reader with a lot of small ranges.
Because:
1) The flat mutation reader on each shard only returns data that belongs
to the local shard, so there is no correctness issue if we do not split
the range and feed each shard only its own sub-ranges.
2) With murmur3_partitioner_ignore_msb_bits = 12, it is almost certain
that, given a token range, all the shards will have data for the range
anyway. Even if we ask all the shards to work on the token range and
some of the shards have no data for it, that is fine. We simply send no
data from those shards.
Tests: update_cluster_layout_tests.py
Message-Id: <ac00cd21d6156c47b74451dd415d627481e48212.1526864222.git.asias@scylladb.com>
In streaming, the sender sends the mutations on all of its local shards
in parallel, so it is possible that the receiver handles more than one
such connection on the same shard. Which shard handles a connection is
determined by where the tcp connection goes, because the current rpc
ignores the dest shard id when sending the rpc message.
For instance, say node1 has 2 shards and node2 has 2 shards. Currently, we
can end up like this:
Node 1 shard 0 -> Node 2 shard 1
Node 1 shard 1 -> Node 2 shard 1
It is better if we do:
Node 1 shard 0 -> Node 2 shard 0
Node 1 shard 1 -> Node 2 shard 1
This patch solves this problem by letting the handler always run on
shard = src_cpu_id % smp::count.
If sender and receiver have the same shard config, the work is
distributed completely evenly.
If sender and receiver do not have the same shard config, it is
unavoidable that some of the shards will do more work than the others.
Tests: dtest update_cluster_layout_tests.py
Message-Id: <911827bcf67459a07ec92623a9ed4c4fbba195ca.1524622375.git.asias@scylladb.com>
Fixes#2793
Prints the error handler class (commitlog or "other/disk") plus exception
type and message. While not exhaustive, this at least gives a correlation
point to (hopefully) other log printouts.
Message-Id: <20180509081040.7676-1-calle@scylladb.com>
"
For compression, SSTables 3.x format uses CRC32 for checksumming
compressed chunks as well as for calculating the full file checksum.
Also, while for older formats the "full checksum" of a compressed data file
means a combination of checksums of its compressed chunks, in SSTables
3.x this now reads literally and means the checksum of all bytes
written, including per-chunk digests.
Tests: unit {debug, release}
"
* 'projects/sstables-30/write-compression/v3' of https://github.com/argenet/scylla:
tests: Add unit tests for writing compressed SSTables 3.x.
tests: Validate Digest32.crc for SSTables 3.x write tests.
tests: Fix invalid Digest file for write_counter_table test.
sstables: Support writing compressed SSTables 3.0.
sstables: Make compressed streams customizable on checksumming.
sstables: Move checksum calculation logic to compressed_output_stream.
Previously, compressed_output_stream used to calculate checksum of the
supplied chunk and pass it to the 'compression' object to combine with
the full checksum calculated on prior writes.
Now, all the checksum calculation happens inside
compressed_output_stream and 'compression' only stores the result.
This is done to loosen the ties between the two classes and simplify
compressed_output_stream customisation with various checksum algorithms.
Signed-off-by: Vladimir Krivopalov <vladimir@scylladb.com>
We currently move the pointer we acquired to the segment into
the lambda in which we'll handle the cycle.
The problem is that we also use that same pointer inside the exception
handler. If an exception happens we'll access the moved-from pointer
and crash.
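The bug class can be distilled like this (a standalone illustration; the real code involves segment and cycle objects):

```cpp
#include <cassert>
#include <memory>

// The pointer is move-captured into the lambda, leaving the original variable
// null; an error path that still uses the original would dereference nullptr.
int demo() {
    auto seg = std::make_unique<int>(42);
    int saved = *seg;                        // the fix: read what the error path
                                             // needs before moving the pointer
    auto run_cycle = [s = std::move(seg)] { return *s; };
    assert(seg == nullptr);                  // moved-from: `*seg` here would crash
    try {
        return run_cycle();
    } catch (...) {
        return saved;                        // safe: does not touch `seg`
    }
}
```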
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20180518125820.10726-1-glauber@scylladb.com>
* tag 'tgrabiec/fixes-and-improvements-for-gdb-scripts-v1' of github.com:tgrabiec/scylla:
gdb: Print live object size from 'scylla lsa-segment'
gdb: Extend 'scylla segment-descs' output with full occupancy info
gdb: Print allocated object's type name instead of full LSA migrator
gdb: Fix LSA migrator discovery
gdb: Drop code related to LSA zones
gdb: Fix uses of removed segment_desctriptor::_lsa_managed
lsa: Add use for debug::static_migrators
Move code to a traditional install.sh script (more traditional would be
a "make install", but this is close enough).
This allows testing installation independently of packaging. In addition,
non-Red Hat-packaging can share much of the code in install.sh.
Ref #3243.
Tests: build+install rpm
Message-Id: <20180517114147.30863-1-avi@scylladb.com>