Mirror of https://github.com/scylladb/scylladb.git, synced 2026-04-21 00:50:35 +00:00
The current filter tracker uses a distributed mechanism, even though the values for all CPUs but one are usually zero. I had wrongly assumed that, when using legacy sstables, the same sstable would serve keys for multiple shards, which would make the map-reduce a necessary operation. However, Avi points out that:

"It is and it isn't [the case]. Yes the sstable will be loaded on multiple cores, but each core will have its own independent sstable object (only the files on disk are shared). So to aggregate statistics on such a shared sstables, you have to match them by name (and the sharded<filter_tracker> is useless)."

Avi is correct. This patch therefore simplifies the code by keeping local counters only; the map-reduce operation will happen at a higher level. Also, since the users of the get methods go through the sstable, we can move those methods there, leaving the counters in the filter itself private to the external world.

Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
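The higher-level aggregation described above can be sketched as follows. This is a hedged, plain-C++ illustration: in the real code the per-shard counters would be combined with Seastar's map_reduce across shards, and the aggregate_filter_stats helper and the shard vector here are hypothetical stand-ins.

```cpp
#include <cstdint>
#include <vector>

// Simplified per-shard tracker: each shard keeps its own local counters.
struct filter_tracker {
    uint64_t false_positive = 0;
    uint64_t true_positive = 0;
};

// Hypothetical higher-level aggregation: sum the local counters from
// every shard's tracker. In Seastar this would be a map_reduce over
// all shards; a plain loop stands in for it here.
inline filter_tracker aggregate_filter_stats(const std::vector<filter_tracker>& per_shard) {
    filter_tracker total;
    for (const auto& t : per_shard) {
        total.false_positive += t.false_positive;
        total.true_positive += t.true_positive;
    }
    return total;
}
```

Because each core owns an independent tracker, no cross-shard synchronization is needed at increment time; the summation happens only when statistics are requested.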
29 lines
469 B
C++
/*
 * Copyright 2015 Cloudius Systems
 *
 * Modified by Cloudius Systems
 */

#pragma once

#include <cstdint>

namespace sstables {
class sstable;
}

class filter_tracker {
    uint64_t false_positive = 0;
    uint64_t true_positive = 0;

    uint64_t last_false_positive = 0;
    uint64_t last_true_positive = 0;
public:
    void add_false_positive() {
        false_positive++;
    }

    void add_true_positive() {
        true_positive++;
    }

    friend class sstables::sstable;
};
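Since sstables::sstable is a friend of filter_tracker, the get methods mentioned in the commit message can live on the sstable and read the private counters directly. A minimal sketch under that assumption; the member and method names here (_filter_tracker, filter_get_false_positive, etc.) are hypothetical, not the actual ScyllaDB API:

```cpp
#include <cstdint>

namespace sstables {
class sstable;
}

// Same shape as the tracker above: counters are private, and only
// sstables::sstable is granted access via the friend declaration.
class filter_tracker {
    uint64_t false_positive = 0;
    uint64_t true_positive = 0;
public:
    void add_false_positive() { false_positive++; }
    void add_true_positive() { true_positive++; }
    friend class sstables::sstable;
};

namespace sstables {
// Hypothetical sstable exposing the get methods; as a friend, it can
// read filter_tracker's private members directly.
class sstable {
    filter_tracker _filter_tracker;
public:
    void record_false_positive() { _filter_tracker.add_false_positive(); }
    void record_true_positive() { _filter_tracker.add_true_positive(); }
    uint64_t filter_get_false_positive() const { return _filter_tracker.false_positive; }
    uint64_t filter_get_true_positive() const { return _filter_tracker.true_positive; }
};
}
```

This keeps the counters invisible to everyone except the sstable, matching the commit's goal of leaving the filter's internals private to the external world.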