Before this change, we relied on the fmt::formatter generated automatically
from operator<<, but fmt v10 dropped the auto-generated
formatter.
In this change, we define a formatter for `tracing::span_id` and drop
its operator<<.
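A minimal, self-contained sketch of such a specialization, assuming span_id
exposes a get_id() accessor (the stand-in type below only models that; the
in-tree formatter may differ in details):

    #include <fmt/format.h>
    #include <cstdint>

    // Illustrative stand-in for the real tracing::span_id.
    namespace tracing {
    struct span_id {
        uint64_t _id = 0;
        uint64_t get_id() const { return _id; }
    };
    }

    // fmt v10 no longer synthesizes a formatter from operator<<, so one is
    // spelled out explicitly, reusing the integer formatter for the raw ID.
    template <>
    struct fmt::formatter<tracing::span_id> : fmt::formatter<uint64_t> {
        auto format(const tracing::span_id& id, fmt::format_context& ctx) const {
            return fmt::formatter<uint64_t>::format(id.get_id(), ctx);
        }
    };

    int main() {
        fmt::print("span id: {}\n", tracing::span_id{42});
    }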
Refs #13245
Signed-off-by: Kefu Chai <kefu.chai@scylladb.com>
Closes scylladb/scylladb#17058
Now it's confusing, as the wrapper doesn't stop tracing, but rather shuts it down
on all shards. Its only caller can be more descriptive without the
wrapper.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Today's shutdown() and its stop() peer are very restrictive in how callers
must use them. There's not much point in that; making shutdown()
re-entrant, as for other services, will let us relax the calling code here
and in the next patches.
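As a rough sketch of what "re-entrant" means here (names are illustrative and
do_shutdown() stands in for the actual teardown): repeated or concurrent
callers all wait on the same shared future instead of being rejected.

    #include <seastar/core/future.hh>
    #include <seastar/core/shared_future.hh>
    #include <optional>

    class service_sketch {
        std::optional<seastar::shared_future<>> _shutdown_done;

        seastar::future<> do_shutdown() {
            // ... the actual teardown work would go here ...
            return seastar::make_ready_future<>();
        }
    public:
        seastar::future<> shutdown() {
            if (!_shutdown_done) {
                _shutdown_done.emplace(do_shutdown());
            }
            return _shutdown_done->get_future();
        }
    };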
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
For that to happen, the value evaluation is moved from
init_session_records() into a private trace_state helper, since it checks
the props values initialized earlier.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
This object is constructed via one_session_records, so the latter needs
to pass some arguments along.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Currently the code uses its own class registration engine, but there's a
generic one in utils/ that applies here too. In fact, the tracing
backend registry is just a transparent wrapper over the generic one :\
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
It's a private method used purely in tracing.cc, so there is no need to compile it
every time the header is included somewhere else.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
After fcb8d040 ("treewide: use Software Package Data Exchange
(SPDX) license identifiers"), many dual-licensed files were
left with empty comments on top. Remove them to avoid visual
noise.
Closes #10562
Instead of lengthy blurbs, switch to single-line, machine-readable
standardized (https://spdx.dev) license identifiers. The Linux kernel
switched long ago, so there is strong precedent.
Three cases are handled: AGPL-only, Apache-only, and dual licensed.
For the latter case, I chose (AGPL-3.0-or-later and Apache-2.0),
reasoning that our changes are extensive enough to apply our license.
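For example, the per-file headers reduce to single comment lines such as
(illustrative placement at the top of a C++ source file):

    // SPDX-License-Identifier: AGPL-3.0-or-later
    // SPDX-License-Identifier: Apache-2.0
    // SPDX-License-Identifier: (AGPL-3.0-or-later and Apache-2.0)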
The changes were applied mechanically with a script, except for
licenses/README.md.
Closes #9937
Tracing is created in two steps and destroyed in two as well.
The 2nd step didn't have a corresponding stop part, so here
it is -- defer the tracing stop right after tracing is started.
Keep in mind that tracing is also shut down on
drain, so the stopping must handle this.
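A sketch of the start/defer-stop pairing inside a seastar::thread context
(my_tracing_service and start_with_deferred_stop() are hypothetical names; the
real service is tracing::tracing and the pairing lives in scylla's start-up
code):

    #include <seastar/core/future.hh>
    #include <seastar/core/sharded.hh>
    #include <seastar/util/defer.hh>

    // Hypothetical stand-in for the tracing service; only stop() matters here.
    struct my_tracing_service {
        seastar::future<> stop() { return seastar::make_ready_future<>(); }
    };

    seastar::sharded<my_tracing_service> tracing_svc;

    // Assumed to run inside a seastar::thread, so .get() may block on futures.
    void start_with_deferred_stop() {
        tracing_svc.start().get();
        auto stop_tracing = seastar::defer([] () noexcept {
            // Also runs after an aborted start; must tolerate tracing that
            // has already been shut down on drain.
            tracing_svc.stop().get();
        });
        // ... the rest of start-up; stop_tracing fires on unwind ...
    }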
Fixes #8382
tests: unit(dev), manual(start-stop, aborted-start)
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Message-Id: <20210331092221.1602-1-xemul@scylladb.com>
If tracing::tracing::_ignore_trace_events is enabled, then
the tracing system must ignore all session events
for non-full_tracing sessions (probability tracing and
user-requested) and skip creating subsessions with
make_trace_info.
The patch introduces a slow query tracing fast mode that
omits all events during tracing.
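A small sketch of the gate, with assumed member names; when
_ignore_trace_events is set, events are suppressed for sessions that are not
full_tracing and no subsessions are created via make_trace_info():

    struct session_flags {
        bool full_tracing = false;
    };

    struct tracing_sketch {
        bool _ignore_trace_events = false;

        // True when this session's events (and subsession creation)
        // should be skipped.
        bool ignore_trace_events(const session_flags& s) const {
            return _ignore_trace_events && !s.full_tracing;
        }
    };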
Signed-off-by: Ivan Prisyazhnyy <ivan@scylladb.com>
The goal is to make the tracing keyspace helper reference the query processor, so this
patch adds the needed arguments through the initialization stack.
Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Similarly to trace_state, keep a shared_ptr<tracing> _local_tracing_ptr
in one_session_records when it is constructed so it can be used
during shutdown.
Fixes #5243
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Static class_registries hinder librarification by requiring linking with
all object files (instead of a library from which objects are linked on
demand) and reduce readability by hiding dependencies and by their
horrible syntax. Hide them behind a non-static, non-template tracing
backend registry.
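A minimal sketch of such a non-static registry (names and signatures are
illustrative, not the in-tree API): backends register a factory by name, and
the tracing service instantiates the configured one through the registry
object it was handed.

    #include <functional>
    #include <memory>
    #include <stdexcept>
    #include <string>
    #include <unordered_map>

    class i_tracing_backend_helper;   // the backend interface (declaration only)

    class backend_registry {
        using factory = std::function<std::unique_ptr<i_tracing_backend_helper>()>;
        std::unordered_map<std::string, factory> _factories;
    public:
        void register_backend(std::string name, factory f) {
            _factories.emplace(std::move(name), std::move(f));
        }
        std::unique_ptr<i_tracing_backend_helper> create(const std::string& name) const {
            auto it = _factories.find(name);
            if (it == _factories.end()) {
                throw std::runtime_error("unknown tracing backend: " + name);
            }
            return it->second();
        }
    };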
Message-Id: <20181229121000.7885-1-avi@scylladb.com>
Make trace state session creation, stop_foreground() and the tracing::trace(...) methods
noexcept.
Most of them were already implemented in a way that they won't throw,
but this patch makes it official...
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
This patch makes the tracing framework follow the general idea of Google's
Dapper paper: traces generated in the context of the same query form
a single-rooted acyclic tree where, in ScyllaDB's case, the vertices are spans running
on each involved replica Node and the edges are RPCs sent from one Node to another.
- Each vertex in the tree above has an ID - "span ID".
- In order to be able to build the tree from the sessions' traces we need
to know the parent "span ID" - the ID of the span that sent the RPC that created
the current span.
- Each span of a tracing session is given a 64-bit random span ID.
- The root span has a span_id::illegal_id value.
This patch adds:
- The parent span ID described above and a span ID to the one_session_records
object.
- The current span ID is passed in the trace_info struct to the remote replica.
- parent_id and span_id columns to the system_traces.events table for the parent
ID and the span ID.
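A minimal sketch of the scheme described above (types and the illegal_id
encoding are assumptions; the real classes are tracing::span_id and
tracing::trace_info):

    #include <cstdint>
    #include <random>

    struct span_id {
        static constexpr uint64_t illegal_id = 0;   // assumed root-span value
        uint64_t id = illegal_id;

        // Each span of a tracing session gets a 64-bit random span ID.
        static span_id make_span_id() {
            static thread_local std::mt19937_64 rng{std::random_device{}()};
            return span_id{rng()};
        }
    };

    // Shipped to the remote replica along with the RPC; the replica records
    // parent_id in its events so the per-query tree can be rebuilt from the
    // parent_id and span_id columns of system_traces.events.
    struct trace_info_sketch {
        span_id parent_id;   // span ID of the sender, i.e. the parent vertex
    };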
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
- Change the exception type thrown by tracing::tracing::set_trace_probability()
to make it different from the one thrown by std::stod() when it fails to
parse a given string.
- Catch the std::out_of_range exception thrown by tracing::tracing::set_trace_probability() and
wrap the exception string into an httpd::bad_param_exception() object.
- Throw a httpd::bad_param_exception() with a
"Bad format in a probability value: <a user given probability string value>"
message if std::invalid_argument is caught.
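A sketch of the resulting handler-side wrapping (bad_param_exception below is
a stand-in for Seastar's httpd::bad_param_exception, and set_trace_probability()
is a simplified stand-in for the tracing method):

    #include <stdexcept>
    #include <string>

    struct bad_param_exception : std::runtime_error {
        using std::runtime_error::runtime_error;   // results in an HTTP 400
    };

    // Throws std::out_of_range (unlike std::stod, which throws
    // std::invalid_argument on parse failures) when the value is not in [0, 1].
    void set_trace_probability(double p) {
        if (p < 0.0 || p > 1.0) {
            throw std::out_of_range("trace probability must be in the [0, 1] range");
        }
        // ... store the probability ...
    }

    void set_probability_from_request(const std::string& value) {
        try {
            set_trace_probability(std::stod(value));
        } catch (std::out_of_range& e) {
            throw bad_param_exception(e.what());
        } catch (std::invalid_argument&) {
            throw bad_param_exception("Bad format in a probability value: " + value);
        }
    }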
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1465300738-1557-1-git-send-email-vladz@cloudius-systems.com>
The RPC messaging service is initialized before the Tracing service, so
we should prevent creation of tracing spans before the service is
fully initialized.
We will use an already existing "_down" state and extend it in a way
that !_down equals "started", where "started" is TRUE when the local
service is fully initialized.
We will also split the Tracing service initialization into two parts:
1) Initialize the sharded object.
2) Start the tracing service:
- Create the I/O backend service.
- Enable tracing.
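A sketch of the two-step bring-up and the "started" test derived from _down
(member and method names follow the description above; the details are
assumptions):

    class tracing_sketch {
        bool _down = true;          // cleared only once start() completes
    public:
        // Step 1: construct the sharded object; nothing may be traced yet.
        tracing_sketch() = default;

        // Step 2: create the I/O backend service and enable tracing.
        void start_backend() {
            // ... create the backend ...
            _down = false;          // from now on !_down means "started"
        }

        bool started() const { return !_down; }

        // Span creation is refused until the service is fully initialized,
        // even though the RPC messaging service is already up.
        bool may_create_new_session() const { return started(); }
    };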
Fixes issue #1939
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1481836429-28478-1-git-send-email-vladz@scylladb.com>
The main idea is to log queries that take "too long" to complete.
"Too long" means longer than a given threshold.
To achieve the above this patch does the following:
- Introduce two new properties to the tracing::trace_state:
- "Full tracing": when the tracing of this query was explicitly requested.
In this state we will record all possible traces related to this query:
both on the coordinator and on any replica involved.
- "Log slow query": when slow query logging is enabled.
If slow query logging is enabled and a session's "duration" is above
the specified threshold we will create a record in the "slow queries log"
and write all trace records created on the coordinator and on a replica
if a replica's session lasts longer than that threshold.
(We will propagate the Coordinator's slow query logging threshold to replicas
in the context of a specific tracing/logging session).
The properties above are independent, namely they may be enabled and/or disabled
independently and any combination of them is legal (naturally, creating a tracing
session when both states above are disabled makes no sense).
- Instrument the tracing::tracing service to allow the following:
- Enable/disable slow query logging.
- Set/get the slow query duration threshold (in microseconds).
- Set/get the slow query log record TTL value (in seconds).
- Instrument the trace_keyspace_helper to write a slow query log entry
when requested.
- The slow query logging is disabled by default and the threshold is set to half a second.
- The TTL of a slow log record is set to 86400 seconds by default.
- It makes sense to use the same "slow query logging threshold" and "slow query record TTL"
both on the coordinator and on the replica Nodes in the context of the same tracing session:
- Pass both TTL and a threshold to the replica in a trace_info.
This patch also implements the new slow query logging specific logic:
- Don't write the pending tracing records before the end of a tracing session
until "duration" reaches the logging threshold.
- Don't build the parameters<sstring, sstring> map unless we know we will write it
to I/O.
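A sketch of the decision taken when a session ends (names are illustrative):
records go to the slow-query log only when the session exceeded the threshold,
while full tracing keeps its usual behaviour.

    #include <chrono>

    struct session_props {
        bool full_tracing = false;
        bool log_slow_query = false;
    };

    bool write_records_on_close(const session_props& p,
                                std::chrono::microseconds duration,
                                std::chrono::microseconds slow_query_threshold) {
        if (p.full_tracing) {
            return true;                              // explicitly requested: always write
        }
        if (p.log_slow_query) {
            return duration >= slow_query_threshold;  // write only if the session was slow
        }
        return false;                                 // neither property set: nothing to do
    }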
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
- Instead of keeping separate booleans, introduce a trace_state_props_set enum_set and
pass it around.
- Change the trace_info to hold this value in addition to write_on_close. Initialize
the corresponding bit in the enum_set based on the write_on_close value in the trace_info
constructor for backward compatibility.
- Separate a trace_state constructor into two:
- For a primary session object.
- For a secondary session object.
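A bitmask-based sketch of the props set (the real code uses an enum_set; this
version and its member names are only illustrative):

    #include <cstdint>

    enum class trace_state_props : uint8_t {
        write_on_close = 1 << 0,
        full_tracing   = 1 << 1,
        log_slow_query = 1 << 2,
    };

    struct trace_state_props_set {
        uint8_t mask = 0;

        void set(trace_state_props p) { mask |= static_cast<uint8_t>(p); }
        bool contains(trace_state_props p) const { return mask & static_cast<uint8_t>(p); }
    };

    // Backward compatibility: a trace_info carrying only the old write_on_close
    // boolean is translated into the corresponding bit of the set.
    inline trace_state_props_set props_from_write_on_close(bool write_on_close) {
        trace_state_props_set props;
        if (write_on_close) {
            props.set(trace_state_props::write_on_close);
        }
        return props;
    }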
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Use a per-shard tracing records budget instead
of maintaining a fixed-size per-session records budget and
a per-shard sessions budget.
The original policy could lead to some irrational situations:
a single tracing session may create a substantial
amount of records that we could handle, yet we would start dropping
its new records once it surpasses the per-session limit.
The new policy handles a per-shard trace records budget that is
being consumed by each trace() call and by a primary session destructor
when a session record is created.
Each active record may only be in one of the following states:
- cached: stored in its session's object. When a record is in this state
it's not going to be written to I/O during the next write event.
- pending for write: when a record is in this state it's going to be written
to I/O during the next write event.
- flushing: the record is being currently written to the I/O.
There are counters of the total number of records in each state above.
Each record may only be in one specific state at any point in time and
thereby it must be accounted in one and only one of the three
counters.
The sum of all three counters should not be greater than
(max_pending_trace_records + write_event_records_threshold) at any time
(actually it can get as high as the value above plus (max_pending_sessions)
if all sessions are primary, but we won't take this into account for
simplicity).
The same applies to the number of outstanding sessions: it may not be greater
than (max_pending_sessions + write_event_sessions_threshold) at any time.
If the total number of tracing records is greater than or equal to the limit
above, the new trace point is going to be dropped.
If the current number of records plus the expected number of trace records
per session (exp_trace_events_per_session) is greater than the limit
above, new sessions will be dropped. A new session will also be dropped if
there are too many active sessions.
When a record or a session is dropped, the appropriate statistics
counters are updated and a rate-limited warning message is printed
to the log.
Every time the number of records pending for write is greater than or equal to
(write_event_records_threshold), or the number of sessions pending for
write is greater than or equal to (write_event_sessions_threshold), a write
event is issued.
Every 2 seconds a timer writes all records pending for write
accumulated so far.
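A sketch of the per-shard accounting described above (constants and field
names are illustrative): a record is counted in exactly one of the three
states, and their sum is checked against the shard-wide limit.

    struct records_budget {
        static constexpr unsigned max_pending_trace_records = 1000;     // illustrative
        static constexpr unsigned write_event_records_threshold = 100;  // illustrative

        unsigned cached = 0;       // still held by the owning session
        unsigned pending = 0;      // queued for the next write event
        unsigned flushing = 0;     // currently being written to I/O

        unsigned total() const { return cached + pending + flushing; }

        // Drop the new trace point once the shard-wide limit is exhausted.
        bool may_add_record() const {
            return total() < max_pending_trace_records + write_event_records_threshold;
        }

        // A write event is issued once enough records are pending for write.
        bool should_issue_write_event() const {
            return pending >= write_event_records_threshold;
        }
    };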
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
When building the events' mutations, don't apply them in a tight loop
but rather apply each of them in a separate continuation to allow the
reactor to preempt this loop if it takes too long to
complete (e.g. when there are a lot of mutations to apply).
Since building all the events' mutations is now asynchronous, we can
no longer keep the "nanos" state in a global trace_keyspace_helper
object but rather have to move it into the per-session
backend_session_state class.
backend_session_state class is a backend-specific implementation of a
tracing::backend_session_state_base class.
An instance of the above object is created by a
tracing::i_tracing_backend_helper::allocate_session_state() virtual
method and is stored in a tracing::one_session_records object.
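A sketch of the loop restructuring, using Seastar's do_for_each so each
mutation application is its own continuation (event_record and
apply_one_mutation() are stand-ins for the real types and helpers):

    #include <seastar/core/future.hh>
    #include <seastar/core/loop.hh>
    #include <deque>

    struct event_record {};

    seastar::future<> apply_one_mutation(const event_record&) {
        return seastar::make_ready_future<>();
    }

    seastar::future<> apply_events_mutations(std::deque<event_record>& events) {
        // One continuation per event, so the reactor can preempt between them.
        return seastar::do_for_each(events, [] (const event_record& e) {
            return apply_one_mutation(e);
        });
    }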
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Before this patch the interaction between the layers above was as follows:
- trace_state was passing the trace event data to a backend object every
time trace() method was called.
- trace_state was passing the session data to a backend object in a destructor.
- A backend object was storing this data in the form of lambdas in which all the data
above was captured. This was primarily done in order to
delay the call to make_xxx_mutation(). Lambdas were stored in a map keyed by session
ID and were executed when the kick() method was called.
- A tracing::tracing object was periodically calling a kick() method of a
backend that was initiating a write of all pending data to the storage.
All backend methods used in the interactions described above were virtual.
Thereby, for instance, for each and every trace record we were calling a virtual method that
received a significant number of parameters, stored a lambda in a map and returned.
This is clearly a suboptimal way of using virtual functions, since we prevent the compiler
from inlining obviously inlinable operations.
This patch changes the interaction scheme to be as follows:
- Trace events and session data are stored and passed around in a form of structs
that hold all relevant information (no more lambdas).
- As long as a trace session is active its data is aggregated inside the corresponding
trace_state object.
- The object containing all records is passed and stored as a lw_shared_ptr to save extra
copies and to shorten capture lists.
- All aggregated data is passed to a tracing::tracing object in a trace_state destructor.
The data is stored in a std::deque in a tracing::tracing object (instead of a map by a session ID).
- Every time a write event occurs, a single backend virtual method call writes all data
aggregated so far (the kick() method is not needed any more).
- Backend has only one virtual method now:
- Write a bulk of sessions' data aggregated so far.
- Backend's virtual method receives a records bulk object by reference.
As a result:
- The latency of a single trace event that has no formatting improved from 0.2us to 0.1us.
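A sketch of the struct-based record passing (field names are illustrative;
the real types are tracing::event_record, session_record and
one_session_records, and the bulk is held via lw_shared_ptr rather than
std::shared_ptr):

    #include <deque>
    #include <memory>
    #include <string>

    struct event_record {
        std::string message;
        int elapsed_us;
    };

    struct session_record {
        std::string request;
        int duration_us;
    };

    // Aggregated inside trace_state while the session is active, then handed
    // to the tracing service as a whole.
    struct one_session_records {
        session_record session;
        std::deque<event_record> events;
    };

    // The backend is reduced to a single virtual call that writes a bulk of
    // aggregated sessions.
    class i_tracing_backend_helper {
    public:
        virtual ~i_tracing_backend_helper() = default;
        virtual void write_records_bulk(std::deque<std::shared_ptr<one_session_records>>& bulk) = 0;
    };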
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
A backend helper has to constantly communicate with the corresponding
tracing::tracing instance. Saving a reference to the tracing::tracing instance
will save us a lot of tracing::get_local_tracing_instance() calls and thus
a lot of dereferencing.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Add support for passing a format string plus positional parameters
for creating a trace point message.
The format string should be given in the fmt library's native format described
here: http://fmtlib.net/latest/syntax.html#syntax
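An illustrative wrapper and call site (trace() below is a stand-in for the
real tracing::trace() overload):

    #include <fmt/format.h>
    #include <string>
    #include <utility>

    template <typename... Args>
    void trace(fmt::format_string<Args...> fmt_str, Args&&... args) {
        std::string msg = fmt::format(fmt_str, std::forward<Args>(args)...);
        // ... the real code attaches msg to the current tracing session ...
    }

    int main() {
        trace("Sending a message to {}, reading {} columns", "127.0.0.2", 5);
    }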
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
kick() the backend during shutdown and restrict access to the backend
after that.
Flush pending records when the service is being shut down.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
In names of functions and variables:
s/flush_/write_/
s/store_/write_/
In a i_tracing_backend_helper:
s/flush()/kick()/
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
The tracing::tracing local instance is dereferenced from
cql_server::connection::process_request(), therefore the tracing::tracing
service may be stop()ed only after the CQL server service is down.
On the other hand, it may not be stopped before the RPC service is down
because the remote side may request tracing for a specific command too.
This patch splits the tracing::tracing stop() into two phases:
1) Flush all pending tracing records and stop the backend.
2) Stop the service.
The first phase is called after CQL server is down and before RPC is down.
The second phase is called after RPC is down.
Fixes #1339
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1465840496-19990-1-git-send-email-vladz@cloudius-systems.com>
Add support for defining a probability (a value in the [0, 1] range)
for tracing the next CQL request.
Traces for requests that are chosen to be traced due to this feature
are not going to be flushed immediately.
Use the std::subtract_with_carry_engine random number engine (which implements the
"lagged Fibonacci" algorithm) for fast generation of random integer values.
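A sketch of the probabilistic decision (member names are illustrative;
std::ranlux48_base is one of the standard subtract_with_carry_engine aliases):

    #include <random>

    class probability_tracer {
        double _trace_probability = 0.0;                  // value in the [0, 1] range
        std::ranlux48_base _gen{std::random_device{}()};  // lagged Fibonacci engine
        double _normalized_threshold = 0.0;

    public:
        void set_trace_probability(double p) {
            _trace_probability = p;
            // Precompute the threshold so the per-request check is a single
            // comparison against the raw engine output.
            _normalized_threshold = p * static_cast<double>(_gen.max());
        }

        bool trace_next_query() {
            return _trace_probability > 0.0 && _gen() < _normalized_threshold;
        }
    };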
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
A tracing session life cycle includes 3 stages:
1) Active: when new trace records are being added to this session.
2) Pending for flushing to a storage: when session is over but not
yet flushed to the storage ("backend").
3) Flushing: when session's records are being flushed to the storage
and this process is not yet completed.
Sessions may accumulate in each of the stages above and we should limit
the maximum number of sessions accumulated in each of them in order to avoid an OOM
situation.
The current in-tree implementation only limits the number of tracing sessions
accumulated in the first ("Active") stage.
Since every closing session is currently flushed immediately (as long
as "settraceprobability" is not implemented), the second stage never accumulates
tracing sessions.
The third stage is currently not controlled at all and if, for instance, we
manage to push enough tracing sessions towards a slow storage backend, they may
accumulate there, consuming an uncontrolled amount of memory, and may eventually consume
all of it.
This patch fixes this unpleasant situation by applying the following strategy:
- Limit the total amount of accumulated tracing sessions in all stages above together
by a static value - 2 times "flush threshold". "2 times" is needed to allow new
tracing sessions to accumulate in the stage 2 while sessions in the stage 3 are still
being processed.
- Forcefully flush sessions in the stage 2 to the storage when their count reaches a "flush
threshold".
This ensures that there will be no more than (2 * "flush threshold") sessions in total (in any stage)
on each shard.
An advantage of this strategy is its simplicity - we only need a single threshold to control all stages.
If we feel that we need finer-grained control over each stage, we may add separate limits for each of them
in the future.
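A sketch of the single-threshold policy (names and the value are illustrative):
sessions in all three stages are counted together and capped at twice the flush
threshold, and reaching the threshold in stage 2 forces a flush.

    struct sessions_accounting {
        static constexpr unsigned flush_threshold = 128;   // illustrative value

        unsigned active = 0;     // stage 1
        unsigned pending = 0;    // stage 2: over, waiting to be flushed
        unsigned flushing = 0;   // stage 3: being written to the backend

        bool may_create_session() const {
            // Total across all stages is limited to 2 * flush_threshold.
            return active + pending + flushing < 2 * flush_threshold;
        }

        bool should_flush() const {
            // Forcefully flush pending sessions once they reach the threshold.
            return pending >= flush_threshold;
        }
    };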
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
trace_state: A single tracing session.
tracing: A sharded service that contains an i_trace_backend_helper instance
and is a "factory" of trace_state objects.
trace_state main interface functions are:
- begin(): Start time counting (should be used via tracing::begin() wrapper).
- trace(): Create a tracing event - it's coupled with the time passed since begin()
(should be used via tracing::trace() wrapper).
- ~trace_state(): Destructor will close the tracing session.
"tracing" service main interface function is:
- start(): Initialize a backend.
- stop(): Shut down a backend.
- create_session(): Creates a new tracing session.
(tracing::end_session(): Is called by a trace_state destructor).
When trace_state needs to store a tracing event it uses a backend helper from
a "tracing" service.
A "tracing" service limits a number of opened tracing session by a static number.
If this number is reached - next sessions will be dropped.
trace_state implements a similar strategy in regard to tracing events per singe
session.
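An illustrative coordinator-side flow built from stand-ins for the interface
above (the real classes are tracing::tracing and tracing::trace_state; the
stand-ins only show the intended call sequence):

    #include <memory>
    #include <string>
    #include <vector>

    struct trace_state {
        std::vector<std::string> events;
        void begin() { /* start time counting */ }
        void trace(std::string msg) { events.push_back(std::move(msg)); }
        ~trace_state() { /* close the session, hand records to the backend */ }
    };

    struct tracing {
        std::shared_ptr<trace_state> create_session() {
            // A static limit on open sessions would be enforced here; above
            // it, the session is dropped and nullptr is returned.
            return std::make_shared<trace_state>();
        }
    };

    void handle_query(tracing& t) {
        auto state = t.create_session();
        if (!state) {
            return;   // session dropped: no tracing for this query
        }
        state->begin();
        state->trace("Parsing the query");
        state->trace("Executing the query");
    }   // trace_state destructor closes the session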
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>