By default, the I/O checker will cause Scylla to shut down if it finds
specific system errors. Right now, the I/O checker isn't flexible
enough to allow a specialized handler. For example, we don't want
Scylla to shut down if there's a permission problem when
uploading new files from the upload directory. This patch adds the
desired flexibility by allowing a handler parameter to the I/O check
functions and changing existing code to take advantage of it.
This is a step towards fixing #1709.
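A rough sketch of the idea, in Python rather than Scylla's C++ and with hypothetical names (io_check, default_handler, ignore_eacces are mine, for illustration only):

```python
import errno

# Hypothetical sketch, not Scylla's actual API: an I/O check helper that
# accepts an optional error handler instead of always shutting down.

def default_handler(err):
    # Default behavior: treat the error as fatal (stands in for shutdown).
    raise SystemExit(f"fatal I/O error: {errno.errorcode.get(err, err)}")

def io_check(op, handler=default_handler):
    """Run an I/O operation; on a system error, delegate to the handler."""
    try:
        return op()
    except OSError as e:
        return handler(e.errno)

# A caller handling uploads can choose to tolerate permission errors:
def ignore_eacces(err):
    if err == errno.EACCES:
        return None          # skip the file instead of shutting down
    return default_handler(err)
```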
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
This patch adds the parsing for the "CREATE MATERIALIZED VIEW" statement,
following Cassandra 3 syntax. For example:
CREATE MATERIALIZED VIEW building_by_city
AS SELECT * FROM buildings
WHERE city IS NOT NULL
PRIMARY KEY(city, name);
It also adds the "IS NOT NULL" operator needed for this purpose.
As in Cassandra, "IS NOT NULL" can only be used for materialized
view creation, and not in a normal SELECT. It can only be used with
the NULL operand (i.e., "IS NOT 3" will be a syntax error).
The current implementation of this statement just does some sanity
checking (such as verifying that "city" is a valid column name and that
the "buildings" base table exists), and then complains that materialized
views are not yet supported:
SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] message="Failed parsing statement: [CREATE MATERIALIZED VIEW building_by_city AS
SELECT * FROM buildings
WHERE city IS NOT NULL
PRIMARY KEY(city, name);] reason: unsupported operation: Materialized views not yet supported">
As mentioned above, the "IS NOT NULL" restriction is not allowed in
ordinary SELECTs that do not create a materialized view:
SELECT * FROM buildings WHERE city IS NOT NULL;
InvalidRequest: code=2200 [Invalid query] message="restriction 'city IS NOT null' is only supported in materialized view creation"
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1475742927-30695-1-git-send-email-nyh@scylladb.com>
Remove clustering_key_filter_factory and clustering_key_filtering_context.
Use partition_slice directly with a static get_ranges method.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Cassandra 1.x clusters often use RandomPartitioner. Supporting
RandomPartitioner will allow easier migration to Scylla.
Tests are added to make sure Scylla generates the same token as
Cassandra does for the same partition key.
Fixes #1438
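For illustration, RandomPartitioner's token derivation can be sketched in Python (the function name is mine; to the best of my understanding this mirrors Cassandra's new BigInteger(md5(key)).abs()):

```python
import hashlib

def random_partitioner_token(key: bytes) -> int:
    """Sketch of RandomPartitioner's token: the absolute value of the
    partition key's MD5 digest, read as a signed big-endian 128-bit
    integer (mirroring Java's new BigInteger(digest).abs())."""
    digest = hashlib.md5(key).digest()
    return abs(int.from_bytes(digest, byteorder="big", signed=True))
```

Tokens derived this way fall in the range [0, 2**127].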
Message-Id: <3bc8b7f06fad16d59aaaa96e2827198ce74214c6.1469166766.git.asias@scylladb.com>
This patch implements the size_estimates_recorder, which periodically
writes estimates for all the non-system column families into the
size_estimates system table. The size_estimates_recorder class
corresponds to the one in Cassandra's SizeEstimatesRecorder.java.
Estimation is carried out by shard 0. Since we're estimating based on
data in shared sstables, having multiple shards doing this would skew
the results.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
The sstables::key class now delegates much of its functionality
to the composite class. All existing behavior is preserved.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
To ensure isolation of operation when streaming a mutation from a
mutable source (such as cache or memtable), MVCC is used.
Each entry in a memtable or cache is actually a list of versions of
that entry. Incoming writes are either applied directly to the last
version (if it wasn't being read by anyone) or prepended to the list
(if the former head was being read by someone). When a reader finishes,
it tries to squash versions together, provided there is no other reader
that could prevent this.
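The scheme can be illustrated with a toy, single-threaded Python model (names and structure are mine, not Scylla's actual C++):

```python
# Toy model of the version-list scheme: writes go to the newest version
# unless a reader pinned it, in which case a fresh version is prepended;
# when the last reader leaves, versions are squashed back into one.

class Entry:
    def __init__(self):
        self.versions = [{}]   # newest first; each version is a dict of cells
        self.readers = 0       # readers currently pinning the head

    def begin_read(self):
        self.readers += 1
        merged = {}
        for v in reversed(self.versions):     # oldest to newest
            merged.update(v)
        return merged

    def end_read(self):
        self.readers -= 1
        if self.readers == 0 and len(self.versions) > 1:
            # no reader can observe the old head any more: squash
            merged = {}
            for v in reversed(self.versions):
                merged.update(v)
            self.versions = [merged]

    def write(self, cells):
        if self.readers:
            self.versions.insert(0, dict(cells))  # prepend a new version
        else:
            self.versions[0].update(cells)        # apply to newest in place
```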
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
This commit introduces the mutation_fragment class, which represents the
parts of a mutation streamed by streamed_mutation.
mutation_fragment can be:
- a static row (only one in the mutation)
- a clustering row
- start of range tombstone
- end of range tombstone
There is an ordering (implemented in the position_in_partition class)
between mutation_fragment objects. It reflects the order in which the
content of a partition appears in the sstables.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
This patch changes the type of the mutation partition's row_tombstones
to be a range_tombstone_list, so that they are now represented as a
set of disjoint ranges. All of its usages are updated accordingly.
Fixes #1155
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This class is responsible for representing a set of range tombstones
as disjoint, non-overlapping ranges.
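As an illustration (using integers for bounds instead of clustering prefixes, and assuming the newer, higher timestamp wins wherever tombstones overlap), keeping the list disjoint on insertion can be sketched as:

```python
# Illustrative sketch, not Scylla's implementation: a list of disjoint
# [start, end) ranges, each carrying a deletion timestamp. Overlapping
# inserts are resolved by keeping the higher timestamp in the
# overlapping region, so stored ranges never overlap.

class RangeTombstoneList:
    def __init__(self):
        self.ranges = []   # sorted, disjoint (start, end, timestamp)

    def insert(self, start, end, ts):
        out = []
        for s, e, t in self.ranges:
            if e <= start or s >= end:        # no overlap: keep as-is
                out.append((s, e, t))
                continue
            if t > ts:
                # existing tombstone is newer: punch it out of the new one
                if s > start:
                    out.append((start, s, ts))
                out.append((s, e, t))
                start = max(start, e)
            else:
                # new tombstone is newer: keep only non-overlapping parts
                if s < start:
                    out.append((s, start, t))
                if e > end:
                    out.append((end, e, t))
        if start < end:
            out.append((start, end, ts))
        self.ranges = sorted(out)
```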
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
This patch introduces the range_tombstone class, composed of
a [start, end] pair of clustering_key_prefixes, the type
of inclusiveness of each bound, and a tombstone.
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
"This series introduces a tracing infrastructure that may be used
for tracing CQL commands execution and measuring latencies of separate
stages of CQL handling as defined by a CQL binary protocol specification.
To begin tracing one should create a "tracing session", which may then
be used to issue tracing events.
If execution of a specific CQL command involves other Nodes (not only a Coordinator),
then a "tracing session ID" is passed to that Node (in the context of the
corresponding RPC call). Then this "session ID" may be used to create a
"secondary tracing session" to issue tracing events in the context of the original session.
The series contains an implementation of tracing that uses a keyspace in the current
cluster for storing tracing information.
This series contains a demo per-request tracing instrumentation of a QUERY
CQL command, and even this instrumentation is partial: it only fully instruments
the QUERY->SELECT->read_data call chain.
This is only the very beginning of the proper instrumentation, which is
to come.
Right now the latencies for a single SELECT of a single row with RF 1 from a 2-node cluster
on my laptop started using ccm (for C* all default parameters; for Scylla: memory 256MB, --smp 2)
are as follows (pseudo-graphics warning):
--------------------------------------------------------------------------------------------
| scylla (2 Nodes x 2 shards each) | C* 2.1.8
_______________________________________|___________________________________|________________
Coordinator and replica are same Node | |
(TRACING OFF): | 0.3ms | 0.3ms
c-s with a single thread mean latency | (was 0.2ms before the last |
value | rebase with a master) |
--------------------------------------------------------------------------------------------
Coordinator and replica are same Node | |
(TRACING ON) | ~250us | ~1200us
Running a SELECT command from a cqlsh | |
a few times | |
--------------------------------------------------------------------------------------------
Coordinator and replica are not on the | |
same Node | ~700us | >2500us
(TRACING ON) | |
--------------------------------------------------------------------------------------------
To begin tracing one may use the cqlsh "TRACING ON/OFF" commands:
cqlsh> TRACING ON
Now Tracing is enabled
cqlsh> select "C0", "C1" from keyspace1.standard1 where key=0x12345679;
C0 | C1
--------------------+------
0x000000000001e240 | null
(1 rows)
Tracing session: 146f0180-21e7-11e6-b244-000000000000
activity | timestamp | source | source_elapsed
-------------------------------------------------------------------+----------------------------+-----------+----------------
select "C0", "C1" from keyspace1.standard1 where key=0x12345679; | 2016-05-24 22:38:24.536000 | 127.0.0.1 | 0
message received from /127.0.0.1 [0] | 2016-05-24 22:38:24.537000 | 127.0.0.2 | --
Done reading options [0] | 2016-05-24 22:38:24.537000 | 127.0.0.1 | 3
read_data handling is done [0] | 2016-05-24 22:38:24.537000 | 127.0.0.2 | 37
Parsing a statement [0] | 2016-05-24 22:38:24.537000 | 127.0.0.1 | 3
Processing a statement [0] | 2016-05-24 22:38:24.537000 | 127.0.0.1 | 56
Done processing - preparing a result [0] | 2016-05-24 22:38:24.537000 | 127.0.0.1 | 550
Request complete | 2016-05-24 22:38:24.536560 | 127.0.0.1 | 560
cqlsh>"
trace_state: Is a single tracing session.
tracing: A sharded service that contains an i_trace_backend_helper instance
and is a "factory" of trace_state objects.
trace_state's main interface functions are:
- begin(): Start time counting (should be used via tracing::begin() wrapper).
- trace(): Create a tracing event - it's coupled with a time passed since begin()
(should be used via tracing::trace() wrapper).
- ~trace_state(): Destructor will close the tracing session.
The "tracing" service's main interface functions are:
- start(): Initialize a backend.
- stop(): Shut down a backend.
- create_session(): Creates a new tracing session.
(tracing::end_session(): Is called by a trace_state destructor).
When trace_state needs to store a tracing event it uses a backend helper from
a "tracing" service.
The "tracing" service limits the number of open tracing sessions to a static
maximum. If this maximum is reached, subsequent sessions will be dropped.
trace_state implements a similar strategy with regard to tracing events per
single session.
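The session cap can be illustrated with a toy Python model (names and the limit value are mine, not the actual implementation):

```python
# Toy model of the static session limit: once the cap is reached,
# create_session() drops the request; closing a session frees a slot.

MAX_SESSIONS = 2   # illustrative value only

class TraceState:
    def __init__(self, tracing):
        self.tracing = tracing
        self.events = []

    def trace(self, msg):
        self.events.append(msg)   # record a tracing event

    def close(self):              # stands in for ~trace_state()
        self.tracing.active -= 1

class Tracing:
    def __init__(self, max_sessions=MAX_SESSIONS):
        self.max_sessions = max_sessions
        self.active = 0
        self.dropped = 0

    def create_session(self):
        if self.active >= self.max_sessions:
            self.dropped += 1     # over the static limit: drop the session
            return None
        self.active += 1
        return TraceState(self)
```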
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Uses the CQL keyspace system_traces to store tracing information.
Uses two tables:
CREATE TABLE system_traces.sessions (
session_id uuid,
command text,
client inet,
coordinator inet,
duration int,
parameters map<text, text>,
request text,
started_at timestamp,
PRIMARY KEY ((session_id)))
and
CREATE TABLE system_traces.events (
session_id uuid,
event_id timeuuid,
activity text,
source inet,
source_elapsed int,
thread text,
PRIMARY KEY ((session_id), event_id))
system_traces.sessions table contains records of tracing sessions.
system_traces.sessions columns description:
- session_id: an ID of the session.
- command: type of a command this session was created for
(currently supported "NONE", "QUERY" and "REPAIR").
- client: IP of the client that issued the command.
- coordinator: IP of a coordinator that received the command.
- duration: total duration of the tracing session (in us).
- parameters: optional parameters for this session, passed to
i_trace_state::begin() call.
- request: a CQL command this tracing session is created for.
- started_at: the time at which the session was started.
system_traces.events contains records of separate tracing events.
system_traces.events columns description:
- session_id: an ID of the session.
- event_id: an ID of the event.
- activity: the trace point description - a message given to
i_trace_state::trace().
- source: IP of the Node where trace event was issued.
- source_elapsed: time passed since creation of a tracing session (in us) on
the Node where this trace event was issued.
- thread: name of the thread in whose context this trace event was
	 issued (currently "core N", where 'N' is the index of
	 the shard the trace event was issued on).
This class caches lambdas that create the corresponding mutations for each
tracing record requested to be stored, until the flush() method is called.
flush() merges all pending mutations for the "sessions" and "events" tables,
then applies the mutation to the "events" table and, when that completes, to
the "sessions" table. This ensures that when some tracing session is visible,
all its events are visible too.
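The ordering guarantee can be sketched as follows (hypothetical names, Python for illustration):

```python
import asyncio

# Sketch of the flush() ordering: apply the "events" mutation first and
# only then the "sessions" one, so a session row never becomes visible
# before its events do. apply_mutation is a caller-supplied async writer.

class TraceFlusher:
    def __init__(self, apply_mutation):
        self.apply_mutation = apply_mutation
        self.pending_events = []
        self.pending_sessions = []

    async def flush(self):
        events, self.pending_events = self.pending_events, []
        sessions, self.pending_sessions = self.pending_sessions, []
        await self.apply_mutation("events", events)      # events first...
        await self.apply_mutation("sessions", sessions)  # ...then sessions
```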
trace_keyspace_helper exposes a few metrics via collectd:
- tracing_error - the total number of errors (not including OOM)
- bad_column_family_errors - the number of times a tracing record wasn't
                             stored because the system_traces tables' schema
                             didn't match the expected value. This may happen if
                             a DB administrator is doing funny things like
                             altering the schemas of the above tables.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
* seastar 0bcdd28...864d6dc (4):
> Logging framework
> Add libubsan and libasan to fedora deps docs
> tests: add rpc cancellable tests
> rpc: add cancellable interface
Dropped logging implementation in favor of seastar's due to a link
conflict with operator<<.
"The Prepared message has a metadata section that's similar to result set
metadata but not exactly the same. Fix serialization by introducing a
separate prepared_metadata class like Origin has and implement
serialization as per the CQL protocol specification. This fixes one CQL
binary protocol version 4 issue that we currently have.
The changes have been verified by running the gocql integration tests
using v4. Please note that this series does *not* enable v4 for clients
because Cassandra 2.1.x series only supports CQL binary protocol v3."
"Writes may start to be rejected by replicas after issuing an ALTER TABLE
which doesn't affect columns. This affects all versions with ALTER TABLE
support.
Fixes #1258"
"Conversion/implementation of the "authorizer" code from Origin, handling
permissions management for users/resources.
The default implementation keeps a mapping of <user.resource>->{permissions}
in a table, the contents of which are cached for slightly quicker checks.
Adds access control to all (existing) CQL statements.
Adds access management support to the CQL impl. (GRANT/REVOKE/LIST)
Verified manually and with the dtest auth_test.py. Note that several of these
tests still fail due to (unrelated) unimplemented features, like indexes,
types, etc.
Fixes #1138"