Compare commits


912 Commits

Author SHA1 Message Date
Pekka Enberg
c4bd4e89ae release: prepare for 1.3.5 2016-11-29 09:47:38 +02:00
Raphael S. Carvalho
c5f43ecd0e main: fix exception handling when initializing data or commitlog dirs
Exception handling was broken because, after the I/O checker runs, a
storage_io_error exception is wrapped around the underlying system error
exceptions. Also, the message produced when handling the exception wasn't
precise enough for all cases, for example a lack of permission to write
to an existing data directory.

Fixes #883.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <b2dc75010a06f16ab1b676ce905ae12e930a700a.1478542388.git.raphaelsc@scylladb.com>
(cherry picked from commit 9a9f0d3a0f)
2016-11-16 15:13:44 +02:00
Paweł Dziepak
c923b6e20c row_cache: touch entries read during range queries
Fixes #1847.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1479230809-27547-1-git-send-email-pdziepak@scylladb.com>
(cherry picked from commit 999dafbe57)
2016-11-16 13:08:41 +00:00
Amnon Heiman
e9811897d2 API: cache_capacity should use uint for summing
Using a plain integer as the type for the map_reduce causes a numeric overflow.

Fixes #1801

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1479299425-782-1-git-send-email-amnon@scylladb.com>
(cherry picked from commit a4be7afbb0)
2016-11-16 15:04:24 +02:00
Paweł Dziepak
54c338d785 partition_version: make sure that snapshot is destroyed under LSA
The snapshot destructor may free objects managed by the LSA. That's why
the partition_snapshot_reader destructor explicitly destroys the snapshot
it uses. However, it was possible for an exception thrown by _read_section
to prevent that from happening, making the snapshot be destroyed
implicitly without the current allocator set to the LSA.

Refs #1831.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1478778570-2795-1-git-send-email-pdziepak@scylladb.com>
(cherry picked from commit f16d6f9c40)
2016-11-16 12:54:16 +00:00
Glauber Costa
f705be3518 histogram: moving averages: fix inverted parameters
The moving_average constructor is defined like this:

    moving_average(latency_counter::duration interval, latency_counter::duration tick_interval)

But when it is time to initialize them, we do this:

	... {tick_interval(), std::chrono::minutes(1)} ...

As can be seen, the interval and tick interval are inverted. This
leads to the metrics being assigned bogus values.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <d83f09eed20ea2ea007d120544a003b2e0099732.1478798595.git.glauber@scylladb.com>
(cherry picked from commit d3f11fbabf)
2016-11-11 10:16:14 +02:00
Calle Wilund
16a9027552 auth::password_authenticator: Ensure exceptions are processed in continuation
Fixes #1718 (even more)
Message-Id: <1475497389-27016-1-git-send-email-calle@scylladb.com>

(cherry picked from commit 5b815b81b4)
2016-11-07 09:25:29 +02:00
Calle Wilund
23e792c9ea auth::password_authenticator: "authenticate" should not throw undeclared exceptions
Fixes #1718

Message-Id: <1475487331-25927-1-git-send-email-calle@scylladb.com>
(cherry picked from commit d24d0f8f90)
2016-11-07 09:25:24 +02:00
Pekka Enberg
5294dd9eb2 Merge seastar submodule
* seastar b62d7a5...5adb964 (2):
  > file: make close() more robust against concurrent calls
  > rpc: Do not close client connection on error response for a timed out request
2016-11-07 09:25:02 +02:00
Raphael S. Carvalho
11323582d6 lcs: fix starvation at higher levels
When the max sstable size is increased, higher levels suffer from
starvation, because we decide to compact a given level only if the
following calculation results in a number greater than 1.001:

  level_size(L) / max_size_for_level_l(L)

Fixes #1720.

For this backport, I needed to add schema as a parameter to the sstable
functions that return the first and last decorated keys.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
(cherry picked from commit a8ab4b8f37)
2016-11-05 11:26:01 +02:00
Raphael S. Carvalho
69948778c6 lcs: fix broken token range distribution at higher levels
Uniform token range distribution across sstables in a level > 1 was broken,
because we were only choosing the sstable with the lowest first key when
compacting a level > 0. This resulted in a performance problem because
L1->L2 may develop a huge overlap over time, for example.
The last compacted key will now be stored for each level to ensure a sort
of "round robin" selection of sstables for compactions at level >= 1.
That's also done by C*, and they were once affected by it as described in
https://issues.apache.org/jira/browse/CASSANDRA-6284.

Fixes #1719.

For this backport, I added a schema parameter to
compaction_strategy::notify_completion() because sstable doesn't store
the schema here. Most conflicts were because some interfaces take a
schema parameter in this version.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
(cherry picked from commit a3bf7558f2)
2016-11-05 11:26:01 +02:00
Pekka Enberg
ef4332ab09 release: prepare for 1.3.4 2016-11-03 12:09:13 +02:00
Pekka Enberg
18a99c00b0 cql3: Fix selecting same column multiple times
Under the hood, the selectable::add_and_get_index() function
deliberately filters out duplicate columns. This causes
simple_selector::get_output_row() to return a row with all duplicate
columns filtered out, which triggers an assertion because of a row
mismatch with the metadata (which contains the duplicate columns).

The fix is rather simple: just make selection::from_selectors() use
selection_with_processing if the number of selectors and column
definitions doesn't match -- like Apache Cassandra does.

Fixes #1367
Message-Id: <1477989740-6485-1-git-send-email-penberg@scylladb.com>

(cherry picked from commit e1e8ca2788)
2016-11-01 09:34:49 +00:00
Pekka Enberg
cc3a4173f6 release: prepare for 1.3.3 2016-10-28 09:54:41 +03:00
Pekka Enberg
4dc196164d auth: Fix resource level handling
We use the `data_resource` class in the CQL parser, which lets users refer
to a table resource without specifying a keyspace. This asserts out in
get_level() for no good reason, as we already know the intended level
based on the constructor. Therefore, change `data_resource` to track the
level like upstream Cassandra does and use that.

Fixes #1790

Message-Id: <1477599169-2945-1-git-send-email-penberg@scylladb.com>
(cherry picked from commit b54870764f)
2016-10-27 23:38:01 +03:00
Glauber Costa
31ba6325ef auth: always convert string to upper case before comparing
We store all auth permission strings in upper case, but the user might
very well pass them in lower case.

We could use a standard key comparator / hash here, but since the
strings tend to be small, the new sstring will likely be allocated on
the stack and this approach yields significantly less code.

Fixes #1791.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <51df92451e6e0a6325a005c19c95eaa55270da61.1477594199.git.glauber@scylladb.com>
(cherry picked from commit ef3c7ab38e)
2016-10-27 22:10:04 +03:00
Tomasz Grabiec
dcd8b87eb8 Merge seastar upstream
* seastar b62d7a5...0fd8792 (1):
  > rpc: Do not close client connection on error response for a timed out request

Refs #1778
2016-10-25 13:56:36 +02:00
Tomasz Grabiec
6b53aad9fb partition_version: Fix corruption of partition_version list
The move constructor of partition_version was not invoking move
constructor of anchorless_list_base_hook. As a result, when
partition_version objects were moved, e.g. during LSA compaction, they
were unlinked from their lists.

This can make readers return invalid data, because not all versions
will be reachable.

It also causes leaks of the versions which are not directly attached
to a memtable entry. This will trigger an assertion failure in the LSA
region destructor. This assertion triggers with the row cache disabled.
With the cache
enabled (default) all segments are merged into the cache region, which
currently is not destroyed on shutdown, so this problem would go
unnoticed. With cache disabled, memtable region is destroyed after
memtable is flushed and after all readers stop using that memtable.

Fixes #1753.
Message-Id: <1476778472-5711-1-git-send-email-tgrabiec@scylladb.com>

(cherry picked from commit fe387f8ba0)
2016-10-18 11:00:19 +02:00
Pekka Enberg
762b156809 database: Fix io_priority_class related compilation error
Commit e6ef49e ("db: Do not timeout streaming readers") breaks compilation of database.cc:

  database.cc: In lambda function:
  database.cc:282:62: error: ‘const class io_priority_class’ has no member named ‘id’
               if (service::get_local_streaming_read_priority().id() == pc.id()) {
                                                              ^~
  database.cc:282:73: error: ‘const class io_priority_class’ has no member named ‘id’
               if (service::get_local_streaming_read_priority().id() == pc.id()) {

...because we don't have Seastar commit 823a404 ("io_priority_class:
remove non-explicit operator unsigned") backported.

Fix the issue by using the non-explicit operator instead of explicit id().

Acked-by: Tomasz Grabiec <tgrabiec@scylladb.com>
Message-Id: <1476425276-17171-1-git-send-email-penberg@scylladb.com>
2016-10-14 13:28:32 +03:00
Pekka Enberg
f3ed1b4763 release: prepare for 1.3.2 2016-10-13 20:34:46 +03:00
Paweł Dziepak
6618236fff query_pagers: fix clustering key range calculation
The paging code assumes that the clustering row range [a, a] contains only
one row, which may not be true. Another problem is that it tries to use
the range<> interface for dealing with clustering key ranges, which
doesn't work because of the lack of a correct comparator.

Refs #1446.
Fixes #1684.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1475236805-16223-1-git-send-email-pdziepak@scylladb.com>
(cherry picked from commit eb1fcf3ecc)
2016-10-10 16:08:09 +03:00
Asias He
ecb0e44480 gossip: Switch to use system_clock
The expire time which is used to decide when to remove a node from
gossip membership is gossiped around the cluster. We switched to steady
clock in the past. In order to have a consistent time_point in all the
nodes in the cluster, we have to use wall clock. Switch to use
system_clock for gossip.

Fixes #1704

(cherry picked from commit f0d3084c8b)
2016-10-09 18:12:46 +03:00
Tomasz Grabiec
e6ef49e366 db: Do not timeout streaming readers
There is a limit to concurrency of sstable readers on each shard. When
this limit is exhausted (currently 100 readers) readers queue. There
is a timeout after which queued readers are failed, equal to
read_request_timeout_in_ms (5s by default). The reason we have the
timeout here is primarily because the readers created for the purpose
of serving a CQL request no longer need to execute after waiting
longer than read_request_timeout_in_ms. The coordinator no longer
waits for the result so there is no point in proceeding with the read.

This timeout should not apply for readers created for streaming. The
streaming client currently times out after 10 minutes, so we could
wait at least that long. Timing out sooner makes streaming unreliable,
which under high load may prevent streaming from completing.

The change sets no timeout for streaming readers at replica level,
similarly as we do for system tables readers.

Fixes #1741.

Message-Id: <1475840678-25606-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit 2a5a90f391)
2016-10-09 10:33:39 +03:00
Tomasz Grabiec
7d24a3ed56 transport: Extend request memory footprint accounting to also cover execution
CQL server is supposed to throttle requests so that they don't
overflow memory. The problem is that it currently accounts for
request's memory only around reading of its frame from the connection
and not actual request execution. As a result too many requests may be
allowed to execute and we may run out of memory.

Fixes #1708.
Message-Id: <1475149302-11517-1-git-send-email-tgrabiec@scylladb.com>

(cherry picked from commit 7e25b958ac)
2016-10-03 14:15:21 +03:00
Avi Kivity
ae2a1158d7 Update seastar submodule
* seastar 9b541ef...b62d7a5 (1):
  > semaphore: Introduce get_units()
2016-10-03 14:11:26 +03:00
Asias He
f110c2456a gossip: Do not remove failure_detector history on remove_endpoint
Otherwise a node could wrongly think the decommissioned node is still
alive and not evict it from the gossip membership.

Backport: CASSANDRA-10371

7877d6f Don't remove FailureDetector history on removeEndpoint

Fixes #1714
Message-Id: <f7f6f1eec2aab1b97a2e568acfd756cca7fc463a.1475112303.git.asias@scylladb.com>

(cherry picked from commit 511f8aeb91)
2016-09-29 13:27:34 +03:00
Raphael S. Carvalho
37d27b1144 api: implement api to return sstable count per level
'nodetool cfstats' wasn't showing per-level sstable count because
the API wasn't implemented.

Fixes #1119.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <0dcdf9196eaec1692003fcc8ef18c77d0834b2c6.1474410770.git.raphaelsc@scylladb.com>
(cherry picked from commit 67343798cf)
2016-09-26 09:51:33 +03:00
Tomasz Grabiec
ff674391c2 Merge seastar upstream
Refs #1622
Refs #1690

* seastar b58a287...9b541ef (2):
  > input_stream: Fix possible infinite recursion in consume()
  > iostream: Fix stack overflow in output_stream::split_and_put()
2016-09-22 14:58:44 +02:00
Asias He
55c9279354 gossip: Fix std::out_of_range in setup_collectd
It is possible that endpoint_state_map does not contain the entry for
the node itself when collectd accesses it.

Fixes the issue:

Sep 18 11:33:16 XXX scylla[19483]: [shard 0] seastar - Exceptional
future ignored: std::out_of_range (_Map_base::at)

Fixes #1656

Message-Id: <8ffe22a542ff71e8c121b06ad62f94db54cc388f.1474377722.git.asias@scylladb.com>
(cherry picked from commit aa47265381)
2016-09-20 21:10:55 +03:00
Tomasz Grabiec
ef7b4c61ff tests: Add test for UUID type ordering
Message-Id: <1473956716-5209-2-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit 2282599394)
2016-09-20 12:22:03 +02:00
Tomasz Grabiec
b9e169ead9 types: fix uuid_type_impl::less
timeuuid_type_impl::compare_bytes is a "trichotomic" comparator (-1,
0, 1) while less() is a "less" comparator (false, true). The code
incorrectly returns c1 instead of c1 < 0 which breaks the ordering.

Fixes #1196.
Message-Id: <1473956716-5209-1-git-send-email-tgrabiec@scylladb.com>

(cherry picked from commit 804fe50b7f)
2016-09-20 12:22:00 +02:00
Shlomi Livne
dabbadcf39 release: prepare for 1.3.1
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2016-09-18 13:18:20 +03:00
Duarte Nunes
c9cb14e160 thrift: Correctly detect clustering range wrap around
This patch uses the clustering bounds comparator to correctly detect
wrap around of a clustering range in the thrift handler.

Refs #1446

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1473952155-14886-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit bc3cbb7009)
2016-09-15 16:52:43 +01:00
Shlomi Livne
985352298d ami: Fix instructions how to run scylla_io_setup on non ephemeral instances
On instances different than i2/m3/c3 we provide instructions to run
scylla_io_setup. Running scylla_io_setup requires access to
/var/lib/scylla to create a temporary file. To gain access to that
directory the user should run 'sudo scylla_io_setup'.

Refs #1645.

Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
Message-Id: <4ce90ca1ba4da8f07cf8aa15e755675463a22933.1473935778.git.shlomi@scylladb.com>
(cherry picked from commit acb83073e2)
2016-09-15 15:27:02 +01:00
Paweł Dziepak
c2d347efc7 Merge "Fix abort when querying with contradicting clustering restrictions" from Tomek
"This series backports fixes for #1670 on top of 1.3 branch.

Fixes abort when querying with contradicting clustering column
restrictions, for example:

   SELECT * FROM test WHERE k = 0 AND ck < 1 and ck > 2"
2016-09-15 14:55:28 +01:00
Tomasz Grabiec
78c7408927 Fix abort when querying with contradicting clustering restrictions
Example of affected query:

  SELECT * FROM test WHERE k = 0 AND ck < 1 and ck > 2

Refs #1670.

This commit brings back the backport of "Don't allow CK wrapping
ranges" by Duarte by reverting commit 11d7f83d52.

It also has the following fix, which is introduced by the
aforementioned commit, squashed to improve bisectability:

"cql3: Consider bound type when detecting wrap around

 This patch uses the clustering bounds comparator to correctly detect
 wrap around of a clustering range. This fixes a manifestation of #1446,
 introduced by b1f9688432, where a query
 such as select * from cf where k = 0x00 and c0 = 0x02 and c1 > 0x02
 would result in a range containing a clustering key and a prefix,
 incorrectly ordered by the prefix equality or lexicographical
 comparators.

 Refs #1446

 Signed-off-by: Duarte Nunes <duarte@scylladb.com>
 (cherry picked from commit ee2694e27d)"
2016-09-14 19:50:49 +02:00
Duarte Nunes
5716decf60 bounds_view: Create from nonwrapping_range
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit 084b931457)
2016-09-14 18:28:55 +02:00
Duarte Nunes
f60eb3958a range_tombstone: Extract out bounds_view
This patch extracts bounds_view from range_tombstone so its comparator
can be reused elsewhere.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit 878927d9d2)
2016-09-14 18:28:55 +02:00
Tomasz Grabiec
7848781e5f database: Ignore spaces in initial_token list
Currently we get a boost::lexical_cast error on startup if initial_token
has a list which contains spaces after commas, e.g.:

  initial_token: -1100081313741479381, -1104041856484663086, ...

Fixes #1664.
Message-Id: <1473840915-5682-1-git-send-email-tgrabiec@scylladb.com>

(cherry picked from commit a498da1987)
2016-09-14 12:03:06 +03:00
Pekka Enberg
195994ea4b Update scylla-ami submodule
* dist/ami/files/scylla-ami 14c1666...e1e3919 (1):
  > scylla_ami_setup: remove scylla_cpuset_setup
2016-09-07 21:05:33 +03:00
Takuya ASADA
183910b8b4 dist/common/scripts/scylla_sysconfig_setup: sync cpuset parameters with rps_cpus settings when posix_net_conf.sh is enabled and NIC is single queue
In posix_net_conf.sh's single-queue NIC mode (which means RPS-enabled mode), we exclude cpu0 and its sibling from the network stack processing CPUs and assign the NIC IRQ to cpu0.
Since the network stack never runs on cpu0 and its sibling, to get better performance we need to exclude these CPUs from Scylla too.
To do this, we get the RPS CPU mask from posix_net_conf.sh and pass it to scylla_cpuset_setup to construct /etc/scylla.d/cpuset.conf when scylla_setup is executed.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1472544875-2033-2-git-send-email-syuu@scylladb.com>
(cherry picked from commit 533dc0485d)
2016-09-07 21:00:30 +03:00
Takuya ASADA
f4746d2a46 dist/common/scripts/scylla_prepare: drop unnecessary multiqueue NIC detection code in scylla_prepare
Right now scylla_prepare passes the -mq option to posix_net_conf.sh when the number of RX queues > 1, but posix_net_conf.sh itself switches the NIC mode to sq when queues < ncpus / 2.
So the logic differs, and posix_net_conf.sh no longer needs -sq/-mq to be specified, since it autodetects the queue mode.
Drop the detection logic from scylla_prepare and let posix_net_conf.sh detect it.

Fixes #1406

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1472544875-2033-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 0c3bb2ee63)
2016-09-07 20:59:10 +03:00
Pekka Enberg
fa7f990407 Update seastar submodule
* seastar e6571c4...b58a287 (3):
  > scripts/posix_net_conf.sh: supress 'ls: cannot access /sys/class/net/<NIC>/device/msi_irqs/' error message
  > scripts/posix_net_conf.sh: fix 'command not found' error when specifies --cpu-mask
  > scripts/posix_net_conf.sh: add support --cpu-mask mode
2016-09-07 20:58:09 +03:00
Pekka Enberg
86dbcf093b systemd: Don't start Scylla service until network is up
Alexandr Porunov reports that Scylla fails to start up after reboot as follows:

  Aug 25 19:44:51 scylla1 scylla[637]: Exiting on unhandled exception of type 'std::system_error': Error system:99 (Cannot assign requested address)

The problem is that because there's no dependency to network service,
Scylla simply attempts to start up too soon in the boot sequence and
fails.

Fixes #1618.

Message-Id: <1472212447-21445-1-git-send-email-penberg@scylladb.com>
(cherry picked from commit 2d3aee73a6)
2016-08-29 13:26:11 +03:00
Takuya ASADA
db89811fcc dist/common/scripts/scylla_setup: support enabling services on Ubuntu 15.10/16.04
Right now it ignores Ubuntu, but we share the .service files between Fedora/CentOS and Ubuntu >= 15.10, so support it there too.

Fixes #1556.

Message-Id: <1471932814-17347-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 74d994f6a1)
2016-08-29 13:26:08 +03:00
Duarte Nunes
9ec939f6a3 thrift: Avoid always recording size estimates
Size estimates for a particular column family are recorded every 5
minutes. However, when a user calls the describe_splits(_ex) verbs,
they may want to see estimates for a recently created and updated
column family; this is legitimate and common in testing. However, a
client may also call describe_splits(_ex) very frequently and
recording the estimates on every call is wasteful and, worse, can
cause clients to give up. This patch fixes this by only recording
estimates if the first attempt to query them produces no results.

Refs #1139

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1471900595-4715-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 440c1b2189)
2016-08-29 12:08:53 +03:00
Pekka Enberg
d40f586839 dist/docker: Clean up Scylla description for Docker image
Message-Id: <1472145307-3399-1-git-send-email-penberg@scylladb.com>
(cherry picked from commit c5e5e7bb40)
2016-08-29 10:49:09 +03:00
Raphael S. Carvalho
d55c55efec api: use estimation of pending tasks in compaction manager too
We have APIs for getting pending compaction tasks in both the column
family and the compaction manager. The column family one is already
returning pending tasks properly.
The compaction manager's one, used by 'nodetool compactionstats', was
returning a value which doesn't reflect pending compactions.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <a20b88938ad39e95f98bfd7f93e4d1666d1c6f95.1471641211.git.raphaelsc@scylladb.com>
(cherry picked from commit d8be32d93a)
2016-08-29 10:20:20 +03:00
Raphael S. Carvalho
dc79761c17 sstables: Fix estimation of pending tasks for leveled strategy
There were two underflow bugs.

1) in variable i, causing get_level() to see an invalid level and
throw an exception as a result.
2) when estimating number of pending tasks for a level.

Fixes #1603.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <cce993863d9de4d1f49b3aabe981c475700595fc.1471636164.git.raphaelsc@scylladb.com>
(cherry picked from commit 77d4cd21d7)
2016-08-29 10:19:30 +03:00
Paweł Dziepak
bbffb51811 mutation_partition: fix iterator invalidation in trim_rows
Reversed iterators are adaptors for 'normal' iterators. These underlying
iterators point to different objects than the reversed iterators
themselves.

The consequence of this is that removing an element pointed to by a
reversed iterator may invalidate a reversed iterator which points to a
completely different object.

This is what happens in trim_rows for reversed queries. Erasing a row
can invalidate the end iterator and the loop would fail to stop.

The solution is to introduce
reversal_traits::erase_dispose_and_update_end(), a function which erases
and disposes the object pointed to by a given iterator but also takes a
reference to an end iterator and updates it if necessary to make sure
that it stays valid.

Fixes #1609.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1472080609-11642-1-git-send-email-pdziepak@scylladb.com>
(cherry picked from commit 6012a7e733)
2016-08-25 17:40:37 +03:00
Amnon Heiman
ec3ace5aa3 housekeeping: Silently ignore check version if Scylla is not available
Normally, the check version should start and stop with the scylla-server
service.

If it fails to find scylla server, there is no need to check the
version, nor to report it, so it can stop silently.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
(cherry picked from commit 2b98335da4)
2016-08-23 18:11:44 +03:00
Amnon Heiman
1f2d1012be housekeeping: Use curl instead of Python's libraries
There is a problem with Python's SSL libraries on Ubuntu 14.04:

  ubuntu@ip-10-81-165-156:~$ /usr/lib/scylla/scylla-housekeeping -q version
  Traceback (most recent call last):
    File "/usr/lib/scylla/scylla-housekeeping", line 94, in <module>
      args.func(args)
    File "/usr/lib/scylla/scylla-housekeeping", line 71, in check_version
      latest_version = get_json_from_url(version_url + "?version=" + current_version)["version"]
    File "/usr/lib/scylla/scylla-housekeeping", line 50, in get_json_from_url
      response = urllib2.urlopen(req)
    File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
      return _opener.open(url, data, timeout)
    File "/usr/lib/python2.7/urllib2.py", line 404, in open
      response = self._open(req, data)
    File "/usr/lib/python2.7/urllib2.py", line 422, in _open
      '_open', req)
    File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
      result = func(*args)
    File "/usr/lib/python2.7/urllib2.py", line 1222, in https_open
      return self.do_open(httplib.HTTPSConnection, req)
    File "/usr/lib/python2.7/urllib2.py", line 1184, in do_open
      raise URLError(err)
  urllib2.URLError: <urlopen error [Errno 1] _ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure>

Instead of using Python libraries to connect to the check version
server, we will use curl for that.

Fixes #1600

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
(cherry picked from commit 4598674673)
2016-08-23 18:11:38 +03:00
Amnon Heiman
9c8cfd3c0e housekeeping: Add curl as a dependency
To work around an SSL problem with Python on Ubuntu 14.04, we need to
use curl. Add it as a dependency so that it's available on the host.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
(cherry picked from commit 91944b736e)
2016-08-23 18:11:13 +03:00
Takuya ASADA
d9e4ab38f6 dist/ubuntu: support scylla-housekeeping service on all Ubuntu versions
The current scylla-housekeeping support on Ubuntu has a bug: it does not
install the .service/.timer files for Ubuntu 16.04. Fix it to make it work.

Fixes #1502

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Tested-by: Amos Kong <amos@scylladb.com>
Message-Id: <1471607903-14889-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 80f7449095)
2016-08-23 13:50:43 +03:00
Takuya ASADA
5d72e96ccc dist/common/systemd: don't use .in for scylla-housekeeping.*, since these are not template files
.in is the suffix for template files which require rewriting at build time, but these systemd unit files do not require rewriting, so don't name them .in; reference them directly from the .spec.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1471607533-3821-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit aac60082ae)
2016-08-23 13:50:36 +03:00
Paweł Dziepak
a072df0e09 sstables: do not call consume_end_partition() after proceed::no
After state_processor().process_state() returns proceed::no the upper
layer should have a chance to act before more data is pushed to the
consumer. This means that in case of proceed::no verify_end_state()
should not be called immediately since it may invoke
consume_end_partition().

Fixes #1605.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1471943032-7290-1-git-send-email-pdziepak@scylladb.com>
(cherry picked from commit 5feed84e32)
2016-08-23 12:24:59 +03:00
Pekka Enberg
8291ec13fa dist/docker: Separate supervisord config files
Move scylla-server and scylla-jmx supervisord config files to separate
files and make the main supervisord.conf scan /etc/supervisord.conf.d/
directory. This makes it easier for people to extend the Docker image
and add their own services.

Message-Id: <1471588406-25444-1-git-send-email-penberg@scylladb.com>
(cherry picked from commit 9d1d8baf37)
2016-08-23 11:56:24 +03:00
Vlad Zolotarov
c485551488 tracing::trace_state: fix a compilation error with gcc 4.9
See #1602.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1471774784-26266-1-git-send-email-vladz@cloudius-systems.com>
2016-08-21 16:39:50 +03:00
Paweł Dziepak
b85164bc1d sstables: optimise clustering rows filtering
Clustering rows in the sstables are sorted in ascending order, so we can
use that to minimise the number of comparisons when checking whether a
row is in the requested range.

Refs #1544.

Paweł further explains the backport rationale for 1.3:

"Apart from making sense on its own, this patch has a very curious
property
of working around #1544 in a way that doesn't make #1446 hit us harder
than
usual.
So, in the branch-1.3 we can:
 - revert 85376ce555
   'Merge "Don't allow CK wrapping ranges" from Duarte' -- previous,
   insufficient workaround for #1544
 - apply this patch
 - rejoice as cql_query_test passes and #1544 is no longer a problem

The scenario above assumes that this patch doesn't introduce any
regressions."

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Reviewed-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <1471608921-30818-1-git-send-email-pdziepak@scylladb.com>
(cherry picked from commit e60bb83688)
2016-08-19 18:11:49 +03:00
Pekka Enberg
11d7f83d52 Revert "Merge "Don't allow CK wrapping ranges" from Duarte"
This reverts commit 85376ce555, reversing
changes made to 3f54e0c28e.

The change breaks CQL range queries.
2016-08-19 18:11:31 +03:00
Pekka Enberg
5c4a24c1c0 dist/docker: Use Scylla mascot as the logo
Glauber "eagle eyes" Costa pointed out that the Scylla logo used in our
Docker image documentation looks broken because it's missing the Scylla
text.

Fix the problem by using the Scylla mascot instead.
Message-Id: <1471525154-2800-1-git-send-email-penberg@scylladb.com>

(cherry picked from commit 2bf5e8de6e)
2016-08-19 12:50:46 +03:00
Pekka Enberg
306eeedf3e dist/docker: Fix bug tracker URL in the documentation
The bug tracker URL in our Docker image documentation is not clickable
because the URL Markdown extracts automatically is broken.

Fix that and add some more links on how to get help and report issues.
Message-Id: <1471524880-2501-1-git-send-email-penberg@scylladb.com>

(cherry picked from commit 4d90e1b4d4)
2016-08-19 12:50:42 +03:00
Pekka Enberg
9eada540d9 release: prepare for 1.3.0 2016-08-18 16:20:21 +03:00
Yoav Kleinberger
a662765087 docker: extend supervisor capabilities
Allow the user to use the `supervisorctl` program to start and stop
services. `exec` needed to be added to the scylla and scylla-jmx starter
scripts; otherwise supervisord loses track of the actual process we
want to manage.

Signed-off-by: Yoav Kleinberger <yoav@scylladb.com>
Message-Id: <1471442960-110914-1-git-send-email-yoav@scylladb.com>
(cherry picked from commit 25fb5e831e)
2016-08-18 15:41:08 +03:00
Pekka Enberg
192f89bc6f dist/docker: Documentation cleanups
- Fix invisible characters to be space so that Markdown to PDF
  conversion works.

- Fix formatting of examples to be consistent.

- Spellcheck.

Message-Id: <1471514924-29361-1-git-send-email-penberg@scylladb.com>
(cherry picked from commit 1553bec57a)
2016-08-18 13:14:21 +03:00
Pekka Enberg
b16bb0c299 dist/docker: Document image command line options
This patch documents all the command line options Scylla's Docker image supports.

Message-Id: <1471513755-27518-1-git-send-email-penberg@scylladb.com>
(cherry picked from commit 4ca260a526)
2016-08-18 13:14:16 +03:00
Amos Kong
cd9d967c44 systemd: have the first housekeeping check right after start
Issue: https://github.com/scylladb/scylla/issues/1594

Currently systemd runs the first housekeeping check at the end of the
first timer period. We expect it to run right after start.

This patch makes systemd consistent with upstart.

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <4cc880d509b0a7b283278122a70856e21e5f1649.1471433388.git.amos@scylladb.com>
(cherry picked from commit 9d53305475)
2016-08-17 16:02:25 +03:00
Avi Kivity
236b089b03 Merge "Fixes for streamed_mutation_from_mutation" from Paweł
"This series contains fixes for two memory leaks in
streamed_mutation_from_mutation.

Fixes #1557."

(cherry picked from commit 4871b19337)
2016-08-17 13:26:25 +03:00
Avi Kivity
9d54b33644 Merge 2016-08-17 13:25:49 +03:00
Benoit Canet
4ef6c3155e systemd: Remove WorkingDirectory directive
The WorkingDirectory directive does not support environment variables on
the systemd version shipped with Ubuntu 16.04. Fortunately, not
setting WorkingDirectory implicitly sets it to the user's home directory,
which is the same thing (i.e. /var/lib/scylla).

Fixes #1319

Signed-off-by: Benoit Canet <benoit@scylladb.com>
Message-Id: <1470053876-1019-1-git-send-email-benoit@scylladb.com>
(cherry picked from commit 90ef150ee9)
2016-08-17 12:34:44 +03:00
Takuya ASADA
fe529606ae dist/common/scripts: mkdir -p /var/lib/scylla/coredump before symlinking
We are creating this dir in scylla_raid_setup, but the user may create an XFS volume without using that command; scylla_coredump_setup should work in that case too.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1470638615-17262-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 60ce16cd54)
2016-08-16 12:35:15 +03:00
Avi Kivity
85376ce555 Merge "Don't allow CK wrapping ranges" from Duarte
"This pathset ensures user-specified clustering key ranges are never
wrapping, as those types of ranges are not defined for CQL3.

Fixes #1544"
2016-08-16 10:09:31 +03:00
Duarte Nunes
5e8ac82614 cql3: Discard wrap around ranges.
Wrapping ranges are not supported in CQL3. If one is specified,
this patch converts it to an empty range.

Fixes #1544

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-08-15 15:22:44 +00:00
Duarte Nunes
22c8520d61 storage_proxy: Short circuit query without clustering ranges
This patch makes the storage_proxy return an empty result when the
query doesn't define any clustering ranges (default or specific).

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-08-15 15:05:23 +00:00
Duarte Nunes
e7355c9b60 thrift: Don't always validate clustering range
This patch makes make_clustering_range not enforce that the range be
non-wrapping, so that it can be validated differently if needed. A
make_clustering_range_and_validate function is introduced that keeps
the old behavior.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-08-15 15:05:18 +00:00
Paweł Dziepak
3f54e0c28e partition_version: handle errors during version merge
Currently, the partition snapshot destructor can throw, which is a big no-no.
The solution is to ignore the exception and leave the versions unmerged,
hoping that subsequent reads will succeed at merging.

However, another problem is that the merge doesn't use allocating
sections which means that memory won't be reclaimed to satisfy its
needs. If the cache is full this may result in partition versions not
being merged for a very long time.

This patch introduces partition_snapshot::merge_partition_versions()
which contains all the version merging logic that was previously present
in the snapshot destructor. This function may throw so that it can be
used with allocating sections.

The actual merging and handling of potential errors is done from the
partition_snapshot_reader destructor. It tries to merge versions under
the allocating section. Only if that fails does it give up and leave them
unmerged.

Fixes #1578
Fixes #1579.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1471265544-23579-1-git-send-email-pdziepak@scylladb.com>
(cherry picked from commit 5cae44114f)
2016-08-15 15:57:10 +03:00
Asias He
4c6f8f9d85 gossip: Add heart_beat_version to collectd
$ tools/scyllatop/scyllatop.py '*gossip*'

node-1/gossip-0/gauge-heart_beat_version 1.0
node-2/gossip-0/gauge-heart_beat_version 1.0
node-3/gossip-0/gauge-heart_beat_version 1.0

Gossip heart beat version changes every second. If everything is working
correctly, the gauge-heart_beat_version output should be 1.0. If not,
the gauge-heart_beat_version output should be less than 1.0.

Message-Id: <cbdaa1397cdbcd0dc6a67987f8af8038fd9b2d08.1470712861.git.asias@scylladb.com>
(cherry picked from commit ef782f0335)
2016-08-15 12:32:17 +03:00
Nadav Har'El
7a76157cb9 sstables: don't forget to read static row
[v2: fix check for static column (don't check if the schema is not compound)
 and move want-static-columns flag inside the filtering context to avoid
 changing all the callers.]

When a CQL request asks to read only a range of clustering keys inside
a partition, we actually need to read not just these clustering rows, but
also the static columns and add them to the response (as explained by Tomek
in issue #1568).

With the current code, that CQL request is translated into an
sstable::read_row() with a clustering-key filter. But this currently
only reads the requested clustering keys - NOT the static columns.

We don't want sstable::read_row() to unconditionally read the static
columns from disk because if, for example, they are already cached, we
might not want to read them from disk. We don't have such a partial-partition
cache yet, but we are likely to have one in the future.

This patch adds to the clustering key filter object a flag indicating whether
we need to read the static columns (actually, it's a function returning this
flag per partition, to match the API for the clustering-key filtering).

When sstable::read_row() sees that the flag for this partition is true, it also
requests to read the static columns.
Currently, the code always passes "true" for this flag - because we don't
have the logic to cache partially-read partitions.

The current find_disk_ranges() code does not yet support returning a
non-contiguous byte range, so this patch, if it notices that this partition
really has static columns in addition to the range it needs to read,
falls back to reading the entire partition. This is a correct solution
(and fixes #1568) but not the most efficient one. Because static
columns are relatively rare, let's start with this solution (correct
but less efficient when there are static columns); providing the
non-contiguous reading support is left as a FIXME.

Fixes #1568

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1471124536-19471-1-git-send-email-nyh@scylladb.com>
(cherry picked from commit 0d00da7f7f)
2016-08-15 12:30:36 +03:00
Amnon Heiman
b2e6a52461 scylla.spec: conditionally include the housekeeping.cfg in the conf package
When the housekeeping configuration file's name was changed from conf to cfg
it was no longer included as part of the conf rpm.

This change adds a macro that determines whether the file should be
included and uses that macro to conditionally add the
configuration file to the rpm.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1471169042-19099-1-git-send-email-amnon@scylladb.com>
(cherry picked from commit 612f677283)
2016-08-14 13:26:25 +03:00
Tomasz Grabiec
b1376fef9b partition_version: Add missing linearization context
Snapshot removal merges partitions, and cell merging must be done
inside linearization context.

Fixes #1574

Reviewed-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1471010625-18019-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit 1b2ea14d0e)
2016-08-12 17:56:21 +03:00
Piotr Jastrzebski
23f4813a48 Fix after free access bug in storage proxy
Due to speculative reads we can't guarantee that all
fibers started by storage_proxy::query will be finished
by the time the method returns a result.

We need to make sure that no parameter passed to this
method ever changes.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <31952e323e599905814b7f378aafdf779f7072b8.1471005642.git.piotr@scylladb.com>
(cherry picked from commit f212a6cfcb)
2016-08-12 16:35:45 +02:00
Duarte Nunes
c16c3127fe docker: If set, broadcast address is seed
This patch configures the broadcast address to be the seed if it is
set; otherwise Scylla complains about it and aborts.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1470863058-1011-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 918a2939ff)
2016-08-12 11:47:08 +03:00
Tomasz Grabiec
48fdeb47e2 Merge branch 'raphael/fix_min_max_metadata_v2' from git@github.com:raphaelsc/scylla.git
Fix for generation of sstables min/max clustering metadata from Raphael.

(cherry picked from commit d7f8ce7722)
2016-08-11 17:53:01 +03:00
Avi Kivity
9ef4006d67 Update seastar submodule
* seastar 36a8ebe...e6571c4 (1):
  > reactor: Do not test for poll mode default
2016-08-11 14:45:52 +03:00
Amnon Heiman
75c53e4f24 scylla-housekeeping: rename configuration file from conf to cfg
Files with a conf extension are run by scylla_prepare on the AMI.
The scylla-housekeeping configuration file is not a bash script and
should not be run.

This patch changes its extension to cfg, which is more Python-like.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1470896759-22651-2-git-send-email-amnon@scylladb.com>
(cherry picked from commit 5a4fc9c503)
2016-08-11 14:45:11 +03:00
Tomasz Grabiec
66292c0ef0 sstables: Fix bug in promoted index generation
maybe_flush_pi_block, which is called for each cell, assumes that
block_first_colname will be empty when the first cell is encountered
for each partition.

This didn't hold after writing a partition which generated no index
entry, because block_first_colname was cleared only when there was
data written into the promoted index. Fix by always clearing the name.

The effect was that the promoted index entry for the next partition
would be flushed sooner than necessary (still counting since the start
of the previous partition) and with an offset pointing to the start of
the current partition. This causes a parsing error when such an sstable
is read through a promoted index entry, because the offset is assumed to
point to a cell, not to the partition start.

Fixes #1567

Message-Id: <1470909915-4400-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit f1c2481040)
2016-08-11 13:09:05 +03:00
Amnon Heiman
84f7d9a49c build_deb: Add dist flag
The dist flag marks the debian package as a distributed package.
As such, the housekeeping configuration file will be included in the
package and will not need to be created by scylla_setup.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1470907208-502-2-git-send-email-amnon@scylladb.com>
(cherry picked from commit a24941cc5f)
2016-08-11 12:25:28 +03:00
Pekka Enberg
f0535eae9b dist/docker: Fix typo in "--overprovisioned" help text
Reported by Mathias Bogaert (@analytically).
Message-Id: <1470904395-4614-1-git-send-email-penberg@scylladb.com>

(cherry picked from commit d1a052237d)
2016-08-11 11:49:42 +03:00
Avi Kivity
4f096b60df Update seastar submodule
* seastar de789f1...36a8ebe (1):
  > reactor: fix I/O queue pending requests collectd metric

Fixes #1558.
2016-08-10 15:28:09 +03:00
Pekka Enberg
4ac160f2fe release: prepare for 1.3.rc3 2016-08-10 13:53:53 +03:00
Avi Kivity
395edc4361 Merge 2016-08-10 13:34:48 +03:00
Avi Kivity
e2c9feafa3 Merge "Add configuration file to scylla-housekeeping" from Amnon
"The series adds an optional configuration file to the scylla-housekeeping. The
file act as a way to prevent the scylla-housekeeping to run. A missing
configuration file, will make the scylla-housekeeping immediately.

The series adds a flag to build_rpm that differentiates between public
distributions, which contain the configuration file, and private
distributions, which do not, in which case the setup script will
create it."

(cherry picked from commit da4d33802e)
2016-08-10 13:34:04 +03:00
Avi Kivity
f4dea17c19 Merge "housekeeping: Switch to python2 and handle version" from Amnon
This series handles two issues:
* Moving to python2: though python3 is supported, there are modules that we
  need that are not rpm-installable; python3 can wait until it is more
  mature.

* Check version should send the current version when it checks for a new one,
  and a simple string compare is wrong.

(cherry picked from commit ec62f0d321)
2016-08-10 13:31:50 +03:00
Amnon Heiman
a45b72b66f scylla-housekeeping: check version should use the current version
This patch handles two issues with check version:
* When checking for a version, the script sends the current version
* Instead of a string compare, it uses parse_version to compare the
versions.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
(cherry picked from commit 406fa11cc5)
2016-08-10 13:29:53 +03:00
Amnon Heiman
be1c2a875b scylla-housekeeping: Switching to python2
There is a problem with python module installation in python3,
especially on centos. Though python34 has a normal package, a lot of the
modules are missing yum installation and can only be installed by pip.

This patch switches the scylla-housekeeping implementation to use
python2; we should switch back to python3 when CentOS python3 is
more mature.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
(cherry picked from commit 641e5dc57c)
2016-08-10 13:23:47 +03:00
Nadav Har'El
0b9f83c6b6 sstable: avoid copying non-existant value
The promoted-index reading code contained a bug where it copied the value
of a disengaged optional (this non-value was never used, but it was still
copied). Fix it by keeping the optional<> as such for longer.

This bug caused tests/sstable_test in the debug build to crash (the release
build somehow worked).

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1470742418-8813-1-git-send-email-nyh@scylladb.com>
(cherry picked from commit e005762271)
2016-08-10 13:14:49 +03:00
Pekka Enberg
0d77615b80 cql3: Filter compaction strategy class from compaction options
Cassandra 2.x does not store the compaction strategy class in compaction
options so neither should we to avoid confusing the drivers.

Fixes #1538.
Message-Id: <1470722615-29106-1-git-send-email-penberg@scylladb.com>

(cherry picked from commit 9ff242d339)
2016-08-10 12:44:50 +03:00
Pekka Enberg
8771220745 dist/docker: Add '--smp', '--memory', and '--overprovisioned' options
Add '--smp', '--memory', and '--overprovisioned' options to the Docker
image. The options are written to /etc/scylla.d/docker.conf file, which
is picked up by the Scylla startup scripts.

You can now, for example, restrict your Docker container to 1 CPU and 1
GB of memory with:

   $ docker run --name some-scylla penberg/scylla --smp 1 --memory 1G --overprovisioned 1

Needed by folks who want to run Scylla on Docker in production.

Cc: Sasha Levin <alexander.levin@verizon.com>
Message-Id: <1470680445-25731-1-git-send-email-penberg@scylladb.com>
(cherry picked from commit 6a5ab6bff4)
2016-08-10 11:54:01 +03:00
Avi Kivity
f552a62169 Update seastar submodule
* seastar ee1ecc5...de789f1 (1):
  > Merge "Fix the SMP queue poller" from Tomasz

Fixes #1553.
2016-08-10 09:54:15 +03:00
Avi Kivity
696a978611 Update seastar submodule
* seastar 0b53ab2...ee1ecc5 (1):
  > byteorder: add missing cpu_to_be(), be_to_cpu() functions

Fixes build failure.
2016-08-10 09:51:35 +03:00
Nadav Har'El
0475a98de1 Avoid some warnings in debug build
The sanitizer of the debug build warns when a "bool" variable containing
a value other than 0 or 1 is read. In particular, if a class has an
uninitialized bool field, which the class logic only sets later,
then "move"ing such an object will read the uninitialized value and produce
this warning.

This patch fixes four of these warnings seen in sstable_test by initializing
some bool fields to false, even though the code doesn't strictly need this
initialization.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1470744318-10230-1-git-send-email-nyh@scylladb.com>
(cherry picked from commit c2e4f5ba16)
2016-08-09 17:54:54 +03:00
Nadav Har'El
0b69e37065 Fix failing tests
Commit 0d8463aba5 broke some of the tests with an assertion
failure about local_is_initialized(). It turns out that there is more than
one level of local_is_initialized() we need to check... For some tests,
neither local was initialized, but for others, one was and the other
wasn't, and the wrong one was tested.

With this patch, all unit tests except "flush_queue_test.cc" pass on my
machine. I doubt this test is relevant to the promoted index patches,
but I'll continue to investigate it.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1470695199-32649-1-git-send-email-nyh@scylladb.com>
(cherry picked from commit bce020efbd)
2016-08-09 17:54:49 +03:00
Avi Kivity
dc6be68852 Merge "promoted index for reading partial partitions" from Nadav
"The goal of this patch series is to support reading and writing of a
"promoted index" - the Cassandra 2.* SSTable feature which allows reading
only a part of the partition without needing to read an entire partition
when it is very long. To make a long story short, a "promoted index" is
a sample of each partition's column names, written to the SSTable Index
file with that partition's entry. See a longer explanation of the index
file format, and the promoted index, here:

     https://github.com/scylladb/scylla/wiki/SSTables-Index-File

There are two main features in this series - first enabling reading of
parts of partitions (using the promoted index stored in an sstable),
and then enable writing promoted indexes to new sstables. These two
features are broken up into smaller stand-alone pieces to facilitate the
review.

Three features are still missing from this series and are planned to be
developed later:

1. When we fail to parse a partition's promoted index, we silently fall back
   to reading the entire partition. We should log (with rate limiting) and
   count these errors, to help in debugging sstable problems.

2. The current code only uses the promoted index when looking for a single
   contiguous clustering-key range. If the ck range is non-contiguous, we
   fall back to reading the entire partition. We should use the promoted
   index in that case too.

3. The current code only uses the promoted index when reading a single
   partition, via sstable::read_row(). When scanning through all or a
   range of partitions (read_rows() or read_range_rows()), we do not yet
    use the promoted index; we read contiguously from the data file (we do not
   even read from the index file, so unsurprisingly we can't use it)."

(cherry picked from commit 700feda0db)
2016-08-09 17:54:15 +03:00
Avi Kivity
8c20741150 Revert "sstables: promoted index write support"
This reverts commit c0e387e1ac.  The full
patchset needs to be backported instead.
2016-08-09 17:53:24 +03:00
Avi Kivity
3e3eaa693c Revert "Fix failing tests"
This reverts commit 8d542221eb.  It is needed,
but prevents another revert from taking place.  Will be reinstated later
2016-08-09 17:52:57 +03:00
Avi Kivity
03ef0a9231 Revert "Avoid some warnings in debug build"
This reverts commit 47bf8181af.  It is needed,
but prevents another revert from taking place.  Will be reinstated later.
2016-08-09 17:52:09 +03:00
Nadav Har'El
47bf8181af Avoid some warnings in debug build
The sanitizer of the debug build warns when a "bool" variable containing
a value other than 0 or 1 is read. In particular, if a class has an
uninitialized bool field, which the class logic only sets later,
then "move"ing such an object will read the uninitialized value and produce
this warning.

This patch fixes four of these warnings seen in sstable_test by initializing
some bool fields to false, even though the code doesn't strictly need this
initialization.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1470744318-10230-1-git-send-email-nyh@scylladb.com>
(cherry picked from commit c2e4f5ba16)
2016-08-09 16:58:27 +03:00
Nadav Har'El
8d542221eb Fix failing tests
Commit 0d8463aba5 broke some of the tests with an assertion
failure about local_is_initialized(). It turns out that there is more than
one level of local_is_initialized() we need to check... For some tests,
neither local was initialized, but for others, one was and the other
wasn't, and the wrong one was tested.

With this patch, all unit tests except "flush_queue_test.cc" pass on my
machine. I doubt this test is relevant to the promoted index patches,
but I'll continue to investigate it.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1470695199-32649-1-git-send-email-nyh@scylladb.com>
(cherry picked from commit bce020efbd)
2016-08-09 16:58:27 +03:00
Nadav Har'El
c0e387e1ac sstables: promoted index write support
This patch adds writing of promoted index to sstables.

The promoted index is basically a sample of columns and their positions
for large partitions: The promoted index appears in the sstable's index
file for partitions which are larger than 64 KB, and divides the partition
to 64 KB blocks (as in Cassandra, this interval is configurable through
the column_index_size_in_kb config parameter). Beyond modifying the index
file, having a promoted index may also modify the data file: Since each
of blocks may be read independently, we need to add in the beginning of
each block the list of range tombstones that are still open at that
position.

See also https://github.com/scylladb/scylla/wiki/SSTables-Index-File

Fixes #959

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
(cherry picked from commit 0d8463aba5)
2016-08-09 16:58:27 +03:00
Duarte Nunes
57d3dc5c66 thrift: Set default validator
This patch sets the default validator for dynamic column families.
Doing so has no consequences in terms of behavior, but it causes the
correct type to be shown when describing the column family through
cassandra-cli.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1470739773-30497-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 0ed19ec64d)
2016-08-09 13:56:43 +02:00
Duarte Nunes
2daee0b62d thrift: Send empty col metadata when describing ks
This patch ensures we always send the column metadata, even when the
column family is dynamic and the metadata is empty, as some clients
like cassandra-cli always assume its presence.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1470740971-31169-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit f63886b32e)
2016-08-09 14:34:14 +03:00
Pekka Enberg
3eddf5ac54 dist/docker: Document data volume and cpuset configuration
Message-Id: <1470649675-5648-1-git-send-email-penberg@scylladb.com>
(cherry picked from commit 3b31d500c8)
2016-08-09 11:15:57 +03:00
Pekka Enberg
42d6f389f9 dist/docker: Add '--broadcast-rpc-address' command line option
We already have a '--broadcast-address' command line option so let's add
the same thing for RPC broadcast address configuration.

Message-Id: <1470656449-11038-1-git-send-email-penberg@scylladb.com>
(cherry picked from commit 4372da426c)
2016-08-09 11:15:53 +03:00
Pekka Enberg
1a6f6f1605 Update scylla-ami submodule
* dist/ami/files/scylla-ami 2e599a3...14c1666 (1):
  > setup coredump on first startup
2016-08-09 11:10:20 +03:00
Avi Kivity
8d8e997f5a Update scylla-ami submodule
* dist/ami/files/scylla-ami 863cc45...2e599a3 (1):
  > Do not set developer-mode on unsupported instance types
2016-08-07 17:52:13 +03:00
Takuya ASADA
50ee889679 dist/ami/files: add a message for privacy policy agreement on login prompt
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1470212054-351-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 3d45d6579b)
2016-08-07 17:40:56 +03:00
Duarte Nunes
325f917d8a system_keyspace: Correctly deal with wrapped ranges
This patch ensures we correctly deal with ranges that wrap around when
querying the size_estimates system table.

Ref #693

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1470412433-7767-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit e0a43a82c6)
2016-08-07 17:21:58 +03:00
Takuya ASADA
b088dd7d9e dist/ami/files: show warning message for unsupported instance types
Notify users to run scylla_io_setup before launching scylla on unsupported instance types.

Fixes #1511

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1470090415-8632-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit bd1ab3a0ad)
2016-08-05 09:51:27 +03:00
Takuya ASADA
a42b2bb0d6 dist/ami: Install scylla metapackage and debuginfo on AMI
Install the scylla metapackage and debuginfo on the AMI to make it easier to report bugs from the AMI.
Fixes #1496

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1469635071-16821-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 9b59bb59f2)
2016-08-05 09:48:19 +03:00
Takuya ASADA
aecda01f8e dist/common/scripts: disable coredump compression by default, add an argument to enable compression on scylla_coredump_setup
On large memory machine compression takes too long, so disable it by default.
Also provide a way to enable it again.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1469706934-6280-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 89b790358e)
2016-08-05 09:47:17 +03:00
Takuya ASADA
f9b0a29def dist/ami: setup correct repository when --localrpm specified
There was no way to set up the correct repo when an AMI is built with the --localrpm option, since the AMI does not have access to the 'version' file, and we didn't pass a repo URL to the AMI.
So detect the optimal repo path when starting the AMI build, pass the repo URL to the AMI, and set it up correctly.

Note: this changes the behavior of the --repo option of build_ami.sh/scylla_install_pkg.
It was a repository URL, but is now a .repo/.list file URL.
This is optimal for distributions which require 3rdparty packages to install scylla, like CentOS7.
Existing shell scripts which invoke build_ami.sh need to change to the new way, such as our Jenkins jobs.

Fixes #1414

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1469636377-17828-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit d3746298ae)
2016-08-05 09:45:22 +03:00
Pekka Enberg
192e935832 dist/docker: Use Scylla 1.3 RPM repository 2016-08-05 09:13:08 +03:00
Avi Kivity
436ff3488a Merge "Docker image fixes" from Pekka
"Kubernetes is unhappy with our Docker image because we start systemd
under the hood. Fix that by switching to use "supervisord" to manage the
two processes -- "scylla" and "scylla-jmx":

  http://blog.kunicki.org/blog/2016/02/12/multiple-entrypoints-in-docker/

While at it, fix up "docker logs" and "docker exec cqlsh" to work
out-of-the-box, and update our documentation to match what we have.

Further work is needed to ensure Scylla production configuration works
as expected and is documented accordingly."

(cherry picked from commit 28ee2bdbd2)
2016-08-05 09:10:51 +03:00
Benoît Canet
b91712fc36 docker: Add documentation page for Docker Hub
Signed-of-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466438296-5593-1-git-send-email-benoit@scylladb.com>
(cherry picked from commit 4ce7bced27)
2016-08-05 09:10:48 +03:00
Yoav Kleinberger
be954ccaec docker: bring docker image closer to a more 'standard' scylla installation
Previously, the Docker image could only be run interactively, which is
not conducive for running clusters. This patch makes the docker image
run in the background (using systemd). This makes the docker workflow
similar to working with virtual machines, i.e. the user launches a
container, and once it is running they can connect to it with

       docker exec -it <container_name> bash

and immediately use `cqlsh` to control it.

In addition, the configuration of scylla is done using established
scripts, such as `scylla_dev_mode_setup`, `scylla_cpuset_setup` and
`scylla_io_setup`, whereas previously code from these scripts was
duplicated into the docker startup file.

To specify seeds for making a cluster, use the --seeds command line
argument, e.g.

    docker run -d --privileged scylladb/scylla
    docker run -d --privileged scylladb/scylla --seeds 172.17.0.2

other options include --developer-mode, --cpuset, --broadcast-address

The --developer-mode option is on by default - so that we don't fail users
who just want to play with this.

The Dockerfile entrypoint script was rewritten as a few Python modules.
The move to Python is merited because:

    * Using `sed` to manipulate YAML is fragile
    * Lack of proper command line parsing resulted in introducing ad-hoc environment variables
    * Shell scripts don't throw exceptions, and it's easy to forget to check exit codes for every single command

I've made an effort to make the entrypoint `go' script very simple and readable.
The gory details are hidden inside the other python modules.

Signed-off-by: Yoav Kleinberger <yoav@scylladb.com>
Message-Id: <1468938693-32168-1-git-send-email-yoav@scylladb.com>
(cherry picked from commit d1d1be4c1a)
2016-08-05 09:10:35 +03:00
Glauber Costa
2bffa8af74 logalloc: make sure allocations in release_requests don't recurse back into the allocator
Calls like later() and with_gate() may allocate memory, although that is not
very common. This can create a problem in the sense that it will potentially
recurse and bring us back to the allocator during free - which is the very thing
we are trying to avoid with the call to later().

This patch wraps the relevant calls in the reclaimer lock. This does mean that the
allocation may fail if we are under severe pressure - which includes having
exhausted all reserved space - but at least we won't recurse back into the
allocator.

To make sure we do this as early as possible, we just fold both release_requests
and do_release_requests into a single function.

Thanks Tomek for the suggestion.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <980245ccc17960cf4fcbbfedb29d1878a98d85d8.1470254846.git.glauber@scylladb.com>
(cherry picked from commit fe6a0d97d1)
2016-08-04 11:17:54 +02:00
Glauber Costa
4a6d0d503f logalloc: make sure blocked requests memory allocations are served from the standard allocator
Issue 1510 describes a scenario in which, under load, we allocate memory within
release_requests() leading to a reentry into an invalid state in our
blocked requests' shared_promise.

This is not easy to trigger since not all allocations will actually get to the
point in which they need a new segment, let alone have that happening during
another allocator call.

Having those kinds of reentry is something we have always sought to avoid with
release_requests(): this is the reason why most of the actual routine is
deferred after a call to later().

However, that is a trick we cannot use for updating the state of the blocked
requests' shared_promise: we can't guarantee when that is going to run, and we
always need a valid shared_promise, in a valid state, waiting for new requests
to hook into.

The solution employed by this patch is to make sure that no allocation
operations whatsoever happen during the initial part of release_requests on
behalf of the shared promise.  Allocation is now deferred to first use, which
relieves release_requests() from all allocation duties. All it needs to do is
free the old object and signal to its user that an allocation is needed (by
storing {} into the shared_promise).

Fixes #1510

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <49771e51426f972ddbd4f3eeea3cdeef9cc3b3c6.1470238168.git.glauber@scylladb.com>
(cherry picked from commit ad58691afb)
2016-08-04 11:17:49 +02:00
Avi Kivity
fa81385469 conf: synchronize internode_compression between scylla.yaml and code
Our default is "none", to give reasonable performance, so have scylla.yaml
reflect that.

(cherry picked from commit 9df4ac53e5)
2016-08-04 12:10:03 +03:00
Duarte Nunes
93981aaa93 schema_builder: Ensure dense tables have compact col
This patch ensures that when the schema is dense, regardless of
compact_storage being set, the single regular column is translated
into a compact column.

This fixes an issue where Thrift dynamic column families are
translated to a dense schema with a regular column, instead of a
compact one.

Since a compact column is also a regular column (e.g., for purposes of
querying), no further changes are required.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1470062410-1414-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 5995aebf39)

Fixes #1535.
2016-08-03 13:49:51 +02:00
Duarte Nunes
89b40f54db schema: Dense schemas are correctly upgraded
When upgrading a dense schema, we would drop the cells of the regular
(compact) column. This patch fixes this by making the regular and
compact column kinds compatible.

Fixes #1536

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1470172097-7719-1-git-send-email-duarte@scylladb.com>
2016-08-03 13:37:57 +02:00
Paweł Dziepak
99dfbedf36 sstables: extend sstable life until reader is fully closed
data_consume_rows_context needs to have close() called and the returned
future waited for before it can be destroyed. data_consume_context::impl
does that in the background upon its destruction.

However, it is possible that the sstable is removed before
data_consume_rows_context::close() completes in which case EBADF may
happen. The solution is to make data_consume_context::impl keep a
reference to the sstable and extend its life time until closing of
data_consume_rows_context (which is performed in the background)
completes.

Side effect of this change is also that data_consume_context no longer
requires its user to make sure that the sstable exists as long as it is
in use since it owns its own reference to it.

Fixes #1537.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1470222225-19948-1-git-send-email-pdziepak@scylladb.com>
(cherry picked from commit 02ffc28f0d)
2016-08-03 13:19:50 +02:00
Paweł Dziepak
e95f4eaee4 Merge "partition_limit: Don't count dead partitions" from Duarte
"This patch series ensures we don't count dead partitions (i.e.,
partitions with no live rows) towards the partition_limit. We also
enforce the partition limit at the storage_proxy level, so that
limits with smp > 1 works correctly."

(cherry picked from commit 5f11a727c9)
2016-08-03 12:44:32 +03:00
Avi Kivity
2570da2006 Update seastar submodule
* seastar f603f88...0b53ab2 (2):
  > reactor: limit task backlog
  > reactor: make sure a poll cycle always happens when later is called

Fix runaway task queue growth on cpu-bound loads.
2016-08-03 12:33:54 +03:00
Tomasz Grabiec
b224ff6ede Merge 'pdziepak/row-cache-wide-entries/v4' from seastar-dev.git
This series adds the ability for partition cache to keep information
whether partition size makes it uncacheable. During reads, these
entries save us IO operations since we already know that the partition
is too big to be put in the cache.

First part of the patchset makes all mutation_readers allow the
streamed_mutations they produce to outlive them, which is a guarantee
used later by the code handling reading large partitions.

(cherry picked from commit d2ed75c9ff)
2016-08-02 20:24:29 +02:00
Piotr Jastrzebski
6960fce9b2 Use continuity flag correctly with concurrent invalidations
Between reading a cache entry and actually using it, invalidations can
happen, so we have to check that no flag was cleared; if one was, we
need to read the entry again.

Fixes #1464.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <7856b0ded45e42774ccd6f402b5ee42175bd73cf.1469701026.git.piotr@scylladb.com>
(cherry picked from commit fdfd1af694)
2016-08-02 20:24:22 +02:00
Avi Kivity
a556265ccd checked_file: preserve DMA alignment
Inherit the alignment parameters from the underlying file instead of
defaulting to 4096.  This gives better read performance on disks with 512-byte
sectors.

Fixes #1532.
Message-Id: <1470122188-25548-1-git-send-email-avi@scylladb.com>

(cherry picked from commit 9f35e4d328)
2016-08-02 12:22:37 +03:00
Duarte Nunes
8243d3d1e0 storage_service: Fix get_range_to_address_map_in_local_dc
This patch fixes a couple of bugs in
get_range_to_address_map_in_local_dc.

Fixes #1517

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469782666-21320-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 7d1b7e8da3)
2016-07-29 11:24:12 +02:00
Pekka Enberg
2665bfdc93 Update seastar submodule
* seastar 103543a...f603f88 (1):
  > iotune: Fix SIGFPE with some executions
2016-07-29 11:11:56 +03:00
Tomasz Grabiec
3a1e8fffde Merge branch 'sstables/static-1.3/v1' from git@github.com:duarten/scylla.git into branch-1.3
The current code assumes cell names are always compound and may
wrongly report a non-static row as such. This patch addresses this
and adds a test case to catch regressions.

Backports the fix to #1495.
2016-07-28 15:07:41 +02:00
Gleb Natapov
23c340bed8 api: fix use after free in sum_sstable
get_sstables_including_compacted_undeleted() may return a temporary shared
ptr which will be destroyed before the loop runs if not stored locally.

Fixes #1514

Message-Id: <20160728100504.GD2502@scylladb.com>
(cherry picked from commit 3531dd8d71)
2016-07-28 14:28:25 +03:00
Duarte Nunes
ff8a795021 sstables: Validate static cell is on static column
This patch enforces compatibility between a cell and the
corresponding column definition with regards to them being
static.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-28 12:11:46 +02:00
Duarte Nunes
d11b0cac3b sstable_mutation_test: Test non-compound cell name
This patch adds a test case for reading non-compound cell names,
validating that such a cell is not incorrectly marked as static.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469616205-4550-5-git-send-email-duarte@scylladb.com>
2016-07-28 12:11:37 +02:00
Duarte Nunes
5ad0448cc9 sstables: Don't assume cell name is compound
The current code assumes cell names are always compound and may
wrongly report a non-static row as such, since it looks at the first
bytes of the name assuming they are the component's length.

Tables with compact storage (which cannot contain static rows) may not
have a compound comparator, so we check for the table's compoundness
before checking for the static marker. We do this by delegating to
composite_view::is_static.

Fixes #1495

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469616205-4550-4-git-send-email-duarte@scylladb.com>
2016-07-28 12:11:27 +02:00
Duarte Nunes
35ab2cadc2 sstables: Remove duplication in extract_clustering_key
This patch removes some duplicated code in extract_clustering_key(),
which is already handled in composite_view.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469397806-8067-1-git-send-email-duarte@scylladb.com>
2016-07-28 12:11:22 +02:00
Duarte Nunes
a1cee9f97c sstables: Remove superfluous call to check_static()
When building a column we're calling check_static() two times;
refactor things a bit so that this doesn't happen and we reuse the
previous calculation.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469397748-7987-1-git-send-email-duarte@scylladb.com>
2016-07-28 12:11:15 +02:00
Duarte Nunes
0ae7347d8e composite: Use operator[] instead of at()
Since we already do bounds checking on is_static(), we can use
bytes_view::operator[] instead of bytes_view::at() to avoid repeating
the bounds checking.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469616205-4550-3-git-send-email-duarte@scylladb.com>
2016-07-28 12:10:14 +02:00
Duarte Nunes
b04168c015 composite_view: Fix is_static
composite_view's is_static function is wrong because:

1) It doesn't check whether the composite is a compound (only
   compounds can be static);
2) It doesn't deal with widening due to integral promotions and
   consequent sign extension.

This patch fixes this by ensuring there's only one correct
implementation of is_static, to avoid code duplication and
enforce test coverage.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469616205-4550-2-git-send-email-duarte@scylladb.com>
2016-07-28 12:10:06 +02:00
Duarte Nunes
4e13853cbc compound_compat: Only compound values can be static
If a composite is not a compound, then it doesn't carry a length
prefix where static information is encoded. In its absence, a
non-compound composite can never be static.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469397561-7748-1-git-send-email-duarte@scylladb.com>
2016-07-28 12:09:59 +02:00
Pekka Enberg
503f6c6755 release: prepare for 1.3.rc2 2016-07-28 10:57:11 +03:00
Tomasz Grabiec
7d73599acd tests: lsa_async_eviction_test: Use chunked_fifo<>
To protect against large reallocations during push() which are done
under reclaim lock and may fail.
2016-07-28 09:43:51 +02:00
Piotr Jastrzebski
bf27379583 Add tests for wide partition handling in cache.
They shouldn't be cached.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
(cherry picked from commit 7d29cdf81f)
2016-07-27 14:09:45 +03:00
Piotr Jastrzebski
02cf5a517a Add collectd counter for uncached wide partitions.
Keep track of every read of wide partition that's
not cached.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
(cherry picked from commit 37a7d49676)
2016-07-27 14:09:40 +03:00
Piotr Jastrzebski
ec3d59bf13 Add flag to configure
max size of a cached partition.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
(cherry picked from commit 636a4acfd0)
2016-07-27 14:09:34 +03:00
Piotr Jastrzebski
30c72ef3b4 Try to read whole streamed_mutation up to limit
If the limit is exceeded, return the streamed_mutation
and don't cache it.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
(cherry picked from commit 98c12dc2e2)
2016-07-27 14:09:29 +03:00
Piotr Jastrzebski
15e69a32ba Implement mutation_from_streamed_mutation_with_limit
If a mutation is bigger than this limit,
it won't be read and mutation_from_streamed_mutation
will return an empty optional.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
(cherry picked from commit 0d39bb1ad0)
2016-07-27 14:09:23 +03:00
Paweł Dziepak
4e43cb84ff tests/sstables: test reading sstable with duplicated range tombstones
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
(cherry picked from commit b405ff8ad2)
2016-07-27 14:09:02 +03:00
Paweł Dziepak
07d5e939be sstables: avoid recursion in sstable_streamed_mutation::read_next()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
(cherry picked from commit 04f2c278c2)
2016-07-27 14:06:03 +03:00
Paweł Dziepak
a2a5a22504 sstables: protect against duplicated range tombstones
Promoted index may cause sstable to have range tombstones duplicated
several times. These duplicates appear in the "wrong" place since they
are smaller than the entity preceding them.

This patch ignores such duplicates by skipping range tombstones that are
smaller than previously read ones.

Moreover, these duplicated range tombstones may appear in the middle of a
clustering row, so the sstable reader has also gained the ability to
merge parts of the row in such cases.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
(cherry picked from commit 08032db269)
2016-07-27 14:05:58 +03:00
Paweł Dziepak
a39bec0e24 tests: extract streamed_mutation assertions
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
(cherry picked from commit 50469e5ef3)
2016-07-27 14:05:43 +03:00
Duarte Nunes
f0af5719d5 thrift: Preserve partition order when accumulating
This patch changes the column_visitor so that it preserves the order
of the partitions it visits when building the accumulation result.

This is required by verbs such as get_range_slice, on top of which
users can implement paging. In such cases, the last key returned by
the query will be that start of the range for the next query. If
that key is not actually the last in the partitioner's order, then
the new request will likely result in duplicate values being sent.

Ref #693

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469568135-19644-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 5aaf43d1bc)
2016-07-27 12:11:41 +03:00
Avi Kivity
0523000af5 size_estimates_recorder: unwrap ranges before searching for sstables
column_family::select_sstables() requires unwrapped ranges, so unwrap
them.  Fixes crash with Leveled Compaction Strategy.

Fixes #1507.

Reviewed-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469563488-14869-1-git-send-email-avi@scylladb.com>
(cherry picked from commit 64d0cf58ea)
2016-07-27 10:07:13 +03:00
Paweł Dziepak
69a0e6e002 sstables: fix skipping partitions with no rows
If a partition contains no static or clustering rows and no range
tombstones, mp_row_consumer will return a disengaged mutation_fragment_opt
with the is_mutation_end flag set to mark the end of this partition.

Currently, the mutation_reader::impl code incorrectly recognizes a
disengaged mutation fragment as the end of the stream of all mutations.
This patch fixes that by using the is_mutation_end flag to determine
whether the end of a partition or the end of the stream was reached.

Fixes #1503.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1469525449-15525-1-git-send-email-pdziepak@scylladb.com>
(cherry picked from commit efa690ce8c)
2016-07-26 13:10:31 +03:00
Amos Kong
58d4de295c scylla-housekeeping: fix typo of script path
I tried to start scylla-housekeeping service by:
 # sudo systemctl restart scylla-housekeeping.service

But it failed due to a wrong script path; error detail:
 systemd[5605]: Failed at step EXEC spawning
 /usr/lib/scylla/scylla-Housekeeping: No such file or directory

The right script name is 'scylla-housekeeping'

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <c11319a3c7d3f22f613f5f6708699be0aa6bd740.1469506477.git.amos@scylladb.com>
(cherry picked from commit 64530e9686)
2016-07-26 09:19:15 +03:00
Vlad Zolotarov
026061733f tracing: set a default TTL for system_traces tables when they are created
Fixes #1482

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1469104164-4452-1-git-send-email-vladz@cloudius-systems.com>
(cherry picked from commit 4647ad9d8a)
2016-07-25 13:50:43 +03:00
Vlad Zolotarov
1d7ed190f8 SELECT tracing instrumentation: improve inter-nodes communication stages messages
Add/fix "sending to"/"received from" messages.

With this patch, a single-key SELECT trace with the data on an external node
looks as follows:

Tracing session: 65dbfcc0-4f51-11e6-8dd2-000000000001

 activity                                                                                                                        | timestamp                  | source    | source_elapsed
---------------------------------------------------------------------------------------------------------------------------------+----------------------------+-----------+----------------
                                                                                                              Execute CQL3 query | 2016-07-21 17:42:50.124000 | 127.0.0.2 |              0
                                                                                                   Parsing a statement [shard 1] | 2016-07-21 17:42:50.124127 | 127.0.0.2 |             --
                                                                                                Processing a statement [shard 1] | 2016-07-21 17:42:50.124190 | 127.0.0.2 |             64
 Creating read executor for token 2309717968349690594 with all: {127.0.0.1} targets: {127.0.0.1} repair decision: NONE [shard 1] | 2016-07-21 17:42:50.124229 | 127.0.0.2 |            103
                                                                            read_data: sending a message to /127.0.0.1 [shard 1] | 2016-07-21 17:42:50.124234 | 127.0.0.2 |            108
                                                                           read_data: message received from /127.0.0.2 [shard 1] | 2016-07-21 17:42:50.124358 | 127.0.0.1 |             14
                                                          read_data handling is done, sending a response to /127.0.0.2 [shard 1] | 2016-07-21 17:42:50.124434 | 127.0.0.1 |             89
                                                                               read_data: got response from /127.0.0.1 [shard 1] | 2016-07-21 17:42:50.124662 | 127.0.0.2 |            536
                                                                                  Done processing - preparing a result [shard 1] | 2016-07-21 17:42:50.124695 | 127.0.0.2 |            569
                                                                                                                Request complete | 2016-07-21 17:42:50.124580 | 127.0.0.2 |            580

Fixes #1481

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1469112271-22818-1-git-send-email-vladz@cloudius-systems.com>
(cherry picked from commit 57b58cad8e)
2016-07-25 13:50:39 +03:00
Raphael S. Carvalho
2d66a4621a compaction: do not convert timestamp resolution to uppercase
C* only allows timestamp resolution in uppercase, so we shouldn't
be forgiving about it, otherwise migration to C* will not work.
Timestamp resolution is stored in compaction strategy options of
schema BTW.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <d64878fc9bbcf40fd8de3d0f08cce9f6c2fde717.1469133851.git.raphaelsc@scylladb.com>
(cherry picked from commit c4f34f5038)
2016-07-25 13:47:23 +03:00
Duarte Nunes
aaa9b5ace8 system_keyspace: Add query_size_estimates() function
The query_size_estimates() function queries the size_estimates system
table for a given keyspace and table, filtering out the token ranges
according to the specified tokens.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit ecfa04da77)
2016-07-25 13:43:16 +03:00
Duarte Nunes
8d491e9879 size_estimates_recorder: Fix stop()
This patch fixes stop() by checking the current CPU instead of whether
the service is active (which it won't be at the time stop() is
called).

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit d984cc30bf)
2016-07-25 13:43:08 +03:00
Duarte Nunes
b63c9fb84b system_keyspace: Avoid pointers in range_estimates
This patch makes range_estimates a proper struct, where tokens are
represented as dht::tokens rather than dht::ring_position*.

We also pass other arguments to update_ and clear_size_estimates by
copy, since one will already be required.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit e16f3f2969)
2016-07-25 13:42:53 +03:00
Duarte Nunes
b229f03198 thrift: Fail when creating mixed CF
This patch ensures we fail when creating a mixed column family, either
when adding columns to a dynamic CF through updated_column_family() or
when adding a dynamic column upon insertion.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469378658-19853-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 5c4a2044d5)
2016-07-25 13:42:05 +03:00
Duarte Nunes
6caa59560b thrift: Correctly translate no_such_column_family
The no_such_column_family exception is translated to
InvalidRequestException instead of to NotFoundException.

8991d35231 exposed this problem.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469376674-14603-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 560cc12fd7)
2016-07-25 13:41:58 +03:00
Duarte Nunes
79196af9fb thrift: Implement describe_splits verb
This patch implements the describe_splits verb on top of
describe_splits_ex.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit ab08561b89)
2016-07-25 13:41:54 +03:00
Duarte Nunes
afe09da858 thrift: Implement describe_splits_ex verb
This patch implements the describe_splits_ex verbs by querying the
size_estimates system table for all the estimates in the specified
token range.

If the keys_per_split argument is bigger than the estimated
partition count, then we merge ranges until keys_per_split
is met. Note that since the tokens can't be split any further,
keys_per_split might be less than the reported number of keys in one
or more ranges.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit 472c23d7d2)
2016-07-25 13:41:46 +03:00
Duarte Nunes
d6cb41ff24 thrift: Handle and convert invalid_request_exception
This patch converts an exceptions::invalid_request_exception
into a Thrift InvalidRequestException instead of into a generic one.

This makes TitanDB work correctly, which expects an
InvalidRequestException when setting a non-existent keyspace.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469362086-1013-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 2be45c4806)
2016-07-24 16:46:18 +03:00
Duarte Nunes
6bf77c7b49 thrift: Use database::find_schema directly
This patch changes lookup_schema() so it directly calls
database::find_schema() instead of going through
database::find_column_family(). It also drops conversion of the
no_such_column_family exception, as that is already handled at a higher
layer.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit 8991d35231)
2016-07-24 16:46:05 +03:00
Duarte Nunes
6d34b4dab7 thrift: Remove hardcoded version constant
...and use the one in thrift_server.hh instead.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit 038d42c589)
2016-07-24 16:45:46 +03:00
Duarte Nunes
d367f1e9ab thrift: Remove unused with_cob_dereference function
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit 8bb43d09b1)
2016-07-24 16:45:22 +03:00
Avi Kivity
75a36ae453 bloom_filter: fix overflow for large filters
We use ::abs(), which has an int parameter, on long arguments, resulting
in incorrect results.

Switch to std::abs() instead, which has the correct overloads.

Fixes #1494.

Message-Id: <1469347802-28933-1-git-send-email-avi@scylladb.com>
(cherry picked from commit 900639915d)
2016-07-24 11:32:28 +03:00
Tomasz Grabiec
35c1781913 schema_tables: Fix hang during keyspace drop
Fixes #1484.

We drop tables as part of keyspace drop. Table drop starts with
creating a snapshot on all shards. All shards must use the same
snapshot timestamp which, among other things, is part of the snapshot
name. The timestamp is generated using supplied timestamp generating
function (joinpoint object). The joinpoint object will wait for all
shards to arrive and then generate and return the timestamp.

However, we drop tables in parallel, using the same joinpoint
instance. So joinpoint may be contacted by snapshotting shards of
tables A and B concurrently, generating timestamp t1 for some shards
of table A and some shards of table B. Later the remaining shards of
table A will get a different timestamp. As a result, different shards
may use different snapshot names for the same table. The snapshot
creation will never complete because the sealing fiber waits for all
shards to signal it, on the same name.

The fix is to give each table a separate joinpoint instance.

Message-Id: <1469117228-17879-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit 5e8f0efc85)
2016-07-22 15:36:45 +02:00
Vlad Zolotarov
1489b28ffd cql_server::connection::process_prepare(): don't std::move() a shared_ptr captured by reference in value_of() lambda
A seastar::value_of() lambda used in a trace point was doing the unthinkable:
it called std::move() on a value captured by reference. Not only did it
compile(!!!), but it also actually std::move()d the shared_ptr before it was
used in make_result(), which naturally caused a SIGSEGV crash.

Fixes #1491

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1469193763-27631-1-git-send-email-vladz@cloudius-systems.com>
(cherry picked from commit 9423c13419)
2016-07-22 16:33:17 +03:00
Avi Kivity
f975653c94 Update seastar submodule to point at scylla-seastar 2016-07-21 12:31:09 +03:00
Duarte Nunes
96f5cbb604 thrift: Omit regular columns for dynamic CFs
This patch skips adding the auto-generated regular column when
describing a dynamic column family for the describe_keyspace(s) verbs.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1469091720-10113-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit a436cf945c)
2016-07-21 12:06:29 +03:00
Raphael S. Carvalho
66ebef7d10 tests: add new test for date tiered strategy
This test set the time window to 1 hour and checks that the strategy
works accordingly.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
(cherry picked from commit cf54af9e58)
2016-07-21 12:00:26 +03:00
Raphael S. Carvalho
789fb0db97 compaction: implement date tiered compaction strategy options
Now date tiered compaction strategy will take into account the
strategy options which are defined in the schema.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
(cherry picked from commit eaa6e281a2)
2016-07-21 12:00:18 +03:00
Pekka Enberg
af7c0f6433 Revert "Merge seastar upstream"
This reverts commit aaf6786997.

We should backport the iotune fixes for 1.3 and not pull everything.
2016-07-21 11:19:50 +03:00
Pekka Enberg
aaf6786997 Merge seastar upstream
* seastar 103543a...9d1db3f (8):
  > reactor: limit task backlog
  > iotune: Fix SIGFPE with some executions
  > Merge "Preparation for protobuf" from Amnon
  > byteorder: add missing cpu_to_be(), be_to_cpu() functions
  > rpc: fix gcc-7 compilation error
  > reactor: Register the smp metrics disabled
  > scollectd: Allow creating metric that is disabled
  > Merge "Propagate timeout to a server" from Gleb
2016-07-21 11:04:31 +03:00
Pekka Enberg
e8cb163cdf db/config: Start Thrift server by default
We have Thrift support now so start the server by default.
Message-Id: <1469002000-26767-1-git-send-email-penberg@scylladb.com>

(cherry picked from commit aff8cf319d)
2016-07-20 11:29:24 +03:00
Duarte Nunes
2d7c322805 thrift: Actually concatenate strings
This patch fixes the concatenation of a char[] with an int by using sprint
instead of just advancing the pointer.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1468971542-9600-1-git-send-email-duarte@scylladb.com>
(cherry picked from commit 64dff69077)
2016-07-20 11:09:15 +03:00
Tomasz Grabiec
13f18c6445 database: Add table name to log message about sealing
Message-Id: <1468917744-2539-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit 0d26294fac)
2016-07-20 10:13:32 +03:00
Tomasz Grabiec
9c430c2cff schema_tables: Add more logging
Message-Id: <1468917771-2592-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit a0832f08d2)
2016-07-20 10:13:28 +03:00
Pekka Enberg
c84e030fe9 release: prepare for 1.3.rc1 2016-07-19 20:15:26 +03:00
Avi Kivity
dc50b845b4 Merge seastar upstream
* seastar 823bc05...103543a (1):
  > core: add a seastar::format()
2016-07-19 19:09:42 +03:00
Avi Kivity
1a1b7fe3f2 Merge "CQL Tracing patch bomb" from Vlad
"This series includes the following:
   - Introduction of a formatted message support in trace().
   - Major rename: s/flush_/write_/, s/flush()/kick()/, s/store_/write_/.
   - Some cosmetic fixes found on the way.
   - Fix a bug in a shutdown flow.
   - Instrumentation to MUTATE, PREPARE, EXECUTE and BATCH flow and some
     related changes.
   - A patch that aligns the QUERY tracing format with the Origin.
   - Methods and functions description in tracing/trace_state.hh."
2016-07-19 18:46:59 +03:00
Vlad Zolotarov
7c590295ef SELECT instrumentation: add a nice trace point
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:59 +03:00
Vlad Zolotarov
a197323b47 tracing::trace_state.hh: Add descriptions for main methods and functions
Add a proper description to a tracing::trace() that clarifies
that the tracing message string and the positional parameters
are going to be copied if tracing state is initialized.

Add a description for trace_state::begin() methods and for a
tracing::begin() helper function.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:59 +03:00
Vlad Zolotarov
b36b69c1d6 service::storage_proxy: remove a default value for a tracing::trace_state_ptr parameter
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:59 +03:00
Vlad Zolotarov
baa6496816 service::storage_proxy: READ instrumentation: store trace state object in abstract_read_executor
Having a trace_state_ptr in the storage_proxy level is needed to trace code bits in this level.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:59 +03:00
Vlad Zolotarov
b0a39f210d transport: CQL tracing: QUERY instrumentation: align the session creation parameters with origin
- Don't put the query name as a 'request' but rather save it as one of entries in a
     'params' map.
   - Save some additional query parameters in 'params'.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
962bddf8fe transport: CQL tracing: instrument a BATCH command
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
d21eaabcfe transport: CQL tracing: instrument EXECUTE command
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
89a49c346c tracing::trace_state: add begin() overload for seastar::value_of given as a "request" parameter.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
1f9b858d83 cql3: prepared_statement: add raw_cql_statement field
This field will contain an original statement given to a PREPARE
command.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
147dd72517 transport: CQL tracing: instrument a PREPARE command
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
be88074f47 service::query_state: get rid of begin_tracing()
Use tracing::begin() directly.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
982d301178 service::client_state: add a const version of get_trace_state()
tracing::begin() requires a non-const version, tracing::trace()
requires a const version.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
da56aa4256 service::client_state: rename: trace_state_ptr() -> get_trace_state()
Rename the method for consistency with other classes methods returning
the same value.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
4c16df9e4c service: instrument MUTATE flow with tracing
Store the trace state in the abstract_write_response_handler.
Instrument send_mutation RPC to receive an additional
rpc::optional parameter that will contain optional<trace_info>
value.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
54a758dfff cql3::select_statement: simplify the tracing code by using a tracing::make_trace_info() helper
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
952dc8a3d4 query_state: add get_trace_state() method
Adding this method allows using the tracing helper functions and
removing the no-longer-needed accessors in query_state.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
0552ffcd17 service/storage_proxy: tracing: adjust the existing SELECT instrumentation with the new trace() interface
From now on trace_state::trace() is able to receive the sprint-ready
format string with the arguments that will be applied only during
the flush event.

This patch also optimizes the way the source address is evaluated -
do it only once instead of twice if tracing is requested.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
c1bb4d147d query::read_command: std::move() std::experimental::optional when initializing trace_info
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
0689843e79 tracing::trace_state: add method to set the session's "params" map entries
Sometimes we want to be able to set the "params" map after we have
started a tracing session, e.g. when the parameter values,
like a consistency level parsed from the "options" part of a binary frame,
are available only after some heavy part of a flow we would like
to trace.

This patch includes the following changes:

   - No longer pass a map to begin().
   - Limit the parameters to the known set.
   - Define a method to set each such parameter and save its
     value until the final sstring->sstring map is created.
   - Construct the final sstring->sstring map in the destructor of the
     trace_state object in order to defer all formatting until after the
     traced flow.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
9c0a725c56 tracing: add a _local_tracing to a i_tracing_backend_helper
A backend helper has to constantly communicate with the corresponding
tracing::tracing instance. Saving a reference to the tracing::tracing
instance saves us a lot of tracing::get_local_tracing_instance() calls
and thus a lot of dereferencing.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
2bb054748e tracing: record events' time stamps
   - Extend the i_tracing_backend_helper interface to accept the event
     record timestamp.
   - Grab the current timestamp when the event record is taken.
   - Add the instrumentation to the trace_keyspace_helper to create a unique time-UUID
     from a given std::chrono::duration object.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
f64f27beb9 utils: add get_time_UUID(system_clock::time_point)
Creates a type 1 UUID (time-based UUID) with the given system_clock::time_point

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:58 +03:00
Vlad Zolotarov
06d4221382 tracing: add tracing::make_trace_info() helper
This helper returns an std::experimental::optional<trace_info> that is
engaged only if the given trace_state_ptr is initialized.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:57 +03:00
Vlad Zolotarov
7a5fc9fcdc tracing::trace_state: add const qualifiers to a trace_state_ptr parameter
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:57 +03:00
Vlad Zolotarov
b0673aabd5 tracing: fix a logger name
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:57 +03:00
Vlad Zolotarov
da4836becc tracing::trace_state: add support for a formatted message in trace()
Add support for passing a format string plus positional parameters
for the creation of a trace point message.

Format string should be given in a fmt library native format described
here: http://fmtlib.net/latest/syntax.html#syntax .

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:57 +03:00
Vlad Zolotarov
ee0e986e96 tracing: make a service shutdown stages more strict
kick() the backend during shutdown and restrict access to the backend
after that.

Flush pending records when the service is being shut down.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:57 +03:00
Vlad Zolotarov
6e38133f82 tracing: prevent a destruction of a tracing::tracing while it's used
Prevent the destruction of a tracing::tracing instance while there
are still tracing::trace_state objects that are using it:

   - Make tracing::tracing inherit from seastar::async_sharded_service<tracing::tracing>.
   - Grab a tracing::tracing.shared_from_this() in each
     tracing::trace_state object using it.
   - Use a saved pointer to the local tracing::tracing instance in a destructor
     instead of accessing it via tracing::get_local_tracing_instance()
     to avoid "local is not initialized" assert when sessions are
     being destroyed after the service was stopped.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:57 +03:00
Vlad Zolotarov
a5022a09a4 tracing: use 'write' instead of 'flush' and 'store' for consistency with seastar's API
In names of functions and variables:
s/flush_/write_/
s/store_/write_/

In a i_tracing_backend_helper:
s/flush()/kick()/

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-07-19 18:21:57 +03:00
Pekka Enberg
cbf5283a93 Merge "Populate size_estimates table" from Duarte
"This patchset implements the size_estimates_recorder, which periodically
writes estimations for all the non-system column families in the
size_estimates system table. This table is updated per schema with a set
of token ranges and the associated estimations of how many partitions
there are and their mean size.

Fixes #352"
2016-07-19 14:31:12 +03:00
Duarte Nunes
9ffdf4a5cd db: Implement size_estimates_recorder
This patch implements the size_estimates_recorder, which periodically
writes estimations for all the non-system column families in the
size_estimates system table. The size_estimates_recorder class
corresponds to the one in Cassandra's SizeEstimatesRecorder.java.

Estimation is carried out by shard 0. Since we're estimating based on
data in shared sstables, having multiple shards doing this would skew
the results.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-19 09:44:58 +00:00
Avi Kivity
86661db178 Merge seastar upstream
* seastar ceb0c94...823bc05 (1):
  > Revert "util::lazy_eval: add an implicit cast operator overload"
2016-07-19 12:02:44 +03:00
Duarte Nunes
f8f61cf246 system_keyspace: Record and clear size estimates
This patch implements functions that allow the size_estimates system
table to be updated and cleared. The size_estimates table is updated
per schema with a set of token ranges and the associated estimations
of how many partitions there are and their mean size.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-18 23:58:31 +00:00
Duarte Nunes
3518db531e database: Get non-system column_families
This patch adds a utility function that allows fetching the set of
column_families that do not belong to the system keyspace.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-18 23:58:31 +00:00
Duarte Nunes
4bc00c2055 database: Expose selection of sstables by a range
This patch allows a set of a column_family's sstables to be
selected according to a range of ring_positions.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-18 23:58:31 +00:00
Duarte Nunes
d7ae25c572 range: Make transform template arguments deducible
This patch makes it so that the template arguments of
range<T>::transform are more easily deducible by the compiler.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-18 23:58:31 +00:00
Duarte Nunes
3c05ea2f80 types: Add to_bytes_view for sstrings
This patch adds an overload of to_bytes_view for sstrings

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-18 23:58:31 +00:00
Tomasz Grabiec
ce768858f5 types: Fix update_types()
We should replace the old type, not insert the new type before the old type.

Fixes #1465

Message-Id: <1468861076-20397-1-git-send-email-tgrabiec@scylladb.com>
2016-07-18 20:14:22 +03:00
Avi Kivity
f886f7a2f5 Merge seastar upstream
* seastar a45823a...ceb0c94 (2):
  > print: switch to fmtlib
  > logging: simplify stringer array building
2016-07-18 19:37:34 +03:00
Avi Kivity
d261927fa3 logalloc: change sprint() of a pointer to use void* explicitly
Otherwise, fmtlib dislikes it.
2016-07-18 19:37:16 +03:00
Avi Kivity
1d1b03a7cb cql3: change sprint() of a pointer to use void* explicitly
Otherwise, fmtlib dislikes it.
2016-07-18 19:36:35 +03:00
Raphael S. Carvalho
7b9cf528ad tests: fix occasional failure in date tiered test
That was a bug in the test itself. It could happen that a sstable would
incorrectly belong to the next time window if the current minute was
approaching its end. The fix is to give all sstables that we want in
the same time window the same min/max timestamp.

Fixes #1448.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <ee25d49e7ed12b4cf7d018a08163404c3d122e56.1468782787.git.raphaelsc@scylladb.com>
2016-07-18 15:18:29 +02:00
Paweł Dziepak
4497204b7d streamed_mutation: do not leave mutation in an invalid state
This patch avoids moving entries out of the range tombstone and
clustering row sets in streamed_mutation_from_mutation(). Such an action
breaks these sets, as the entries are left in an unknown state.

Instead, the sets are being broken in a supported and predictable way
using unlink_leftmost_without_rebalance().

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1468843205-18852-1-git-send-email-pdziepak@scylladb.com>
2016-07-18 15:14:21 +02:00
Avi Kivity
9d1b813f45 Merge seastar upstream
* seastar d699205...a45823a (5):
  > rpc: do not call shutdown function on already closed fd
  > log: Do not crash if logger is invoked from non-reactor thread
  > rpc: remove unaligned_cast and reinterpret_cast uses
  > unaligned: note unaligned_cast<> is deprecated
  > byteorder: add unaligned read/write helpers

Fixes #1463.
2016-07-18 15:24:43 +03:00
Avi Kivity
60491476e3 Merge "thrift: Add authentication and authorization" from Duarte
"This patchset implements the login verb to enable authentication
in the thrift API, and it adds access control to the already
implemented verbs."
2016-07-18 11:32:32 +03:00
Duarte Nunes
b6663f050d thrift: Add authorization for DML verbs
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-17 17:38:23 +00:00
Duarte Nunes
63354320b8 thrift: Add authorization to thrift DDL verbs
This patch adds authorization to the DDL thrift verbs. Since checking
for authorization is asynchronous, we now need to copy the verb
arguments so they can be accessed from the continuations.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-17 17:38:23 +00:00
Duarte Nunes
3c389ba871 client_state: Add has_schema_access function
This function is similar to has_column_family_access, but skips
validating if the specified keyspace and column family names map to a
valid schema, as it already takes one as an argument.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-17 17:38:23 +00:00
Duarte Nunes
dbbf4b3cc2 thrift: Group mutation map by column family
This patch transforms the mutation map, a map of keys to a map of column
families to mutations, into a map of column families to a map of keys
to mutations. This makes it a more natural organization, as things
like checking access permissions are done by column family.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-17 17:38:23 +00:00
Duarte Nunes
f14628dc49 thrift: Introduce with_schema function
This is a wrapper around with_cob, which fetches a schema and forwards
it to a supplied function.

The patch also removes superfluous return instructions.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-17 17:38:23 +00:00
Duarte Nunes
09a5560b1b thrift: Validate login
This patch validates that a user is correctly logged in (if
authentication is required) for the required verbs.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-17 17:38:23 +00:00
Duarte Nunes
a3e507eb1c thrift: Implement login verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-17 17:38:23 +00:00
Amnon Heiman
d096a6762a scylla_setup: Ask if to start scylla-housekeeping
The scylla-server.service will try to start the scylla-housekeeping.

This patch adds a question to scylla_setup asking whether to enable the
version check; if the answer is no, scylla-housekeeping will be masked.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1468741129-1977-1-git-send-email-amnon@scylladb.com>
2016-07-17 17:57:12 +03:00
Nadav Har'El
c647d917e0 sstables: move to_bytes_view to header file
Move the to_bytes_view(temporary_buffer<char>) function from a source file
to a header file where it can be used in more places.

This saves one use of reinterpret_cast (which we are now re-evaluating),
and moreover, we want to use this function also in the promoted index
code (to return a bytes_view from the promoted index which was saved as a
temporary_buffer).

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1468761437-27046-1-git-send-email-nyh@scylladb.com>
2016-07-17 16:29:26 +03:00
Avi Kivity
b6b35a986a Merge seastar upstream
* seastar 5e97d5f...d699205 (3):
  > rpc: fix race between send loop and expiration timer
  > rpc: fix cancellable type move operations
  > reactor: create new files with a more reasonable default mode
2016-07-17 13:27:23 +03:00
Paweł Dziepak
81e4952c78 row_cache: fix marking last entry as continuous
Range queries need to take special care when transitioning between
ranges that are read from sstables and ranges that are already in the
cache.

In such a case, the original code just started a secondary reader and
told it to unconditionally mark the last entry as continuous (the
primary reader has already returned an element that immediately follows
the range that is going to be read from sstables).

However, that information may become stale. For instance, by the time
the secondary reader finishes reading its range, the element immediately
following it may get evicted from the cache, causing the continuity flag
to be incorrectly set.

The solution is to ensure that the element immediately after the range
read from sstables is still in the cache.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1468586893-15266-1-git-send-email-pdziepak@scylladb.com>
2016-07-15 15:15:02 +02:00
Tomasz Grabiec
7328a8eff8 cql: modification_statement: Avoid copying keyspace and table names
Message-Id: <1468574135-4701-1-git-send-email-tgrabiec@scylladb.com>
2016-07-15 10:36:53 +01:00
Duarte Nunes
aaa76d58ba query: Move to_partition_range to dht namespace
This patch moves to_partition_range from the query namespace
to the dht namespace, where it is a more natural fit.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1468498060-19251-1-git-send-email-duarte@scylladb.com>
2016-07-15 10:41:52 +02:00
Tomasz Grabiec
32937f354e Merge branch 'duarten/thrift/dml/v9' from git@github.com:duarten/scylla.git
From Duarte:

This patchset adds support for the data manipulation verbs. It defers support
for super columns and mixed CFs (a static CF treated as dynamic) to later
patchsets.

Everything is done on top of storage_proxy; it was only necessary to modify the
layers below to add support for different kinds of limits: per partition row
limit, which corresponds to limiting the number of columns returned when
querying a dynamic CF, and limit on the number of partitions returned, so that
we can emulate the one thrift row per key model when querying dynamic CFs.

Ref #399
2016-07-14 18:26:07 +02:00
Duarte Nunes
df1234d86a thrift: Mark static CFs as non-compound
By default, the schema is marked as compound regardless of the
comparator. Since a composite comparator for static CFs is currently
unsupported (otherwise thrift column families would be
indistinguishable from CQL ones), just mark them as non-compound.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:55 +02:00
Duarte Nunes
901d4d1628 thrift: Skip CQL3 column families
This patch prevents CQL3 column families from being returned to
clients or subject to updates from thrift.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
92adbaab0a thrift: Warn about unimplemented features
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
a924f14441 thrift: Validate thrift Columns
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
7c1bf41b0d thrift: Implement truncate verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
4f440217e5 thrift: Implement remove verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
237e3b28d6 thrift: Implement insert verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
5c5056e4f9 thrift: Implement atomic_batch_mutate verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
f237b5ff19 thrift: Implement batch_mutate on top of storage_proxy
So that the specified consistency level can be respected.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
12dca9fdc9 thrift: Convert thrift Mutation to internal one
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
822a315dfa thrift: Implement get_multi_slice verb
The get_multi_slice verb is used to perform multiple slices on a
single row key in one operation. It takes a set of column_slices,
which we normalize to not contain any overlapping ranges.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:54 +02:00
Duarte Nunes
9792a77266 range: Add deoverlap function
This patch adds the deoverlap function to range.hh, which takes in a
vector of possibly overlapping ranges and returns a vector of
non-overlapping ranges covering the same values.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 18:20:41 +02:00
Duarte Nunes
c910a4639c thrift: Implement get_paged_slice verb
The get_paged_slice verb is similar to the get_range_slices verb,
except that it doesn't take a SlicePredicate. Instead, it takes a
column from which to start the query.

For dynamic CFs, we use the partition_slice::specific_ranges to single
out the first partition, and query starting from the start_column row.
For static CFs, we issue an initial query to fetch the remainder of
columns from the first partition, and at least one more query to fetch
the subsequent columns until the limit is reached. This implies a
performance penalty for static CFs.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
370572884c thrift: Implement get_range_slices verb
The get_range_slices verb is similar to the multiget_slice verb,
except that it operates on a range of partition keys (or tokens).

In Origin, empty partitions are returned as part of the KeySlice, for
which the key will be filled in but the columns vector will be empty.
Since in our case we don't return empty partitions, we don't know which
partition keys in the specified range we should return back to the client.
So for now, our behavior differs from Origin.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
b872db55bd thrift: Implement get_count verb on top of multiget_count
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
a42b7ba3f7 thrift: Implement multiget_count verb
This patch implements the multiget_count verb in a similar fashion to
multiget_slice, but using an accumulator that counts the returned
columns instead of creating thrift ColumnOrSuperColumn objects.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
a44561870a thrift: Implement get verb in terms of get_slice
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
db4c26d5b8 thrift: Implement get_slice in terms of multiget_slice
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
cd3a12535e thrift: Implement multiget_slice verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
acd39d871f thrift: Validate column names
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
4e9af0dc8e thrift: Make read_command from SlicePredicate
This patch builds a query::read_command from a SlicePredicate,
for both dynamic and static column families.

For dynamic CFs, restrictions on the clustering columns are added, and
for static CFs, limits and ordering are defined inline by selecting the
correct regular columns.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
21d0a2c764 query: Optionally send cell ttl
This patch adds support to send a cell's ttl as part of a query's
result. This is needed for thrift support.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
eb8f5fafb2 thrift: Add partition key validation
This patch validates whether the specified partition key is not empty
and under the size limit.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
f57136f2f3 thrift: Make key_from_thrift take schema ref
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
e2b4cc4849 types: Add to_bytes_view function
This patch adds a function that converts a reference to an std::string
to a bytes_view.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
a647fea30b schema: Add is_dynamic to thrift_schema
This patch adds the is_dynamic() function to thrift_schema, which
tells whether the underlying column family is dynamic or not,
according to thrift rules.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
527ec2ab59 thrift: Support composite keys
This patch adds support for composite comparators (which, for dynamic
column families, means composite clustering keys) and for composite
keys (composite partition keys).

Support for composite column names and regular columns is deferred,
which will entail making compound_type an abstract_type.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
7f5ec71b1f thrift: Extract ttl calculation
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Duarte Nunes
324b776c1b thrift: Add lookup_schema function
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-14 15:36:23 +02:00
Avi Kivity
4ef2b1b25f Merge seastar upstream
* seastar e660d54...5e97d5f (7):
  > util::lazy_eval: add an implicit cast operator overload
  > rpc: consolidate read_(request|response)_frame logic
  > rpc: handle lz4 compressor errors
  > iotune: provide a status dump if we can't calculate a proper number of io_queues
  > rpc: adjust lz4 compression for older lz4.h
  > Fix chunked_fifo move assignment
  > rpc: add missing header file protectors
2016-07-14 16:27:23 +03:00
Avi Kivity
32d670a792 Merge "Scylla-housekeeping check version" from Amnon
"This series replaces the original scylla-help.py

It contains only a basic script that checks daily for version and report if a
newer version matched.

The script is added as a service and will be started and shutdown with
scylla-server."
2016-07-14 14:58:33 +03:00
Avi Kivity
1048e1071b db: do not create column family directories belonging to foreign keyspaces
Currently, for any column family, we create a directory for it in all
keyspace directories.  This is incredibly awkward.

Fix by iterating over just the keyspace's column families, not all
column families in existence.

Fixes #1457.
Message-Id: <1468495182-18424-1-git-send-email-avi@scylladb.com>
2016-07-14 14:31:05 +03:00
Amnon Heiman
260761f2dd rules.in: Add the scylla-timer to ubuntu
This adds a rule to install the scylla-timer as part of the ubuntu
package.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-07-14 12:46:47 +03:00
Amnon Heiman
3be9ab38e2 ubuntu.in: Add dependency to python3-requests
The check version script uses the python requests package, this add the
dependency to the ubuntu package.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-07-14 12:46:47 +03:00
Amnon Heiman
3b9db378ac scylla-server.install.in: Pack the scylla-housekeeping on ubuntu
This adds the scylla-housekeeping to the ubuntu packaging.
2016-07-14 12:46:47 +03:00
Amnon Heiman
948140bec3 Adding a timer service for ubuntu scylla-housekeeping
Ubuntu 14.04 upstart does not support timers for recurrent operations.
The upstart cookbook suggests a way to mimic this functionality here:
http://upstart.ubuntu.com/cookbook/#run-a-job-periodically

This patch adds a service that runs the house-keeping daily.

Setting it up as a service ensures that it starts and stops with the
scylla-server service.
2016-07-14 12:46:39 +03:00
Avi Kivity
23edc1861a db: estimate queued read size more conservatively
There are plenty of continuations involved, so don't assume it fits in 1k.
Message-Id: <1468429516-4591-1-git-send-email-avi@scylladb.com>
2016-07-14 11:42:24 +02:00
Avi Kivity
d3c87975b0 db: don't over-allocate memory for mutation_reader
column_family::make_reader() doesn't deal with sstables directly, so it
doesn't need to reserve memory for them.

Fixes #1453.
Message-Id: <1468429143-4354-1-git-send-email-avi@scylladb.com>
2016-07-14 10:01:42 +02:00
Paweł Dziepak
10c144ffd4 types: fix type aliasing violation
Any pointer can be cast to char*, but not the other way around. This
causes GCC6 to misoptimize timestamp_type_impl::from_string().

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1468413349-27267-1-git-send-email-pdziepak@scylladb.com>
2016-07-13 17:22:16 +03:00
Tomasz Grabiec
c97871d95c migration_manager: Uncomment logging for keyspace drop
Message-Id: <1468413673-6899-1-git-send-email-tgrabiec@scylladb.com>
2016-07-13 13:42:23 +01:00
Gleb Natapov
9cc076c9f3 storage_proxy: preserve endpoint's order while filtering local nodes for query
filter_for_query() gets a list of endpoints sorted by preference and
should preserve that order after filtering out non-local endpoints for a
local query. partition() does not guarantee this while stable_partition()
does, so use it instead.

Fixes #1450.
Message-Id: <20160713100909.GM10767@scylladb.com>
2016-07-13 13:17:28 +03:00
Tomasz Grabiec
7227c537ce Merge branch 'pdziepak/streamed-mutations-hashing/v5' from seastar-dev.git
From Paweł:

This is another episode in the "convert X to streamed mutations" series.
Hashing mutations (mainly for repair) is converted so that it doesn't
need to rebuild whole mutation.

The first part of the series changes the way streamed mutations deal
with range tombstones. Since it is not necessary to make sure we write
disjoint tombstones to sstables there is no need anymore for streamed
mutations to produce disjoint tombstones and, consequently, no need for
range tombstones to be split into range_tombstone_begin and
range_tombstone_end.

The second part is the actual hashing implementation. However, to ensure
that the hash depends only on the contents of the mutation and not the
way it is stored in different data sources, range tombstones have to be
made disjoint before they are hashed.

This series also ensures that any changes caused by streamed mutations
to hashing and streaming do not break repair during upgrade.
2016-07-13 11:24:00 +02:00
Duarte Nunes
674afc52bc compound_test: Test singular composite_view::explode()
This patch adds a test case for composite_view::explode() called on a
non-compound composite.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1468353393-3074-1-git-send-email-duarte@scylladb.com>
2016-07-13 11:23:24 +02:00
Paweł Dziepak
3fe1aec29d streaming: avoid word "ERROR" in non-error messages
Some tools (e.g. ccm) get confused and treat messages containing the
word "ERROR" as error-level messages irrespective of their actual
severity level.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1468399752-5228-1-git-send-email-pdziepak@scylladb.com>
2016-07-13 12:06:33 +03:00
Paweł Dziepak
eb88181347 repair: ask for streamed checksums if cluster supports them
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:51:23 +01:00
Paweł Dziepak
7e06499458 repair: convert hashing to streamed_mutations
This patch makes hashing for repair calculate checksums in a way that
doesn't require rebuilding whole mutation.
Unfortunately, such checksums are incompatible with the old ones so the
old way for computing checksums is preserved for compatibility reasons.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:51:23 +01:00
Paweł Dziepak
e779e2f0c9 streaming: do not fragment mutations in mixed cluster
The receiving side needs to handle fragmented mutations properly so that
isolation guarantees are not broken. If the receiving node may be an old
one, do not fragment mutations.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:51:23 +01:00
Paweł Dziepak
85c092c56c storage_service: add LARGE_PARTITIONS_FEATURE
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:51:23 +01:00
Paweł Dziepak
c5662919df tests/streamed_mutation: test hashing
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:51:23 +01:00
Paweł Dziepak
fe172484bd streamed_mutation: add mutation_hasher
mutation_hasher is a consumer of streamed_mutation that feeds its data
to a specified hasher.
It is not compatible with hashing_partition_visitor.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:51:23 +01:00
Paweł Dziepak
eb1dcf08e7 tests/streamed_mutation: add test for range_tombstones_stream
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:51:23 +01:00
Paweł Dziepak
93cc4454a6 streamed_mutation: emit range_tombstones directly
Originally, streamed_mutations guaranteed that emitted tombstones are
disjoint. In order to achieve that two separate objects were produced
for each range tombstone: range_tombstone_begin and range_tombstone_end.

Unfortunately, this forced sstable writer to accumulate all clustering
rows between range_tombstone_begin and range_tombstone_end.

However, since there is no need to write disjoint tombstones to sstables
(see #1153 "Write range tombstones to sstables like Cassandra does") it
is also not necessary for streamed_mutations to produce disjoint range
tombstones.

This patch changes that by making streamed_mutation produce
range_tombstone objects directly.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:51:18 +01:00
Paweł Dziepak
c3a8539074 streamed_mutation: add more comparators to position_in_partition
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:50:08 +01:00
Paweł Dziepak
27fea7bf2c mutation_partition: add non-cons rows and tombstones accessors
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:50:07 +01:00
Paweł Dziepak
2208d4b53e range_tombstone_list: add non-const begin() and end()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:50:07 +01:00
Paweł Dziepak
5a790a9b49 range_tombstone: add flip()
range_tombstone::flip() flips range bounds. This is necessary in order
to use range tombstone in reversed mutation fragment streams.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:50:07 +01:00
Paweł Dziepak
e1d306fa0d range_tombstone: add memory_usage()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:50:07 +01:00
Paweł Dziepak
91a866501d range_tombstone: add range_tombstone_accumulator
range_tombstone_accumulator is a helper class for determining the
tombstone of a clustering row when range tombstones and clustering rows
are streamed from a streamed_mutation.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:50:07 +01:00
Paweł Dziepak
cd7937d33b range_tombstone: add apply()
range_tombstone::apply() allows merging two, possibly overlapping, range
tombstones with the same start bound and produces one or two disjoint
range tombstones as a result.

It is intended to be used for merging tombstones coming from different
sources.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-13 09:50:07 +01:00
Nadav Har'El
aec90a22da sstable parsing: assert we do not lose clustering rows
The sstable parsing code calls mp_row_consumer::flush() after every
clustering row has been read, and this puts the now complete row in a single
field "_ready". The assumption is that at this point parsing will stop, the
consumer will move out this _ready (mp_row_consumer::get_mutation_fragment())
and when flush() is later called again, _ready will be empty again.

This assumption is correct in our code, but is based on an intricate
combination of esoteric parts of the code, such as:

 1. In data_consume_row_context we stop parsing after reading the partition's
    header, before reading any clustering rows, giving the caller the chance
    to call sstable_streamed_mutation::read_next() to be prepared for the
    incoming mutations.

 2. In mp_row_consumer::flush_if_needed(), we stop the parser after each
    individual clustering row.

It is easy to break this assumption, and I did this in one of my code changes,
and the result was silent loss of clustering rows, as "_ready" got silently
overwritten before the reader had a chance to move it out.

What this patch does is to add an assertion: If a clustering row is silently
lost before being transferred to the mutation fragment reader, we croak.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1468389955-24600-1-git-send-email-nyh@scylladb.com>
2016-07-13 09:42:48 +01:00
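The invariant being asserted can be sketched with a toy consumer (illustrative types only; the real code deals with mutation fragments, and `flush` here stands in for mp_row_consumer::flush()):

```cpp
#include <optional>
#include <stdexcept>
#include <string>
#include <utility>

// Sketch of the assertion: flush() must never overwrite a ready row
// that the reader has not yet moved out, otherwise a row is silently lost.
struct consumer {
    std::optional<std::string> ready;   // the completed clustering row

    void flush(std::string row) {
        if (ready) {
            // the "croak" from the commit message: fail loudly instead
            // of silently dropping a clustering row
            throw std::logic_error("clustering row lost before being consumed");
        }
        ready = std::move(row);
    }

    std::string get_mutation_fragment() {
        std::string r = std::move(*ready);
        ready.reset();
        return r;
    }
};
```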
Duarte Nunes
4eca7632ec sstables: Replace composite fields with raw bytes
This patch fixes a regression introduced in
f81329be60, which made keys compound by
default when using a particular ctor, in turn leading to mismatches
when comparing the same key built with functions that properly
consider compoundness.

As a temporary fix, the sstable::key and sstable::key_view classes
store raw bytes instead of a composite.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1468339295-3924-1-git-send-email-duarte@scylladb.com>
2016-07-12 18:08:04 +02:00
Duarte Nunes
f013425bb5 query: Ensure timestamp is last param in read_command
Since the timestamp is not serialized, it must always be the last
parameter of query::read_command. This patch reorders it with the
partition_limit parameters and updates callers that specified a
timestamp argument.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1468312334-10623-1-git-send-email-duarte@scylladb.com>
2016-07-12 10:41:54 +01:00
Amnon Heiman
41546747d8 scylla-server.service: Start the scylla-housekeeping
This makes scylla-server try to start the scylla-housekeeping service.

Failing to start the service will not interfere with the scylla-server
start.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-07-12 12:32:52 +03:00
Amnon Heiman
0eba2b8fd5 scylla.spec.in: Pack the scylla-housekeeping service
This change packs and installs the scylla-housekeeping service on
Red Hat-like systems.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-07-12 12:32:48 +03:00
Tomasz Grabiec
c5e3c9bc35 Merge branch 'duarten/composite-v7' from git@github.com:duarten/scylla.git
From Duarte:

This patchset adds a representation of a legacy composite
value to compound_compat.hh and replaces the one in
sstables/key.hh. This patchset is needed for the thrift series.
2016-07-12 10:49:02 +02:00
Amnon Heiman
6d5049d90b Adding the scylla-housekeeping service
The scylla-housekeeping service is responsible for recurrent tasks.

It is currently set to run daily and report if the version is correct.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-07-12 11:47:04 +03:00
Amnon Heiman
30efdabf55 Introducing the scylla-housekeeping script
scylla-housekeeping is a script that checks for and reports hardware and software issues.

Its first phase checks for a newer version and reports if the installed
version is old.

To see the available options run
scylla-housekeeping help
2016-07-12 11:12:43 +03:00
Glauber Costa
73a70e6d0a config: Use Scylla in user visible options
We have imported most of our data about config options from Cassandra.  Due to
that, many options that mention the database by name are still using
"Cassandra".

Especially for the user-visible options, which are things a user actually sees,
we should really be using Scylla here.

This patch was created by automatically replacing every occurrence of "Cassandra"
with "Scylla" and then discarding the replacements in which the change didn't
make sense (such as unused options and mentions of the Cassandra documentation).

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <1423e1d7e36874a1f46bd091aec96dcb4d8482d9.1468267193.git.glauber@scylladb.com>
2016-07-12 09:18:17 +03:00
Duarte Nunes
f81329be60 sstables: sstables::key delegates to composite
The sstables::key class now delegates much of its functionality
to the composite class. All existing behavior is preserved.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-11 23:37:33 +02:00
Gleb Natapov
726b79ea91 messaging_service: enable internode_compression option
Use LZ4 for internode compression if enabled.

Message-Id: <20160711141734.GZ18455@scylladb.com>
2016-07-11 18:30:21 +03:00
Avi Kivity
201f585ab6 Merge seastar upstream
* seastar e7a7d41...e660d54 (1):
  > rpc: add factory class for lz4 compressor
2016-07-11 18:29:43 +03:00
Glauber Costa
f7706d51d1 scyllatop: fix typo
Keyborad -> Keyboard

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <349f20fd69be2f2e05ae0b7800e34a336cd2472b.1468248179.git.glauber@scylladb.com>
2016-07-11 18:27:49 +03:00
Duarte Nunes
ad8ff1df7e sstables: Replace composite class
This patch replaces the sstables::composite class with the one in
compound_compat.hh.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-11 16:55:11 +02:00
Duarte Nunes
0b87d16699 composite: Add unit tests
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-11 16:55:11 +02:00
Duarte Nunes
b179d8d378 compound_compat: Parse legacy compound values
This patch adds support for parsing legacy compound values by
introducing the composite class, a wrapper around a sequence of bytes
serialized in the legacy format for compounds. Compound values can be
sent through the Thrift API.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-07-11 16:55:07 +02:00
Avi Kivity
9b08ddb639 Merge seastar upstream
* seastar 9267dfa...e7a7d41 (3):
  > Merge "Compression support for RPC" from Gleb
  > reactor: allow sleeping while disk aio is pending
  > sstring: add resize method
2016-07-11 16:23:29 +03:00
Calle Wilund
4ab03e98cf commitlog: Ensure we don't end up in a loop when we must wait for alloc
Continuation reordering could cause us to repeatedly see the
segment-local flag var even though the actual write/sync ops are done.
This can cause wild recursion without an actual delayed continuation,
ending in a stack overflow.

Fix by also checking queue status, since this is the wait object.

Message-Id: <1468234873-13581-1-git-send-email-calle@scylladb.com>
2016-07-11 14:12:38 +03:00
Avi Kivity
f126efd7f2 transport: encode user-defined type metadata
Right now we fall back to tuples, which confuses the client.

Fixes #1443.

Reviewed-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1468167120-1945-1-git-send-email-avi@scylladb.com>
2016-07-11 08:51:17 +03:00
Takuya ASADA
d2caa486ba dist/redhat/centos_dep: disable go and ada language on scylla-gcc package, since ScyllaDB never use them
The centos-master Jenkins job failed while building libgo, but we don't need the
Go language, so let's disable it in the scylla-gcc package. We also never use
Ada, so disable it too.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1468166660-23323-1-git-send-email-syuu@scylladb.com>
2016-07-10 19:12:52 +03:00
Avi Kivity
24e3026e32 Merge "compaction manager refactoring" from Raphael 2016-07-10 17:16:23 +03:00
Tomasz Grabiec
6a1f9a9b97 db: Improve logging
Message-Id: <1467997671-16570-1-git-send-email-tgrabiec@scylladb.com>
2016-07-10 16:15:03 +03:00
Avi Kivity
b5bef73ad2 Merge "Avoiding checking bloom filters during compaction" from Tomasz
"Checking bloom filters of sstables to compute max purgeable timestamp
for compaction is expensive in terms of CPU time. We can avoid
calculating it if we're not about to GC any tombstone.

This patch changes compacting functions to accept a function instead
of ready value for max_purgeable.

I verified that bloom filter operations no longer appear on flame
graphs during compaction-heavy workload (without tombstones).

Refs #1322."
2016-07-10 11:33:41 +03:00
Tomasz Grabiec
8c4b5e4283 db: Avoiding checking bloom filters during compaction
Checking bloom filters of sstables to compute max purgeable timestamp
for compaction is expensive in terms of CPU time. We can avoid
calculating it if we're not about to GC any tombstone.

This patch changes compacting functions to accept a function instead
of ready value for max_purgeable.

I verified that bloom filter operations no longer appear on flame
graphs during compaction-heavy workload (without tombstones).

Refs #1322.
2016-07-10 09:54:20 +02:00
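The "function instead of a ready value" idea can be sketched like this (illustrative names only; in the real code the callable computes the max purgeable timestamp by consulting sstable bloom filters):

```cpp
#include <functional>
#include <optional>

// Sketch: the compaction code receives a callable instead of a precomputed
// value, and only invokes it (once, memoized) when a tombstone is actually
// a GC candidate. Names here are illustrative, not Scylla's API.
struct lazy_max_purgeable {
    std::function<long()> compute;   // expensive: consults bloom filters
    std::optional<long> cached;

    long get() {
        if (!cached) cached = compute();
        return *cached;
    }
};

bool can_purge(long tombstone_ts, bool gc_grace_elapsed, lazy_max_purgeable& mp) {
    if (!gc_grace_elapsed) return false;   // nothing to GC: never pay the cost
    return tombstone_ts < mp.get();
}
```

In a tombstone-free (or young-tombstone) workload `compute` is never called, which is why the bloom filter checks vanish from the flame graphs.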
Tomasz Grabiec
c0233c877d db: Avoid out-of-memory when flushing cannot keep up
memtable_list::seal_on_overflow() is called on each mutation to check
whether the current memtable should be flushed. It will call
memtable_list::seal_active_memtable() when that is the case.

The number of concurrent seals is guarded by a semaphore, starting
from commit 0f64eb7e7d, and allows
at most 4 of them.

If there are 4 flushes already pending, every incoming mutation will
enqueue a new flush task on the semaphore's wait list, without waiting
for it. The wait queue can grow without bounds, eventually leading to
out-of-memory.

The fix is to seal the memtable immediately to satisfy the should_flush()
condition, but to limit the concurrency of the actual flushes. This way the
wait queue size on the semaphore is bounded by the number of memtables
pending a flush, which is fairly small.

Message-Id: <1467997652-16513-1-git-send-email-tgrabiec@scylladb.com>
2016-07-10 10:53:51 +03:00
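A toy model of the fix (the `memtable_list`/`maybe_seal` names are simplified stand-ins, and flush completion is not modeled): because sealing swaps in a fresh memtable immediately, the wait queue grows per sealed memtable rather than per incoming mutation:

```cpp
#include <deque>

struct memtable_list {
    int active_size = 0;
    int flush_limit = 4;        // at most 4 concurrent flushes, as in the commit
    int running_flushes = 0;
    std::deque<int> sealed;     // sealed memtables waiting for a flush slot

    // Called on every mutation; seals immediately when the threshold is hit,
    // so the should_flush()-style check stops matching for later mutations.
    void maybe_seal(int threshold) {
        if (active_size < threshold) return;
        sealed.push_back(active_size);   // swap in a fresh, empty active memtable
        active_size = 0;
        if (running_flushes < flush_limit) {
            ++running_flushes;           // an async flush would start here
            sealed.pop_front();
        }
    }
};
```

With a threshold of 10, a thousand mutations produce only 100 sealed memtables in total: 4 flushing and 96 queued, instead of an unbounded per-mutation wait list.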
Tomasz Grabiec
74ff30a31a mutation_reader: Introduce stable_flattened_mutations_consumer adaptor
Needed to make compact_mutation class non-movable later. It is used in
do_with, so needs to be movable. Will be solved by using this adaptor.
2016-07-09 22:31:28 +02:00
Tomasz Grabiec
fb44f895b2 mutation_reader: Name template parameters after concepts
With so many consumer concepts out there, it is confusing to name
parameters using the generic "Consumer" name; let's name them after
(already defined) concepts: CompactedMutationsConsumer, FlattenedConsumer.
2016-07-09 22:31:27 +02:00
Raphael S. Carvalho
ed5e7e6842 compaction: refactor compaction manager
Previously, the same function was used to handle both regular compaction
and cleanup requests. That was bad because a lot of conditions for both
compaction types had to live in the same function.
Now, cleanup and regular compaction live in different functions.
They share a lot of code, so helper functions were introduced.
This change is also important for user-initiated compaction that
will go through compaction manager in the future.
Code is also a lot easier to read now.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-08 16:37:53 -03:00
Raphael S. Carvalho
da6a2b429d compaction: add functions to register and deregister compacting sstables
Reviewed-by: Nadav Har'El <nyh@scylladb.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-08 16:00:51 -03:00
Raphael S. Carvalho
4d6dce8ec9 compaction: add helper function to get candidates for strategy
Reviewed-by: Nadav Har'El <nyh@scylladb.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-08 15:06:14 -03:00
Raphael S. Carvalho
e38f66c6fe database: make certain column family functions const qualified
Reviewed-by: Nadav Har'El <nyh@scylladb.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-08 15:05:22 -03:00
Raphael S. Carvalho
bfc5376548 compaction: remove gate from compaction manager task
There is no longer a need to use a gate for regular termination of the
fiber that runs compaction. Now, we only set task->stopping to
true, ask for compaction termination, and wait for its future to
resolve. The code is simplified a lot by this change.

Reviewed-by: Nadav Har'El <nyh@scylladb.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-08 15:05:10 -03:00
Paweł Dziepak
cba996a3ea Merge "Implement missing functions for byte_ordered_partitioner" from Asias 2016-07-08 10:49:25 +01:00
Asias He
f4389349e4 config: Enable partitioner option
Enable the --partitioner option so that the user can choose a partitioner
other than the default Murmur3Partitioner. Currently, only Murmur3Partitioner
and ByteOrderedPartitioner are supported. When an unsupported partitioner
is specified, an error is propagated to the user.
2016-07-08 17:44:55 +08:00
Asias He
9c27b5c46e byte_ordered_partitioner: Implement missing describe_ownership and midpoint
In order to support ByteOrderedPartitioner, we need to implement the
missing describe_ownership and midpoint function in
byte_ordered_partitioner class.

As a starter, this patch uses a simple method based on node token distance
to calculate ownership. C* uses a more complicated method based on key
samples. We can switch to what C* does later.

Tests are added to tests/partitioner_test.cc.

Fixes #1378
2016-07-08 17:44:55 +08:00
Asias He
e0949a8f4f storage_service: Exit shadow round state if it fails
If a node fails to talk to any seed node, the shadow round will fail. We
should exit the shadow round state before we continue.

This issue is spotted by
consistency_test.TestConsistency.data_query_digest_test dtest.
Message-Id: <ba0613532a69bac369ca316ab61d907b320c8e68.1467963674.git.asias@scylladb.com>
2016-07-08 10:05:07 +01:00
Avi Kivity
8dab93a853 sstables: fix low disk utilization with compression and small chunk lengths
As Nadav notes we use the chunk length as the buffer size for the compressed
stream too.

Fix by using it only for the outer (uncompressed) stream; the inner
(compressed) stream uses the sstable buffer size, 128 kiB.

Fixes #1402.
Message-Id: <1467910556-5759-1-git-send-email-avi@scylladb.com>
Reviewed-by: Nadav Har'El <nyh@scylladb.com>
2016-07-07 18:13:30 +01:00
Vlad Zolotarov
f2bf453be2 database: revive mutation retry in case of replay_position_reordered_exception
The logic that would retry applying a mutation in case of
a replay_position_reordered_exception error was broken by
a commit 0c31f3e626
Author: Glauber Costa <glauber@scylladb.com>
Date:   Wed Apr 20 19:09:21 2016 -0400

    database: move memtable throttler to the LSA throttler

This patch makes it work again.

Fixes #1439

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1467893342-30559-1-git-send-email-vladz@cloudius-systems.com>
2016-07-07 15:00:35 +02:00
Tomasz Grabiec
de429d6a53 Merge branch 'dev/pdziepak/streamed-mutations-streaming/v3'
Support for streaming of large partitions from Paweł:

This series converts streaming to use streamed_mutation so that there
is no need to store a full mutation in memory in order to send or
receive it.

The first several patches add a way of estimating mutation fragment
memory usage and introduce fragment_and_freeze() which produces
a stream of reasonably sized frozen mutations from a single streamed
mutation.

The second part of this patchset makes sure that streaming mutations
in fragments doesn't break isolation guarantees. This is achieved by
delaying visibility of sstables produced by streaming until the
streaming is completed. However, our current receiving code merges
mutations from all streaming plans together thus making it impossible
to track which data was received from a particular streaming plan.
The solution to that problem is to introduce an additional flag to
STREAM_MUTATION verb which informs the receiver whether the mutation
is fragmented and care must be taken to preserve isolation. Small
mutations behave as before, with writes from different stream
plans coalesced, while big mutations are handled separately for each
streaming task.
2016-07-07 13:23:39 +02:00
Paweł Dziepak
d9eb4d8028 streaming: use fragment_and_freeze() to send mutations
Commit 206955e4 "streaming: Reduce memory usage when sending mutations"
moved streaming mutation limiter from do_send_mutations() to
send_mutations(). The reason for that was that send_mutation() did full
mutation copies. That's no longer the case and streaming limiter should
be moved back to do_send_mutation() in order to provide back pressure to
fragment_and_freeze().

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:18:36 +01:00
Paweł Dziepak
32a5de7a1f db: handle receiving fragmented mutations
If mutations are fragmented during streaming, special care must be
taken so that isolation guarantees are not broken.

Mutations received with the "fragmented" flag set are applied to a memtable
that is used only by that particular streaming task, and the sstables
created by flushing such memtables are not made visible until the task
is complete. Also, in case the streaming fails, all data is dropped.

This means that fragmented mutations cannot benefit from coalescing of
writes from multiple streaming plans, hence the separate handling, which
avoids any loss of performance for small partitions.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:18:35 +01:00
Paweł Dziepak
f2ae31711e streaming: inform CF when streaming fails
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:18:35 +01:00
Paweł Dziepak
4031c0ed8f streaming: pass plan_id to column family for apply and flush
plan_id is needed to keep track of the origin of mutations so that if
they are fragmented all fragments are made visible at the same time,
when that particular streaming plan_id completes.

Basically, each streaming plan that sends big (fragmented) mutations is
going to have its own memtables and a list of sstables which will get
flushed and made visible when that plan completes (or dropped if it
fails).

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:18:35 +01:00
Paweł Dziepak
51ec7a7285 db: wait for ongoing flushes at end of streaming
When flush_streaming_mutations() is called at the end of streaming it is
supposed to flush all data and then invalidate the cache ranges. However, if
there are already some memtable flushes in progress it won't wait for them.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:18:35 +01:00
Paweł Dziepak
5bc51821fe sstables: allow writing unsealed sstables
The purpose of this patch is to split the actions of writing an sstable and
sealing it. As long as the sstable is unsealed it is considered
incomplete and is going to be removed on reboot.

Such functionality is needed in order to defer visibility of sstables
created during streaming until the streaming is complete.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:18:35 +01:00
Paweł Dziepak
a7b6c1110f sstables: do not require seal_sstable() to be run in thread
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:18:35 +01:00
Paweł Dziepak
4e34bd4e8a tests/streamed_mutation: test fragment_and_freeze()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:18:35 +01:00
Paweł Dziepak
19629e95e2 frozen_mutation: add fragment_and_freeze()
fragment_and_freeze() produces a stream of frozen mutations from a
single streamed_mutation.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:18:30 +01:00
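The fragmenting idea can be sketched as budget-bounded chunking (a rough model with made-up names; the real code consumes mutation fragments and emits frozen_mutation objects):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Split a stream of fragment sizes into chunks whose total stays within
// a memory budget. Each chunk gets at least one fragment, so an oversized
// fragment still makes progress in its own chunk. Illustrative model only.
std::vector<std::vector<std::size_t>>
fragment_and_freeze(const std::vector<std::size_t>& sizes, std::size_t budget) {
    std::vector<std::vector<std::size_t>> chunks;
    std::vector<std::size_t> current;
    std::size_t used = 0;
    for (std::size_t s : sizes) {
        if (!current.empty() && used + s > budget) {
            chunks.push_back(std::move(current));   // "freeze" the chunk
            current.clear();
            used = 0;
        }
        current.push_back(s);
        used += s;
    }
    if (!current.empty()) chunks.push_back(std::move(current));
    return chunks;
}
```

The back pressure mentioned above corresponds to the producer pausing between chunks instead of materializing the whole partition at once.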
Paweł Dziepak
820bd6c9bc streamed_mutation: add mutation_fragment::memory_usage()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:17:25 +01:00
Paweł Dziepak
23d0bfd065 mutation_partition: add row::memory_usage()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:17:25 +01:00
Paweł Dziepak
1d54327afd atomic_cell_or_collection: add memory_usage()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:17:25 +01:00
Paweł Dziepak
d0ee750cec keys: add memory_usage()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:17:25 +01:00
Paweł Dziepak
cfa581b426 utils/managed_vector: add memory_usage()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:17:25 +01:00
Paweł Dziepak
703509a1c7 utils/managed_bytes: add memory_usage()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:17:25 +01:00
Paweł Dziepak
a289816b31 streamed_mutation: fix mutation_fragment::consume() return type
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:17:25 +01:00
Paweł Dziepak
37bd7230bc streamed_mutation: add mutation fragment visitor
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-07 12:17:25 +01:00
Glauber Costa
54ce6221a7 allow the dirty memory manager to be used without a database object
Some of our tests don't provide a database object to a CF. Create a default
dirty memory manager object that can be used without a database for them.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <872f8c9232ff87d788e271b1db86c814d7a75d9f.1467832713.git.glauber@scylladb.com>
2016-07-07 10:00:43 +01:00
Raphael S. Carvalho
0772d20c60 fix compilation in debug mode
build/debug/sstables/compaction_strategy.o: In function
`date_tiered_manifest::date_tiered_manifest(std::map<basic_sstring<char, unsigned int, 15u>,
basic_sstring<char, unsigned int, 15u>, std::less<basic_sstring<char, unsigned int, 15u> >,
std::allocator<std::pair<basic_sstring<char, unsigned int, 15u> const, basic_sstring<char,
unsigned int, 15u> > > > const&)':
/home/centos/scylla/sstables/date_tiered_compaction_strategy.hh:67: undefined reference to
`date_tiered_manifest::DEFAULT_BASE_TIME_SECONDS'

That's fixed by moving definition of static constexpr outside the class.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20c16ad71f64900aa5591018bc4e976406cfebb3.1467870383.git.raphaelsc@scylladb.com>
2016-07-07 11:52:37 +03:00
Avi Kivity
9a8788019d row_cache: fix visitor for boost <= 1.55
Older boosts can't return a future from a visitor (likely lacking support
for move-only objects).  Supply a dirty hackaround.

Message-Id: <1467822548-25940-1-git-send-email-avi@scylladb.com>
2016-07-06 19:55:51 +03:00
Avi Kivity
21031d276b Merge seastar upstream
* seastar c82c36f...9267dfa (6):
  > app_template: Make run() wait for func when reactor exit is triggered externally
  > core: Introduce futurize_apply() helper
  > rpc: make unexpected eof messages more informative
  > Fix boost version check
  > reactor: more fix for smp poll with older boost
  > reactor: fix build on older boost due to spsc_queue::read_available()
2016-07-06 18:14:13 +03:00
Avi Kivity
02530faeb2 compaction: fix tombstones not being garbage collected during compaction
2a46410f4a changed sstable_list from a map
to a set, so it is no longer sorted by generation.  The code for finding
the list of sstables not being compacted relied on this sort order, and
now broke, returning a longer list than needed (including some of the
sstables being compacted).  As a result, the compaction code preserved
the tombstones, incorrectly thinking there was still live data they
referenced.

Fix by sorting the set explicitly.

Fixes #1429.
Message-Id: <1467793026-6571-1-git-send-email-avi@scylladb.com>
2016-07-06 10:22:31 +02:00
Asias He
0c56bbe793 gossip: Make get_supported_features and wait_for_feature_on{_all}_node private
They are used only inside gossiper itself. Also make the helper
get_supported_features(std::unordered_map<gms::inet_address, sstring>) static.

Message-Id: <f434c145ad9138084708b60c1d959b84360e47b2.1467775291.git.asias@scylladb.com>
2016-07-06 09:54:56 +03:00
Avi Kivity
ab279a4752 Merge "Add support to date tiered compaction strategy" from Raphael
"After this patchset, date tiered compaction strategy is supported by Scylla.

For those who don't know what it is about, the following article may help:
https://labs.spotify.com/2014/12/18/date-tiered-compaction/

It's also nicely explained here by our wiki page:
https://github.com/scylladb/scylla/wiki/SSTable-compaction#date-tiered-compaction

Basically, date tiered strategy was developed to help the database perform better
when facing a time series workload. Date tiered strategy will work to keep data
written at nearly the same time together, such that the number of relevant sstables
for a time-based query is relatively low. We still lack support for filtering
out sstables based on the time parameters of a query, but that feature should come ASAP.

The following dtests now pass:
compaction_test.py:TestCompaction_with_DateTieredCompactionStrategy.compaction_delete_test
compaction_test.py:TestCompaction_with_DateTieredCompactionStrategy.compaction_strategy_switching_test

Used cassandra-stress with the parameter '-schema compaction\(strategy=DateTieredCompactionStrategy\)'
to check stability.

Fixes #511."
2016-07-06 09:51:12 +03:00
Avi Kivity
7438c9de5c Merge "Fix database freeze with load for multiple CFs" from Glauber
"Issue 1195 describes a scenario with a fairly easy reproducer in which
we can freeze the database. That involves writing simultaneously to
multiple CFs, such that the sum of all the memory they are using is larger
than the dirty memory limit, without any of them individually being
larger than the memtable size.

This patchset rewrites the throttling code, including now active flushes
so that this situation cannot happen.

Fixes #1195"
2016-07-06 09:48:13 +03:00
Raphael S. Carvalho
b5ec4d46c6 tests: add test for date tiered compaction strategy
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 02:11:47 -03:00
Raphael S. Carvalho
b699ef2de3 compaction: wire up date tiered compaction strategy
After this commit, date tiered compaction strategy is supported
on Scylla.

To understand how it works, take a look at our wiki page:
https://github.com/scylladb/scylla/wiki/SSTable-compaction#date-tiered-compaction

Fixes #511.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 02:11:47 -03:00
Raphael S. Carvalho
e5cc0cc6c4 compaction: implement date tiered compaction strategy
This commit is basically about converting Java to C++.
Date tiered compaction strategy isn't wired yet.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 02:11:47 -03:00
Raphael S. Carvalho
cab2892866 tests: add test for sstables::get_fully_expired_sstables
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 02:11:47 -03:00
Raphael S. Carvalho
e9076f39be compaction: implement function to get fully expired sstables
Strongly based on org.apache.cassandra.db.compaction.
CompactionController.getFullyExpiredSSTables.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 02:11:47 -03:00
Raphael S. Carvalho
69b3860662 tests: add test for leveled_manifest::overlapping
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 02:11:45 -03:00
Raphael S. Carvalho
92848efc42 sstables: make overlapping functions static
That's needed by a function that collects overlapping sstables in order
to find fully expired ones.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 01:34:34 -03:00
Raphael S. Carvalho
8d38fa49d4 sstables: move code to get uncompacting sstables to a function
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 01:33:55 -03:00
Raphael S. Carvalho
1118cfc51a tests: test that sstable max_local_deletion_time is properly updated
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 01:13:34 -03:00
Raphael S. Carvalho
cc6c383249 sstables: properly keep track of max local deletion time
We weren't updating the max local deletion time for cells that have a
TTL, or for tombstone cells.
If there is a live cell with no TTL, then the max local deletion time
is supposed to store the maximum value, which means that the sstable
will not be considered fully expired later on.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 01:13:24 -03:00
Raphael S. Carvalho
1ecd9bdefc sstables: fix type of max_local_deletion_time
max_local_deletion_time was incorrectly using an unsigned type
instead of a signed one.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 01:13:13 -03:00
Raphael S. Carvalho
f9ab94d266 compaction: import DateTieredCompactionStrategy.java
File can be found at the following C* directory:
src/java/org/apache/cassandra/db/compaction

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-07-06 01:12:49 -03:00
Glauber Costa
b0932ceb04 database: act on LSA pressure notification
Issue 1195 describes a scenario with a fairly easy reproducer in which we can
freeze the database. That involves writing simultaneously to multiple CFs, such
that the sum of all the memory they are using is larger than the dirty memory
limit, without any of them individually being larger than the memtable size.
Because we will never reach the individual memtable seal size for any of them,
none of them will initiate a flush leading the database to a halt.

The LSA has now gained infrastructure that allow us to be notified when pressure
conditions mount. What we will do in this case is initiate a flush ourselves.

Fixes #1195

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-07-05 17:46:28 -04:00
Glauber Costa
7169b727ea move system tables to its own region
In the spirit of what we are doing for the read semaphore, this patch moves
system writes to its own dirty memory manager. Not only will it make sure that
system tables will not be serialized by its own semaphore, but it will also put
system tables in its own region group.

Moving system tables to its own region group has the advantage that system
requests won't be waiting during throttle behind a potentially big queue of user
requests, since requests are tended to in FIFO order within the same region
group. However, system tables being more controlled and predictable, we can
actually go a step further and give them some extra reservation so they may not
necessarily block even if under pressure (up to 10 MB more).

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-07-05 17:46:28 -04:00
Glauber Costa
c358947284 database: wrap semaphore and region group into a new dirty memory manager
We currently have a semaphore at the column family level that protects us against
multiple concurrent sstable flushes. However, storing that semaphore in the CF,
not the database, was an (implementation, not design) mistake.

One comment in particular makes it quite clear:

   // Ideally, we'd allow one memtable flush per shard (or per database object), and write-behind
   // would take care of the rest. But that still has issues, so we'll limit parallelism to some
   // number (4), that we will hopefully reduce to 1 when write behind works.

So I aimed for the shard, but ended up coding it into the CF because that's closer to the
flush point - my bad.

This patch fixes this while paving the way for active reclaim to take place. It wraps the semaphore
and the region group in a new structure, the dirty_memory_manager. The immediate benefit is that we
don't need to be passing both the semaphore and the region group downwards in the DB -> CF path. The
long term benefit is that we now have a one unified structure that can hold shared flush data in all
of the CFs.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-07-05 15:29:04 -04:00
Glauber Costa
d41fcd45d1 memtables: make memtable inherit from region
The LSA memory pressure mechanism will let us know which region is the best
candidate for eviction when under pressure. We need to somehow then translate
region -> memtable -> column family.

The easiest way to convert from region to memtable, is having memtable inherit
from region. Despite the fact that this requires multiple inheritance, which
always raises a flag, the other class we inherit from is
enable_shared_from_this, which has a very simple and well defined interface. So
I think it is worthy for us to do it.

Once we have the memtable, grabbing the column family is easy provided we have a
database object. We can grab it from the schema.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-07-05 15:05:29 -04:00
Glauber Costa
0c31f3e626 database: move memtable throttler to the LSA throttler
The LSA infrastructure, through the use of its region groups, now have
a throttler mechanism built-in. This patch converts the current throttlers
so that the LSA throttler is used instead.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-07-05 15:05:19 -04:00
Yoav Kleinberger
0ad940bfc3 tools/scyllatop: fix crash due to mouse events
Due to an urwid-library technicality, some mouse events, like scroll or
click, would crash scyllatop. This patch fixes the problem.

closes issue #1396.

Signed-off-by: Yoav Kleinberger <yoav@scylladb.com>
Message-Id: <1467294117-19218-1-git-send-email-yoav@scylladb.com>
2016-07-05 19:08:55 +03:00
Avi Kivity
cb59e724ee Merge "Fix enabling sstable read ahead" from Paweł
"This series contains remaining changes necessary to safely enable read
ahead of sstables. Basically, it makes sure that input_streams are
always properly closed (even in case of exception during read)."
2016-07-05 19:04:19 +03:00
Raphael S. Carvalho
e688fc9550 api: provide estimation of pending compaction
Use compaction_strategy::estimated_pending_compaction() to provide the
user with an estimate of the number of compactions needed for the
strategy to be fully satisfied.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <39b7d91f2525ca38fb2ce9d8885d0c2e727de7ed.1467667054.git.raphaelsc@scylladb.com>
2016-07-05 19:03:12 +03:00
Raphael S. Carvalho
43926026c3 compaction: introduce compaction strategy method to estimate pending compaction
At the moment, it's not possible to know how many compactions are needed
for the compaction strategy to be satisfied. The exact number of pending
compactions cannot be known, but the strategy can provide an estimate.

For size tiered, it's based on the number of sstables in each bucket. By
dividing the bucket size by the max threshold, we get the number of
compactions needed to compact that single bucket.

For leveled, it's about the number of sstables that exceed the limit in
each level.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <e209e52f6159ee274a8358b69961a7c0ce357f7d.1467667054.git.raphaelsc@scylladb.com>
2016-07-05 19:03:11 +03:00
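The size-tiered estimate described above is simple arithmetic; here is an
illustrative sketch with made-up bucket data and hypothetical threshold
defaults, not the actual implementation:

```python
import math

def estimated_pending_compactions(buckets, min_threshold=4, max_threshold=32):
    # For each bucket with enough sstables to be worth compacting,
    # dividing the bucket size by the max threshold gives the number
    # of compactions needed to fully compact that bucket.
    total = 0
    for bucket in buckets:
        if len(bucket) >= min_threshold:
            total += math.ceil(len(bucket) / max_threshold)
    return total

# Hypothetical buckets of sstables: 40 small, 6 medium, 2 large.
buckets = [list(range(40)), list(range(6)), list(range(2))]
print(estimated_pending_compactions(buckets))  # 2 + 1 + 0 = 3
```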
Avi Kivity
76cc6408cd Merge "feature check for seed node" from Asias
"This series implements feature check for seed node."
2016-07-05 19:01:01 +03:00
Asias He
6f69963ef9 system_keyspace: Simplify load_host_ids implementation
- Use plain loop instead of do_for_each

- Use row.get_as() instead of row.template get_as()
Message-Id: <3e108d3a6258c0caaf569eb9c79532d9789ea411.1467703722.git.asias@scylladb.com>
2016-07-05 09:47:21 +02:00
Asias He
3f31be58b6 system_keyspace: Simplify load_tokens implementation
- Use plain loop instead of do_for_each

- Use row.get_as() instead of row.template get_as()
Message-Id: <f959ace4f30078695d383c849ed4520169228f97.1467703722.git.asias@scylladb.com>
2016-07-05 09:47:21 +02:00
Asias He
5236e7a379 storage_service: Implement feature check for seed node
Checking features for a seed node is a bit more complicated than for a
non-seed node: a non-seed node can always talk to at least one seed node,
but a seed node may not be able to.

In this patch, we distinguish a new cluster from an existing one by
checking whether the system table is empty. We relax the feature check for
a new cluster because the check is mostly useful when upgrading an
existing cluster, to prevent old nodes from joining a new cluster.

When talking to a seed node fails during the check, we fall back to
checking against the features stored in the system table. This makes it
possible to restart a seed node when no other seed node is up (either
there is no other seed node at all, or the other seed nodes are not up yet).

I tested the following scenarios.

1) start a completely new seed node in a new cluster
* system table is empty, skip the check.

2) start a cluster, restart one seed node, at least one other seed node
is up
* system table is not empty, check with shadow round; shadow round will
  succeed

3) start a cluster, restart one seed node, no other seed node is up
* system table is not empty, check with shadow round; shadow round will
  fail, fall back to the system table check.

4) start a cluster, shutdown all the nodes, start one seed node with new
ip address, seed list in yaml is updated with new ip address
* system table is not empty, check with shadow round; shadow round will
  fail, fall back to the system table check
2016-07-05 10:09:54 +08:00
Asias He
bb80362c3f gossip: Insert with result.end() in get_supported_features
It is faster than result.begin(), suggested by Avi.
2016-07-05 10:09:54 +08:00
Asias He
72cb4a228b gossip: Add to_feature_set helper
To convert a comma-separated feature string to a feature set.
2016-07-05 10:09:54 +08:00
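A minimal sketch of what such a helper does (a Python stand-in for the C++
helper; dropping empty fragments is an assumption):

```python
def to_feature_set(features_string):
    # Split the comma-separated feature string and drop empty fragments.
    return {f for f in features_string.split(",") if f}

print(to_feature_set("RANGE_TOMBSTONES,LARGE_PARTITIONS"))
```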
Asias He
1d6c57fb40 gossip: Reduce timeout in shadow round
In 3a36ec33db (gossip: Wait longer for seed node during boot up), we
increased the timeout by a factor of 60, i.e., ring_delay * 60 = 5
seconds * 60 = 5 minutes.

In 57ee9676c2 (storage_service: Fix default ring_delay time), we fixed
the default ring_delay to 30 seconds. Now the timeout is 30 * 60 seconds
= 30 minutes, which is too long.

Make it 5 minutes.
2016-07-05 10:09:54 +08:00
Asias He
88f0bb3a7b gossip: Add check_knows_remote_features
To check whether this node knows the features in
std::unordered_map<inet_address, sstring> peer_features_string.
2016-07-05 10:09:54 +08:00
Asias He
2b53c50c15 gossip: Add get_supported_features
To get features supported by all the nodes listed in the
address/feature map.
2016-07-05 10:09:53 +08:00
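The cluster-wide supported set described above is the intersection of the
per-node feature sets; a sketch under that reading, with hypothetical
addresses and feature names:

```python
def get_supported_features(peer_features):
    # peer_features: address -> comma-separated feature string.
    # A feature is supported cluster-wide only if every node lists it,
    # so intersect the per-node sets.
    feature_sets = [set(s.split(",")) for s in peer_features.values()]
    if not feature_sets:
        return set()
    common = feature_sets[0].copy()
    for fs in feature_sets[1:]:
        common &= fs
    return common

peers = {"10.0.0.1": "A,B,C", "10.0.0.2": "A,C"}
print(sorted(get_supported_features(peers)))  # ['A', 'C']
```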
Asias He
31df4e5316 system_keyspace: Introduce load_peer_features
To get the peer features stored in the system.peers table.
2016-07-05 10:09:53 +08:00
Paweł Dziepak
4acf77d755 sstables: drop unused data_stream_at()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-04 18:17:43 +01:00
Paweł Dziepak
2cdf498bbd sstables: close input stream in sstable::data_read()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-04 18:17:42 +01:00
Paweł Dziepak
8931b939a1 sstables: use finally() to close input streams
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-07-04 18:17:42 +01:00
Paweł Dziepak
e6ececce7f Merge seastar upstream
Submodule seastar a47f893..c82c36f:
  > reactor: fix build error
  > util: lazy_eval: fix compilation errors related to operator<<()s
    definitions
2016-07-04 18:14:05 +01:00
Duarte Nunes
41843b32c5 thrift: Correctly mark a CF as dense
And store whether the comparator is a composite type in the case of
dynamic CFs.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1467307688-11059-1-git-send-email-duarte@scylladb.com>
2016-07-04 17:40:53 +02:00
Nadav Har'El
c4e871ea2d Work around unexpected data_value constructor
If someone tried to naively use utf8_type->decompose("18wX"), this would
mysteriously fail, returning an empty key.

decompose takes a data_value, so the compiler looked for an implicit
conversion from the string constant (const char*) to data_value. We did
not have such a conversion, only conversion from sstring. But the compiler
chose (backed by the C++ standard, no doubt) to implicitly convert the
const char* to a bool (!), and then use data_value(bool). It did not
convert the const char* to an sstring, nor did it warn about the possible
ambiguity.

So this patch adds a data_value(const char*) constructor, so people will
not fall into the same trap that I fell into...

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1467643462-6349-1-git-send-email-nyh@scylladb.com>
2016-07-04 17:50:53 +03:00
Avi Kivity
e22517bafc Merge "Optimize reads from leveled sstables"
In a leveled column family, there can be many thousands of sstables, since
each sstable is limited to a relatively small size (160M by default).
With the current approach of reading from all sstables in parallel, cpu
quickly becomes a bottleneck as we need to check the bloom filter for each
of these sstables.

This patch addresses the problem by introducing a
compaction-strategy-specific data structure for holding sstables.  This
data structure has a method to obtain the sstables used for a read.

For leveled compaction strategy, this data structure is an interval map,
which can be efficiently used to select the right sstables.
2016-07-04 16:00:35 +03:00
Asias He
610a0f7ef0 storage_service: Skip feature check for seed node for now
When a seed node boots up with more than one node in the seed list, it
will fail to talk to the other seed node which is not up yet.
This fails the feature check, so the seed node will not boot.

Skip the feature check for seed node for now, until we have a proper solution.

Fixes a recent dtest failure caused by failing to boot the seed node.

Message-Id: <e1d4110f96817e45f81dc0bc948dd14600fc5333.1467251799.git.asias@scylladb.com>
2016-07-04 15:09:57 +03:00
Avi Kivity
28fab55e6e Merge "Convert sstable writes to streamed mutations" from Paweł
"This series converts sstable writers (including compaction) to streamed
mutations and makes them use consumer-style interface.

Code related to sstable writes and compaction is converted to consumers
that can be used with consume_flattened_in_thread() (which is a variant
of consume_flattened() intended to be run inside a thread).
compact_for_query is improved so that it can be reused by sstable
compaction."
2016-07-04 15:07:47 +03:00
Avi Kivity
171054e87b Merge seastar upstream
* seastar d4d9e16...a47f893 (1):
  > Merge "overprovisioning support"
2016-07-04 13:46:03 +03:00
Paweł Dziepak
5d0de2179a Merge "Adding scylla version API" from Amnon
Amnon says:

The API that returns the version currently returns the compatibility
version (i.e., the compatible Origin version, currently 2.1.8).

The version-check functionality needs to know the currently running
version of scylla. For that, a new API was added that returns the
current version.

The result is equivalent to running scylla --version.

After this series a call to:

$ curl -X GET
"http://localhost:10000/storage_service/scylla_release_version"
"666.development-20160703.72f0d4d"

Which is the json representation of:

$ ./build/release/scylla --version
666.development-20160703.72f0d4d
2016-07-04 10:52:44 +01:00
Asias He
f6a2672be0 storage_service: Modify log to match config option of scylla
We currently log as follows:

May  9 00:09:13 node3.nl scylla[2546]:  [shard 0] storage_service - This
node was decommissioned and will not rejoin the ring unless
cassandra.override_decommission=true has been set,or all existing data
is removed and the node is bootstrapped again

However, the user should use

   override_decommission:true

instead of

   cassandra.override_decommission:true

in scylla.yaml where the cassandra prefix is stripped.

Fixes #1240
Message-Id: <b0c9424c6922431ad049ab49391771e07ca6fbde.1467079190.git.asias@scylladb.com>
2016-07-04 10:47:49 +02:00
Avi Kivity
76cc0c0cf9 auth: fix performance problem when looking up permissions
data_resource lookup uses data_resource::name(), which uses sprint(), which
uses (indirectly) locale, which takes a global lock.  This is a bottleneck
on large machines.

Fix by not using name() during lookup.

Fixes #1419
Message-Id: <1467616296-17645-1-git-send-email-avi@scylladb.com>
2016-07-04 10:26:18 +02:00
Yoav Kleinberger
49cba035ea tools/scyllatop: leave terminal in a functioning state when user quits with CTRL-C
closes issue #1417.

Signed-off-by: Yoav Kleinberger <yoav@scylladb.com>
Message-Id: <1467556769-11851-1-git-send-email-yoav@scylladb.com>
2016-07-03 17:43:46 +03:00
Amnon Heiman
e66a1cd705 API: Add implementation for the scylla release version
This adds the implementation to the scylla release version API.

After this patch a call to:

curl -X GET "http://localhost:10000/storage_service/scylla_release_version"

Will return the current scylla release version.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-07-03 16:29:09 +03:00
Amnon Heiman
56ea8c943e API: add scylla release version API
This adds a definition for the scylla release version. The API already
returns the compatibility version (i.e., the compatible Origin version).

This definition returns the scylla version; a call to the API should
return the same result as running scylla --version.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-07-03 16:26:21 +03:00
Avi Kivity
68e613b313 Rebuild _column_family::_sstables when changing compaction_strategy
The concrete sstable_set type depends on the compaction strategy, so
ask the compaction_strategy to create a new sstable_set object and populate
it.
2016-07-03 13:42:10 +03:00
Avi Kivity
44a6cef4e1 sstable mutation readers: use sstable_set::select()
Apply compaction strategy specific logic to narrow down the set of sstables
used for a query; can speed up reads using LeveledCompactionStrategy
significantly.

Fixes #1185.
2016-07-03 10:50:58 +03:00
Avi Kivity
4cb7618601 Convert column_family::_sstables to sstable_set
Using sstable_set will allow us to filter sstables during a query before
actually creating a reader (this is left to the next patch; here we just
convert the users of the _sstables field).
2016-07-03 10:32:27 +03:00
Avi Kivity
c8237fc262 compaction_strategy: introduce make_sstable_set()
Allow compaction_strategy to create a container for sstables that is
optimized for the strategy.

Most compaction_strategies return bag_sstable_set; leveled compaction
returns the specialized partitioned_sstable_set.
2016-07-03 10:27:01 +03:00
Avi Kivity
168696c558 Introduce partitioned_sstable_set
partitioned_sstable_set assumes that sstables are mostly partitioned along
the token range: only a few sstables will be needed to access a particular
token.  It is implemented as an interval_map.
2016-07-03 10:27:00 +03:00
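The idea can be illustrated with a naive linear scan (a real interval_map
answers the same question without scanning every sstable; the names and
token values below are hypothetical):

```python
def select(sstables, token):
    # Only sstables whose [first, last] token range covers the queried
    # token can contain it, so only they need a bloom-filter check.
    return [s for s in sstables if s["first"] <= token <= s["last"]]

sstables = [
    {"name": "a", "first": 0,   "last": 100},
    {"name": "b", "first": 101, "last": 200},
    {"name": "c", "first": 50,  "last": 150},
]
print([s["name"] for s in select(sstables, 120)])  # ['b', 'c']
```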
Avi Kivity
64e4357461 Introduce bag_sstable_set
bag_sstable_set is a generic sstable_set implementation: it assumes nothing
about the sstables.  It is implemented as a vector, and any select will
return the entire sstable set.
2016-07-03 10:27:00 +03:00
Avi Kivity
85e9cf4616 Introduce sstable_set
sstable_set abstracts the notion of a container of sstables, allowing
different compaction strategies to supply their own implementation.  The
intended user is leveled compaction strategy; since it partitions sstables,
it can quickly restrict the number of sstables that participate in a query
by looking at the min/max partition key.

sstable_set also maintains an internal lw_shared_ptr<sstable_list>,
in parallel with the abstract container.  This is to support
column_family::get_sstable(), which returns a lw_shared_ptr<sstable_list>
which must be anchored somewhere if it is not saved at the caller side,
as it isn't in most current callers.
2016-07-03 10:27:00 +03:00
Avi Kivity
c1815abd15 Introduce compatible_ring_position
ring_position is built for modern code that does not require default
constructors or stateless comparators.  But not all code is modern, so
supply a compatible_ring_position that works with old code, at the cost
of some extra storage.  Intended user is boost's interval container
library.
2016-07-03 10:27:00 +03:00
Avi Kivity
2a46410f4a Change sstable_list from a map to a set
sstable_list is now a map<generation, sstable>; change it to a set
in preparation for replacing it with sstable_set.  The change simplifies
a lot of code; the only casualty is the code that computes the highest
generation number.
2016-07-03 10:26:57 +03:00
Duarte Nunes
386c0dd4b2 storage_proxy: Correctly calculate new limit
This patch fixes a bug where we would always return query::max_rows
when calculating the new limit for a retry read command.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1467289746-18177-1-git-send-email-duarte@scylladb.com>
2016-06-30 14:49:56 +02:00
Paweł Dziepak
b150720361 sstable: enable read ahead
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 13:18:24 +01:00
Paweł Dziepak
4513f8b52c sstables: add compressed_file_data_source_impl::close()
compressed_file_data_source_impl should close the underlying data source
properly when asked to.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 13:07:07 +01:00
Paweł Dziepak
55a6911d7a sstables: close input_stream<> properly
If read ahead is going to be enabled it is important to close
input_stream<> properly (and wait for completion) before destroying it.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:39:01 +01:00
Paweł Dziepak
e44e12c74a sstables: drop no longer needed code
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:39:01 +01:00
Paweł Dziepak
c2f0ee9b5f sstables: add consumer-style sstable compactor
This patch moves compaction logic to a consumer that can be used with
consume_flattened_in_thread(). Internally, sstable_writer is used to
write individual sstables.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:39:01 +01:00
Paweł Dziepak
18a9ee105f sstables: add consumer-style sstable writer
sstable_writer encapsulates all logic related to writing sstable.
Previously introduced component_writer is used to write actual
mutations. sstable_writer is intended to be used with
consume_flattened_in_thread(). Its purpose is to be used by higher-level
consumer that needs to write possibly more than one sstable (sstable
compaction is an example of such consumer).

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:39:01 +01:00
Paweł Dziepak
0e8b8463ba sstables: introduce consumer-style components writer
This patch rewrites do_write_components() so that it can use
consume_flattened_in_thread(). All components-writing code is moved to a
new consumer: component_writer.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:39:01 +01:00
Paweł Dziepak
0287e0c9ac mutation_reader: add consume_flattened_in_thread()
This is a version of consume_flattened() intended to be run inside a
thread. All consumer code is going to be invoked in the same thread
context.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:39:01 +01:00
Paweł Dziepak
7a95847014 mutation_compactor: prepare for sstable compaction
compact_mutation code is going to be shared among queries and sstable
compaction. There are some differences though. Queries don't provide
_max_purgeable, and sstable compaction doesn't need any limits.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:39:01 +01:00
Paweł Dziepak
00bcc05d36 mutation_compactor: _max_purgeable depends on the decorated key
_max_purgeable can be different for each partition, since it is computed
using sstables in which that partition is present (or likely to be
present).

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:39:01 +01:00
Paweł Dziepak
4133cc7a53 mutation_reader: make consume_flattened() produce decorated keys
Since decorated keys are already computed it is better to pass more
information than less. Consumers interested just in partition key can
just drop token and the ones requiring full decorated key don't need to
recompute it.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:39:00 +01:00
Paweł Dziepak
fe4b739828 mutation_compactor: rename compact_for_query to compact_mutation
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:37:54 +01:00
Paweł Dziepak
3e86f9ab73 mutation_partition: extract compact_for_query to a separate header
The compacting logic inside compact_for_query is going to be shared with
sstable compaction.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:37:54 +01:00
Paweł Dziepak
9b14c93677 streamed_mutation: return reference to decorated key
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:37:54 +01:00
Paweł Dziepak
3c08ffb275 query: add full_slice
query::full_slice is a partition slice which has full clustering row
ranges for all partition keys and no per-partition row limit.
Options and columns are not set.

It is used as a helper object in cases when a reference to
partition_slice is needed but the user code needs just all data there is
(an example of such case would be sstable compaction).

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:37:54 +01:00
Paweł Dziepak
599ed7f1ed sstables: restore indentation
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:37:54 +01:00
Paweł Dziepak
e7ff20b3bb sstables: run compaction code inside a thread
Currently, each sstable write has its separate thread. However, the goal
is to have compaction use consume_flattened() with a consumer that
creates and writes the sstables. consume_flattened() needs to be executed
inside a thread, since sstable writer may defer.

This patch is a first step in preparations and it just makes whole
compaction logic run inside a thread. That makes little sense now, since
all sstable writes spawn their own threads but that's going to change
in the following patches.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-30 11:37:54 +01:00
Duarte Nunes
0ae6eafadd query: Make partition_limit last parameter
The partition_limit should have been added to the end of the ctor
argument list, as its current placement causes some callers to pass it
the timestamp instead of the limit.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1467239360-6853-3-git-send-email-duarte@scylladb.com>
2016-06-30 12:31:11 +02:00
Gleb Natapov
8bf82cc31c put additional info into cql timeout exception
Fixes #1397

Message-Id: <20160628101829.GR14658@scylladb.com>
2016-06-30 12:03:48 +02:00
Paweł Dziepak
b70bf086b7 frozen_mutation: handle reversed streams properly
Freezing streamed_mutations assumed that mutation fragments are streamed
in the order they appear in the frozen mutation. That is not true for
reversed streams.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1467277069-18702-1-git-send-email-pdziepak@scylladb.com>
2016-06-30 11:26:45 +02:00
Avi Kivity
9ac730dcc9 mutation_reader: make restricting_mutation_reader even more restricting
While limiting the number of concurrently executing sstable readers reduces
our memory load, the queued readers, although consuming a small amount of
memory, can still grow without bounds.

To limit the damage, add two limits on the queue:
 - a timeout, which is equal to the read timeout
 - a queue length limit, which is equal to 2% of the shard memory divided
   by an estimate of the queued request size (1kb)

Together, these limits bound the amount of memory needed by queued disk
requests in case the disk can't keep up.
Message-Id: <1467206055-30769-1-git-send-email-avi@scylladb.com>
2016-06-29 15:17:35 +02:00
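The queue-length limit described in that message is simple arithmetic; a
sketch (the 4 GB shard size is a made-up example):

```python
def queue_length_limit(shard_memory_bytes, est_request_size=1024, fraction=0.02):
    # 2% of shard memory divided by the estimated size of a queued
    # request (1 KB) bounds how many reads may wait in the queue.
    return int(shard_memory_bytes * fraction) // est_request_size

print(queue_length_limit(4 * 1024**3))  # 83886 queued reads for a 4 GB shard
```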
Raphael S. Carvalho
85cb2a6d35 database: trigger compaction on boot
At the moment, we only trigger compaction after creating a new
sstable as a result of memtable flush, or some other event such
as changing compaction strategy of a column family.
However, it's important to trigger compaction on boot too.
That will happen after loading all column families.

Fixes #1404.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <54d38a418157454eec97aaba6b8a6b6e51484db4.1467135349.git.raphaelsc@scylladb.com>
2016-06-29 13:47:42 +03:00
Amnon Heiman
610fe274fd services: Make scylla-jmx service depend on scylla-server
scylla-jmx no longer shuts itself down. A better setup is for it to be
started when scylla-server starts and to shut down when scylla-server
shuts down.

This patch does the scylla-server part of the change.

The scylla-server definition Wants the scylla-jmx.service, so there
is no need to enable scylla-jmx.service explicitly.

A patch to scylla-jmx will make it shut down when scylla-server
shuts down.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1467184502-4358-1-git-send-email-amnon@scylladb.com>
2016-06-29 11:36:04 +03:00
Avi Kivity
2c4501f317 Merge seastar upstream
* seastar c15055c...d4d9e16 (4):
  > semaphore: switch to chunked_fifo
  > fair_queue: add missing include
  > chunked_fifo: implement back()
  > Chunked FIFO queue
2016-06-28 19:30:29 +03:00
Avi Kivity
1b448877d7 Merge " thrift: Implement CQL over thrift" from Duarte
"This patchset implements the CQL over thrift verbs. Only CQL3 is supported,
and the CQL2 verbs are disabled."
2016-06-28 13:36:12 +03:00
Piotr Jastrzebski
59d0d9e666 Fix cache_tracker::clear
Make sure that artificial entries for
all column families are set to non continuous.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <f9e517fe40482c05f6c388faab7d6b9eca6b159e.1467103548.git.piotr@scylladb.com>
2016-06-28 11:18:23 +02:00
Piotr Jastrzebski
27575a0528 Fix previous_entry_is_continuous
Rename it to check_previous_entry.
Remove unnecessary test.
Make sure ring_position always has a working relation_to_keys method.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <6bc790d492ba9b5c302a50218f3e26b924f657d0.1467101754.git.piotr@scylladb.com>
2016-06-28 10:27:08 +02:00
Piotr Jastrzebski
68e5a199e9 Clean continuous flag of cache entry
preceding an invalidated decorated key even
when it's not found.

Add test.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <c7b8f4df37256363bf304e0396f84b5f37921b81.1467059472.git.piotr@scylladb.com>
2016-06-28 10:26:02 +02:00
Piotr Jastrzebski
cd9f3f94c4 Fix row_cache::update
Clear continuous flag on the last cache entry
with key smaller than a partition being dropped from
memtable on flush and not saved in cache.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <0b5293cc0bf8bb858e62aa8dd00ae7fe7a484380.1467059472.git.piotr@scylladb.com>
2016-06-28 10:25:38 +02:00
Piotr Jastrzebski
eb959a8b81 Change check for artificial entry
in cache_entry destructor from
_key.has_key() to _lru_link.is_linked()

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <f6d3d1bc49d9f6dd5b67a10cbe862466047b039d.1467059472.git.piotr@scylladb.com>
2016-06-28 10:24:29 +02:00
Nadav Har'El
164c760324 Switch compression chunk default from 64 KB to 4 KB
Following Cassandra, our default sstable compression chunk size is 64 KB.
The big downside of this default size is that small reads need to read
and uncompress a large chunk, around 32 KB (if compression halves the data
size). In this patch we switch the default chunk size to 4 KB, which allows
faster small reads (the report in issue #1337 was of a 60-fold speedup...).

Since commit 2f56577, large reads will not be significantly slowed down by
changing to a small chunk size. The remaining potential downside of this
change is lowering of the compression ratio because of the smaller chunks
individually compressed. However, experimentation shows that the compression
ratio is hurt somewhat, but not dramatically, by lowering the chunk size:
A recent survey of Cassandra compression in
https://www.percona.com/blog/2016/03/09/evaluating-database-compression-methods/
reports a compression ratio of 2 for 64 KB chunks, vs. 1.75 for 4 KB chunks.
My own test on a cassandra-stress workload (whose data is relatively hard
to compress), showed compression ratio 1.25 for 64 KB chunk, vs. 1.23 for
4 KB chunks.

Also remember that if a user wants to control the chunk length for a
particular table, they can; the 64 KB or 4 KB sizes are just defaults.

Fixes #1337

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1467063335-12096-1-git-send-email-nyh@scylladb.com>
2016-06-28 08:50:24 +03:00
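The small-read cost the message describes is easy to quantify; a sketch
assuming, as the message does, that compression halves the data:

```python
def bytes_decompressed_per_point_read(chunk_size, compression_ratio=2.0):
    # A point read must fetch and decompress one whole chunk; on disk
    # that chunk occupies chunk_size / compression_ratio bytes.
    return chunk_size / compression_ratio

print(bytes_decompressed_per_point_read(64 * 1024))  # 32768.0 -> ~32 KB per read
print(bytes_decompressed_per_point_read(4 * 1024))   # 2048.0  -> ~2 KB per read
```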
Tomasz Grabiec
6108d91362 scylla-gdb: Introduce scylla ptr
Helps in identifying pointers allocated through the seastar
allocator. Shows which thread the pointer belongs to, which size
class, whether it's live or free, and the offset relative to the
live object.

Example:

  (gdb) scylla ptr 0x6040abe88170
  thread 1, small (size <= 320), live (0x6040abe88140 +48)

Message-Id: <1467047215-1763-1-git-send-email-tgrabiec@scylladb.com>
2016-06-27 20:11:56 +03:00
Avi Kivity
22ec25b1b3 Merge seastar upstream
* seastar 3029ebe...c15055c (5):
  > memory: add option to mlock() all memory
  > reactor: run idle poll handler with a pure poll function
  > ignore all but one failed futures in map_reduce
  > tutorial: more general exception printout on startup
  > resource: don't abort on too-high io queue count

Fixes #1395.
Fixes #1400.
2016-06-27 19:24:04 +03:00
Tomasz Grabiec
85a37cb379 Merge tag '1398/v3' from https://github.com/avikivity/scylla
From Avi:

Both the cql binary transport and the rpc server have protection against
too many concurrent requests overwhelming the database due to transient
allocations.  They work by estimating the amount of memory a request
requires, and accounting that against a semaphore.  When the semaphore
blocks, we stop dequeuing requests from the TCP connection.

Unfortunately, this doesn't work for reads, because we can't estimate the
required memory size.  A small read request can require many sstables to be
read, perhaps concurrently, and a large response to be generated.

Fix by limiting the number of concurrent reads in a shard to 100.  This
is more than enough concurrency for any reasonable disk, and there is no
network communication at this level, so we're safe from high network
latency requiring high concurrency.

Fixes #1398.
2016-06-27 18:04:33 +02:00
Avi Kivity
f03cd6e913 db: add statistics about queued reads 2016-06-27 17:25:08 +03:00
Avi Kivity
edeef03b34 db: restrict replica read concurrency
Since reading mutations can consume a large amount of memory, which, moreover,
is not predictable at the time the read is initiated, restrict the number
of reads to 100 per shard.  This is more than enough to saturate the disk,
and hopefully enough to prevent allocation failures.

Restriction is applied in column_family::make_sstable_reader(), which is
called either on a cache miss or if the cache is disabled.  This allows
cached reads to proceed without restriction, since their memory usage is
supposedly low.

Reads from the system keyspace use a separate semaphore, to prevent
user reads from blocking system reads.  Perhaps we should select the
semaphore based on the source of the read rather than the keyspace,
but for now using the keyspace is sufficient.
2016-06-27 17:17:56 +03:00
Avi Kivity
bea7d7ee94 mutation_reader: introduce restricting_reader
A restricting_reader wraps a mutation_reader, and restricts its concurrency
using a provided semaphore; this allows controlling read concurrency, which
is important since reads can consume a lot of resources ((number of
participating sstables) * 128k after we have streaming mutations, and a lot
more before).
2016-06-27 17:17:52 +03:00
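The semaphore-based restriction can be sketched with asyncio as a stand-in
for the seastar semaphore; the limit of 3 and the fake read() are purely
illustrative (the real limit is 100 reads per shard):

```python
import asyncio

async def restricted_read(sem, state, read):
    # Wait for a semaphore unit before reading: at most the semaphore's
    # capacity of reads run concurrently, the rest queue up.
    async with sem:
        state["running"] += 1
        state["peak"] = max(state["peak"], state["running"])
        result = await read()
        state["running"] -= 1
        return result

async def main():
    sem = asyncio.Semaphore(3)
    state = {"running": 0, "peak": 0}

    async def read():
        await asyncio.sleep(0)  # pretend to hit the disk
        return "row"

    await asyncio.gather(*[restricted_read(sem, state, read) for _ in range(10)])
    return state["peak"]

print(asyncio.run(main()))  # peak concurrency never exceeds 3
```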
Duarte Nunes
d31b52a07b thrift: Disable CQL2 verbs
And make set_cql_version a no-op.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:39:33 +02:00
Duarte Nunes
60094f4033 thrift: Implement execute_prepared_cql3_query verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:39:28 +02:00
Duarte Nunes
96068084ca thrift: Implement prepare_cql3_query verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:39:22 +02:00
Duarte Nunes
c8afb4cc46 query_processor: Support thrift prepared statements
This patch adds support for thrift prepared statements. It specializes
the result_message::prepared into two types:
result_message::prepared::cql and result_message::prepared::thrift, as
their identifiers have different types.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:39:02 +02:00
Paweł Dziepak
1addbb9c1d thrift: implement execute_cql3_query
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:38:52 +02:00
Duarte Nunes
2e7cb32601 query_options: Adjust value_views after prepare()
query_options::prepare() changes the values array, but this is not the
one used by query_options internally (e.g., in get_value_at). So we
need to also recalculate the value_views after prepare() is called.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:24:27 +02:00
Duarte Nunes
2683a49c69 query_options: Remove value_views arg from ctor
Having both the values and value_views arguments in the query_options
ctor is confusing, since query_options uses only the value_views field
but that is not communicated to the caller.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:24:27 +02:00
Duarte Nunes
62cfc4ab55 thrift: Add with_exn_cob helper function
Similarly to the with_cob functions, this one takes the exn_cob
function and ensures it is called in case of an exception. This
is useful when the return type of the thrift verb is not nothrow
move constructible; by holding on to the cob inside the verb and
calling it directly when we have the result we avoid having to
wrap it in a smart pointer.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:24:27 +02:00
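The idea above can be sketched as follows. This is a minimal stand-in, not Scylla's actual helper: the real version wraps Thrift's generated cob/exn_cob callback types, and the signatures here are assumptions.

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <string>

// Run the verb body; if it throws, route the exception to exn_cob instead
// of letting it propagate. The result can then be handed to the success cob
// directly inside func, avoiding a move through a future/smart pointer.
template <typename ExnCob, typename Func>
void with_exn_cob(ExnCob&& exn_cob, Func&& func) {
    try {
        func();
    } catch (...) {
        exn_cob(std::current_exception());
    }
}
```

Usage: the verb implementation captures the success cob inside `func` and calls it when the result is ready; only the failure path goes through `exn_cob`.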
Duarte Nunes
b74ee6fdea thrift: Add consistency level conversion
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:24:27 +02:00
Paweł Dziepak
0c441378f2 client_state: support thrift clients
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:24:27 +02:00
Paweł Dziepak
002d2bc353 thrift: pass query_processor to the thrift handler
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:24:27 +02:00
Duarte Nunes
225c5be78e thrift: Add query_state to thrift_handler
This patch adds a query_state object to the thrift handler,
as it is required for CQL3 operations.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-27 15:24:27 +02:00
Avi Kivity
f96e5d7c1b managed_bytes: fix build with gcc 6
gcc 6 complains that deleting a managed_bytes::external isn't defined
because the size isn't known.  I'm not sure it's correct, but there's no
way to tell because flexible arrays aren't standardized.

Fix by using an array of zero size.
Message-Id: <1466715187-4125-1-git-send-email-avi@scylladb.com>
2016-06-27 10:54:10 +02:00
Avi Kivity
056b427855 range_tombstone_list: use non-template lambda for cloning tombstones
Using a template lambda invokes a bug in Fedora 24's boost where the
lambda's parameter is an internal boost type rather than a range_tombstone.

Constraining the parameter with an explicit type avoids the problem.
Message-Id: <1466844211-17298-1-git-send-email-avi@scylladb.com>
2016-06-27 10:48:59 +02:00
Amnon Heiman
a439a6b8d3 API: Add the collectd enable/disable implementation
This adds the implementation of enabling and disabling collectd
metrics.

An example of disabling all collectd metrics that have write in their
type_instance part:

curl -X POST --header "Content-Type: application/json" --header "Accept:
application/json"
"http://localhost:10000/collectd/.*?instance=.*&type=.*&type_instance=.*write.*&enable=false"

After that a call to:
curl -X GET "http://localhost:10000/collectd/"

Would return those metrics with the enable flag set to "false"

An example of enabling all the metrics in cache whose type starts
with byt:

curl -X POST --header "Content-Type: application/json" --header "Accept:
application/json"
"http://localhost:10000/collectd/cache?type=byt.*&enable=true"

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1466932139-19264-3-git-send-email-amnon@scylladb.com>
2016-06-26 12:26:50 +03:00
Amnon Heiman
4d7837af40 API Definition: collectd to support enable disable
This adds to the definition of the collectd API the ability to turn on
and off specific collectd metrics.

For the GET endpoint a POST option was added that allows enabling or
disabling a metric.

The general GET endpoint now returns the enable flag that indicates
whether the metric is enabled.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1466932139-19264-2-git-send-email-amnon@scylladb.com>
2016-06-26 12:26:48 +03:00
Duarte Nunes
dfbf68cd24 commitlog: Define operator<< in namespace db
Needed for compilation with gcc6.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1466852874-8448-1-git-send-email-duarte@scylladb.com>
2016-06-26 10:05:28 +03:00
Avi Kivity
5b81448ed6 main: add scylla --version option
Fixes #1384.
Message-Id: <1466691517-29964-1-git-send-email-avi@scylladb.com>
2016-06-23 16:24:03 +02:00
Duarte Nunes
1ffae6e6ee database_test: Add test case for row limit
This patch introduces database_test and adds a test case to ensure
the row limit is respected when querying multiple partition ranges.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20160623111723.17523-1-duarte@scylladb.com>
2016-06-23 14:20:34 +02:00
Avi Kivity
e647ec1c4a Merge "thrift: Implement describe verbs" from Duarte
"This patchset implements the thrift describe verbs:
    - describe_keyspace
    - describe_keyspaces
    - describe_cluster_name
    - describe_version
    - describe_ring
    - describe_local_ring
    - describe_token_map
    - describe_partitioner
    - describe_snitch
    - describe_schema_versions

The verbs describe_splits and describe_splits_ex are not implemented
because they are marked as experimental (Origin's thrift interface has
this to say about them: "experimental API for hadoop/parallel query
support. may change violently and without warning."). Some drivers have
moved away from depending on this verb (SPARKC-94). The correct way to
implement the verbs for us would be to use the size_estimates system table
(CASSANDRA-7688). However, we currently don't populate size_estimates, which
is done by SizeEstimatesRecorder.java in Origin."
2016-06-23 13:30:39 +03:00
Duarte Nunes
b291c22e39 thrift: Complete describe_keyspace verb
This patch completes the describe_keyspace verb by setting the
remaining fields.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 12:02:47 +02:00
Duarte Nunes
febc48166d thrift: Type name is already based on Origin
This patch removes a conversion function from an internal type
name to Origin's naming, which isn't needed because the
abstract_type hierarchy already keeps that mapping.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 12:02:47 +02:00
Duarte Nunes
8b00fe3989 thrift: Add explanatory note about describe_splits
We don't implement describe_splits, and this patch explains why. In a
nutshell, to properly implement this, we would need something
like Origin's SizeEstimatesRecorder.java, but as the verb is marked as
experimental, we don't do it for now.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 12:02:46 +02:00
Duarte Nunes
b175204cfe thrift: Implement describe_snitch verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:53:47 +02:00
Duarte Nunes
9e6ab878d6 thrift: Implement describe_partitioner verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:53:47 +02:00
Duarte Nunes
358b03c409 thrift: Implement describe_token_map verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:53:47 +02:00
Duarte Nunes
1ea7102d9f thrift: Implement describe_ring verbs
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:53:42 +02:00
Duarte Nunes
8377264226 thrift: Implement describe_version verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:53:10 +02:00
Duarte Nunes
8370450dcb thrift: Implement describe_cluster_name verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:53:09 +02:00
Duarte Nunes
2a898743c6 thrift: Implement describe_schema_versions verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:53:07 +02:00
Pekka Enberg
c464e85a3f Merge "thrift: Implement DDL verbs" from Duarte
"This patchset implements the thrift DDL verbs:
    - system_add_column_family
    - system_drop_column_family
    - system_update_column_family
    - system_add_keyspace
    - system_drop_keyspace
    - system_update_keyspace"
2016-06-23 12:46:58 +03:00
Duarte Nunes
3c02af083c thrift: Implement system_update_keyspace verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:41:29 +02:00
Duarte Nunes
aa16c303ca thrift: Implement system_drop_keyspace verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:41:29 +02:00
Duarte Nunes
8ff3fbe916 thrift: Implement system_drop_column_family verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:41:29 +02:00
Duarte Nunes
f6fab027c6 thrift: Implement system_update_column_family verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:41:28 +02:00
Duarte Nunes
de46653036 thrift: Implement system_add_column_family verb
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:41:28 +02:00
Duarte Nunes
25a8ffb09a thrift: Extract keyspace_from_thrift function
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:41:28 +02:00
Duarte Nunes
74cb796de7 thrift: Extract schema_from_thrift function
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:41:28 +02:00
Duarte Nunes
9d85ea6304 thrift: Complete system_add_keyspace verb
This patch completes the system_add_keyspace verb by setting all
relevant options on the new schemas.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:41:28 +02:00
Duarte Nunes
05f7a6d63e thrift: Add basic support for dynamic CF
In thrift, a static column family is one where all columns are
defined upon schema creation. It maps to a CQL table with a singular
partition key and a set of regular columns.

On the other hand, a dynamic column family is one which allows column
to be dynamically added by insertion requests. It maps to a CQL table
with a partition key and a clustering key, which will hold the names of
the dynamic columns, and a regular column, which will hold the respective
values. If the thrift comparator type is composite, then there will be a
clustering column for each of the composite's components.

There can also be mixed column families; supporting those is future
work.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:41:28 +02:00
Duarte Nunes
49b8bff21c thrift: Extract make_exception to common header
This patch moves the make_exception function from thrift/handler.cc to
the new header file thrift/utils.hh.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-23 11:40:52 +02:00
Benoît Canet
e37b18b231 scylla_ntp_setup: Define an ntp server on ubuntu if there is none
The pool directive from ntp.conf is not recognized by ntpdate.
Strip it and put the ubuntu server in place.

Fixes: #1345

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466607457-14029-1-git-send-email-benoit@scylladb.com>
2016-06-23 12:40:13 +03:00
Benoît Canet
8b6bb0251d README.md: Fix markdown formatting
I suspect wrong formatting causes us trouble in the
docker hub descriptions.

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466603787-13423-1-git-send-email-benoit@scylladb.com>
2016-06-23 12:39:04 +03:00
Duarte Nunes
aacc7193f2 schema: Replace keyspace's schema_ptr on CF update
This patch ensures we replace the schema_ptr held by its respective
keyspace object when a column family is being updated.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20160623085710.26168-1-duarte@scylladb.com>
2016-06-23 11:11:52 +02:00
Tzach Livyatan
3fa7bb1292 scylla_setup: Ignore case in prompt responses
Fix #1376 by converting each response to lowercase.

Signed-off-by: Tzach Livyatan <tzach@scylladb.com>
Message-Id: <1466672539-5625-1-git-send-email-tzach@scylladb.com>
2016-06-23 12:08:26 +03:00
Glauber Costa
e08fa7dafa fix potential stale data in cache update
We currently have a problem in update_cache that can be triggered by ordering
issues related to memtable flush termination (not initiation) and/or
update_cache() call duration.

That issue is described in #1364 and, in short, happens if a call to
update_cache starts before an ongoing call finishes. There is then a new SSTable
that should be consulted by the presence checker but is not.

The partition checker operates on a stale list because we need to make sure the
SSTable we just wrote is excluded from it.  This patch changes the partition
checker so that all SSTables currently in use are consulted, except for the one
we have just flushed. That provides both the guarantee that we won't check our
own SSTable and access to the most up-to-date SSTable list.

Fixes #1364

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <fa1cee672bba8e21725c6847353552791225295f.1466534499.git.glauber@scylladb.com>
2016-06-23 10:54:44 +02:00
Pekka Enberg
bcba45f546 Merge "Prevent old node to join new cluster" from Asias
Fixes #1253
2016-06-23 10:25:38 +03:00
Piotr Jastrzebski
9b011bff18 row_cache: add contiguity flag to cache entry to reduce disk IO during scans
Add contiguity flag to cache entry and set it in scanning reader.
Partitions fetched during scanning are continuous
and we know there's nothing between them.

Clear contiguity flag on cache entries
when the succeeding entry is removed.

Use continuous flag in range queries.
Don't go to disk if we know that there's nothing
between two entries we have in cache. We know that
when continuous flag of the first one is set to true.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <72bae432717037e95d1ac9465deaccfa7c7da707.1466627603.git.piotr@scylladb.com>
2016-06-23 09:43:15 +03:00
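A toy model of the contiguity flag described above, with illustrative names (the real entries live in an intrusive tree inside row_cache and keys are partition tokens, not ints):

```cpp
#include <cassert>
#include <map>
#include <string>

// Each cache entry remembers whether the gap to its successor is known empty.
struct cache_entry {
    std::string value;
    bool continuous = false;  // true: nothing exists between this entry and the next
};

using row_cache = std::map<int, cache_entry>;

// Count the disk reads a scan of [start, end) would need: one per cached
// entry in the range whose gap to its successor is not known to be empty.
int disk_reads_for_scan(const row_cache& cache, int start, int end) {
    int reads = 0;
    for (auto it = cache.lower_bound(start); it != cache.end() && it->first < end; ++it) {
        if (!it->second.continuous) {
            ++reads;
        }
    }
    return reads;
}
```

A scanning read sets the flag on every entry it fetches, so repeating the same range scan hits only the cache; removing an entry clears the flag on its predecessor.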
Avi Kivity
5af22f6cb1 main: handle exceptions during startup
If we don't, std::terminate() causes a core dump, even though an
exception is sort-of-expected here and can be handled.

Add an exception handler to fix.

Fixes #1379.
Message-Id: <1466595221-20358-1-git-send-email-avi@scylladb.com>
2016-06-23 09:25:33 +03:00
Avi Kivity
a192c80377 gdb: fully-qualify type names
gdb gets confused if a non-fully-qualified class name is used when
we are in some namespace context.  Help it out by adding a :: prefix.
Message-Id: <1466587895-8690-1-git-send-email-avi@scylladb.com>
2016-06-22 12:04:17 +02:00
Avi Kivity
9dacd4fb80 Merge "query: Add new limits" from Duarte
This patchset adds two new types of query limits:

 - Per partition row limit, which limits how many rows
   a given partition may return; needed both for thrift
   and for future CQL features;
 - Limit on the number of partitions returned, needed
   by thrift.
2016-06-22 11:03:13 +03:00
Duarte Nunes
82dbf5bff3 storage_proxy: Trace when retrying a query
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-22 09:48:15 +02:00
Duarte Nunes
69798df95e query: Limit number of partitions returned
This is required to implement a thrift verb.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-22 09:48:13 +02:00
Duarte Nunes
594e43a60a compact_query: Rename partition_limit
This patch renames compact_query::_partition_limit to
_current_partition_limit for clarity, as the next patch adds
a partition limit that limits the number of partitions.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-22 09:47:29 +02:00
Duarte Nunes
e9ebd87991 compact_query: Rename limit to row_limit
This patch renames compact_query::_limit to _row_limit for
clarity, as a subsequent patch introduces yet another limit.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-22 09:47:28 +02:00
Duarte Nunes
01b18063ea query: Add per-partition row limit
This patch adds a per-partition row limit. It ensures both local
queries and the reconciliation logic abide by this limit.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-22 09:46:51 +02:00
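The interplay of the three limits in this patchset (global row limit, per-partition row limit, partition-count limit) can be sketched as follows; `apply_limits` is a hypothetical helper, not Scylla's API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct limits {
    std::size_t row_limit;                // total rows across all partitions
    std::size_t per_partition_row_limit;  // rows returned from any one partition
    std::size_t partition_limit;          // number of partitions returned
};

// Input: the row count of each partition, in ring order.
// Output: the rows actually emitted for each partition in the result.
std::vector<std::size_t> apply_limits(const std::vector<std::size_t>& partitions, limits l) {
    std::vector<std::size_t> out;
    std::size_t rows_left = l.row_limit;
    for (std::size_t rows : partitions) {
        if (out.size() == l.partition_limit || rows_left == 0) {
            break;  // partition-count or global row budget exhausted
        }
        std::size_t take = std::min({rows, l.per_partition_row_limit, rows_left});
        out.push_back(take);
        rows_left -= take;
    }
    return out;
}
```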
Duarte Nunes
20d9813a89 storage_proxy: Fetch last replica row just in time
This patch changes the way we fetch each replica's last row to
determine if we got incomplete information from any of them. Instead
of fetching the last rows up front, we fetch them on demand only if we
actually trigger the code that needs them. We now get the last row from
the versions vector of vectors.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-22 00:15:38 +02:00
Duarte Nunes
4ce9fc24cb storage_proxy: Extract finding last row
This patch extracts to a function the code that actually determines
the last row of a partition based on the direction of the query.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-22 00:15:38 +02:00
Takuya ASADA
73ba4ac337 dist: drop sudoers.d from .rpm, since systemd moved to PermissionsStartOnly
Since systemd moved to PermissionsStartOnly, only upstart uses sudoers.
So move common/sudoers.d to dist/ubuntu and drop them from the .rpm.
Also, Ubuntu 15.10/16.04 does not require sudoers since those releases use systemd,
so copy sudoers only for 14.04.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1466536491-9860-1-git-send-email-syuu@scylladb.com>
2016-06-21 22:59:18 +03:00
Glauber Costa
4e81f19ab5 LSA: fix typo in region merge
There are many potentially tricky things about referring to different regions
from the LSA perspective. Madness, however, is not one of them. I can only
assume we meant made?

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <8eb81f35de4b208a494e43cb392eea07b87b2bf1.1466534798.git.glauber@scylladb.com>
2016-06-21 22:58:44 +03:00
Benoît Canet
8e4dee0bd1 scylla_setup: Hide /dev/loop*
The user probably doesn't want to use
/dev/loop* as RAID devices.

Fixes: #1259

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466520602-7888-1-git-send-email-benoit@scylladb.com>
2016-06-21 19:27:40 +03:00
Tzach Livyatan
27b99f47e8 scylla_setup: improve the wording of disk setup phase.
Fix #1197 by adding XFS related info to the interactive prompt

Signed-off-by: Tzach Livyatan <tzach@scylladb.com>
Message-Id: <1466504625-28926-1-git-send-email-tzach@scylladb.com>
2016-06-21 19:26:31 +03:00
Avi Kivity
96ebc4e7b5 Merge seastar upstream
* seastar 401c333...3029ebe (3):
  > util: add a seastar::value_of() helper function
  > rpc: force closing listen fd on server stop
  > reactor: fix I/O priority class id assignment
2016-06-21 15:11:26 +03:00
Tomasz Grabiec
597cbbdedc Merge branch 'pdziepak/streamed-mutations/v5' from seastar-dev.git
From Paweł:

This series introduces streamed_mutations, which allow mutations to be
streamed between the producers and the consumers as a series of
mutation_fragments. Because of that the mutation streaming interface
works well with partitions larger than available memory provided that
actual producer and consumer implementations can support this as well.

mutation_fragments are the basic objects that are emitted by
streamed_mutations; they can represent a static row, a clustering row,
the beginning and the end of a range tombstone. They are ordered by their
clustering keys (with static rows being always the first emitted mutation
fragment). The beginning of a range tombstone is emitted before any
clustering row affected by that tombstone, and the end of the range tombstone
is emitted after the last clustering row affected by it. Range tombstones
are disjoint.

In this series all producers are converted to fully support the new
interface, that includes cache, memtables and sstables. Mutation queries
and data queries are the only consumers converted so far.

To minimize the per-mutation_fragment overhead streamed_mutations use
batching. The actual producer implementation fills a buffer until
it is full (currently, buffer size is 16, the limit should, however,
be changed to depend on the actual size in memory of the stored elements)
or end of stream is reached.

In order to guarantee isolation of writes reads from cache and memtable
use MVCC. When a reader is created it takes a snapshot of the particular
cache or memtable entry. The snapshot is immutable and if there happen
to be any incoming writes while the read is active, a new version of the
partition is created. When the snapshot is destroyed, partition versions
are merged together as much as possible.

Performance results with perf_simple_query (median of results with
duration 15):

         before        after          diff
write    618652.70     618047.58      -0.10%
read     661712.44     608070.49      -8.11%
2016-06-21 12:15:21 +02:00
Pekka Enberg
11dd20d640 Revert "ami: Change type from EBS to Instance"
This reverts commit 2d7f8f4a47.

Avi sayeth:

"Isn't this the other way round?  EBS is persistent."

and

"The patch is wrong too.  Instance store takes 5 minutes to boot
compared to 1 minute for EBS."
2016-06-21 12:41:30 +03:00
Tomasz Grabiec
e783b58e3b Merge branch 'glommer/LSA-throttler-v6' from git@github.com:glommer/scylla.gi
From Glauber:

This is my new take at the "Move throttler to the LSA" series, except
this one don't actually move anything anywhere: I am leaving all
memtable conversion out, and instead I am sending just the LSA bits +
LSA active reclaim. This should help us see where we are going, and
then we can discuss all memtable changes in a series on its own,
logically separated (and hopefully already integrated with virtual
dirty).

[tgrabiec: trivial merge conflicts in logalloc.cc]
2016-06-21 10:22:26 +02:00
Calle Wilund
2b812a392a commitlog_replayer: Fix calculation of global min pos per shard
If a CF does not have any sstables at all, we should treat it
as having a replay position of zero. However, since we also
must deal with potential re-sharding, we cannot just set
shard->uuid->zero initially, because we don't know what shards
existed.

Go through all CF:s post map-reduce, and for every shard where
a CF does not have an RP-mapping (no sstables found), set the
global min pos (for shard) to zero.

Fixes #1372

Message-Id: <1465991864-4211-1-git-send-email-calle@scylladb.com>
2016-06-21 10:05:05 +03:00
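A minimal model of the zero-filling step above, with shard and CF ids reduced to ints and illustrative names:

```cpp
#include <cassert>
#include <map>
#include <set>

// shard -> (cf -> minimum replay position found via map-reduce)
using replay_positions = std::map<int, std::map<int, long>>;

// After the map-reduce pass, every (shard, cf) pair with no mapping
// (no sstables found) gets an explicit zero, so such a CF is replayed
// from the start of the commitlog on that shard.
void fill_missing_with_zero(replay_positions& rp, const std::set<int>& all_cfs) {
    for (auto& [shard, per_cf] : rp) {
        (void)shard;
        for (int cf : all_cfs) {
            per_cf.emplace(cf, 0);  // no-op if a mapping already exists
        }
    }
}
```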
Benoît Canet
2d7f8f4a47 ami: Change type from EBS to Instance
Instance types do not have ephemeral drives that disappear on reboot.

Fixes #1229

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466443232-5898-1-git-send-email-benoit@scylladb.com>
2016-06-21 09:56:26 +03:00
Calle Wilund
88ffe60138 batchlog_manager: Change replay mutation CL to ALL
Try to emulate the Origin behaviour for batch replay. They use an
explicit write handler, combining

1.) Hinting to all known dead endpoints
2.) Sending to all presumed live, requiring ack from all
3.) Hinting to endpoints to which the send failed.

We don't have hints, so try to work around by doing send with
cl=ALL, and if send fails (wholly or partially), retain the
batch in the log.

This is still a slight behavioural difference, and we also risk
filling up the batch log in extreme cases. (Though probably not
in any real environment).

Refs #1222

Message-Id: <1466444170-23797-1-git-send-email-calle@scylladb.com>
2016-06-21 09:41:09 +03:00
Glauber Costa
7f29cb8aba tests: add logalloc tests for pressure notification
Tests to make sure various scenarios of pressure notification for active
asynchronous reclaim work.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-20 18:58:39 -04:00
Glauber Costa
8f5047fc5f tests: add tests to new region_group throttle interface
Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-20 18:51:00 -04:00
Glauber Costa
579d121db8 LSA: export largest region
We now keep the regions sorted by size, and the children region groups as well.
Internally, the LSA has all information it needs to make size-based reclaim
decisions. However, we don't do reclaim internally, but rather warn our user
that a pressure situation is mounting.

The user of a region_group doesn't need to evict the largest region in case of
pressure and is free to do whatever it chooses - including nothing. But more
likely than not, taking into account which region is the largest makes sense.

This patch puts together this last missing piece of the puzzle, and exports the
information we have internally to the user.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-20 18:51:00 -04:00
Glauber Costa
35f8a2ce2c LSA: add a backpointer to the region from its private data
Region is implemented using the pimpl pattern (region_impl), and all its
relevant data is present in a private structure instead of the region itself.

That private structure is the one that the other parts of the LSA will refer to,
the region_group being the prime example. To allow classes such as the
region_group the externally export a particular region, we will introduce a
backpointer region_impl -> region.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-20 18:50:59 -04:00
Glauber Costa
38a402307d LSA: enhance region_group reclaimer
We are currently just allowing the region_group to specify a throttle_threshold,
that triggers throttling when a certain amount of memory is reached. We would
like to notify the callers that such condition is reached, so that the callers
can do something to alleviate it - like triggering flushes of their structures.

The approach we are taking here is to pass a reclaimer instance. Any user of a
region_group can specialize its methods start_reclaiming and stop_reclaiming
that will be called when the region_group becomes under pressure or ceases to
be, respectively.

Now that we have such facility, it makes more sense to move the
throttle_threshold here than having it separately.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-20 18:50:59 -04:00
Glauber Costa
6404028c6a LSA: move subgroups to a heap as well
When we decide to evict from a specific region_group due to excessive memory
usage, we must also consider looking at each of their children (subgroups). It
could very well be that most of memory is used by one of the subgroups, and
we'll have to evict from there.

We also want to make sure we are evicting from the biggest region of all, and
not the biggest region in the biggest region_group. To understand why this is
important, consider the case in which the regions are memtables associated with
dirty region groups. It could be that a very big memtable was recently flushed,
and a fairly small one took its place. That region group is still quite large
because the memtable hasn't finished flushing yet, but that doesn't mean we
should evict from it.

To allow us to efficiently pick which region is the largest, each root of each
subtree will keep track of its maximal score, defined as the maximum between our
largest region total_space and the maximum maximal score of subtrees.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-20 18:50:13 -04:00
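The maximal-score computation described above can be sketched as follows. The real code caches the score at each subtree root and updates it incrementally; this toy version, with illustrative names, simply recomputes it recursively:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct group {
    std::vector<std::size_t> region_sizes;  // total_space of regions directly in this group
    std::vector<group*> children;           // subgroups

    // max(largest own region, max maximal score over subtrees), so the
    // root always knows where the biggest region in the whole tree is.
    std::size_t maximal_score() const {
        std::size_t score = 0;
        if (!region_sizes.empty()) {
            score = *std::max_element(region_sizes.begin(), region_sizes.end());
        }
        for (const group* c : children) {
            score = std::max(score, c->maximal_score());
        }
        return score;
    }
};
```

This is what lets eviction pick the largest region overall rather than the largest region of the largest group.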
Glauber Costa
e1eab5c845 LSA: store regions in a heap for regions_group
Currently, the regions in a region group are organized in a simple vector.
We can do better by using a binomial heap, as we do for segments, and then
updating when there is change. Internally to the LSA, we are in good position
to always know when change happens, so that's really the best way to do it.

The end game here, is to easily call for the reclaim of the largest offending
region (potentially asynchronously). Because of that, we aren't really interested
in the region occupancy, but in the region reclaimable occupancy instead: that's
simply equal to the occupancy if the region is reclaimable, and 0 otherwise. Doing
that effectively lists all non-reclaimable regions at the end of the heap, in no
particular order.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-20 18:50:13 -04:00
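The ordering key can be illustrated as follows. The real code keeps a boost binomial heap that is updated whenever a region changes; here a plain scan shows only the key, with names that are assumptions:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct region {
    std::size_t occupancy;
    bool reclaimable;
    // Non-reclaimable regions count as 0, so they sink to the bottom of
    // the heap in no particular order.
    std::size_t reclaimable_occupancy() const {
        return reclaimable ? occupancy : 0;
    }
};

// The region the heap would surface first: largest reclaimable occupancy.
const region* best_eviction_candidate(const std::vector<region>& regions) {
    return &*std::max_element(regions.begin(), regions.end(),
        [](const region& a, const region& b) {
            return a.reclaimable_occupancy() < b.reclaimable_occupancy();
        });
}
```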
Glauber Costa
54d4d46cf7 LSA: move throttling code to LSA.
The database code uses a throttling function to make sure that memory
used for the dirty region never is over the limit. We track that with
a region group, so it makes sense to move this as generic functionality
into LSA.

This patch implements the LSA-side functionality and a later patch will
convert the current memtable throttler to use it.

Unlike the current throttling mechanism, we'll not use a timer-based
mechanism here. Aside from being more generic and friendlier towards
other users, this is a good change for current memtable by itself.

The constants chosen by the current throttler (10ms and 1MB) are arbitrary, and we
would be better off without them. Let's discuss the merits of each separately:

1) 10ms timer: If we are throttling, we expect somebody to flush the memtables
for memory to be released. Since we are in position to know exactly when a memtable
was written, thus releasing memory, we can just call unthrottle at that point, instead
of using a timer.

2) 1MB release threshold: we do that because we have no idea how much memory a request
will use, so we had to draw the line somewhere. However, because of 1) we don't call unthrottle
through a timer anymore, and do it directly instead. This means that we can just execute
the request and see how much memory it has used, with no need to guess. So we'll call
unthrottle at the end of every request that was previously throttled.

Writing the code this way also has the advantage that we need one less continuation in
the common case of the database not being throttled.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-20 18:34:19 -04:00
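A single-threaded model of the timer-less throttler described above, with seastar futures replaced by plain callbacks and all names illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <queue>
#include <utility>

struct region_group {
    std::size_t used = 0;
    std::size_t threshold;
    std::queue<std::function<void()>> waiting;

    explicit region_group(std::size_t t) : threshold(t) {}

    // Run fn now if under the threshold (and nothing is queued ahead of
    // it), otherwise queue it. There is no 10ms polling timer.
    void run_when_possible(std::function<void()> fn) {
        if (used < threshold && waiting.empty()) {
            fn();
        } else {
            waiting.push(std::move(fn));
        }
    }

    void charge(std::size_t bytes) { used += bytes; }

    // Called when a memtable flush releases memory: this is the only
    // unthrottle point, and there is no fixed release threshold; we just
    // run queued requests and observe how much memory each actually uses.
    void release(std::size_t bytes) {
        used -= bytes;
        while (!waiting.empty() && used < threshold) {
            auto fn = std::move(waiting.front());
            waiting.pop();
            fn();  // the request itself may charge() more memory
        }
    }
};
```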
Paweł Dziepak
6f25533f4e mutation_query: drop querying_reader
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:31:52 +01:00
Paweł Dziepak
ed12c164f8 mutation_query: make mutation queries streaming-friendly
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:31:28 +01:00
Paweł Dziepak
0828c88b25 mutation_partition: implement streaming-friendly data_query()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:31:19 +01:00
Paweł Dziepak
67ae9457e3 mutation_partition: introduce mutation_querier
mutation_querier is a streamed_mutation consumer that adds the mutation
content to query::result.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:53 +01:00
Paweł Dziepak
f54e604a16 mutation_partition: introduce compact_for_query
compact_for_query is an intermediate stage used to compact data in a
flattened stream of mutations before they are consumed by query building
consumers.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:53 +01:00
Paweł Dziepak
2b7e62599d mutation_reader: add consume_flattened()
Mutation reader produces a stream of streamed_mutations. Each
streamed_mutation itself is a stream so basically we are dealing here
with a stream of streams.

consume_flattened() flattens such stream of streams making all its
elements consumable by a single consumer. It also allows reversing
the mutations before consumption using reverse_streamed_mutation().
2016-06-20 21:29:52 +01:00
Paweł Dziepak
5566d23180 streamed_mutation: add reverse_streamed_mutation()
reverse_streamed_mutation() is an inefficient way of reversing
streamed_mutations. First, it collects all mutation_fragments and then
it emits them in reversed order (except the static row, which is always
the first element; it also flips the bounds of range tombstones).

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:52 +01:00
Paweł Dziepak
f676d1779b range_tombstone: add flip_bound_kind()
flip_bound_kind() changes a start bound to an end bound and vice versa while
preserving the inclusiveness.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:52 +01:00
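A sketch of the operation; the enum values here are assumptions for illustration, not Scylla's actual bound_kind:

```cpp
#include <cassert>

enum class bound_kind { incl_start, excl_start, incl_end, excl_end };

// Turn a start bound into the corresponding end bound (and vice versa)
// while preserving whether the bound is inclusive or exclusive.
constexpr bound_kind flip_bound_kind(bound_kind k) {
    switch (k) {
    case bound_kind::incl_start: return bound_kind::incl_end;
    case bound_kind::excl_start: return bound_kind::excl_end;
    case bound_kind::incl_end:   return bound_kind::incl_start;
    case bound_kind::excl_end:   return bound_kind::excl_start;
    }
    return k;  // unreachable
}
```

This is exactly what reverse_streamed_mutation() needs when it emits a range tombstone's bounds in the opposite order.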
Paweł Dziepak
a3423bac38 tests/streamed_mutation: test freezing streamed_mutations
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:52 +01:00
Paweł Dziepak
6e68f0931e frozen_mutation: freeze streamed_mutations
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:52 +01:00
Paweł Dziepak
349905d0fd range_tombstone_list: add clear()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:52 +01:00
Paweł Dziepak
494c6fa9c1 tests/mutation_query_test: make sure mutations are sliced properly
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:52 +01:00
Paweł Dziepak
8dfabf2790 mutation_reader: support slicing in make_reader_returning_many()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:52 +01:00
Paweł Dziepak
6871bd5fa0 memtable: fully support streamed_mutations
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:52 +01:00
Paweł Dziepak
983321f194 tests/mutation: do not create memtable on stack
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
4a5a9148e3 tests/row_cache: test slicing mutation reader
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
e1a8d94542 tests/row_cache: test mvcc
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
b2c37429e7 row_cache: drop slicing_reader
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
f605499aec row_cache: fully support streamed_mutations
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
e4ae7894d4 tests/mutation: test slicing mutations
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
f95c5542dc mutation_partition: allow slicing moved mutation_partition
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
db5ea591ad add mvcc implementation for mutation_partitions
To ensure isolation when streaming a mutation from a
mutable source (such as the cache or a memtable), MVCC is used.

Each entry in the memtable or cache is actually a list of versions of
that entry. Incoming writes are either applied directly to the latest
version (if no one was reading it) or prepended to the list
(if the former head was being read by someone). When a reader finishes, it
tries to squash versions together, provided there is no other reader that
could prevent this.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
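The version-list scheme described in the commit message above can be sketched roughly as follows; all names here are illustrative stand-ins, not Scylla's actual partition_version/partition_snapshot API:

```cpp
#include <cassert>
#include <iterator>
#include <list>
#include <map>
#include <string>

// A version holds a stand-in for partition data plus a reader count.
struct version {
    std::map<std::string, std::string> rows;
    int readers = 0; // snapshots currently reading this version
};

// An entry is a list of versions, newest first.
struct entry {
    std::list<version> versions = std::list<version>(1);

    // Incoming write: apply to the head version, or prepend a new
    // version if the head is being read.
    void apply(const std::string& key, const std::string& value) {
        if (versions.front().readers > 0) {
            versions.emplace_front();
        }
        versions.front().rows[key] = value;
    }

    // When a reader finishes: squash adjacent reader-free versions.
    void squash() {
        auto it = versions.begin();
        while (std::next(it) != versions.end()) {
            auto next = std::next(it);
            if (it->readers == 0 && next->readers == 0) {
                // Newer rows win; map::insert only adds keys missing
                // in the newer version.
                it->rows.insert(next->rows.begin(), next->rows.end());
                versions.erase(next);
            } else {
                ++it;
            }
        }
    }
};
```

A write arriving while a snapshot pins the head lands in a freshly prepended version, so the snapshot keeps seeing consistent data; squash() later merges versions that no reader can observe.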
Paweł Dziepak
b2e6e95de7 clustering_key_filter: always return ranges in ascending order
Originally, ranges for reversed queries were in descending order and
ranges for forward queries in ascending order. However,
streamed_mutations require them to always be in ascending order.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
2ab1a73efa memtable: rename partition_entry to memtable_entry
partition_entry is going to be a more general object used by both
cache and memtable entries.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
4992ea9949 tests: add test for anchorless_list
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
dfa827161d utils: add anchorless list
The main user of this list is MVCC implementation in partition_version.cc.
The reason why boost::intrusive::list<> cannot be used is that there is no
single owner of the list who could keep the boost::intrusive::list<> object
alive. In the MVCC case there is at least one partition_entry object and
possibly multiple partition_snapshot objects whose lifetimes are independent,
and the list must remain in a valid state as long as at least one of them
is alive.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
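The ownership problem described above can be illustrated with a minimal anchorless doubly-linked list, in which every node unlinks itself on destruction so the remaining nodes always form a valid list with no single owner (illustrative only; Scylla's anchorless_list differs in detail):

```cpp
#include <cassert>

// A node links only to its neighbours; there is no list-head object
// that must outlive the nodes, so nodes can die in any order.
struct node {
    node* prev = nullptr;
    node* next = nullptr;

    // Insert *this right after n.
    void insert_after(node& n) {
        prev = &n;
        next = n.next;
        if (n.next) {
            n.next->prev = this;
        }
        n.next = this;
    }

    ~node() {
        // Unlink self; neighbours stay consistent without any owner.
        if (prev) {
            prev->next = next;
        }
        if (next) {
            next->prev = prev;
        }
    }
};
```

Destroying any node in the middle of its neighbours' lifetimes leaves the survivors correctly linked, which is exactly the property partition_entry and partition_snapshot need.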
Paweł Dziepak
f991a2deb5 tests/row_cache_alloc_stress: use another memtable for underlying storage
It is incorrect to update a row_cache with a memtable that is also its
underlying storage. The reason is that after the memtable is merged
into the row_cache they share an LSA region. When there is then a cache miss,
the cache asks the underlying storage for data. This results in the memtable
reader running under the row_cache's allocation section. Since the memtable
reader also uses an allocation section, the result is an assertion failure,
because allocation sections from the same LSA region cannot be nested.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:51 +01:00
Paweł Dziepak
5a5c519fa0 tests/row_cache_alloc_stress: use large cells instead of many rows
With streamed_mutations a partition with many small rows doesn't stress
the cache as much as the test expects. Use large clustering rows instead.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
71e961427a test/sstables: test reading sstables with incorrect ordering
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
2ee69860d2 sstables: make sstable reader produce streamed_mutations
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
e82cc68196 streamed_mutation: add range_tombstone_stream
range_tombstone_stream encapsulates logic responsible for turning
range_tombstone_list into a stream of mutation_fragments and merging
that stream with a stream of clustering rows.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
a200189541 range_tombstone_list: mark apply() argument as const
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
5a60f6d1ec range_tombstone: extract is_single_clustering_row_tombstone()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
b6f78a8e2f sstable: make sstable reads return streamed_mutation
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
9e8db53c46 sstables: allow row consumer to stop at any point
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
125c4e20e2 tests/sstables: add test for sliced mutation reads
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
71088b4f4a sstables: fix partition slicing for row markers and collections
Row markers and collections weren't filtered out even if they belonged
to a clustering row that shouldn't be in the result. The check whether
to include a cell or not was done only for live and dead atomic cells.

This patch adds appropriate checks for collections and row markers.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
575daea897 sstables: make deletion_time to tombstone cast safer
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
7074b439d8 mutation_reader: do not ask for mutation before current is consumed
mutation_reader and streamed_mutation may use the same stream as a source
of both mutation_fragments and mutations themselves (this happens in the
sstable reader). In such a case, asking the mutation_reader for the next
streamed_mutation would invalidate all other streamed_mutations.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
737eb73499 mutation_reader: make readers return streamed_mutations
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
52a0b405f8 tests/row_cache: simplify verify_has()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:50 +01:00
Paweł Dziepak
fec3346343 tests: add streamed_mutation assertions
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
11f43a8e91 tests/sstable: drop sstable_range_wrapping_reader
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
5b45d46f82 row_cache: simplify slicing_reader
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
9c83eb9542 mutation_reader: drop joining and lazy readers
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
579de26e95 storage_proxy: drop make_local_reader()
This code was used only by its unit test.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
c8f4b96e76 tests: add streamed_mutation_tests
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
a1fc5888d3 streamed_mutation: add mutation_merger
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
48e08fa997 mutation: add mutation_from_streamed_mutation()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
9df01c2a36 streamed_mutation: add streamed_mutation_from_mutation()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
22160ae6d5 mutation_partition: make rows_type public
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
675f684788 streamed_mutation: introduce streamed_mutation
streamed_mutation represents a mutation in the form of a stream of
mutation_fragments. streamed_mutation emits mutation fragments in the
order they should appear in the sstables, i.e. the static row is always
the first one, then clustering rows and range tombstones are emitted
according to the lexicographical ordering of their clustering keys and
the bounds of the range tombstones.

Range tombstones are disjoint, i.e. after emitting
range_tombstone_begin it is guaranteed that there is going to be a
single range_tombstone_end before another range_tombstone_begin is
emitted.

The ordering of mutation_fragments also guarantees that by the time
the consumer sees a clustering row it has already received all
relevant tombstones.

The partition key and partition tombstone are not streamed; they are part
of the streamed_mutation itself.

streamed_mutation uses batching. Mutation implementations are
supposed to fill a buffer with mutation fragments until is_buffer_full()
returns true or the end of the stream is encountered.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
262337768a streamed_mutation: introduce mutation_fragment
This commit introduces the mutation_fragment class, which represents the
parts of a mutation streamed by streamed_mutation.

mutation_fragment can be:
 - a static row (only one in the mutation)
 - a clustering row
 - start of range tombstone
 - end of range tombstone

There is an ordering (implemented in position_in_partition class) between
mutation_fragment objects. It reflects the order in which content of
partition appears in the sstables.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
84713d2236 utils: extract optimized_optional<> from mutation_opt
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:49 +01:00
Paweł Dziepak
847bf878ec mutation_partition: add more row::apply() overloads
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:48 +01:00
Paweł Dziepak
7809adc6ce keys: add compound_wrapper::tri_compare
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:48 +01:00
Paweł Dziepak
c24f08a683 range_tombstone_list: compare full tombstones not just timestamps
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:48 +01:00
Paweł Dziepak
df4c1c6293 range_tombstone: simplify bound_view::equal()
Bounds are equal only if they are of the same kind. No need to check
weights.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:48 +01:00
Paweł Dziepak
a6aceb179d range_tombstone: fix bound ordering
Assuming the clustering keys are equal:
  excl_end < incl_start < incl_end < excl_start.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:48 +01:00
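The corrected ordering can be expressed as a comparison on (key, weight) pairs, where the weights encode excl_end < incl_start < incl_end < excl_start for equal keys. The enum, the integer clustering keys, and the weight values below are illustrative, not Scylla's internal encoding:

```cpp
#include <cassert>

// Bound kinds for range tombstone bounds (illustrative names).
enum class bound_kind { excl_end, incl_start, incl_end, excl_start };

// Weight encodes the ordering of bounds with equal clustering keys:
//   excl_end < incl_start < incl_end < excl_start
constexpr int weight(bound_kind k) {
    switch (k) {
    case bound_kind::excl_end:   return 0;
    case bound_kind::incl_start: return 1;
    case bound_kind::incl_end:   return 2;
    case bound_kind::excl_start: return 3;
    }
    return 0; // unreachable
}

// A bound: an (illustrative) integer clustering key plus a kind.
struct bound {
    int key;
    bound_kind kind;
};

bool bound_less(const bound& a, const bound& b) {
    if (a.key != b.key) {
        return a.key < b.key;
    }
    return weight(a.kind) < weight(b.kind);
}
```

With this encoding an exclusive end at key k sorts before any bound starting at k, and an exclusive start at k sorts after any bound ending at k, matching the ordering stated above.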
Paweł Dziepak
3a0e76d635 range_tombstone: check for adjacent instead of equal bounds
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2016-06-20 21:29:48 +01:00
Nadav Har'El
3372052d48 Rewriting shared sstables only after all shards loaded sstables
After commit faa4581, each shard only starts splitting its shared sstables
after opening all sstables. This was important because compaction needs to
be aware of all sstables.

However, another bug remained: If one shard finishes loading its sstables
and starts the splitting compactions, and in parallel a different shard is
still opening sstables - the second shard might find a half-written sstable
being written by the first shard, and abort on a malformed sstable.

So in this patch we start the shared sstable rewrites - on all shards -
only after all shards finished loading their sstables. Doing this is easy,
because main.cc already contains a list of sequential steps where each
uses invoke_on_all() to make sure the step completes on all shards before
continuing to the next step.

Fixes #1371

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1466426641-3972-1-git-send-email-nyh@scylladb.com>
2016-06-20 16:25:24 +03:00
Calle Wilund
7cdea1b889 commitlog: Use flush queue for write/flush ordering, improve batch
Using an ordering mechanism better than rw-locks for write/flush
means we can wait for pending writes in batch mode, and coalesce
data from more than one mutation into a chunk.

It also means we can wait for a specific read+flush pair (based on
file position).

Downside is that we will not do parallel writes in batch mode (unless
we run out of buffer), which might underutilize the disk bandwidth.

Upside is that running in batch mode (i.e. per-write consistency)
now has way better bandwidth, and also, at least with high mutation
rate, better average latency.

Message-Id: <1465990064-2258-1-git-send-email-calle@scylladb.com>
2016-06-20 13:09:16 +03:00
Benoît Canet
77375cefaa docker: normalize environment variables names
Use a more Docker-like form.

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466414939-5019-1-git-send-email-benoit@scylladb.com>
2016-06-20 12:30:13 +03:00
Benoît Canet
4c7ac4cab7 docker: implement seeds and broadcast_address variables
Implement the seeds and broadcast_address variables
required for clustering behavior.

Do it raw with sed in the startup script.

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466412846-4760-3-git-send-email-benoit@scylladb.com>
2016-06-20 11:55:03 +03:00
Benoît Canet
fd811c90fc docker: Complete the missing part of production mode
Scylla will not start if the disk was not benchmarked,
so run iotune with the right parameters.

Also add the cpu_set environment variable for passing
a CPU set to iotune and scylla.

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466412846-4760-2-git-send-email-benoit@scylladb.com>
2016-06-20 11:54:54 +03:00
Pekka Enberg
1d5f7be447 systemd: Use PermissionsStartOnly instead of running sudo
Use the PermissionsStartOnly systemd option to apply the permission
related configurations only to the start command. This allows us to stop
using "sudo" for ExecStartPre and ExecStopPost hooks and drop the
"requiretty" /etc/sudoers hack from Scylla's RPM.

Tested-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1466407587-31734-1-git-send-email-penberg@scylladb.com>
2016-06-20 11:53:24 +03:00
Vlad Zolotarov
baf3614e8f sstables: don't backup sstables that are a result of a compaction
According to incremental backup description
(http://docs.datastax.com/en/cassandra_win/2.2/cassandra/operations/opsBackupIncremental.html)
sstables that are a result of a compaction process should not
be backed up since original sstables had already been backed up.

Fixes #1308

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Reviewed-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <1466338622-7323-1-git-send-email-vladz@cloudius-systems.com>
2016-06-20 09:52:30 +03:00
Pekka Enberg
f4153c75a0 cql3: Bump CQL language version to 3.2.1
We already added 3.2.1 support in commit 569d288 ("cql3: Add TRUNCATE
TABLE alias for TRUNCATE") but never got around to fixing the CQL version
reported to drivers.

Fixes #1358.

Message-Id: <1466403967-28654-1-git-send-email-penberg@scylladb.com>
2016-06-20 09:42:12 +03:00
Avi Kivity
07045ffd7c dist: fix scylla-kernel-conf postinstall scriptlet failure
Because we build on CentOS 7, which does not have the %sysctl_apply macro,
the macro is not expanded, and therefore executed incorrectly even on 7.2,
which does.

Fix by expanding the macro manually.

Fixes #1360.
Message-Id: <1466250006-19476-1-git-send-email-avi@scylladb.com>
2016-06-20 09:36:39 +03:00
Lucas Meneghel Rodrigues
ae622b0c08 dist/common/scripts/scylla_kernel_check: Update messages
Small grammar tweaks to the script's output messages.

Signed-off-by: Lucas Meneghel Rodrigues <lmr@scylladb.com>
Message-Id: <1466205496-3885-3-git-send-email-lmr@scylladb.com>
2016-06-19 19:28:58 +03:00
Lucas Meneghel Rodrigues
aacf7eb2ae dist/common/scripts/scylla_kernel_check: Fix conditional statement
Since most of the time people are running scylla_setup on
a fully upgraded ubuntu 14.04 box, we rarely reach that
code path, but once we do we end up with an error. Let's
fix that.

Signed-off-by: Lucas Meneghel Rodrigues <lmr@scylladb.com>
Message-Id: <1466205496-3885-2-git-send-email-lmr@scylladb.com>
2016-06-19 19:28:56 +03:00
Nadav Har'El
faa45812b2 Rewrite shared sstables only after entire CF is read
Starting in commit 721f7d1d4f, we start "rewriting" a shared sstable (i.e.,
splitting it into individual shards) as soon as it is loaded in each shard.

However as discovered in issue #1366, this is too soon: Our compaction
process relies, in several places, on compaction only being done after all
the sstables of the same CF have been loaded. One example is that we
need to know the content of the other sstables to decide which tombstones
we can expire (this is issue #1366). Another example is that we use the
last generation number we are aware of to decide the number of the next
compaction output - and this is wrong before we have seen all sstables.

So with this patch, while loading sstables we only make a list of shared
sstables which need to be rewritten - and the actual rewrite is only started
when we finish reading all the sstables for this CF. We need to do this in
two cases: reboot (when we load all the existing sstables we find on disk),
and nodetool refresh (when we import a set of new sstables).

Fixes #1366.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1466344078-31290-1-git-send-email-nyh@scylladb.com>
2016-06-19 16:50:51 +03:00
Paweł Dziepak
dde87e0b0e row_cache: drop schema upgrade for new entries in update()
Commit daad2eb "row_cache: fix memory leak in case of schema upgrade
failure" has fixed a memory leak caused by failed upgrade_entry().
However, in case of an upgrade failure the memtable_entry used to create
the new cache entry was left in an invalid state. If the operation was
retried, the cache would again attempt to apply that memtable_entry, which
was now in an invalid state.

The solution is either to ignore upgrade_entry() exceptions or not to
call it at all and let the cache entry be upgraded on demand. This patch
implements the latter.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1466163435-27367-1-git-send-email-pdziepak@scylladb.com>
2016-06-17 13:43:01 +02:00
Paweł Dziepak
daad2ebf81 row_cache: fix memory leak in case of schema upgrade failure
When update() causes a new entry to be inserted to the cache the
procedure is as follows:
1. allocate and construct new entry
2. upgrade entry schema
3. add entry to lru list and cache tree

Step 2 may fail and at this point the pointer to the entry is neither
protected by RAII nor added in any of the cache containers. The solution
is to swap steps 2 and 3 so that even if the upgrade fails the entry is
already owned by the cache and won't leak.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1466161709-25288-1-git-send-email-pdziepak@scylladb.com>
2016-06-17 13:12:01 +02:00
Asias He
4f3ce42163 storage_service: Prevent old version node to join a new version cluster
We want to prevent an older version of scylla, which has fewer features,
from joining a cluster with a newer version of scylla, which has more
features, because when scylla sees that a feature is enabled on all other
nodes, it will start to use the feature and assume existing nodes and
future nodes will always have this feature.

In order to support downgrade during rolling upgrade, we need to support
mixed old and new nodes case.

1) All old nodes
O O O O O <- N   OK
O O O O O <- O   OK

2) All new nodes
N N N N N <- N   OK
N N N N N <- O   FAIL

3) Mixed old and new nodes
O N O N O <- N   OK
O N O N O <- O   OK

(O == old node, N == new node, <- == joining the cluster)

With this patch, I tested:

1.1) Add new node to new node cluster
gossip - Feature check passed. Local node 127.0.0.4 features =
{RANGE_TOMBSTONES}, Remote common_features = {RANGE_TOMBSTONES}

1.2) Add old node to old node cluster
gossip - Feature check passed. Local node 127.0.0.4 features = {},
Remote common_features = {}

2.1) Add new node to new node cluster
gossip - Feature check passed. Local node 127.0.0.4 features =
{RANGE_TOMBSTONES}, Remote common_features = {RANGE_TOMBSTONES}

2.2) Add old node to new node cluster
seastar - Exiting on unhandled exception: std::runtime_error (Feature
check failed. This node can not join the cluster because it does not
understand the feature. Local node 127.0.0.4 features = {}, Remote
common_features = {RANGE_TOMBSTONES})

3.1) Add new node to mixed cluster
gossip - Feature check passed. Local node 127.0.0.4 features =
{RANGE_TOMBSTONES}, Remote common_features = {}

3.2) Add old node to mixed cluster
gossip - Feature check passed. Local node 127.0.0.4 features = {},
Remote common_features = {}

Fixes #1253
2016-06-17 10:49:45 +08:00
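The join-time check in the O/N matrix above boils down to a superset test: a node may join only if its local feature set includes every feature already common to the cluster. A sketch (illustrative, not the actual gossip code):

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <string>

// A node may join only if it understands every feature that is
// already common to the whole cluster.
bool can_join(const std::set<std::string>& local_features,
              const std::set<std::string>& remote_common_features) {
    // std::includes requires sorted ranges; std::set is always sorted.
    return std::includes(local_features.begin(), local_features.end(),
                         remote_common_features.begin(),
                         remote_common_features.end());
}
```

This reproduces the matrix: a new node can join any cluster (its feature set is a superset of whatever is common), while an old node is rejected only by an all-new cluster, whose common feature set contains features the old node lacks.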
Asias He
32ed468e42 gossip: Remove empty string feature in get_supported_features
If the feature string is empty, boost::split will return
std::set<sstring> = {""} instead of std::set<sstring> = {}
which will make a node with a feature, e.g. std::set<sstring> =
{"RANGE_TOMBSTONES"}, think it does not understand the feature of
a node with no features at all.
2016-06-17 10:49:45 +08:00
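The pitfall can be reproduced with any naive splitter; guarding on the empty input, as the fix does for boost::split, restores the expected empty set. A sketch using a std-only splitter and a comma delimiter (both assumptions for illustration):

```cpp
#include <cassert>
#include <set>
#include <sstream>
#include <string>

// Parse a delimiter-separated feature string into a set of features.
std::set<std::string> get_features(const std::string& s) {
    std::set<std::string> out;
    if (s.empty()) {
        // The fix: an empty string means no features, not one "" feature.
        return out;
    }
    std::istringstream in(s);
    std::string item;
    while (std::getline(in, item, ',')) {
        out.insert(item);
    }
    return out;
}
```

Without the empty-input guard, a splitter in the style of boost::split yields {""} for "", making a featureless node look like it advertises an unknown "" feature.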
Gleb Natapov
4659800ab9 storage_proxy: implement custom speculative retry strategy
The user may specify a time after which speculative retry should happen
instead of relying on cf statistics. Use the provided value in the
speculative executor.

Message-Id: <20160616104422.GH5961@scylladb.com>
2016-06-16 13:45:56 +03:00
Pekka Enberg
d72c608868 service/storage_service: Make do_isolate_on_error() more robust
Currently, we only stop the CQL transport server. Extract a
stop_transport() function from drain_on_shutdown() and call it from
do_isolate_on_error() to also shut down the inter-node RPC transport,
Thrift, and other communications services.

Fixes #1353
2016-06-16 13:34:09 +03:00
Avi Kivity
85bb5ea064 Merge "Reduce LSA reclaim latency" from Tomasz
"Reclaiming many segments was observed to cause up to multi-ms
latency. With the new setting, the latency of reclamation cycle with
full segments (worst case mode) is below 1ms.

I saw no difference in throughput in a CQL write micro benchmark
in either of these workloads:
 - full segments, reclaim by random eviction
 - sparse segments (3% occupancy), reclaim by compaction and no eviction

Fixes #1274."
2016-06-16 10:47:57 +03:00
Pekka Enberg
a8f95e8081 dist/docker: Use Scylla superpackage for installation
Make the Dockerfile more future-proof by using the Scylla superpackage
for installation.

Message-Id: <1466015996-19792-1-git-send-email-penberg@scylladb.com>
2016-06-16 10:32:18 +03:00
Benoît Canet
c133748a24 scylla_setup: Fix RAID device enumeration
Commit f42673ed1e ("scylla_setup: Hide
busy block devices from RAID0 configuration") wasn't enumerating
anything. Additionally, it listed from /dev/ and not /dev/dm, which broke
the test conditions.

This one uses blkid instead of /proc/partitions.

A follow up patch will be required to mask encrypted devices.

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466059657-12377-1-git-send-email-benoit@scylladb.com>
2016-06-16 09:52:25 +03:00
Glauber Costa
01a658f51d LSA: helper function for region_group
The current hierarchy walk is converted, but more users will come.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-15 22:26:50 -04:00
Glauber Costa
741aa16748 LSA: allow a region_group to have a threshold for throttling specified
Allocations will still be allowed if made directly, but callers will have the
choice (in an upcoming patch) to proceed only if memory is below this threshold.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-15 22:26:50 -04:00
Glauber Costa
7cd0c0731e region_group: delete move constructor
Tomek correctly points out that since we are now using "this" in lambda
captures, we should make the region_group not movable. We currently define a
move constructor, but there are no users, so we should just remove it.

copy constructor is already deleted, and so are the copy and move assignment
operators. So by removing the move constructor, we should be fine.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-06-15 22:26:50 -04:00
Benoît Canet
0cf8144485 scylla_setup: Propose default values when judicious
Also takes care of explaining the options.

Fixes #1031

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466011848-11054-1-git-send-email-benoit@scylladb.com>
2016-06-15 20:33:55 +03:00
Benoît Canet
263a55c0da scylla_setup: Inform the user that he can skip any step
Fixes: #1188

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466001423-9547-3-git-send-email-benoit@scylladb.com>
2016-06-15 19:38:23 +03:00
Benoît Canet
f42673ed1e scylla_setup: Hide busy block devices from RAID0 configuration
This patch looks in /proc/mounts for the device name so that
the device or its subdevices will be excluded from the available
RAID0 targets. It does the same with physical volumes from the device
mapper.

Fixes #1189
Message-Id: <1466001423-9547-4-git-send-email-benoit@scylladb.com>
2016-06-15 19:36:11 +03:00
Paweł Dziepak
c8e75d2e84 schema: cache is_atomic() in column_definition
is_atomic() is called for each cell during mutation application, compaction
and queries. Since the value doesn't change, it can easily be cached, which
saves one indirection and a virtual call.

Results of perf_simple_query -c1 (median, duration 60):
         before      after
read   54611.49   55396.01   +1.44%
write  65378.92   68554.25   +4.86%

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1465991045-11140-1-git-send-email-pdziepak@scylladb.com>
2016-06-15 19:18:13 +03:00
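Caching the result of a virtual call in a plain member is a standard trick; a sketch of the idea (names are illustrative, not Scylla's actual column_definition):

```cpp
#include <cassert>

// Abstract type with a virtual is_atomic() query.
struct abstract_type {
    virtual bool is_atomic() const = 0;
    virtual ~abstract_type() = default;
};

struct int_type final : abstract_type {
    bool is_atomic() const override { return true; }
};

// Caches the virtual-call result once, at construction time.
struct column_definition {
    const abstract_type* type;
    bool atomic; // cached result of type->is_atomic()

    explicit column_definition(const abstract_type* t)
        : type(t), atomic(t->is_atomic()) {}

    // Hot path: a plain member read instead of an indirection
    // plus a virtual dispatch.
    bool is_atomic() const { return atomic; }
};
```

Since a column's type never changes after the definition is built, the cached bool can never go stale, which is what makes the optimization safe.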
Benoît Canet
4def1f4524 dist: sysctl.d: Disable automatic numa balancing
On NUMA hardware, autonuma may reduce performance by
unmapping memory.

Since we do manual NUMA placement, autonuma will not
help anything.

We ought to disable it by setting the kernel.numa_balancing
sysctl to 0.

Fixes: #1120

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1466006345-9972-1-git-send-email-benoit@scylladb.com>
2016-06-15 19:11:00 +03:00
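The setting amounts to a one-line sysctl.d drop-in, for example (file name illustrative):

```
# /etc/sysctl.d/99-scylla-numa.conf
# Disable automatic NUMA balancing; Scylla does manual NUMA placement.
kernel.numa_balancing = 0
```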
Gleb Natapov
7f54333c45 storage_proxy: fix compilation on older boost
boost before 1.56.0 had a broken boost::size() implementation. Do not use
it.

Message-Id: <20160615123134.GD5961@scylladb.com>
2016-06-15 15:34:57 +03:00
Asias He
de0fd98349 repair: Switch log level to warn instead of error
dtest treats error-level logs as serious errors. It is not a serious error
for streaming to fail to send a verb and fail a streaming session which
triggers a repair failure, for example when the peer node is gone or
stopped. Switch to log level warn instead of error.

Fixes repair_additional_test.py:RepairAdditionalTest.repair_kill_3_test

Fixes: #1335
Message-Id: <406fb0c4a45b81bd9c0aea2a898d7ca0787b23e9.1465979288.git.asias@scylladb.com>
2016-06-15 13:01:35 +03:00
Asias He
94c9211b0e streaming: Switch log level to warn instead of error
dtest treats error-level logs as serious errors. It is not a serious error
for streaming to fail to send a verb and fail a streaming session, for
example when the peer node is gone or stopped. Switch to log level warn
instead of error.

Fixes repair_additional_test.py:RepairAdditionalTest.repair_kill_3_test

Fixes: #1335
Message-Id: <0149d30044e6e4d80732f1a20cd20593de489fc8.1465979288.git.asias@scylladb.com>
2016-06-15 13:01:22 +03:00
Vlad Zolotarov
c616e74ae4 locator::gossiping_property_file_snitch: use a lowres_clock time source for a timer
gossiping_property_file_snitch checks a configuration file every 60s.
The lowres_clock time source should be good enough for that.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1465314448-11611-1-git-send-email-vladz@cloudius-systems.com>
2016-06-15 13:01:05 +03:00
Tomasz Grabiec
207c8d94f1 idl: Rename variable to a more meaningful name
Message-Id: <1465909911-10534-2-git-send-email-tgrabiec@scylladb.com>
2016-06-14 17:02:59 +03:00
Raphael S. Carvalho
80d8c5ef6f compaction: use proper type in constructor
Correctness is not affected by the long type, but an unsigned
long type should definitely be used instead.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <d3ab15a3206306de195aeb3d78f9b5bc4ca9208e.1465908970.git.raphaelsc@scylladb.com>
2016-06-14 17:02:32 +03:00
Tomasz Grabiec
8e8f63de85 mutation_partition_view: Avoid unnecessary copy into temporary
Message-Id: <1465909038-8174-1-git-send-email-tgrabiec@scylladb.com>
2016-06-14 17:02:17 +03:00
Tomasz Grabiec
75f899cc93 lsa: Make reclamation step configurable via config 2016-06-14 15:13:15 +02:00
Tomasz Grabiec
cd9955d2ce lsa: Reclaim 1 segment by default
Reclaiming many segments was observed to cause up to multi-ms
latency. With the new setting, the latency of reclamation cycle with
full segments (worst case mode) is below 1ms.

I saw no decrease in throughput compared to the step of 16 segments in
either of these modes:
  - full segments, reclaim by random eviction
  - sparse segments (3% occupancy), reclaim by compaction and no eviction

Fixes #1274.
2016-06-14 15:13:15 +02:00
Tomasz Grabiec
86b76171a8 lsa: Use the same step in both internal and external reclamations 2016-06-14 15:13:15 +02:00
Tomasz Grabiec
d74d902a01 lsa: Make reclamation step configurable 2016-06-14 15:13:14 +02:00
Tomasz Grabiec
93bb95bd0d lsa: Log reclamation rate 2016-06-14 15:13:14 +02:00
Tomasz Grabiec
cb18418022 lsa: Print more details before aborting 2016-06-14 15:13:14 +02:00
Tomasz Grabiec
7cb98c916f tests: lsa_async_eviction_test: Push to refs with reclaim lock
push_back() is not reentrant with pop_front(), used by the evictor. If
the reclaimer runs while std::deque allocates a new node, it will get
corrupted. Fix by running push_back() under the reclaim lock.
2016-06-14 15:13:14 +02:00
Tomasz Grabiec
de8772525a tests: lsa_async_eviction_test: Make sure refs scope encloses reclaimer scope 2016-06-14 15:13:14 +02:00
Tomasz Grabiec
c4a556ac13 tests: lsa_async_eviction_test: Fix use after free due to at_exit() callback
The callback will run after the thread is destroyed. We don't really need
the stop feature, so for now just remove it.
2016-06-14 15:13:14 +02:00
Pekka Enberg
155ad2eeb5 storage_service: Fix start_rpc_server() to use logger
Message-Id: <1465882880-7392-1-git-send-email-penberg@scylladb.com>
2016-06-14 09:52:04 +02:00
Raphael S. Carvalho
0b2cd41daf database: remember sstable level when cleaning it up
The cleanup operation wasn't preserving the level of sstables. That has
a bad impact on performance because compaction work is lost.

Fixes #1317.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <35ce8fbbb4590725bb0414e6a5450fcbe6cb7212.1465843387.git.raphaelsc@scylladb.com>
2016-06-14 08:06:00 +03:00
Vlad Zolotarov
d3960f0bbb tracing: rearrange shut down
The local tracing::tracing instance is dereferenced from
cql_server::connection::process_request(), therefore the tracing::tracing
service may be stop()ed only after the CQL server service is down.
On the other hand, it may not be stopped before the RPC service is down,
because the remote side may request tracing for a specific command too.

This patch splits the tracing::tracing stop() into two phases:
   1) Flush all pending tracing records and stop the backend.
   2) Stop the service.

The first phase is called after CQL server is down and before RPC is down.
The second phase is called after RPC is down.

Fixes #1339

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1465840496-19990-1-git-send-email-vladz@cloudius-systems.com>
2016-06-14 07:58:04 +03:00
Avi Kivity
49449fc30c Merge seastar upstream
* seastar 864d6dc...401c333 (8):
  > scollectd: Support filtering specific collectd metrics
  > core: Integrate error reporting with the logging framework
  > rpc: wait for all replies to be completed before closing rpc server
  > rpc: clean up resource accounting
  > queue: fix race between pop_eventually() and abort()
  > rpc_test: fix cancel test to not depend on timing.
  > tutorial: explain application-specific command line options
  > add ostream output operator for std::unordered_map
2016-06-13 19:35:00 +03:00
Gleb Natapov
e089166cfa storage_proxy: wait only for expected CL when writing back data during read repair
When read repair writes diffs back to replicas it is enough to wait
for requested CL to guaranty read monotonicity. This patch makes read
repair write reuse regular mutate functionality which already tracks
CL status. This is done by changing write response handler to not hold
mutation directly, but instead hold a container that, depending on
whether
this is read repair write or regular one, can provide different mutation
per destination.

Message-Id: <20160613124727.GL1096@scylladb.com>
2016-06-13 19:01:51 +03:00
Duarte Nunes
c896309383 database: Actually decrease query_state limit
query_state expects the current row limit to be updated so it
can be enforced across partition ranges. A regression introduced
in e4e8acc946 prevented that from
happening by passing a copy of the limit to querying_reader.

This patch fixes the issue by having column_family::query update
the limit as it processes partitions from the querying_reader.

Fixes #1338

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1465804012-30535-1-git-send-email-duarte@scylladb.com>
2016-06-13 10:03:27 +02:00
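The by-value vs. by-reference distinction behind this regression can be sketched in a few lines. This is an illustrative stand-in, not the actual querying_reader/query_state code; the function name and types are invented for the example:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch: if `limit` were taken by value (as in the
// regression), the caller would never observe it decreasing, and the row
// limit could not be enforced across partition ranges. Taking it by
// reference lets each partition read consume from the shared budget.
void read_partition(std::vector<int>& out, std::uint64_t& limit, int rows_available) {
    while (limit > 0 && rows_available-- > 0) {
        out.push_back(1);  // emit one row
        --limit;           // the caller observes the updated limit
    }
}
```

With a budget of 5 rows spread over two partitions of 3 rows each, only 5 rows total are produced, which is the cross-partition enforcement the commit restores.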
Avi Kivity
465c0a4ead Merge "Make stronger guarantees in row_cache's clear/invalidate" from Tomasz
"Correctness of current uses of clear() and invalidate() relies on the fact
that the cache is not populated using readers created before
invalidation. Sstables are first modified and then the cache is
invalidated. This is not guaranteed by the current implementation,
though. As pointed out by Avi, a populating read may race with the
call to clear(). If that read started before clear() and completed
call to clear(). If that read started before clear() and completed
after it, the cache may be populated with data which does not
correspond to the new sstable set.

To provide such a guarantee, the invalidate() variants were adjusted to
synchronize using _populate_phaser, similarly to what row_cache::update()
does.

Fixes #1291."
2016-06-13 09:55:29 +03:00
Shlomi Livne
ac6f2b5c13 dist/common: Update scylla_io_setup to use settings done in cpuset.conf
scylla_io_setup is searching for --smp and --cpuset setting in
SCYLLA_ARGS. We have moved the settings of this args into
/etc/scylla.d/cpuset.conf and they are set by scylla_cpuset_setup into
CPUSET.

Fixes: #1327

Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
Message-Id: <2735e3abdd63d245ec96cfa1e65f766b1c12132e.1465508701.git.shlomi@scylladb.com>
2016-06-10 09:37:44 +03:00
Vlad Zolotarov
89375d4c2a service::storage_proxy: tracing: instrument read_digest and read_mutation_data
Instrument read_digest and read_mutation_data handlers similarly
to a read_data handler instrumentation.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1465304055-4263-1-git-send-email-vladz@cloudius-systems.com>
2016-06-09 14:32:42 +02:00
Pekka Enberg
8df5aa7b0c utils/exceptions: Whitelist EEXIST and ENOENT in should_stop_on_system_error()
There are various call-sites that explicitly check for EEXIST and
ENOENT:

  $ git grep "std::error_code(E"
  database.cc:                            if (e.code() != std::error_code(EEXIST, std::system_category())) {
  database.cc:            if (e.code() != std::error_code(ENOENT, std::system_category())) {
  database.cc:        if (e.code() != std::error_code(ENOENT, std::system_category())) {
  database.cc:                            if (e.code() != std::error_code(ENOENT, std::system_category())) {
  sstables/sstables.cc:            if (e.code() == std::error_code(ENOENT, std::system_category())) {
  sstables/sstables.cc:            if (e.code() == std::error_code(ENOENT, std::system_category())) {

Commit 961e80a ("Be more conservative when deciding when to shut down
due to disk errors") turned these errors into a storage_io_exception
that is not expected by the callers, which causes 'nodetool snapshot'
functionality to break, for example.

Whitelist the two error codes to revert back to the old behavior of
io_check().
Message-Id: <1465454446-17954-1-git-send-email-penberg@scylladb.com>
2016-06-09 10:03:04 +02:00
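The whitelist described above can be sketched as a standalone predicate. This is a simplified illustration of the idea, not the actual Scylla io_check/should_stop_on_system_error code:

```cpp
#include <cassert>
#include <cerrno>
#include <system_error>

// Sketch: decide whether a system error should shut the node down,
// whitelisting EEXIST and ENOENT so callers that explicitly check for
// those codes (as in the git grep output above) keep working.
static bool should_stop_on_system_error(const std::system_error& e) {
    if (e.code() == std::error_code(EEXIST, std::system_category()) ||
        e.code() == std::error_code(ENOENT, std::system_category())) {
        return false;  // benign: callers handle these explicitly
    }
    return true;  // e.g. EIO, ENOSPC, EPERM: treat as a fatal storage error
}
```

The comparison goes through std::error_code so that the category is matched as well as the value, mirroring how the call-sites quoted above compare codes.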
Pekka Enberg
02d033667a utils: Improve storage_io_exception error message
Make the storage_io_exception error message less cryptic by
actually including the human-readable error message from
std::system_error...

Before:

  nodetool: Scylla API server HTTP POST to URL '/storage_service/snapshots' failed: Storage io error errno: 2

After:

  nodetool: Scylla API server HTTP POST to URL '/storage_service/snapshots' failed: Storage I/O error: 2: No such file or directory

We can improve this further by including the name of the file that the I/O
error happened on.
Message-Id: <1465452061-15474-1-git-send-email-penberg@scylladb.com>
2016-06-09 09:58:00 +02:00
Tomasz Grabiec
d5a2d7a88d row_cache: Add eviction and removal counters
Fixes #1273.

Message-Id: <1465315433-8473-1-git-send-email-tgrabiec@scylladb.com>
2016-06-08 16:08:32 -04:00
Nadav Har'El
721f7d1d4f Rewrite shared sstables soon after startup
Several shards may share the same sstable - e.g., when re-starting scylla
with a different number of shards, or when importing sstables from an
external source. Sharing an sstable is fine, but it can result in excessive
disk space use because the shared sstable cannot be deleted until all
the shards using it have finished compacting it. Normally, we have no idea
when the shards will decide to compact these sstables - e.g., with
size-tiered compaction a large sstable will take a long time until we decide
to compact it. So what this patch does is to initiate compaction of the
shared sstables - on each shard using it - so that as soon as possible after
the restart, the original sstable is split into separate sstables per shard
and the original sstable can be deleted. If several sstables are shared, we
serialize this compaction process so that each shard only rewrites one
sstable at a time. Regular compactions may happen in parallel, but they will
not be able to choose any of the shared sstables because those are already
marked as being compacted.

Commit 3f2286d0 increased the need for this patch, because since that
commit, if we don't delete the shared sstable, we also cannot delete
additional sstables which the different shards compacted with it. For one
scylla user, this resulted in so much excessive disk space use, that it
literally filled the whole disk.

After this patch, commit 3f2286d0 (and the discussion in issue #1318 on how
to improve it) is no longer necessary, because we will never compact a shared
sstable together with any other sstable - as explained above, the shared
sstables are marked as "being compacted" so the regular compactions will
avoid them.

Fixes #1314.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <1465406235-15378-1-git-send-email-nyh@scylladb.com>
Reviewed-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-06-08 15:44:29 -04:00
Raphael S. Carvalho
1b8e170254 compaction: retry compaction until strategy is satisfied
Previously, we were using a stat to decide if compaction should be
retried, but that's not efficient. The information is also lost
after the node is restarted.

After these changes, compaction will be retried until strategy is
satisfied, i.e. there is nothing to compact.
We will now be doing the following in a loop:
   1) Get a compaction job from the compaction strategy.
   2) If it cannot run, finish the loop.
   3) Otherwise, compact this column family.
   4) Go back to the start of the loop.

By the way, the pending_compactions stat will be deprecated after this
commit. Previously, it was incremented to indicate the need for
compaction and decremented when compaction finished. Now, we can
compact more than we asked for, so it could be decremented below 0.
Also, it is now the strategy that indicates the need for compaction.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <899df0d8d807f6b5d9bb8600d7c63b4e260cc282.1465398243.git.raphaelsc@scylladb.com>
2016-06-08 11:31:56 -04:00
Avi Kivity
7bd4b7ca63 cql3: split use_statement into raw and prepared variants
Rather than having one class fulfil both roles, have one class per role,
disentangling dependencies.
Message-Id: <1465053407-20931-1-git-send-email-avi@scylladb.com>
2016-06-08 16:48:45 +03:00
Yoav Kleinberger
43071bf488 tools/scyllatop: handle absentee metrics
Sometimes a metric previously reported from collectd is not available
anymore. Previously, this caused scyllatop to log an exception to the
user - which in effect destroys the user experience and inhibits
monitoring other metrics. This patch makes scyllatop handle this
problem: it will display such metrics as 'not available', and exclude
them from sum and average computations.

Closes issue #1287.

Signed-off-by: Yoav Kleinberger <yoav@scylladb.com>
Message-Id: <1465301178-27544-1-git-send-email-yoav@scylladb.com>
2016-06-08 16:35:55 +03:00
Vlad Zolotarov
24624b2600 tests/gossiping_property_file_snitch_test: cancel O_DIRECT enforcement
Cancel O_DIRECT enforcement on shard 0 (a default I/O shard for this snitch)
to ensure proper functioning on any FS (e.g. ecryptfs).

Otherwise the test fails on file systems that do not support O_DIRECT.

Fixes #1324

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1465385087-20510-1-git-send-email-vladz@cloudius-systems.com>
2016-06-08 16:21:44 +03:00
Tomasz Grabiec
287ff7dbd3 Merge tag 'ms_update/v2' from seastar-dev.git
From Asias:

In f27e5d2a6 (messaging_service: Delay listening ms during boot up),
messaging_service startup was split into two stages. Adjust the api
registration code and fix up the messaging_service stop code.
2016-06-08 10:25:14 +02:00
Paul McGuire
8326fe7760 Clean up idl-compiler pyparsing usage
This patch makes a few minor improvements in the parser:

  - merge first and rest into 2-argument form of Word to define
    identifier – should give some performance boost, simpler code

  - replace Literal(keyword_string) with Keyword(keyword_string)
    throughout - stricter parsing, avoids misinterpreting identifiers
    with keywords

  - replace expr.setResultsName("name") with expr("name") throughout –
    this is a style change (no actual change in underlying parser
    behavior), but I find this form easier to follow

  - add calls to setName to make exceptions more readable

Message-Id: <005901d1bbd2$711f7bb0$535e7310$@austin.rr.com>
2016-06-08 08:13:05 +03:00
Asias He
b36d3be5d4 messaging_service: Fix messaging_service::stop
There are two problems:

1. _server_tls is not stopped

2. _server and _server_tls might not be created if
messaging_service::start_listen is not called yet.
2016-06-08 11:13:36 +08:00
Asias He
e6f63a50e1 main: Delay the messaging_service api registration
Since messaging_service is fully initialized in
storage_service::init_server, which calls
messaging_service::start_listen, we need to delay
the messaging_service api registration until after it.
2016-06-08 11:13:35 +08:00
Asias He
f7d25e6bae messaging_service: Handle _server is not created in foreach_server_connection_stats
It is possible _server is not created yet when
foreach_server_connection_stats is called. Handle this case.
2016-06-08 11:13:35 +08:00
Gleb Natapov
9635e67a84 config: adjust boost::program_options validator to work with db::string_map
Fixes #1320

Message-Id: <20160607064511.GX9939@scylladb.com>
2016-06-07 10:42:27 +03:00
Amnon Heiman
2cf882c365 rate_moving_average: mean_rate is not initialized
The rate_moving_average is used by timed_rate_moving_average to return
its internal values.

If there are no timed events, the mean_rate is not properly initialized.
To solve that, the mean_rate is now initialized to 0 in the structure
definition.

Refs #1306

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1465231006-7081-1-git-send-email-amnon@scylladb.com>
2016-06-07 09:38:58 +03:00
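The fix pattern described above - a default member initializer in the structure definition - can be sketched as follows. Field and type names are illustrative, not the actual Scylla definitions:

```cpp
#include <cassert>

// Sketch: mean_rate gets a default member initializer so the struct never
// exposes an uninitialized value when there were no timed events.
struct rate_moving_average {
    double mean_rate = 0;  // previously left uninitialized
    long count = 0;
};

struct timed_rate_moving_average {
    rate_moving_average rate() const {
        // With no timed events nothing is computed, so the default member
        // initializers are exactly what the caller observes.
        return rate_moving_average{};
    }
};
```

Initializing at the definition site, rather than in every code path that builds the struct, means any future constructor or early return gets the zeroed value for free.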
Vlad Zolotarov
ce08bc611c tracing: fix debug compilation
Define flush_period as a const and not as constexpr.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Message-Id: <1465240516-20128-1-git-send-email-vladz@cloudius-systems.com>
2016-06-06 22:15:27 -04:00
Avi Kivity
e25380347a Merge "tracing: probabilistic tracing" from Vlad
"This series includes some fixes to and adds a probabilistic tracing feature."
2016-06-06 11:25:18 -04:00
Benoît Canet
b508aaf0d9 docker: Add the production environment variable
If set to true, this variable will activate
developer mode. It will be set by using the
-e option of docker run.

The xfs bind mount behavior and the cpuset behavior
will be set by using the relevant docker command
lines options and documented in the scylla/docker
howto.

Fixes: #1267

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1465213713-2537-1-git-send-email-benoit@scylladb.com>
2016-06-06 16:28:17 +03:00
Benoît Canet
c771854120 docker: Start scylla on ubuntu docker
Make it behave on par with redhat version

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1465218003-2740-1-git-send-email-benoit@scylladb.com>
2016-06-06 16:27:03 +03:00
Vlad Zolotarov
0611417c76 api::storage_service: add set_trace_probability/get_trace_probability
Trace probability defines a probability for the next CQL command
to be traced.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-06 15:44:28 +03:00
Vlad Zolotarov
905190ac06 tracing: add support for probabilistic tracing
Add support for defining a probability (a value in the [0,1] range)
for tracing the next CQL request.

Traces for requests that are chosen to be traced due to this feature
are not going to be flushed immediately.

Use the std::subtract_with_carry_engine (implementing the "lagged
Fibonacci" algorithm) random number engine for fast generation of random
integer values.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-06 15:41:01 +03:00
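The decision step can be sketched with the engine the commit names, exposed in the standard library via the std::ranlux24_base alias of std::subtract_with_carry_engine. The class and method names below are illustrative, not the actual Scylla tracing API:

```cpp
#include <cassert>
#include <random>

// Sketch: map the configured probability p in [0, 1] to an integer
// threshold once, then tracing each request costs one engine draw and
// one integer comparison.
class trace_decider {
    std::ranlux24_base _gen{std::random_device{}()};
    std::ranlux24_base::result_type _threshold = 0;
public:
    // p is the probability that the next CQL request will be traced.
    void set_probability(double p) {
        _threshold = static_cast<std::ranlux24_base::result_type>(
            p * (_gen.max() - _gen.min()));
    }
    // Trace iff a fresh random draw falls below the precomputed threshold.
    bool should_trace() {
        return _gen() - _gen.min() < _threshold;
    }
};
```

Precomputing the threshold keeps the per-request fast path free of floating-point work, which matches the commit's stated goal of fast generation.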
Vlad Zolotarov
779ff88c76 tracing: add flush timer
Flush pending sessions to the storage every 2 seconds.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-06 14:34:08 +03:00
Tomasz Grabiec
170a214628 row_cache: Make stronger guarantees in clear/invalidate
Correctness of current uses of clear() and invalidate() relies on the fact
that the cache is not populated using readers created before
invalidation. Sstables are first modified and then the cache is
invalidated. This is not guaranteed by the current implementation,
though. As pointed out by Avi, a populating read may race with the
call to clear(). If that read started before clear() and completed
after it, the cache may be populated with data which does not
correspond to the new sstable set.

To provide such a guarantee, the invalidate() variants were adjusted to
synchronize using _populate_phaser, similarly to what row_cache::update()
does.
2016-06-06 13:21:06 +02:00
Vlad Zolotarov
4b008ac5ea tracing: rework maximum sessions amount back pressure strategy
A tracing session life cycle includes 3 stages:
   1) Active: when new trace records are being added to this session.
   2) Pending for flushing to a storage: when session is over but not
      yet flushed to the storage ("backend").
   3) Flushing: when session's records are being flushed to the storage
      and this process is not yet completed.

Sessions may accumulate in each of the stages above, and we should limit
the maximum number of sessions accumulated in each of them in order to
avoid an OOM situation.

Current in-tree implementation only limits the number of tracing sessions
accumulated in the first ("Active") stage.

Since currently every closing session is immediately flushed (as long
as "settraceprobability" is not implemented), the second stage never
accumulates tracing sessions.

The third stage is currently not controlled at all, and if, for instance,
we manage to push enough tracing sessions towards a slow storage backend,
they may accumulate there, consuming an uncontrolled amount of memory,
and may eventually consume all of it.

This patch fixes this unpleasant situation by applying the following strategy:

   - Limit the total number of accumulated tracing sessions in all the stages
     above together by a static value - 2 times the "flush threshold". The
     factor of 2 is needed to allow new tracing sessions to accumulate in
     stage 2 while sessions in stage 3 are still being processed.
   - Forcefully flush sessions in stage 2 to the storage when their count
     reaches the "flush threshold".

This ensures that there will be no more than (2 * "flush threshold")
sessions in total (in any stage) on each shard.

An advantage of this strategy is its simplicity - we only need a single
threshold to control all stages. If we find that we need finer graining
per stage, we may add separate limits for each of them in the future.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-06 13:50:41 +03:00
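The accounting behind this strategy is small enough to sketch. The struct and field names here are illustrative, not the actual tracing code:

```cpp
#include <cassert>

// Sketch of the single-threshold back-pressure: session creation is
// refused once the total across all three stages reaches
// 2 * flush_threshold, and stage-2 sessions are force-flushed once they
// alone reach flush_threshold.
struct session_limiter {
    unsigned flush_threshold;
    unsigned active = 0;    // stage 1: records being added
    unsigned pending = 0;   // stage 2: waiting to be flushed
    unsigned flushing = 0;  // stage 3: being written to the backend

    bool may_create_session() const {
        return active + pending + flushing < 2 * flush_threshold;
    }
    bool should_flush_now() const {
        return pending >= flush_threshold;
    }
};
```

One threshold drives both checks, which is the simplicity the commit message points out; per-stage limits would just add more fields and comparisons here.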
Pekka Enberg
b407031401 Update scylla-ami submodule
* dist/ami/files/scylla-ami 72ae258...863cc45 (3):
  > Move --cpuset/--smp parameter settings from scylla_sysconfig_setup to scylla_ami_setup
  > convert scylla_install_ami to bash script
  > 'sh -x -e' is not valid since all scripts converted to bash script, so remove them
2016-06-06 13:37:21 +03:00
Vlad Zolotarov
35402b965f service/client_state: don't try to dereference a tracing state if it's not initialized
A call to tracing::tracing::create_session() does not guarantee that a
session is created. Check that the session was actually created before
trying to use it.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-06 13:00:31 +03:00
Vlad Zolotarov
139fa9d1bd tracing: minor cleanups
   - Make small functions on a fast path "inline".
   - Add a "const" qualifier where needed.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-06 13:00:31 +03:00
Avi Kivity
961e80ab74 Be more conservative when deciding when to shut down due to disk errors
Currently we only shut down on EIO.  Expand this to shut down on any
system_error.

This may cause us to shut down prematurely due to a transient error,
but this is better than not shutting down due to a permanent error
(such as ENOSPC or EPERM).  We may whitelist certain errors in the future
to improve the behavior.

Fixes #1311.
Message-Id: <1465136956-1352-1-git-send-email-avi@scylladb.com>
2016-06-06 10:56:34 +02:00
Raphael S. Carvalho
17b56eb459 compaction: leveled: improve log message for overlapping table
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <2dcbe3c8131f1d88a3536daa0b6cdd25c6e41d76.1464883077.git.raphaelsc@scylladb.com>
2016-06-05 18:20:01 +03:00
Raphael S. Carvalho
588ce915d6 compaction: disable parallel compaction for leveled strategy
It was discussed that leveled strategy may not benefit from the parallel
compaction feature because almost all compaction jobs will have a similar
size. It was also found that leveled strategy wasn't working correctly
with it, because two overlapping sstables (targeting the same level)
could be created in parallel by two ongoing compactions.

Fixes #1293.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <60fe165d611c0283ca203c6d3aa2662ab091e363.1464883077.git.raphaelsc@scylladb.com>
2016-06-05 18:20:00 +03:00
Amnon Heiman
5f84e55bf6 histogram: total needs to be incremented on plus operator
The total counter (the one that counts the actual number of sample
points) should be incremented when adding histograms.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1464172277-4251-1-git-send-email-amnon@scylladb.com>
2016-06-05 12:09:36 +03:00
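The bug class this commit fixes - merging per-bucket counts but forgetting the aggregate counter - can be sketched as follows. The histogram type here is illustrative, not the actual Scylla histogram:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Sketch: when merging histograms, the sample-point counter must be
// added too, not just the per-bucket counts.
struct histogram {
    std::array<long, 4> buckets{};
    long total = 0;  // number of sample points; previously lost on merge

    histogram& operator+=(const histogram& o) {
        for (std::size_t i = 0; i < buckets.size(); ++i) {
            buckets[i] += o.buckets[i];
        }
        total += o.total;  // the increment this commit adds
        return *this;
    }
};
```

Without the `total += o.total;` line the merged histogram's bucket sum and its total disagree, which silently skews any mean computed as sum/total downstream.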
Tomasz Grabiec
2ab18dcd2d row_cache: Implement clear() using invalidate()
Reduces code duplication.
2016-06-03 13:34:40 +02:00
Tomasz Grabiec
57413618e8 Merge branch 'range-tombstone-v9' from https://github.com/duarten/scylla.git
From Duarte:

This patchset adds the range_tombstone_list data structure,
used to hold a set of disjoint range tombstones, and changes
the internal representation of row tombstones to use that
data structure.

Fixes #1155

[tgrabiec: Added compound_wrapper::make_empty(const schema&) overload
	   to fix compilation failure in tracing code]
2016-06-02 22:17:17 +02:00
Raphael S. Carvalho
3f4500cb71 db: compaction strategy changes via alter table must have immediate effect
At the moment, compaction strategy changes via ALTER TABLE have no effect until
node restart.

Tomek says: "Statements of the following form should have immediate effect:
ALTER TABLE t WITH compaction = { 'class' : 'LeveledCompactionStrategy' };"

Fixes #877.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <3b72c494f887643b82a272ef0a9995edb970382c.1464726828.git.raphaelsc@scylladb.com>
2016-06-02 16:59:50 +02:00
Pekka Enberg
d03f65d94e database: Don't use std::cbegin() and std::cend()
They're not supported by GCC 4.9.

Fixes #1305
Message-Id: <1464877984-27856-1-git-send-email-penberg@scylladb.com>
2016-06-02 16:57:24 +02:00
Duarte Nunes
c970d682d1 storage_service: Announce range tombstones feature
This patch enables the RANGE_TOMBSTONES supported feature, meaning
that the node is capable of accepting row entry tombstones as range
tombstones.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:59 +02:00
Duarte Nunes
70083efee2 sstables: Read and write range tombstone bounds
This patch uses the composite_marker to add inclusiveness information
to the prefixes of a range tombstone.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:59 +02:00
Duarte Nunes
7628e403a3 sstables: Drop code for tombstone merging
Since Scylla now supports proper range tombstones, the code for
reading ranges from sstables and converting them to overlapping
tombstones is no longer necessary, and is, in fact, wasteful as
the internal representation converts overlapping tombstones back to
ranges.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:59 +02:00
Duarte Nunes
79bff2742f random_mutation_generator: Generate range tombstones
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:59 +02:00
Duarte Nunes
95594b8171 mutations: Encapsulate row tombstones difference
This patch moves the difference between two mutation_partition's
row_tombstones inside the range_tombstone_list.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:59 +02:00
Duarte Nunes
91aac30f12 mutations: Row tombstones are now a set of ranges
This patch changes the type of the mutation partition's row_tombstones
to be a range_tombstone_list, so that they are now represented as a
set of disjoint ranges. All of its usages are updated accordingly.

Fixes #1155

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:59 +02:00
Duarte Nunes
e46537b7d3 storage_service: Include range tombstones feature
This patch adds the range tombstones feature, which is not enabled
yet, to the storage_service, so that consumers can query for it.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:58 +02:00
Duarte Nunes
17a544c4a6 gossip: Add feature default ctor and operator=
This allows a feature to be declared and initialized later.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:58 +02:00
Duarte Nunes
2c82dcd309 gossip: Decouple feature lifetime from the gossiper
This patch changes the gms::feature destructor so it
checks whether the gossiper has been stopped before trying
to unregister the feature.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:58 +02:00
Duarte Nunes
351aaf9738 range_tombstone: Introduce range_tombstone_to_prefix_tombstone_converter
This patch extracts the code from sstables/partition.cc which is used
to transform a set of range tombstones into a set of overlapping
scylladb tombstones.

The range_tombstone_merger will be used to send mutations to nodes not
yet updated to support the internal range tombstone representation.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:58 +02:00
Duarte Nunes
f7809bcaef range_tombstone_list: Add unit test
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:58 +02:00
Duarte Nunes
284bb6b66f range_tombstone_list: Make it ReversiblyMergeable
This patch implements the ReversiblyMergeable cancellative monoid
for the range_tombstone_list.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:58 +02:00
Duarte Nunes
86030885c8 mutations: Introduce range tombstone list
This class is responsible for representing a set of range tombstones
as non-overlapping disjoint sets of range tombstones.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:58 +02:00
Duarte Nunes
6a111fdd01 mutations: Introduce the range_tombstone class
This patch introduces the range_tombstone class, composed of
a [start, end] pair of clustering_key_prefixes, the type
of inclusiveness of each bound, and a tombstone.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:58 +02:00
Duarte Nunes
dc8319ed91 keys: Remove schema argument from make_empty
An empty key is independent of the schema.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:36 +02:00
Duarte Nunes
7f8c35dd8c idl: Add range tombstone IDL
This patch adds the range tombstone IDL, preserving backwards
compatibility.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:36 +02:00
Duarte Nunes
9bd7d08fc7 idl-compiler: Default expr can refer to previous fields
This patch changes the idl-compiler so that the default value of a
field can be set to the value of a previous field in the class:

class P {
    uint32_t x;
    uint32_t y = x;
};

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:36 +02:00
Duarte Nunes
e2812c1b7a idl: Rename range_tombstone::key to start
... and make it a clustering_key_prefix, in preparation of
supporting not-whole-row range tombstones.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-06-02 16:21:36 +02:00
Pekka Enberg
f64c25a495 cql3/statements/select_statement: Unify coding style
The coding style in select_statement.cc is very inconsistent which makes
the code hard to read. Clean that up.
Message-Id: <1464871790-21031-1-git-send-email-penberg@scylladb.com>
2016-06-02 16:17:21 +02:00
Avi Kivity
6da0449fc7 tests: adjust config_test for db::string_map changes 2016-06-02 14:48:02 +03:00
Gleb Natapov
9132604a90 config: make string_map to be a unique type instead of an alias to unordered_map
Config provides operators << >> for string_map which makes it impossible
to have generic stream operators for unordered_map. Fix it by making
string_map a separate type and not just an alias.

Message-Id: <20160602102642.GJ9939@scylladb.com>
2016-06-02 13:28:40 +03:00
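The ADL point can be sketched with a stdlib-only stand-in. This is an illustration of why a distinct type helps, not the actual db::string_map definition:

```cpp
#include <cassert>
#include <ostream>
#include <sstream>
#include <string>
#include <unordered_map>

// Sketch: a distinct type derived from unordered_map lives in its own
// namespace, so argument-dependent lookup finds this operator<< without
// colliding with a generic operator<< someone defines for plain
// std::unordered_map elsewhere.
namespace db {
struct string_map : std::unordered_map<std::string, std::string> {
    using unordered_map::unordered_map;  // inherit constructors
};

inline std::ostream& operator<<(std::ostream& os, const string_map& m) {
    os << '{';
    for (const auto& [k, v] : m) {
        os << k << '=' << v << ';';
    }
    return os << '}';
}
}  // namespace db
```

With an alias (`using string_map = std::unordered_map<...>;`) the overload would have to be declared for unordered_map itself, blocking any other generic stream operator for that template; the strong type avoids the conflict.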
Asias He
96463cc17c streaming: Fix indentation in do_send_mutations
Message-Id: <bc8cfa7c7b29f08e70c0af6d2fb835124d0831ac.1464857352.git.asias@scylladb.com>
2016-06-02 11:56:03 +03:00
Asias He
206955e47c streaming: Reduce memory usage when sending mutations
Limit disk bandwidth to 5MB/s to emulate a slow disk:
echo "8:0 5000000" > /cgroup/blkio/limit/blkio.throttle.write_bps_device
echo "8:0 5000000" > /cgroup/blkio/limit/blkio.throttle.read_bps_device

Start scylla node 1 with low memory:
scylla -c 1 -m 128M --auto-bootstrap false

Run c-s:
taskset -c 7 cassandra-stress write duration=5m cl=ONE -schema
'replication(factor=1)' -pop seq=1..100000  -rate threads=20
limit=2000/s -node 127.0.0.1

Start scylla node 2 with low memory:
scylla -c 1 -m 128M --auto-bootstrap true

Without this patch, I saw std::bad_alloc during streaming

ERROR 2016-06-01 14:31:00,196 [shard 0] storage_proxy - exception during
mutation write to 127.0.0.1: std::bad_alloc (std::bad_alloc)
...
ERROR 2016-06-01 14:31:10,172 [shard 0] database - failed to move
memtable to cache: std::bad_alloc (std::bad_alloc)
...

To fix:

1. Apply the streaming mutation limiter before we read the mutation into
memory, to avoid wasting memory holding a mutation which we cannot
send.

2. Reduce the parallelism of sending streaming mutations. Before, we sent
each range in parallel; after, we send the ranges one by one.

   before: nr_vnode * nr_shard * (send_info + cf.make_reader memory usage)

   after: nr_shard * (send_info + cf.make_reader memory usage)

We can at least save memory usage by a factor of nr_vnode (256 by
default).

In my setup, fix 1) alone is not enough; with both fix 1) and 2), I saw
no std::bad_alloc. Also, I did not see streaming bandwidth drop due
to 2).

In addition, I tested grow_cluster_test.py:GrowClusterTest.test_grow_3_to_4,
as described:

https://github.com/scylladb/scylla/issues/1270#issuecomment-222585375

With this patch, I saw no std::bad_alloc any more.

Fixes: #1270

Message-Id: <7703cf7a9db40e53a87f0f7b5acbb03fff2daf43.1464785542.git.asias@scylladb.com>
2016-06-02 11:01:58 +03:00
Gleb Natapov
1476becd28 config: put operators << and >> into db namespace
Makes ADL find the right version of the overload.

Message-Id: <20160601130952.GJ2381@scylladb.com>
2016-06-02 10:45:01 +03:00
Pekka Enberg
b6b2c84316 Merge "CQL tracing" from Vlad
"This series introduces a tracing infrastructure that may be used
for tracing CQL commands execution and measuring latencies of separate
stages of CQL handling as defined by a CQL binary protocol specification.

To begin tracing one should create a "tracing session", which may then
be used to issue tracing events.

If execution of a specific CQL command involves other Nodes (not only a Coordinator),
then a "tracing session ID" is passed to that Node (in the context of the
corresponding RPC call). Then this "session ID" may be used to create a
"secondary tracing session" to issue tracing events in the context of the original session.

The series contains an implementation of tracing that uses a keyspace in the current
cluster for storing tracing information.

This series contains a demo per-request tracing instrumentation of a QUERY
CQL command and even this instrumentation is partial: it only fully instruments
a QUERY->SELECT->read_data call chain.

This is by all means only the very beginning of the proper instrumentation,
which is to come.

Right now the latencies for a single SELECT of a single row with RF 1, from a 2-node cluster
on my laptop started using ccm (for C* all default parameters, for scylla - memory 256MB, --smp 2),
are as follows (pseudo-graphics warning):
--------------------------------------------------------------------------------------------
                                       | scylla (2 Nodes x 2 shards each)  |     C* 2.1.8
_______________________________________|___________________________________|________________
Coordinator and replica are same Node  |                                   |
(TRACING OFF):                         |                0.3ms              |     0.3ms
c-s with a single thread mean latency  |      (was 0.2ms before the last   |
value                                  |       rebase with a master)       |
--------------------------------------------------------------------------------------------
Coordinator and replica are same Node  |                                   |
(TRACING ON)                           |                ~250us             |     ~1200us
Running a SELECT command from a cqlsh  |                                   |
a few times                            |                                   |
--------------------------------------------------------------------------------------------
Coordinator and replica are not on the |                                   |
same Node                              |                ~700us             |     >2500us
(TRACING ON)                           |                                   |
--------------------------------------------------------------------------------------------

To begin tracing one may use a cqlsh "TRACING ON/OFF" commands:

cqlsh> TRACING ON
Now Tracing is enabled
cqlsh> select "C0", "C1" from keyspace1.standard1  where key=0x12345679;

 C0                 | C1
--------------------+------
 0x000000000001e240 | null

(1 rows)

Tracing session: 146f0180-21e7-11e6-b244-000000000000

 activity                                                          | timestamp                  | source    | source_elapsed
-------------------------------------------------------------------+----------------------------+-----------+----------------
 select "C0", "C1" from keyspace1.standard1  where key=0x12345679; | 2016-05-24 22:38:24.536000 | 127.0.0.1 |              0
                              message received from /127.0.0.1 [0] | 2016-05-24 22:38:24.537000 | 127.0.0.2 |             --
                                          Done reading options [0] | 2016-05-24 22:38:24.537000 | 127.0.0.1 |              3
                                    read_data handling is done [0] | 2016-05-24 22:38:24.537000 | 127.0.0.2 |             37
                                           Parsing a statement [0] | 2016-05-24 22:38:24.537000 | 127.0.0.1 |              3
                                        Processing a statement [0] | 2016-05-24 22:38:24.537000 | 127.0.0.1 |             56
                          Done processing - preparing a result [0] | 2016-05-24 22:38:24.537000 | 127.0.0.1 |            550
                                                  Request complete | 2016-05-24 22:38:24.536560 | 127.0.0.1 |            560

cqlsh>"
2016-06-02 08:35:33 +03:00
Avi Kivity
c7953897d1 build: remove obsolete log.cc dependency 2016-06-01 22:35:07 +03:00
Vlad Zolotarov
69bd8efc40 storage_proxy: instrument a read_data handler to accept a tracing info
This is a demo instrumentation:
   - Check if a tracing info is present in the read_command.
   - If yes - create a tracing session with the given tracing
     session ID.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:17:25 +03:00
Vlad Zolotarov
4c17a422e0 cql3: instrument a SELECT query to send tracing info
Instrument a coordinator of a SELECT query to send tracing session
info to the corresponding replica Nodes.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:17:25 +03:00
Vlad Zolotarov
6e26909b02 query::read_command: add an optional trace_info field
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:17:19 +03:00
Vlad Zolotarov
a53d329b25 tracing: add a serializable trace_info object
tracing::trace_info is used to pass the tracing information between nodes.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:16:53 +03:00
Vlad Zolotarov
099ff0d2d5 transport: instrument a QUERY with tracing
- Store a trace state inside a client_state.
   - Start tracing in a cql_server::connection::process_query().

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:14:29 +03:00
Vlad Zolotarov
f994e0a8d0 transport/server: add support for sending a tracing session ID in a CQL response
- Add a tracing ID (UUID) optional field to cql_server::response.
   - If _tracing_id is set make_frame() would insert a tracing ID
     in the response message. According to CQL spec it should be the
     first thing in the response "body" and the TRACING bit (0x02) should be
     set in the "flags" field.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:13:53 +03:00
Vlad Zolotarov
9e61a3498d cql_server::response: rework make_frame()
Use a template function to avoid code duplication.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:13:53 +03:00
Vlad Zolotarov
8bf34fca02 service::client_state: store a client address
When client_state is created with an external_tag - store
a client address in the client state.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:13:53 +03:00
Vlad Zolotarov
c58c56bccc gms::inet_address: add a constructor from socket_address
Currently only IPv4 addresses are supported.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:13:53 +03:00
Vlad Zolotarov
63c724c41d service::client_state: make private fields actually private
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:13:53 +03:00
Vlad Zolotarov
4b43b08ffc main: start a tracing service
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:13:53 +03:00
Vlad Zolotarov
c965528a03 tracing: add a trace_state and tracing classes
trace_state: Is a single tracing session.
tracing:     A sharded service that contains an i_trace_backend_helper instance
             and is a "factory" of trace_state objects.

trace_state main interface functions are:
   - begin(): Start time counting (should be used via tracing::begin() wrapper).
   - trace(): Create a tracing event - it's coupled with a time passed since begin()
              (should be used via tracing::trace() wrapper).
   - ~trace_state(): Destructor will close the tracing session.

"tracing" service main interface function is:
   - start(): Initialize a backend.
   - stop():  Shut down a backend.
   - create_session(): Creates a new tracing session.

(tracing::end_session(): Is called by a trace_state destructor).

When trace_state needs to store a tracing event it uses a backend helper from
a "tracing" service.

The "tracing" service limits the number of open tracing sessions to a static
number. If this number is reached, subsequent sessions will be dropped.

trace_state implements a similar strategy in regard to tracing events per single
session.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:13:42 +03:00
Vlad Zolotarov
fa14ad3a99 service/client_state: don't allow modification of a system_trace KS
Only users with enough permissions are allowed to modify system_trace KS.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:12:19 +03:00
Vlad Zolotarov
d3988a8113 tracing::trace_keyspace_helper: a keyspace based i_tracing_backend_helper implementation
Uses a CQL keyspace system_traces to store tracing information.

Uses two tables:

CREATE TABLE system_traces.sessions (
session_id uuid,
command text,
client inet,
coordinator inet,
duration int,
parameters map<text, text>,
request text,
started_at timestamp,
PRIMARY KEY ((session_id)))

and

CREATE TABLE system_traces.events (
session_id uuid,
event_id timeuuid,
activity text,
source inet,
source_elapsed int,
thread text,
PRIMARY KEY ((session_id), event_id))

system_traces.sessions table contains records of tracing sessions.
system_traces.sessions columns description:
   - session_id:  an ID of the session.
   - command:     type of a command this session was created for
                  (currently supported "NONE", "QUERY" and "REPAIR").
   - client:      IP of the client that issued the command.
   - coordinator: IP of a coordinator that received the command.
   - duration:    total duration of the tracing session (in us).
   - parameters:  optional parameters for this session, passed to
                  i_trace_state::begin() call.
   - request:     a CQL command this tracing session is created for.
   - started_at:  the time the session has been started at.

system_traces.events contains records of separate tracing events.
system_traces.events columns description:
   - session_id:     an ID of the session.
   - event_id:       an ID of the event.
   - activity:       the trace point description - a message given to
                     i_trace_state::trace().
   - source:         IP of the Node where trace event was issued.
   - source_elapsed: time passed since creation of a tracing session (in us) on
                     the Node where this trace event was issued.
   - thread:         name of the thread in whose context this trace event was
                     issued (currently it's "core N", where 'N' is the index of
                     the shard the trace event was issued on).

This class will cache lambdas creating the corresponding mutations for each
tracing record requested to be stored until the flush() method is called.

flush() will merge all pending mutations for the "sessions" and "events" tables,
then apply the mutation to the "events" table and, when it completes, to the
"sessions" table. This way it ensures that when some tracing session is visible,
all its events are visible too.

trace_keyspace_helper exposes a few metrics via collectd:
   - tracing_error - a total number of errors (not including OOM)
   - bad_column_family_errors - number of times a tracing record wasn't
                                stored because system_trace tables' schema
                                didn't match the expected value. This may happen if
                                a DB administrator is doing funny things like altering
                                the schemas of the above tables.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:12:19 +03:00
Vlad Zolotarov
a2994ffd7f tracing: add i_tracing_backend_helper interface
This class represents an interface for a specific backend that is
going to store tracing information.

The specific implementation may, and is expected to, implement caching
of pending tracing records.

Interface functions are:
   - start(): Initialize a backend (e.g. create keyspace and tables).
   - stop():  Flush all pending work and shut down the backend.
   - store_session_record()/store_event_record():
              Cache/store the corresponding tracing records.
   - flush(): Flush pending tracing records.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2016-06-01 20:12:13 +03:00
Gleb Natapov
91c773fdde storage_proxy: fix writes_attempts counter
writes_attempts is supposed to count how many times data was sent out, but
currently it counts even those replicas in other DCs that get the data
through a coordinator. Fix it by counting only when data is actually sent.

Message-Id: <20160601153124.GB9939@scylladb.com>
2016-06-01 18:46:23 +03:00
Avi Kivity
8dcbddc7ed Merge "Serialize memtable flushes" from Glauber
"One of the things we need to do as part of the throttle rework I am doing is to
serialize memtable flushes to some extent - that will guarantee that in case
we're throttling, the flushes finish earlier and release memory earlier, if
compared to the case in which we just let all tables flush freely and
simultaneously."
2016-06-01 18:31:18 +03:00
Avi Kivity
0c7b2e2d5c Merge 2016-06-01 18:29:23 +03:00
Avi Kivity
d2e4548b35 Merge seastar upstream
* seastar 0bcdd28...864d6dc (4):
  > Logging framework
  > Add libubsan and libasan to fedora deps docs
  > tests: add rpc cancellable tests
  > rpc: add cancellable interface

Dropped logging implementation in favor of seastar's due to a link
conflict with operator<<.
2016-06-01 18:28:42 +03:00
Tomasz Grabiec
56736389c1 Merge branch 'sstable-errors/v2' from https://github.com/penberg/scylla.git
This series adds a constructor to malformed_sstable_exception that
includes a filename and converts some call-sites to use it.

There are still plenty of low-level sites that don't even know the
sstable filename they are operating on. We need to either change the
code to carry the filename to lower layers or find a higher-level
call-site where we can catch malformed_sstable_exception and rethrow it
with the sstable filename. But that's for another series by someone who
knows the sstable code well.

Refs #669.
2016-06-01 16:59:56 +02:00
Gleb Natapov
26b50eb8f4 storage_proxy: drop debug output
Message-Id: <20160601132641.GK2381@scylladb.com>
2016-06-01 17:13:12 +03:00
Pekka Enberg
94c35cc135 sstables/sstables: Add sstable filename to thrown malformed_sstable_exceptions 2016-06-01 17:11:05 +03:00
Pekka Enberg
3ca7fc2a8b database: Add sstable filename to thrown malformed_sstable_exceptions 2016-06-01 14:56:10 +03:00
Pekka Enberg
fa5354dda4 sstables: Add optional filename to malformed_sstable_exception
Add a constructor to malformed_sstable_exception that accepts an error
message and an sstable name.
2016-06-01 14:48:08 +03:00
Pekka Enberg
de0634c289 Merge "Extract modification_statement's (and related) parsed statement
into raw" from Avi

"Move parsed statements into raw namespace.  Mindless but therapeutic."
2016-06-01 14:19:53 +03:00
Avi Kivity
92d815a6cf Make github issue template less shouty 2016-06-01 10:45:04 +03:00
Pekka Enberg
0255318bf3 Revert "Revert "main: change order between storage service and drain execution during exit""
This reverts commit b3ed55be1d.

The issue is in the failing dtest, not this commit. Gleb writes:

  "The bug is in the test, not the patch. Test waits for repair session
   to end one way or the other when node is killed, but for nodetool to
   know if repair is completed it needs to poll for it.  If node dies
   before nodetool managed to see repair completion it will stuck
   forever since jmx is alive, but does not provide answers any more.
   The patch changes timing, repair is completed much close to exit now,
   so problem appears, but it may happen even without the patch.

   The fix is for dtest to kill jmx as part of killing a node
   operation."

Now that Lucas fixed the problem in scylla-ccm, revert the revert.
2016-06-01 08:48:50 +03:00
Glauber Costa
0f64eb7e7d serialize memtable flush for a memtable_list
We can only free memory for a region_group when the entire memtable is released.
This means that while the disk can handle requests from multiple memtables just fine,
we won't free any memory until all of them finish. If we are under a pressure situation
we will take a lot more time to leave it.

Ideally, with write-behind, we would allow just one memtable to be flushed at a
time. But since we don't have it enabled, it's better to serialize the flushes
so that only some memtables (4) are flushed at a time. Having the memtable writer
bandwidth all to itself, the memtable will finish sooner, release memory sooner,
and recover the system's health sooner.

We would like to do that without having streaming and memtables starve each
other. Ideally, that should mean half the bandwidth for each - but that
sacrifices memtable writes in the common case there is no streaming. Again,
write behind will help here, and since this is something we intend to do, there
is no need to complicate the code too much for an interim solution.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-05-31 17:18:35 -04:00
Glauber Costa
46c79be401 database: allow callers to specify memtable list's flush behavior
This patch introduces an explicit behavior enum class - one of delayed or
immediate, that allow callers to tell the memtable list whether they want a
delayed flush (default), or force an immediate flush. So far this only affects
the streaming code (memtables just ignore it), but the concept is one that can
be easily generalized.

With that in place, we can revert back the stop function to use the standard
flush. I have argued before that adding infrastructure like that would not be
worth it for the sake of stop alone, but some other code could now use it.

Specifically, the active reclaimer for the throttler would like to force
immediate flushes, as delayed flushes really won't make a lot of difference in
reducing memory usage.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2016-05-31 17:17:48 -04:00
Avi Kivity
c8b5104aa5 cql3: extract raw batch_statement into raw sub-namespace
prepare() was moved to .cc to avoid circular dependencies.
2016-05-31 21:41:26 +03:00
Avi Kivity
1d144699f6 cql3: extract raw delete_statement into raw sub-namespace 2016-05-31 21:24:56 +03:00
Avi Kivity
e596799962 cql3: extract raw update_statement into raw sub-namespace
update_statement also has an insert_statement counterpart; convert it too.
2016-05-31 21:16:53 +03:00
Avi Kivity
10213c4211 cql3: extract raw modification_statement into raw sub-namespace 2016-05-31 20:53:37 +03:00
Asias He
f27e5d2a68 messaging_service: Delay listening ms during boot up
When a node starts up, peer node can send gossip syn message to it
before the gossip message handlers are registered in messaging_service.

We can see:

  scylla[123]:  [shard 0] rpc - client a.b.c.d: unknown verb exception 6 ignored

To fix, we delay the listening of messaging_service to the point when
gossip message handlers are registered.
Message-Id: <9b20d85e199ef0e44cdcde2920123a301a88f3d7.1464254400.git.asias@scylladb.com>
2016-05-31 12:28:11 +03:00
Avi Kivity
f3fc3afe00 cql3: optimize make_empty_metadata()
All empty metadata objects are equal, so make just one and keep returning
it.
Message-Id: <1464334638-7971-4-git-send-email-avi@scylladb.com>
2016-05-31 09:12:20 +03:00
Avi Kivity
0135b4d5cd cql3: constify metadata users
Metadata usually doesn't change after it is created; make that visible in
the code, allowing further optimizations to be applied later.
Message-Id: <1464334638-7971-3-git-send-email-avi@scylladb.com>
2016-05-31 09:12:11 +03:00
Avi Kivity
6728454591 cql3: rationalize extract_result_metadata()
Rather than dynamic_cast<>ing the statement to see whether it is a
select statement, add a virtual function to cql_statement to get the
result metadata.

This is faster and easier to follow.
Message-Id: <1464334638-7971-2-git-send-email-avi@scylladb.com>
2016-05-31 09:12:02 +03:00
Avi Kivity
25b3d74f45 cql3: Split select_statement::raw_statement into raw namespace
cql3::select_statement::raw_statement
    -> cql3::raw::select_statement
Message-Id: <1464609556-3756-4-git-send-email-avi@scylladb.com>
2016-05-31 09:09:30 +03:00
Avi Kivity
c8f98c5981 cql3: move cf_statement into raw hierarchy
cql3::statements::cf_statement
    -> cql3::statements::raw::cf_statement
Message-Id: <1464609556-3756-3-git-send-email-avi@scylladb.com>
2016-05-31 09:09:21 +03:00
Avi Kivity
caf8d4f0e6 cql3: separate parsed_statement and parsed_statement::prepared
cql3::statements::parsed_statement
    -> cql3::statements::raw::parsed_statement
  cql3::statements::parsed_statement::prepared
    -> cql3::statements::prepared_statement
Message-Id: <1464609556-3756-2-git-send-email-avi@scylladb.com>
2016-05-31 09:09:10 +03:00
Duarte Nunes
a15ed3c60f mutation_test: Specify tmp data dir
Otherwise we attempt to create sstable files under /.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1464618602-1124-1-git-send-email-duarte@scylladb.com>
2016-05-30 20:34:47 +02:00
Pekka Enberg
b3ed55be1d Revert "main: change order between storage service and drain execution during exit"
This reverts commit 0ebd8b18b7.

The change breaks repair_additional_test.py:RepairAdditionalTest.repair_kill_1_test
2016-05-30 12:48:09 +03:00
Avi Kivity
e515933c70 dist: tune scheduler for lower latency
Scylla-jmx and collectd can preempt scylla and induce long latencies.  Tune
the scheduler to provide lower latencies.

Since when the support processes are not running we normally do not context
switch (one thread per core, remember?), there should be no effect on
throughput.

The tunings are provided in a separate package, which can be uninstalled
if the server is shared with other applications which are negatively
affected by the tuning.

Fixes #1218.
Message-Id: <1464529625-12825-1-git-send-email-avi@scylladb.com>
2016-05-30 08:42:19 +03:00
Avi Kivity
e8e00338d1 config: document defragment_memory_on_idle
Message-Id: <1464261650-14136-2-git-send-email-avi@scylladb.com>
2016-05-30 08:39:26 +03:00
Avi Kivity
b50cb3eca8 config: rename compact_on_idle
compact_on_idle will lead users to think we're talking about sstable
compaction, not log-structured-allocator compaction.

Rename the variable to reduce the probability of confusion.
Message-Id: <1464261650-14136-1-git-send-email-avi@scylladb.com>
2016-05-30 08:39:13 +03:00
Yoav Kleinberger
e580ac5dae docker: fix Ubuntu Dockerfile
one needs to update the repository info before one can install packages.
Fixes issue #1296.

Signed-off-by: Yoav Kleinberger <yoav@scylladb.com>
Message-Id: <a906e76d584baff5988cb31a4003de27455e0741.1464529740.git.yoav@scylladb.com>
2016-05-29 17:00:25 +03:00
Avi Kivity
3f6ecb9f28 Merge "cancel cross DC read repair if non matching data was recently modified" from Gleb 2016-05-29 15:58:55 +03:00
Gleb Natapov
2efbccc901 storage_proxy: do only local read repair if non matching data was recently modified
When a read and a write to a partition happen in parallel, the reader may
detect a digest mismatch that may potentially cause a cross DC read repair
attempt, but the repair is not really needed, so the added latency is not
justified.

This patch tries to prevent such parallel access from causing a heavy
cross DC repair operation by checking the timestamp of the most recent
modification. If the modification happened less than "write timeout"
seconds ago, the patch assumes that the read operation raced with the
write and cancels the cross DC repair, but only if CL is LOCAL_*.
2016-05-29 15:26:51 +03:00
Amnon Heiman
d4123ba613 API: column_family count sstable space used correctly
The space calculation counters in column family had two problem:
1. The total bytes is an ever growing counter, which is meaningless for
the API.

2. Trying to simply sum the size on all shards, ignores the fact that the
same sstable file can be referenced by multiple shards, this is
especially noticeable during migration time.

To solve this, the implementation was modified so instead of
collecting the sizes, the API would collect a map of file name to size
and then would do the summing.

This removes the duplication and fixes the total bytes calculation.

Calling cfstats before the change, with load, after a compaction happened:

$ nodetool cfstats keyspace1
Keyspace: keyspace1
Verify write latency 1068253.0 76435
	Read Count: 75915
	Read Latency: 0.5953986037015082 ms.
	Write Count: 76435
	Write Latency: 0.013975966507490025 ms.
	Pending Flushes: 0
		Table: standard1
		SSTable count: 5
		Space used (live): 44261215
		Space used (total): 219724478

After the fix:

$ nodetool cfstats keyspace1
Keyspace: keyspace1
Verify write latency 1863206.0 124219
	Read Count: 125401
	Read Latency: 0.9381053978835895 ms.
	Write Count: 124219
	Write Latency: 0.01499936402643718 ms.
	Pending Flushes: 0
		Table: standard1
		SSTable count: 6
		Space used (live): 50402904
		Space used (total): 50402904
		Space used by snapshots (total): 0

Fixes: #1042

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1464518757-14666-2-git-send-email-amnon@scylladb.com>
2016-05-29 14:11:03 +03:00
Gleb Natapov
32c9a06faf messaging_service: abort retrying send during exit
Fixes #862

Message-Id: <1463579574-15789-3-git-send-email-gleb@scylladb.com>
2016-05-29 11:39:36 +03:00
Gleb Natapov
0ebd8b18b7 main: change order between storage service and drain execution during exit
Even the comment says drain_on_shutdown should be called first, but for
that it has to be registered last.

Fixes #862

Message-Id: <1463579574-15789-2-git-send-email-gleb@scylladb.com>
2016-05-29 11:39:24 +03:00
Glauber Costa
30d54cef38 database: add a comment explaining the choice of function in CF stop
We have recently committed a fix to a streaming bug that involved
reverting column_family::stop() back to calling the custom seal functions
explicitly for both memtables and streaming memtables.

We here add a comment to explain why that had to be done.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <fe94b5883e9c29adc7fc9ee9f498894c057e7b64.1464293167.git.glauber@scylladb.com>
2016-05-29 11:28:15 +03:00
Avi Kivity
8e124b31aa Merge "gossip: Refactor waiting for supported features" from Duarte
"This patch changes the way we wait for supported features. We no longer
sleep periodically, waking up to check if the wanted features are now
avaiable. Instead, we register waiters in a condition variable that is
signaled whenever new endpoint information is received.

We also add a new poll interface based on the feature class, which
encapsulates the availability of a cluster feature."
2016-05-27 20:24:25 +03:00
Duarte Nunes
f613dabf53 gossip: Introduce the gms::feature class
This class encapsulates the waiting for a cluster feature. A feature
object is registered with the gossiper, which is responsible for later
marking it as enabled.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-05-27 17:20:51 +00:00
Duarte Nunes
4684b8ecbb gossip: Refactor waiting for features
This patch changes the sleep-based mechanism of detecting new features
by instead registering waiters with a condition variable that is
signaled whenever new endpoint information is received.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-05-27 17:20:51 +00:00
Duarte Nunes
422f244172 gossip: Don't timeout when waiting for features
This patch removes the timeout when waiting for features,
since future patches will make this argument unnecessary.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2016-05-27 17:20:51 +00:00
Avi Kivity
fab4cc8d6d Merge seastar upstream
* seastar 8bfbb1a...0bcdd28 (1):
  > Merge "introduce sleep_abortable() that throws exception on application exit" from Gleb
2016-05-27 20:14:49 +03:00
Duarte Nunes
b3011c9039 gossip: Rename set_heart_beat_state
...to set_heart_beat_state_and_update_timestamp in order to make it
explicit to callers that the update_timestamp is also changed.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1464309023-3254-3-git-send-email-duarte@scylladb.com>
2016-05-27 09:11:39 +03:00
Duarte Nunes
8c0e2e05b7 gossip: Fix modification to shadow endpoint state
This patch fixes an inadvertent change to the shadow endpoint state
map in gossiper::run, done by calling get_heart_beat_state() which
also updates the endpoint state's timestamp. This did not happen for
the normal map, but did happen for the shadow map. As a result, every
time gossiper::run() was scheduled, endpoint_map_changed would always
be true and all the shards would make superfluous copies of the
endpoint state maps.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1464309023-3254-2-git-send-email-duarte@scylladb.com>
2016-05-27 09:10:38 +03:00
Pekka Enberg
b7e79b72d5 Merge "Introduce SET_NIC for non-AMI environment" from Takuya
"This patchset provides a way to enable SET_NIC(posix_net_conf.sh) on
 non-AMI environment.
 Also support -mq option of the script.
 This also contains number of bug fixes of scripts.

 Fixes #1192"
2016-05-26 13:37:06 +03:00
Yoav Kleinberger
26c0d86401 tools/scyllatop: improved user interface: scrollable views
NOTE: scyllatop now requires the urwid library

Previously, if there were more metrics than lines in the terminal
window, the user could not see some of the metrics.  Now the user can
scroll.

As an added bonus, the program will not crash when the window size
changes.

Signed-off-by: Yoav Kleinberger <yoav@scylladb.com>
Message-Id: <1464098832-5755-1-git-send-email-yoav@scylladb.com>
2016-05-26 13:36:28 +03:00
Piotr Jastrzebski
136b8148d2 Use idle CPU to compact LSA memory
Register an idle CPU handler that compacts a single segment
every time there's nothing better to execute on CPU.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <c26aa608a1e0752fb9e6db1833ef3ba1de95f161.1464169748.git.piotr@scylladb.com>
2016-05-26 12:43:53 +03:00
Avi Kivity
d7f36a093f Merge seastar upstream
* seastar e5faea8...8bfbb1a (1):
  > reactor: advertise the logging_failures metric as a DERIVE counter

Fixes #1292.
2016-05-26 11:46:08 +03:00
Tomasz Grabiec
f0c2b1d161 config: Fix typos
Message-Id: <1464201938-4778-1-git-send-email-tgrabiec@scylladb.com>
2016-05-26 08:19:57 +03:00
Asias He
f1b3cb4a08 storage_service: Catch and fail an invalid configuration with --replace-address
Vlad reported a strange user configuration:

   SCYLLA_ARGS="--log-to-syslog 1 --log-to-stdout 0 --default-log-level
   info --collectd-address=127.0.0.1:25826 --collectd=1
   --collectd-poll-period 60000 --network-stack posix --num-io-queues 32
   --max-io-requests 128 --replace-address 10.0.4.131"

   seed_provider:
       - class_name: org.apache.cassandra.locator.SimpleSeedProvider
         parameters:
             - seeds: "10.0.4.131"

   In the mean while, 10.0.4.131 is the IP address of the node itself.

When the node was started, the following message were reported.

   Apr 13 06:31:12 n0 scylla[19681]: [shard 0] gossip - Connect seeds again
   ... (20 seconds passed)
   Apr 13 06:31:13 n0 scylla[19681]: [shard 0] gossip - Connect seeds again
   ... (21 seconds passed)
   Apr 13 06:31:14 n0 scylla[19681]: [shard 0] gossip - Connect seeds again
   ... (22 seconds passed)
   Apr 13 06:31:15 n0 scylla[19681]: [shard 0] gossip - Connect seeds again
   ... (23 seconds passed)

The configuration is invalid, because for --replace-address to
work, at least one working seed node should be alive. Catch the
configuration error and fail it with an appropriate error message.

Fixes #1183
Message-Id: <a94a082d896313e7a668915ae21fe2c03719da3a.1464164058.git.asias@scylladb.com>
2016-05-25 14:42:19 +03:00
Asias He
fed1e65e1e gossip: Do not insert the same node into _live_endpoints_just_added
_live_endpoints_just_added tracks the peer node which just becomes live.
When a down node gets back, the peer nodes can receive multiple messages
which would mark the node up, e.g., the message piled up in the sender's
tcp stack, after a node was blocked with gdb and released. Each such
message will trigger a echo message and when the reply of the echo
message is received (real_mark_alive), the same node will be added to
_live_endpoints_just_added more than once. Thus, we see the
same node be favored more than once:

INFO  2016-04-12 12:09:57,399 [shard 0] gossip -
do_gossip_to_live_member: Favor newly added node 127.0.0.2
INFO  2016-04-12 12:09:58,412 [shard 0] gossip -
do_gossip_to_live_member: Favor newly added node 127.0.0.2
INFO  2016-04-12 12:09:59,429 [shard 0] gossip -
do_gossip_to_live_member: Favor newly added node 127.0.0.2
INFO  2016-04-12 12:10:00,429 [shard 0] gossip -
do_gossip_to_live_member: Favor newly added node 127.0.0.2
INFO  2016-04-12 12:10:01,430 [shard 0] gossip -
do_gossip_to_live_member: Favor newly added node 127.0.0.2
INFO  2016-04-12 12:10:02,442 [shard 0] gossip -
do_gossip_to_live_member: Favor newly added node 127.0.0.2
INFO  2016-04-12 12:10:03,454 [shard 0] gossip -
do_gossip_to_live_member: Favor newly added node 127.0.0.2

To fix, do not insert the node if it is already in
_live_endpoints_just_added.

Fixes #1178
Message-Id: <6bcfad4430fbc63b4a8c40ec86a2744bdfafb40f.1464161975.git.asias@scylladb.com>
2016-05-25 14:19:40 +03:00
Glauber Costa
46f60f52d9 database: do not use implicitly stated seal function when closing the CF
In commit 4981362f57, I have introduced a regression that was thankfully
caught by our dtest infrastructure.

That patch is a preparation patch for the active reclaim patchset that is to
come, and it consolidated all the flushes using the memtable_list's seal_fn
function instead of calling the seal function explicitly.

The problem here is that the streaming memtables have the delayed mechanism,
about which the memtable_list is unaware. Calling memtable_list's
seal_active_memtable() for the streaming memtables calls the delayed version,
that does not guarantee flush. If we're lucky, we will indeed flush after the
timer expires, but if we're not we'll just stop the CF with data not flushed.

There are two options to fix this: the first is to teach the memtable_list about
the delayed/forced mechanism, and the second is to just call the correct
function explicitly during shutdown, and then when the time comes to add
continuations to the result of the seal, add them here as well.

Although the second option involves a bit more work and duplication, I think it
is better in the sense that the delayed / forced mechanism really is something
that belongs to streaming only. Since streaming is its only user, I don't think it
justifies complicating the memtable_list with this concept.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <b26017c825ccf585f39f58c4ab3787d78e551f5f.1464126884.git.glauber@scylladb.com>
2016-05-25 08:21:24 +03:00
Avi Kivity
2d4d6c9c92 Merge seastar upstream
* seastar aed893e...e5faea8 (5):
  > Catch exceptions thrown by idle cpu handler
  > core::gate: add a get_count() method
  > reactor: Introduce idle CPU handler
  > core: add missing header for g++-4.9
  > Add lksctp-tools-devel do required packages
2016-05-24 20:42:41 +03:00
Pekka Enberg
ceb29f9d32 Merge "Introduce upload dir for sstable migration" from Raphael
"This change is intended to make migration process safer and easier.
 All column families will now have a directory called upload.
 With this feature, users may choose to copy migrated sstables to upload
 directory of respective column families, and run 'nodetool refresh'.
 That's supposed to be the preferred option from now on."
2016-05-24 16:36:47 +03:00
Gleb Natapov
7f6b12c97a query: add user provided timestamp to read_command
If a read query supplies a timestamp, move it to read_command to be
used later; otherwise get the local timestamp.
2016-05-24 15:19:35 +03:00
Pekka Enberg
d7d8c76fe5 transport/server: Add CQL frame LZ4 compression support
The default CQL frame compression algorithm in Cassandra is LZ4. Add
support for decompressing incoming frames and compressing outgoing
frames with LZ4 if the CQL driver asks for that.

Fixes #416

Message-Id: <1464086807-11325-1-git-send-email-penberg@scylladb.com>
2016-05-24 15:03:33 +03:00
Takuya ASADA
53cebb4a5e dist/ubuntu: don't rebuild dependency packages by default
As on CentOS, do not build dependencies by default; install binary packages from our repository.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1464023451-21436-1-git-send-email-syuu@scylladb.com>
2016-05-24 14:10:59 +03:00
Gleb Natapov
12cf60c302 messaging_service: add timestamp of last modification to READ_DIGEST verb return value 2016-05-24 13:27:34 +03:00
Gleb Natapov
1e6f64f4ab query: add latest modification timestamp to result structure 2016-05-24 13:27:34 +03:00
Gleb Natapov
5fef0717cc query: find latest modification timestamp while calculating result digest 2016-05-24 13:27:34 +03:00
Avi Kivity
9637c2232c Merge "Move the JMX timer polling logic to Scylla" from Amnon 2016-05-24 13:07:52 +03:00
Raphael S. Carvalho
c2fa3b796d db: fix read consistency after refresh
If an sstable loaded by refresh covers a row that is cached by the
column family, a read query may fail to return consistent data.
What we should do is clear the cache for the column family being
loaded with new sstables.

Fixes #1212.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <a08c9885a5ceb0b2991e40337acf5b7679580a66.1464072720.git.raphaelsc@scylladb.com>
2016-05-24 12:11:41 +03:00
Takuya ASADA
5d5d525a14 dist/ubuntu: fix incorrect dependency package name
PyYAML is the CentOS/RHEL/Fedora package name; python-yaml is the correct one.

Fixes #1279

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1463987823-22837-1-git-send-email-syuu@scylladb.com>
2016-05-23 10:21:29 +03:00
Pekka Enberg
8a7197e390 dist/docker: Fetch RPM repository from Scylla web site
Fix the hard-coded Scylla RPM repository by downloading it from the
Scylla web site. This makes it easier to switch between different versions.

Message-Id: <1463981271-25231-1-git-send-email-penberg@scylladb.com>
2016-05-23 09:45:41 +03:00
Piotr Jastrzebski
2be4ec4e06 Add lksctp-tools-devel to required packages
in fedora build instructions.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <15f3db34f12f01cb9da32fd14c16ba87e64ad5f4.1463947999.git.piotr@scylladb.com>
2016-05-23 08:26:02 +03:00
Avi Kivity
5e5317b228 dist: add build dependencies for sctp
Required by new seastar
2016-05-22 19:10:25 +03:00
Avi Kivity
5bb1255da1 Merge seastar upstream
* seastar 6a849ac...aed893e (3):
  > net: move 'transport' enum to seastar namespace
  > net: sctp protocol support for posix stack
  > future: Support get() when state is at a promise
2016-05-22 16:32:33 +03:00
Amnon Heiman
e26002d581 idl-compiler: default constructor of complex types
This patch solves a problem where a complex type is defined as version
dependent (with the version attribute) but doesn't have a default value.

In those cases the default constructor is used, but in the case of
complex types (templates) param_type should be used to get the C++ type.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1463916723-15322-1-git-send-email-amnon@scylladb.com>
2016-05-22 15:32:29 +03:00
Raphael S. Carvalho
e5f0314afd db: introduce upload directory for sstable migration
This change is intended to make migration process safer and easier.
All column families will now have a directory called upload.
With this feature, users may choose to copy migrated sstables to upload
directory of respective column families, and call 'nodetool refresh'.
That's supposed to be the preferred option from now on.

For each sstable in upload directory, refresh will do the following:
1) Mutate sstable level to 0.
2) Create hard links to its components in column family dir, using
a new generation. We make it safe by creating a hard link to temporary
TOC first.
3) Remove all of its components in upload directory.

This new code runs after refresh has checked for new sstables in the column
family directory. Otherwise, we could have a generation conflict.
Unlike the first step, this new step runs with sstable write enabled.
It's easier here because we know exactly which sstables are new.

After that, refresh will load new sstables found in column family
and upload directories.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-05-20 17:26:21 -03:00
Raphael S. Carvalho
70b793e4d3 tests: add test for statistics rewrite
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-05-20 17:26:12 -03:00
Raphael S. Carvalho
74c8a87777 sstables: fix statistics rewrite
It's not working because it tries to overwrite existing statistics
file with exclusive flag.
It's fixed by writing new statistics into temporary file and
renaming it into place.

If Scylla fails in the middle of a rewrite, a temporary file is left
over, so the boot code was adjusted to delete temporary files created
by this rewrite procedure.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-05-20 17:24:15 -03:00
Pekka Enberg
94e7e61cd0 api: Register snitch API earlier
Currently, we register snitch API in set_server_gossip_settle() which
waits until a node has joined the cluster. This makes 'nodetool status'
not properly show the status of a joining node. Fix the issue by
registering snitch API earlier.

Fixes #1269.
Message-Id: <1463576381-15484-1-git-send-email-penberg@scylladb.com>
2016-05-20 14:24:14 +03:00
Gleb Natapov
7a54b5ebbb gossiper: cleanup mark_alive() even more
Message-Id: <20160519100513.GE984@scylladb.com>
2016-05-19 12:47:19 +02:00
Takuya ASADA
03a762bb0b dist/common/scripts: Ask to set SET_NIC=yes on scylla_setup interactive prompt
We now support SET_NIC in non-AMI environments, so ask the user to enable it in the scylla_setup interactive prompt.
2016-05-19 06:26:23 +09:00
Takuya ASADA
88fde0a91e dist/ami: fix dependency unresolved error on AMI build script with local package, by adding scylla-conf package
Since we added the scylla-conf package, we cannot install scylla-server/-tools without it, which makes --localrpm fail.
So copy the scylla-conf package to the AMI and install it to fix the problem.
2016-05-19 06:26:23 +09:00
Takuya ASADA
898243929f dist/common/scripts: specify queue settings for posix_net_conf.sh on scylla_prepare
posix_net_conf.sh wants -sq/-mq options, so detect the number of queues and specify the option in scylla_prepare.
2016-05-19 06:26:23 +09:00
Takuya ASADA
f84b7b094f dist/common/scripts: drop special condition to enable SET_NIC on AMI, do this on AMI installation script
Remove special case of SET_NIC in AMI, do this in scylla-ami-setup.service.
2016-05-19 06:25:41 +09:00
Takuya ASADA
49cdd0b786 dist: move '--cpuset' and '--smp' configuration to scylla_cpuset_setup / cpuset.conf
These parameters are only required for the AMI, not for non-AMI environments that want to enable SET_NIC, so split them into an individual script / conf file and call it from the AMI install script.
2016-05-19 06:25:28 +09:00
Takuya ASADA
46fa80a5a6 dist/common/scripts: replace IFNAME variable when --nic specified to scylla_sysconfig_setup
scylla_sysconfig_setup had a bug where it did not replace the IFNAME variable; this is now fixed.
2016-05-19 06:25:15 +09:00
Glauber Costa
4eff07d773 database: reorder initialization
In a preparation move for the LSA throttler, we have reordered the
initialization fields in database.hh so that the sizes of the regions are
computed before the initialization of the region.

However, that seemingly innocent move broke one of our tests. The reason behind
that is that if we don't destroy the column families before destroying the
region, we may end up with a use-after-free in the memtable destructor, which
itself expects to call into the region.

This patch reorders the initialization so that the CF list still comes after the
dirty regions (therefore being destroyed first), while maintaining the relative
ordering between size / region that we needed in the first place.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <0669984b5bccdb2c950f2444bdee4427abad56ba.1463508884.git.glauber@scylladb.com>
2016-05-18 11:02:40 +03:00
Asias He
eb9ac9ab91 gms: Optimize gossiper::is_alive
In perf-flame, I saw in

service::storage_proxy::create_write_response_handler (2.66% cpu)

  gossiper::is_alive takes 0.72% cpu
  locator::token_metadata::pending_endpoints_for takes 1.2% cpu

After this patch:

service::storage_proxy::create_write_response_handler (2.17% cpu)

  gossiper::is_alive does not show up at all
  locator::token_metadata::pending_endpoints_for takes 1.3% cpu

There is no need to copy the endpoint_state from the endpoint_state_map
to check if a node is alive. Optimize it since gossiper::is_alive is
called in the fast path.

Message-Id: <2144310aef8d170cab34a2c96cb67cabca761ca8.1463540290.git.asias@scylladb.com>
2016-05-18 10:12:38 +03:00
Avi Kivity
6ec0000df8 Merge "fix migration of tables with level > 0" from Raphael 2016-05-17 19:14:01 +03:00
Raphael S. Carvalho
cbc2e96a58 tests: check that overlapping sstable has its level changed to 0
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-05-17 11:11:05 -03:00
Raphael S. Carvalho
ee0f66eef6 db: fix migration of sstables with level greater than 0
Refresh will rewrite statistics of any migrated sstable with level
> 0. However, this operation is currently not working because O_EXCL
flag is used, meaning that create will fail.

It turns out that we don't actually need to change the on-disk level of
an sstable by overwriting the statistics file.
We can simply set the in-memory level of an sstable to 0. If Scylla reboots
before all migrated sstables are compacted, leveled strategy is smart
enough to detect sstables that overlap, and set their in-memory level
to 0.

Fixes #1124.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2016-05-17 11:08:08 -03:00
Gleb Natapov
76e0eb426e gossiper: simplify mark_alive()
The code runs in a thread so there is no need to use heap to
communicate between statements.

Message-Id: <20160517120245.GK984@scylladb.com>
2016-05-17 15:37:21 +03:00
Avi Kivity
4413176051 Merge "reduce performance degradation when adding node" from Asias
"With this series, the operations per second drop during adding node period gets
much better.

Before:
45K to 10K

After:
45k to 38K

Refs: #1223
"
2016-05-17 14:31:31 +03:00
Asias He
089734474b token_metadata: Speed up pending_endpoints_for
pending_endpoints_for is called frequently by
storage_proxy::create_write_response_handler when doing cql query.

Before this patch, each call to pending_endpoints_for involves
converting a multimap (std::unordered_multimap<range<token>,
inet_address>>) to map (std::unordered_map<range<token>,
std::unordered_set<inet_address>>).

To speed up the token to pending endpoint mapping search, an interval map
is introduced. It is faster than searching the map linearly and avoids
the need to cache the token/pending endpoint mapping.

With this patch, the operations per second drop during adding node
period gets much better.

Before:
45K to 10K

After:
45k to 38K

(The number is measured with the streaming code skipping to send data to
rule out the streaming factor.)

Refs: #1223
2016-05-17 17:32:15 +08:00
Asias He
ee0585cee9 dht: Add default constructor for token
It is needed to put token in to a boost interval_map in the following
patch.
2016-05-17 17:32:15 +08:00
Amnon Heiman
ad34f80e6f API: change cache_service, column_family and storage_proxy to rate
object

The API would expose now the rate_moving_average and
rate_moving_average_and_histogram.

The old endpoints remain for the transition period, but are marked as
deprecated.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:56:52 +03:00
Amnon Heiman
b33ed48527 API Definition: change cache_service, column_family and storage_proxy to use rate objects
This patch replaces the latency histogram with
rate_moving_average_and_histogram and the counters with
rate_moving_average.

The old endpoints were left unchanged but marked as deprecated when
needed.
2016-05-17 11:55:06 +03:00
Amnon Heiman
20a48b0f20 API: column family stats break the map_reduce functionality
This patch replaces the helper function for column family with two
function, one that collect the relevant column family from all shareds
and another one that do the translation to json object.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:53:15 +03:00
Amnon Heiman
750f30cf07 column_family: Change histogram to
timed_rate_moving_average_and_histogram

As part of moving the derived statistics into Scylla, this replaces the
histogram object in the column_family with
timed_rate_moving_average_and_histogram.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:53:15 +03:00
Amnon Heiman
468bcfbf1f row_cache: Change counter to timed_rate_moving_average_and_histogram
As part of moving the derived statistics into Scylla, this replaces the
counter in the row_cache stats with
timed_rate_moving_average_and_histogram.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:53:15 +03:00
Amnon Heiman
64e0c8cd1b storage_proxy: Change histogram to
timed_rate_moving_average_and_histogram

As part of moving the derived statistics into Scylla, this replaces the
histogram object in the storage_proxy with
timed_rate_moving_average_and_histogram. The read, write and range
counters were replaced by rate_moving_average.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:52:16 +03:00
Amnon Heiman
f6a5a4e3da API: Add helper function for the rate objects
This patch adds the helper functions that are used to sum the
rate_moving_average and rate_moving_average_and_histogram.

The current sum functionality for histogram was modified to support
rate and histogram but return a histogram. This way current endpoints
would continue to behave the same.

It also cleans the histogram related method by using the plus operator
in the histogram.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:49:34 +03:00
Amnon Heiman
8ef25ceb05 Add weighted average rate related objects
This patch adds a few data structures for derived and accumulative
statistics that are similar to the Yammer implementation used by the
JMX.

It also adds a plus operator to histogram, which cleans up the histogram
usage.

moving_average - An exponentially weighted moving average; calculates an
event rate over a given interval.

rate_moving_average and timed_rate_moving_average - Calculate 1m, 5m and
15m EWMAs, an all-time average and a counter.

rate_moving_average_and_histogram and
timed_rate_moving_average_and_histogram - Combine a histogram with a
rate_moving_average. They also expose a histogram API so it will be an
easy task to replace a histogram with a
timed_rate_moving_average_and_histogram.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-17 11:47:49 +03:00
Glauber Costa
17b9203719 database: invert order of elements
So that the sizes of the region can be initialized first

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <dc3df186a977b492d83c0a397f206c2db940aa37.1463448522.git.glauber@scylladb.com>
2016-05-17 11:28:39 +03:00
Glauber Costa
2ff6d38d0c database: use a single constructor for the column family
We've been keeping two constructors for the column family to allow for a
version without the commitlog. But it's by now quite complicated to maintain
the two, because changes always have to be made in two places.

This patch adds a private constructor that does the actual construction, and
has the public constructors call it.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <dd3cb0b9c20ad154a6131bad6ece619f70ed5025.1463448522.git.glauber@scylladb.com>
2016-05-17 11:28:39 +03:00
Glauber Costa
8fede5b98e memtables: isolate logic for disk writes disabled
When we have disk writes disabled, we exit immediately from the flush
function. We can just encode that separately and pass a different function
in the memtable_list creation. That simplifies the memtable flush a bit.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <908e3b5eb2c6ee84b8ad7b31c3673be5531a087c.1463448522.git.glauber@scylladb.com>
2016-05-17 11:28:38 +03:00
Glauber Costa
4981362f57 memtables: always seal through memtable_list seal function
I would like to be able to apply a function at the end of every flush, that is
common for both memtables and streaming memtables. For instance, to unthrottle
current waiters. Right now some calls to seal_active_memtable are open coded,
calling the column family's function directly, for both the main memtable list
and the streaming list.

This patch moves all the current open code callers to call the respective
memtable_list function.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <0c780254f3c4eb03e2bcd856b83941cf49a84b85.1463448522.git.glauber@scylladb.com>
2016-05-17 11:28:37 +03:00
Takuya ASADA
4972a72380 dist: drop 'sudo -E' and SETENV for security reason, source envfile from scripts
As Nadav pointed out, SETENV and sudo -E might cause a security hole:
https://github.com/scylladb/scylla/issues/1028#issuecomment-196202171
So drop them now, sourcing envfiles from scylla_prepare / scylla_stop scripts
instead.

Also, on the "[PATCH] ubuntu: Fix the init script variable sourcing" thread
we had problems passing variables from envfiles to scylla_prepare /
scylla_stop on Ubuntu; it seems better to source them from these scripts.

Additionally, this fixes #1249

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1462989906-30062-1-git-send-email-syuu@scylladb.com>
2016-05-17 10:31:03 +03:00
Pekka Enberg
9c450f673c cql3: Clean up prepared_metadata class
Return vectors by const reference in prepared_metadata class and add a
FIXME to result_message class.

Message-Id: <1463425756-20225-1-git-send-email-penberg@scylladb.com>
2016-05-17 10:02:14 +03:00
Pekka Enberg
217c1ffa95 cql3: Specify result set flag ABI explicitly
As Avi points out, the flag values are an ABI. So specify them explicitly.

Message-Id: <1463413379-8355-1-git-send-email-penberg@scylladb.com>
2016-05-16 19:00:52 +03:00
Avi Kivity
a3b23d75b9 Merge "Fix Prepared message metadata serialization"
"The Prepared message has a metadata section that's similar to result set
metadata but not exactly the same. Fix serialization by introducing a
separate prepared_metadata class like Origin has and implement
serialization as per the CQL protocol specification. This fixes one CQL
binary protocol version 4 issue that we currently have.

The changes have been verified by running the gocql integration tests
using v4. Please note that this series does *not* enable v4 for clients
because Cassandra 2.1.x series only supports CQL binary protocol v3."
2016-05-16 18:59:54 +03:00
Pekka Enberg
868ff5107c cql3: Introduce prepared_metadata class
Introduce a new prepared_metadata class that holds prepared statement
metadata and implement CQL binary protocol serialization that works for
all versions.
2016-05-16 18:06:01 +03:00
Tomasz Grabiec
272e89846d Merge branch 'cache' from git@github.com:haaawk/scylla.git
From Piotr:

Fixes #656.

It makes it possible to slice using clustering ranges in mutation
readers.  We don't have row index yet so the slicing is just ignoring
data which is out of range.
2016-05-16 14:44:33 +02:00
Piotr Jastrzebski
dcba6f5c45 Pass clustering_row_ranges to mutation readers.
This will allow readers to reduce the amount of data read.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2016-05-16 14:36:57 +02:00
Pekka Enberg
a68671e247 cql3: Add column_specification::all_in_same_table() helper
We need it in the prepared_metadata class that we're about to introduce.
2016-05-16 14:13:31 +03:00
Takuya ASADA
80037aa95b dist/common/scripts: don't proceed to run scylla_raid_setup when disks not selected, on interactive RAID setup
When no disks are selected, run the disk selection prompt again.
Fixes #1260

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1463388933-3640-1-git-send-email-syuu@scylladb.com>
2016-05-16 13:45:17 +03:00
Pekka Enberg
adfb4d7bbd cql3: Move result_set class implementation to source file 2016-05-16 13:20:45 +03:00
Pekka Enberg
8552f222f5 cql3: Clean up result_set class
Kill some left-over ifdef'd code from the result_set class.

Message-Id: <1463392997-22921-1-git-send-email-penberg@scylladb.com>
2016-05-16 13:09:37 +03:00
Piotr Jastrzebski
23c23abe53 Make memtable mutation_reader slice using clustering ranges.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2016-05-16 11:46:41 +02:00
Piotr Jastrzebski
484d2ecd0a Slice data with clustering key range in sstable reader
Add additional parameters to mp_row_consumer to be able to fetch
only cells for given clustering key ranges

This will be used in row_cache when it will work on clustering key
level instead of partition key level.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2016-05-16 11:46:30 +02:00
Piotr Jastrzebski
8307681975 Introduce clustering_ranges type.
It will be used to slice data returned by mutation_readers.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2016-05-16 11:46:09 +02:00
Amnon Heiman
7e07d97e4b API utils: Adding rate moving average
rate_moving_average and rate_moving_average_and_histogram are types that
are used by the JMX. They are based on the Yammer meter and timer and
are used to collect derivative information.

Specifically: rate_moving_average calculates rates, and
rate_moving_average_and_histogram collects rates and a
histogram.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2016-05-16 11:40:19 +03:00
Pekka Enberg
17765b6c06 Merge seastar upstream
* seastar 3dec26f...6a849ac (4):
  > seastar::socket: Be resilient against ENOTCONN
  > Merge " improve performance and predictability of syscall thread communications" from Glauber
  > rpc_test: Shutdown properly
  > [PATCH} future: better detect get_future() on already used promise
2016-05-16 08:04:47 +03:00
Yoav Kleinberger
de7952a8db tools/scyllatop: log input from collectd for easier debugging
When running with DEBUG verbosity, scyllatop will now log every single
value it receives from collectd. When you suspect that scyllatop is
somehow distorting values, this is a good way to check it.

Signed-off-by: Yoav Kleinberger <yoav@scylladb.com>
Message-Id: <1463320730-6631-1-git-send-email-yoav@scylladb.com>
2016-05-15 19:17:10 +03:00
Tomasz Grabiec
1eabe9b840 storage_proxy: Add trace-level logging for mutating
Message-Id: <1462978554-31217-1-git-send-email-tgrabiec@scylladb.com>
2016-05-12 13:52:56 +03:00
Tomasz Grabiec
7207cc8b1a storage_proxy: Improve error reporting
Knowing the source node can help in debugging the issue.
Message-Id: <1462978535-31164-1-git-send-email-tgrabiec@scylladb.com>
2016-05-12 13:52:39 +03:00
Pekka Enberg
b5d9aa866d Merge "Fixes for schema synchronization" from Tomek
"Writes may start to be rejected by replicas after issuing alter table
 which doesn't affect columns. This affects all versions with alter table
 support.

 Fixes #1258"
2016-05-12 09:43:25 +03:00
Duarte Nunes
7dbeef3c39 storage_service: Fix ignored future in on_alive
This patch ensures the future created by invoke_on_all is not ignored
by waiting on it, which is safe to do since we are within a
seastar::async context.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1462989837-7326-1-git-send-email-duarte@scylladb.com>
2016-05-12 09:03:46 +03:00
Tomasz Grabiec
13d8cd0ae9 migration_manager: Invalidate prepared statements on every schema change
Currently we only do that when column set changes. When prepared
statements are executed, paramaters like read repair chance are read
from schema version stored in the statement. Not invalidating prepared
statements on changes of such parameters will appear as if alter took
no effect.

Fixes #1255.
Message-Id: <1462985495-9767-1-git-send-email-tgrabiec@scylladb.com>
2016-05-12 08:58:40 +03:00
Tomasz Grabiec
90c31701e3 tests: Add unit tests for schema_registry 2016-05-11 17:31:22 +02:00
Tomasz Grabiec
443e5aef5a schema_registry: Fix possible hang in maybe_sync() if syncer doesn't defer
Spotted during code review.

If it doesn't defer, we may execute then_wrapped() body before we
change the state. Fix by moving then_wrapped() body after state changes.
2016-05-11 17:31:22 +02:00
Tomasz Grabiec
8703136a4f migration_manager: Fix schema syncing with older version
The problem was that "s" would not be marked as synced-with if it came from
shard != 0.

As a result, mutation using that schema would fail to apply with an exception:

  "attempted to mutate using not synced schema of ..."

The problem could surface when altering schema without changing
columns and restarting one of the nodes so that it forgets past
versions.

Fixes #1258.

Will be covered by dtest:

  SchemaManagementTest.test_prepared_statements_work_after_node_restart_after_altering_schema_without_changing_columns
2016-05-11 17:29:14 +02:00
Takuya ASADA
8503600e30 dist/common/systemd: drop hardcoded path
Stop using /var/lib/scylla, use $SCYLLA_HOME instead.
systemd does not seem to expand variables in Environment="HOME=$SCYLLA_HOME", but both CentOS and Ubuntu are able to run scylla-server without $HOME, so it was dropped.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1462977871-26632-1-git-send-email-syuu@scylladb.com>
2016-05-11 17:53:53 +03:00
Calle Wilund
152bd82a05 alter_keyspace_statement: Handle missing replication strategy
ALTER KEYSPACE should allow no replication strategy to be set,
in which case old strategy should be kept.
Initial translation from origin missed this.

Fixes #1256

Message-Id: <1462967584-2875-2-git-send-email-calle@scylladb.com>
2016-05-11 16:02:22 +03:00
Calle Wilund
5604fb8aa3 cql3::statements::cf_prop_defs: Fix compaction min/max not handled
Property parsing code was looking at wrong property level
for initial guard statement.

Fixes #1257

Message-Id: <1462967584-2875-1-git-send-email-calle@scylladb.com>
2016-05-11 16:02:16 +03:00
Takuya ASADA
c38b5fbb3d dist/common/scripts: On scylla_io_setup, run iotune on correct data directory which specified on scylla.yaml
Currently scylla_io_setup is hardcoded to run iotune on /var/lib/scylla, but the user may change the data directory by modifying scylla.yaml, and it may be on a different block device.
So use scylla_config_get.py to get the configuration from scylla.yaml and pass it to iotune.

Fixes #1167

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1462955824-21983-2-git-send-email-syuu@scylladb.com>
2016-05-11 13:02:25 +03:00
Takuya ASADA
53820393da dist/common/scripts: add scylla.yaml parser for scripts
To parse scylla.yaml, scylla_config_get.py is added.
It can be used like 'scylla_config_get.py [key name]' from a shell script or the command line.
This is needed for scylla_io_setup, to get 'data_file_directories' from a shell script.
Currently it does not support specifying the key name of a nested data structure, but that is enough for scylla_io_setup.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1462955824-21983-1-git-send-email-syuu@scylladb.com>
2016-05-11 13:02:23 +03:00
Pekka Enberg
d93d46e721 Merge "ALTER KEYSPACE" from Calle
"Implementation of ALTER KEYSPACE.
Fixes #429"
2016-05-10 22:07:06 +03:00
Takuya ASADA
a73924b4e0 dist/ubuntu/dep: introduce scylla-gdb-7.11 for Ubuntu 14.04LTS
Introduce scylla-gdb-7.11 for Ubuntu 14.04LTS, to get better support for recent versions of g++ in gdb.

Fixes #969

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1462825880-20866-3-git-send-email-syuu@scylladb.com>
2016-05-10 17:53:32 +03:00
Takuya ASADA
9ff2efb28b dist/common/dep: add Ubuntu support for scylla-env
Since Ubuntu 14.04LTS needs the scylla-gdb package, which installs to /opt/scylladb, we need to port the scylla-env package to Ubuntu as well.
This change introduces the scylla-env package to Ubuntu 14.04LTS.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1462825880-20866-2-git-send-email-syuu@scylladb.com>
2016-05-10 17:53:32 +03:00
Takuya ASADA
43cc77d1b8 dist/redhat/centos_dep: move scylla-env to dist/common to share with Ubuntu
Since Ubuntu 14.04LTS needs the scylla-gdb package, which installs to /opt/scylladb, we need to port the scylla-env package to Ubuntu as well.
To do that, share the package directory in dist/common/dep first.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1462825880-20866-1-git-send-email-syuu@scylladb.com>
2016-05-10 17:53:31 +03:00
Calle Wilund
147aa81177 Cql.g: Handle ALTER KEYSPACE 2016-05-10 14:36:46 +00:00
Calle Wilund
5c36d2e09e alter_keyspace_statement: Implement
Note: Like create keyspace, we don't properly validate 
replication strategy yet.
2016-05-10 14:36:17 +00:00
Piotr Jastrzebski
240a185727 Stop scanning keyspace data directory when populating.
Iterate over column families and check/create directories for them
instead of scanning keyspace data directory and filtering directories
against column families that exist in system tables for this keyspace.

Fixes #1008

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <26da66eec67a1ab1318917a66161915cdef924ab.1462890592.git.piotr@scylladb.com>
2016-05-10 17:35:55 +03:00
Calle Wilund
63b6c6bb5a migration_manager: Implement announce_keyspace_update
More or less the same as create keyspace...
2016-05-10 14:34:51 +00:00
Calle Wilund
8cdf4e37fb schema_tables: Fix merge_keyspaces to handle alter keyspace
Must keep "altered" alive into the call chain.
2016-05-10 14:32:51 +00:00
Calle Wilund
6ef7885ae3 database: Implement update_keyspace
Reloads keyspace metadata and replaces it in the existing keyspace.
Note: since keyspace metadata, and consequently replication
strategy, now become volatile, keyspace::metadata now returns the
shared pointer by value (i.e. keep-alive).
Replication strategy should receive the same treatment, but
since it is extensively used and never kept across a
continuation, I've just added a comment for now.
2016-05-10 14:31:30 +00:00
Raphael S. Carvalho
d80d194873 compaction_manager: stop compaction tasks in parallel
Purpose is to speed up shutdown.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <a8db3492f1ceeea2a886d3920e5effa841ea155f.1462838670.git.raphaelsc@scylladb.com>
2016-05-10 10:03:35 +03:00
Avi Kivity
28cc6f97af Merge 2016-05-09 14:25:25 +03:00
Calle Wilund
917bf850fa transport::server: Do not treat accept exception as fatal
1.) It most likely is not, i.e. it is either a tcp or, more likely, an ssl
    negotiation failure. In any case, we can still try the next
    connection.
2.) Not retrying will cause us to "leak" the accept, and then hang
    on shutdown.

Also, promote logging message on accept exception to "warn", since
dtest(s?) depend on seeing log output.

Message-Id: <1462283265-27051-4-git-send-email-calle@scylladb.com>
2016-05-09 14:13:07 +03:00
Calle Wilund
437ebe7128 cql_server: Use credentials_builder to init tls
Slightly cleaner, and shard-safe tls init.

Message-Id: <1462283265-27051-3-git-send-email-calle@scylladb.com>
2016-05-09 14:12:59 +03:00
Calle Wilund
58f7edb04f messaging_service: Change tls init to use credentials_builder
To simplify the init of the messaging service, use credentials_builder
to encapsulate tls options so actual credentials can be
more easily created in each shard.

Message-Id: <1462283265-27051-2-git-send-email-calle@scylladb.com>
2016-05-09 14:12:53 +03:00
Avi Kivity
29e103a2ae Merge seastar upstream
* seastar 7782ad4...3dec26f (3):
  > tests/mkcert.gmk: Fix makefile bug in snakeoil cert generator
  > tls_test: Add case to do a little checking of credentials_builder
  > tls: Add credentials_builder - copyable credentials "factory"
2016-05-09 14:12:29 +03:00
Tomasz Grabiec
1ca5ceadff Merge tag '1235-v2' from https://github.com/avikivity/scylla
From Avi:

When we shut down, we may have to give up on some pending atomic
sstable deletions, because not all shards may have agreed to delete
all members of the set.

This is expected, so silence these frightening error messages.

Fixes #1235.
2016-05-09 12:22:41 +02:00
Duarte Nunes
dada385826 rpc: Secure connection attempts can be cancelled
This patch adds support for secure connection attempts to be
cancellable.

Fixes #862

Includes seastar upstream merge:

* seastar f1a3520...7782ad4 (1):
  > Merge "rpc: Allow client connections to be cancelled" from Duarte

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1462783335-10731-1-git-send-email-duarte@scylladb.com>
2016-05-09 11:44:53 +03:00
Takuya ASADA
f7d41ba07a dist: Extract scylla.yaml and create metapackage
This patch creates a scylla-conf package containing
scylla.yaml, and a scylla package acting as a metapackage.

Fixes #421

Signed-off-by: Benoît Canet <benoit@scylladb.com>
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1462280987-26909-1-git-send-email-syuu@scylladb.com>
2016-05-09 11:23:28 +03:00
Avi Kivity
4b34152870 Merge seastar upstream
* seastar ab74536...f1a3520 (2):
  > rpc: clear outgoing queue of a socket after failed connection
  > Merge "unconnected socket (now seastar::socket)" from Duarte

Fixes #1236.
2016-05-09 10:16:15 +03:00
Raphael S. Carvalho
3ac22bc0d7 compaction_manager: simplify code that waits for cleanup termination
Now that a task is created on demand, it's possible to wait for
termination of cleanup without extra machinery.
However, shared_future<> is now used because we may have more
than one fiber waiting for completion of the task.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <209de365c7782742dc2876a66f9d0784998cae53.1462599296.git.raphaelsc@scylladb.com>
2016-05-08 11:26:36 +03:00
Avi Kivity
ee7225a9cb sstables: silence atomic deletion cancellation logs during sstable deletion
Those logs are expected during shutdown.
2016-05-07 20:37:49 +03:00
Avi Kivity
80302d98dd database: silence atomic deletion cancellation logs during compaction
Those logs are expected during shutdown.
2016-05-07 20:37:48 +03:00
Avi Kivity
43221fc7e2 sstables: make delete_atomically() throw a distinct exception when cancelled
Throwing a runtime_error makes it impossible to catch the cancellation
exception, so replace it with a distinct exception class.
2016-05-07 20:37:46 +03:00
Calle Wilund
709dd82d59 storage_service: Add logging to match origin
Pointing out if the CQL server is listening in SSL mode.
Message-Id: <1462368016-32394-2-git-send-email-calle@scylladb.com>
2016-05-06 13:27:55 +03:00
Raphael S. Carvalho
bf18025937 main: stop compaction manager earlier
Avi says:
"During shutdown, we prevent new compactions, but perhaps too late.
Memtables are flushed and these can trigger compaction."

To solve that, let's stop the compaction manager at a very early step
of shutdown. We will still try to stop the compaction manager in
database::stop() because the user may ask for a shutdown before scylla
has fully started. It's fine to stop the compaction manager twice;
only the first call will actually stop it.

Fixes #1238.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <c64ab11f3c91129c424259d317e48abc5bde6ff3.1462496694.git.raphaelsc@scylladb.com>
2016-05-06 07:41:29 +03:00
Calle Wilund
d8ea85cd90 messaging_service: Add logging to match origin
To announce rpc port + ssl if on.

Message-Id: <1462368016-32394-1-git-send-email-calle@scylladb.com>
2016-05-05 10:26:01 +03:00
Raphael S. Carvalho
b8277979ef compaction_manager: fix indentation
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <82c6b93b24cbcc97f5eff3f91b05d4c1b415ecee.1462412927.git.raphaelsc@scylladb.com>
2016-05-05 10:06:56 +03:00
Avi Kivity
3aefa4f1d2 Merge seastar upstream
* seastar e536555...ab74536 (4):
  > reactor: kill max_inline_continuations
  > smp: optimize smp_message_queue::flush_request_batch() for empty queue
  > thread: do not yield if idle
  > Merge "Fixes for iotune" from Glauber
2016-05-05 09:48:58 +03:00
Gleb Natapov
f1cd52ff3f tests: test for result row counting
Message-Id: <1462377579-2419-2-git-send-email-gleb@scylladb.com>
2016-05-04 18:18:17 +02:00
Gleb Natapov
b75475de80 query: fix result row counting for results with multiple partitions
Message-Id: <1462377579-2419-1-git-send-email-gleb@scylladb.com>
2016-05-04 18:18:15 +02:00
Gleb Natapov
2a00c06dd5 query: fix non full clustering key deserialization
A clustering key prefix may have fewer columns than described in the schema.
Deserialization should stop when the end of the buffer is reached.

Message-Id: <20160503140420.GP23113@scylladb.com>
2016-05-04 17:42:28 +02:00
Raphael S. Carvalho
5aeeb0b3e8 compaction: add support to parallel compaction on the same column family
It was noticed that small sstables would accumulate for a column family because
scylla was limited to two compactions per shard, and a column family could have
at most one compaction running on a given shard. With the number of sstables
increasing rapidly, read performance is degraded.

At the moment, our compaction manager works by running two compaction task
handlers that run in parallel to the rest of the system. Each task handler
gets to run when needed, gets a column family from compaction manager queue,
runs compaction on it, and goes to sleep again. That's basically its cycle.
Compaction manager only allows one instance of a column family to be on its
queue, meaning that it's impossible for a column family to be compacted in
parallel. One compaction starts after another for a given column family.

To solve the problem described, we want to concurrently run compaction jobs
of a column family that have different "size tiers" (or "weights").
For those unfamiliar, a compaction job contains a list of sstables that will be
compacted together.
The "size tier" of a compaction job is the log of the total size of the input
sstables. So a compaction job only gets to run if its "size tier" is not the
same as that of an ongoing compaction. There is no point in compacting
concurrently at the same "size tier", because that slows down both compactions.

We will no longer queue column families in compaction manager. Instead, we
create a new fiber to run compaction on demand.
This fiber, which runs asynchronously, will do the following:
1) Get a compaction job from compaction strategy.
2) Calculate "size tier" of compaction job.
3) Run the compaction job if its "size tier" is not the same as that of an
ongoing compaction for the given column family.
As before, it may decide to re-compact a column family based on a stat stored
in the column family object.

Ran all compaction-related dtests.

Fixes #1216.

Reviewed-by: Nadav Har'El <nyh@scylladb.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <d30952ff136192a522bde4351926130addec8852.1462311908.git.raphaelsc@scylladb.com>
2016-05-04 11:46:09 +03:00
Calle Wilund
6d2caedafd auth: Make auth.* schemas use deterministic UUIDs
In initial implementation I figured this was not required, but
we get issues communicating across nodes if system tables
don't have the same UUID, since creation is forcefully local, yet
shared.

Just do a manual re-create of the scema with a name UUID, and
use migration manager directly.
Message-Id: <1462194588-11964-1-git-send-email-calle@scylladb.com>
2016-05-03 10:48:24 +03:00
Avi Kivity
24f90b087f Merge "fix range queries with limiter to not generate more requests than needed" from Gleb
Fixes #1204.
2016-05-02 15:14:45 +03:00
Gleb Natapov
3039e4c7de storage_proxy: stop range query with limit after the limit is reached 2016-05-02 15:10:15 +03:00
Gleb Natapov
db322d8f74 query: put live row count into query::result
The patch calculates row count during result building and while merging.
If one of results that are being merged does not have row count the
merged result will not have one either.
2016-05-02 15:10:15 +03:00
Gleb Natapov
41c586313a storage_proxy: fix calculation of concurrency queried ranges 2016-05-02 15:10:15 +03:00
Gleb Natapov
c364ab9121 storage_proxy: add logging for range query row count estimation 2016-05-02 15:10:15 +03:00
Calle Wilund
751ba2f0bf messaging_service: Change init to use per-shard tls credentials
Fixes: #1220

While the server_credentials object is technically immutable
(especially with the last change in seastar), the ::shared_ptr holding it
is not safe to share across shards.

Pre-create one credentials object per CPU and move-hand them out during
service start-up instead.

Fixes an assertion error in debug builds. And just maybe real memory
corruption in release.

Requires seastar tls change:
"Change server_credentials to copy dh_params input"

Message-Id: <1462187704-2056-1-git-send-email-calle@scylladb.com>
2016-05-02 15:04:40 +03:00
Raphael S. Carvalho
ae95ce1bd7 sstables: optimize leveled compaction strategy
Leveled compaction strategy does a lot of work whenever it's asked to get
a list of sstables to be compacted. It checks whether an sstable overlaps with
another sstable in the same level twice: first, when adding an sstable to the
list of sstables at the same level; second, after adding all sstables to
their respective lists.

It's enough to check that an sstable creates an overlap in its level only once.
So I am changing the code to unconditionally insert an sstable into its
respective list, and after that call repair_overlapping_sstables(), which will
send any sstable that creates an overlap in its level to the L0 list.

By the way, the optimization isn't in the compaction itself, but in the
strategy code that gets the set of sstables to be compacted.

Reviewed-by: Nadav Har'El <nyh@scylladb.com>
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <8c8526737277cb47987a3a5dbd5ff3bb81a6d038.1461965074.git.raphaelsc@scylladb.com>
2016-05-02 11:18:39 +03:00
Avi Kivity
dc69999fd8 Merge seastar upstream
* seastar dab58e4...e536555 (5):
  > rpc: introduce outgoing packet queue
  > Add condition variable implementation.
  > future-utils: support futures with multiple values in map_reduce
  > tests: rpc: stop client and server
  > tls_test: Add test for large-ish buffer send/recieve
2016-05-02 11:10:33 +03:00
Takuya ASADA
122330a5eb dist/common/scripts: add interactive prompt for package installation check, also check scylla-tools installed
Currently scylla_setup is unusable when the user does not want to install scylla-jmx, because it checks for the package unconditionally; some users (or developers) do not want to install it, so let's ask whether to skip the check via an interactive prompt.

Also, the scylla-tools package should be installed in most cases; added check code for the package.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1460662354-10221-1-git-send-email-syuu@scylladb.com>
2016-05-01 14:50:50 +03:00
Takuya ASADA
cc74b6ff5f dist/ubuntu: move lines from rules to .install/.dirs/.docs
To simplify the build script, and make it easier to split into two packages,
use .install/.dirs/.docs instead of rules.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1461960695-30647-1-git-send-email-syuu@scylladb.com>
2016-05-01 10:16:35 +03:00
Avi Kivity
434db0bc8b Update scylla-ami submodule
* dist/ami/files/scylla-ami 7019088...72ae258 (1):
  > Add --repo option to scylla_install_ami to construct AMI with custom repository URL
2016-04-28 16:41:30 +03:00
Takuya ASADA
6723978891 dist/ami: Add --repo option to build_ami.sh to construct AMI with custom repository URL
To build an AMI from a specified .rpm build, a custom repo URL option is required.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1461849370-11963-1-git-send-email-syuu@scylladb.com>
2016-04-28 16:40:49 +03:00
397 changed files with 29117 additions and 6725 deletions

View File

@@ -1,9 +1,9 @@
## Installation details
*Installation details*
Scylla version (or git commit hash):
Cluster size:
OS (RHEL/CentOS/Ubuntu/AWS AMI):
## Hardware details (for performance issues)
*Hardware details (for performance issues)* Delete if unneeded
Platform (physical/VM/cloud instance type/docker):
Hardware: sockets= cores= hyperthreading= memory=
Disks: (SSD/HDD, count)

View File

@@ -1,6 +1,6 @@
#Scylla
# Scylla
##Building Scylla
## Building Scylla
In addition to required packages by Seastar, the following packages are required by Scylla.
@@ -15,7 +15,7 @@ git submodule update --recursive
* Installing required packages:
```
sudo yum install yaml-cpp-devel lz4-devel zlib-devel snappy-devel jsoncpp-devel thrift-devel antlr3-tool antlr3-C++-devel libasan libubsan gcc-c++ gnutls-devel ninja-build ragel libaio-devel cryptopp-devel xfsprogs-devel numactl-devel hwloc-devel libpciaccess-devel libxml2-devel python3-pyparsing
sudo yum install yaml-cpp-devel lz4-devel zlib-devel snappy-devel jsoncpp-devel thrift-devel antlr3-tool antlr3-C++-devel libasan libubsan gcc-c++ gnutls-devel ninja-build ragel libaio-devel cryptopp-devel xfsprogs-devel numactl-devel hwloc-devel libpciaccess-devel libxml2-devel python3-pyparsing lksctp-tools-devel
```
* Build Scylla

View File

@@ -1,6 +1,6 @@
#!/bin/sh
VERSION=1.1.4
VERSION=1.3.5
if test -f version
then

View File

@@ -487,6 +487,36 @@
}
]
},
{
"path": "/cache_service/metrics/row/hits_moving_avrage",
"operations": [
{
"method": "GET",
"summary": "Get row hits moving avrage",
"type": "#/utils/rate_moving_average",
"nickname": "get_row_hits_moving_avrage",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/cache_service/metrics/row/requests_moving_avrage",
"operations": [
{
"method": "GET",
"summary": "Get row requests moving avrage",
"type": "#/utils/rate_moving_average",
"nickname": "get_row_requests_moving_avrage",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/cache_service/metrics/row/size",
"operations": [

View File

@@ -55,6 +55,57 @@
"paramType":"query"
}
]
},
{
"method":"POST",
"summary":"Start reporting on one or more collectd metric",
"type":"void",
"nickname":"enable_collectd",
"produces":[
"application/json"
],
"parameters":[
{
"name":"pluginid",
"description":"The plugin ID, describe the component the metric belongs to. Examples are cache, thrift, etc'. Regex are supported.The plugin ID, describe the component the metric belong to. Examples are: cache, thrift etc'. regex are supported",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"path"
},
{
"name":"instance",
"description":"The plugin instance typically #CPU indicating per CPU metric. Regex are supported. Omit for all",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"type",
"description":"The plugin type, the type of the information. Examples are total_operations, bytes, total_operations, etc'. Regex are supported. Omit for all",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"type_instance",
"description":"The plugin type instance, the specific metric. Exampls are total_writes, total_size, zones, etc'. Regex are supported, Omit for all",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"enable",
"description":"set to true to enable all, anything else or omit to disable",
"required":false,
"allowMultiple":false,
"type":"boolean",
"paramType":"query"
}
]
}
]
},
@@ -63,10 +114,10 @@
"operations":[
{
"method":"GET",
"summary":"Get a collectd value",
"summary":"Get a list of all collectd metrics and their status",
"type":"array",
"items":{
"type":"type_instance_id"
"type":"collectd_metric_status"
},
"nickname":"get_collectd_items",
"produces":[
@@ -74,6 +125,25 @@
],
"parameters":[
]
},
{
"method":"POST",
"summary":"Enable or disable all collectd metrics",
"type":"void",
"nickname":"enable_all_collectd",
"produces":[
"application/json"
],
"parameters":[
{
"name":"enable",
"description":"set to true to enable all, anything else or omit to disable",
"required":false,
"allowMultiple":false,
"type":"boolean",
"paramType":"query"
}
]
}
]
}
@@ -113,6 +183,20 @@
}
}
}
},
"collectd_metric_status":{
"id":"collectd_metric_status",
"description":"Holds a collectd id and an enable flag",
"properties":{
"id":{
"description":"The metric ID",
"type":"type_instance_id"
},
"enable":{
"description":"Is the metric enabled",
"type":"boolean"
}
}
}
}
}

View File

@@ -1094,7 +1094,7 @@
"method":"GET",
"summary":"Get read latency histogram",
"$ref": "#/utils/histogram",
"nickname":"get_read_latency_histogram",
"nickname":"get_read_latency_histogram_depricated",
"produces":[
"application/json"
],
@@ -1121,6 +1121,49 @@
"items":{
"$ref": "#/utils/histogram"
},
"nickname":"get_all_read_latency_histogram_depricated",
"produces":[
"application/json"
],
"parameters":[
]
}
]
},
{
"path":"/column_family/metrics/read_latency/moving_average_histogram/{name}",
"operations":[
{
"method":"GET",
"summary":"Get read latency moving avrage histogram",
"$ref": "#/utils/rate_moving_average_and_histogram",
"nickname":"get_read_latency_histogram",
"produces":[
"application/json"
],
"parameters":[
{
"name":"name",
"description":"The column family name in keysspace:name format",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"path"
}
]
}
]
},
{
"path":"/column_family/metrics/read_latency/moving_average_histogram/",
"operations":[
{
"method":"GET",
"summary":"Get read latency moving avrage histogram from all column family",
"type":"array",
"items":{
"$ref": "#/utils/rate_moving_average_and_histogram"
},
"nickname":"get_all_read_latency_histogram",
"produces":[
"application/json"
@@ -1260,7 +1303,7 @@
"method":"GET",
"summary":"Get write latency histogram",
"$ref": "#/utils/histogram",
"nickname":"get_write_latency_histogram",
"nickname":"get_write_latency_histogram_depricated",
"produces":[
"application/json"
],
@@ -1287,6 +1330,49 @@
"items":{
"$ref": "#/utils/histogram"
},
"nickname":"get_all_write_latency_histogram_depricated",
"produces":[
"application/json"
],
"parameters":[
]
}
]
},
{
"path":"/column_family/metrics/write_latency/moving_average_histogram/{name}",
"operations":[
{
"method":"GET",
"summary":"Get write latency moving average histogram",
"$ref": "#/utils/rate_moving_average_and_histogram",
"nickname":"get_write_latency_histogram",
"produces":[
"application/json"
],
"parameters":[
{
"name":"name",
"description":"The column family name in keysspace:name format",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"path"
}
]
}
]
},
{
"path":"/column_family/metrics/write_latency/moving_average_histogram/",
"operations":[
{
"method":"GET",
"summary":"Get write latency moving average histogram of all column family",
"type":"array",
"items":{
"$ref": "#/utils/rate_moving_average_and_histogram"
},
"nickname":"get_all_write_latency_histogram",
"produces":[
"application/json"

View File

@@ -716,6 +716,36 @@
}
]
},
{
"path": "/storage_proxy/metrics/read/timeouts_rates",
"operations": [
{
"method": "GET",
"summary": "Get read metrics rates",
"type": "#/utils/rate_moving_average",
"nickname": "get_read_metrics_timeouts_rates",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/storage_proxy/metrics/read/unavailables_rates",
"operations": [
{
"method": "GET",
"summary": "Get read metrics rates",
"type": "#/utils/rate_moving_average",
"nickname": "get_read_metrics_unavailables_rates",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/storage_proxy/metrics/read/histogram",
"operations": [
@@ -723,7 +753,7 @@
"method": "GET",
"summary": "Get read metrics",
"$ref": "#/utils/histogram",
"nickname": "get_read_metrics_latency_histogram",
"nickname": "get_read_metrics_latency_histogram_depricated",
"produces": [
"application/json"
],
@@ -738,6 +768,36 @@
"method": "GET",
"summary": "Get range metrics",
"$ref": "#/utils/histogram",
"nickname": "get_range_metrics_latency_histogram_depricated",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/storage_proxy/metrics/read/moving_avrage_histogram",
"operations": [
{
"method": "GET",
"summary": "Get read metrics",
"$ref": "#/utils/rate_moving_average_and_histogram",
"nickname": "get_read_metrics_latency_histogram",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/storage_proxy/metrics/range/moving_avrage_histogram",
"operations": [
{
"method": "GET",
"summary": "Get range metrics rate and histogram",
"$ref": "#/utils/rate_moving_average_and_histogram",
"nickname": "get_range_metrics_latency_histogram",
"produces": [
"application/json"
@@ -776,6 +836,36 @@
}
]
},
{
"path": "/storage_proxy/metrics/range/timeouts_rates",
"operations": [
{
"method": "GET",
"summary": "Get range metrics rates",
"type": "#/utils/rate_moving_average",
"nickname": "get_range_metrics_timeouts_rates",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/storage_proxy/metrics/range/unavailables_rates",
"operations": [
{
"method": "GET",
"summary": "Get range metrics rates",
"type": "#/utils/rate_moving_average",
"nickname": "get_range_metrics_unavailables_rates",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/storage_proxy/metrics/write/timeouts",
"operations": [
@@ -806,6 +896,36 @@
}
]
},
{
"path": "/storage_proxy/metrics/write/timeouts_rates",
"operations": [
{
"method": "GET",
"summary": "Get write metrics rates",
"type": "#/utils/rate_moving_average",
"nickname": "get_write_metrics_timeouts_rates",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/storage_proxy/metrics/write/unavailables_rates",
"operations": [
{
"method": "GET",
"summary": "Get write metrics rates",
"type": "#/utils/rate_moving_average",
"nickname": "get_write_metrics_unavailables_rates",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/storage_proxy/metrics/write/histogram",
"operations": [
@@ -813,6 +933,21 @@
"method": "GET",
"summary": "Get write metrics",
"$ref": "#/utils/histogram",
"nickname": "get_write_metrics_latency_histogram_depricated",
"produces": [
"application/json"
],
"parameters": []
}
]
},
{
"path": "/storage_proxy/metrics/write/moving_avrage_histogram",
"operations": [
{
"method": "GET",
"summary": "Get write metrics",
"$ref": "#/utils/rate_moving_average_and_histogram",
"nickname": "get_write_metrics_latency_histogram",
"produces": [
"application/json"

View File

@@ -177,6 +177,22 @@
}
]
},
{
"path":"/storage_service/scylla_release_version",
"operations":[
{
"method":"GET",
"summary":"Fetch a string representation of the Scylla version.",
"type":"string",
"nickname":"get_scylla_release_version",
"produces":[
"application/json"
],
"parameters":[
]
}
]
},
{
"path":"/storage_service/schema_version",
"operations":[

View File

@@ -65,6 +65,41 @@
"description":"The series of values to which the counts in `buckets` correspond"
}
}
}
}
},
"rate_moving_average": {
"id":"rate_moving_average",
"description":"A meter metric which measures mean throughput and one, five, and fifteen-minute exponentially-weighted moving average throughputs",
"properties":{
"rates": {
"type":"array",
"items":{
"type":"double"
},
"description":"One, five and fifteen mintues rates"
},
"mean_rate": {
"type":"double",
"description":"The mean rate from startup"
},
"count": {
"type":"long",
"description":"Total number of events from startup"
}
}
},
"rate_moving_average_and_histogram": {
"id":"rate_moving_average_and_histogram",
"description":"A timer metric which aggregates timing durations and provides duration statistics, plus throughput statistics",
"properties":{
"meter": {
"type":"rate_moving_average",
"description":"The metric rate moving average"
},
"hist": {
"type":"histogram",
"description":"The metric histogram"
}
}
}
}
}

View File

@@ -83,6 +83,10 @@ future<> set_server_storage_service(http_context& ctx) {
return register_api(ctx, "storage_service", "The storage service API", set_storage_service);
}
future<> set_server_snitch(http_context& ctx) {
return register_api(ctx, "endpoint_snitch_info", "The endpoint snitch info API", set_endpoint_snitch);
}
future<> set_server_gossip(http_context& ctx) {
return register_api(ctx, "gossiper",
"The gossiper API", set_gossiper);
@@ -118,10 +122,6 @@ future<> set_server_gossip_settle(http_context& ctx) {
rb->register_function(r, "cache_service",
"The cache service API");
set_cache_service(ctx,r);
rb->register_function(r, "endpoint_snitch_info",
"The endpoint snitch info API");
set_endpoint_snitch(ctx, r);
});
}

View File

@@ -110,44 +110,7 @@ future<json::json_return_type> sum_stats(distributed<T>& d, V F::*f) {
});
}
inline double pow2(double a) {
return a * a;
}
// FIXME: Move to utils::ihistogram::operator+=()
inline utils::ihistogram add_histogram(utils::ihistogram res,
const utils::ihistogram& val) {
if (res.count == 0) {
return val;
}
if (val.count == 0) {
return std::move(res);
}
if (res.min > val.min) {
res.min = val.min;
}
if (res.max < val.max) {
res.max = val.max;
}
double ncount = res.count + val.count;
// To get an estimated sum we take the estimated mean
// and multiply it by the true count
res.sum = res.sum + val.mean * val.count;
double a = res.count/ncount;
double b = val.count/ncount;
double mean = a * res.mean + b * val.mean;
res.variance = (res.variance + pow2(res.mean - mean) )* a +
(val.variance + pow2(val.mean -mean))* b;
res.mean = mean;
res.count = res.count + val.count;
for (auto i : val.sample) {
res.sample.push_back(i);
}
return res;
}
inline
httpd::utils_json::histogram to_json(const utils::ihistogram& val) {
@@ -156,15 +119,39 @@ httpd::utils_json::histogram to_json(const utils::ihistogram& val) {
return h;
}
template<class T, class F>
future<json::json_return_type> sum_histogram_stats(distributed<T>& d, utils::ihistogram F::*f) {
inline
httpd::utils_json::rate_moving_average meter_to_json(const utils::rate_moving_average& val) {
httpd::utils_json::rate_moving_average m;
m = val;
return m;
}
return d.map_reduce0([f](const T& p) {return p.get_stats().*f;}, utils::ihistogram(),
add_histogram).then([](const utils::ihistogram& val) {
inline
httpd::utils_json::rate_moving_average_and_histogram timer_to_json(const utils::rate_moving_average_and_histogram& val) {
httpd::utils_json::rate_moving_average_and_histogram h;
h.hist = val.hist;
h.meter = meter_to_json(val.rate);
return h;
}
template<class T, class F>
future<json::json_return_type> sum_histogram_stats(distributed<T>& d, utils::timed_rate_moving_average_and_histogram F::*f) {
return d.map_reduce0([f](const T& p) {return (p.get_stats().*f).hist;}, utils::ihistogram(),
std::plus<utils::ihistogram>()).then([](const utils::ihistogram& val) {
return make_ready_future<json::json_return_type>(to_json(val));
});
}
template<class T, class F>
future<json::json_return_type> sum_timer_stats(distributed<T>& d, utils::timed_rate_moving_average_and_histogram F::*f) {
return d.map_reduce0([f](const T& p) {return (p.get_stats().*f).rate();}, utils::rate_moving_average_and_histogram(),
std::plus<utils::rate_moving_average_and_histogram>()).then([](const utils::rate_moving_average_and_histogram& val) {
return make_ready_future<json::json_return_type>(timer_to_json(val));
});
}
inline int64_t min_int64(int64_t a, int64_t b) {
return std::min(a,b);
}

View File

@@ -38,6 +38,7 @@ struct http_context {
};
future<> set_server_init(http_context& ctx);
future<> set_server_snitch(http_context& ctx);
future<> set_server_storage_service(http_context& ctx);
future<> set_server_gossip(http_context& ctx);
future<> set_server_load_sstable(http_context& ctx);

View File

@@ -194,30 +194,46 @@ void set_cache_service(http_context& ctx, routes& r) {
});
cs::get_row_capacity.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, 0, [](const column_family& cf) {
return map_reduce_cf(ctx, uint64_t(0), [](const column_family& cf) {
return cf.get_row_cache().get_cache_tracker().region().occupancy().used_space();
}, std::plus<uint64_t>());
});
cs::get_row_hits.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, 0, [](const column_family& cf) {
return cf.get_row_cache().stats().hits;
}, std::plus<int64_t>());
return map_reduce_cf(ctx, uint64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().hits.count();
}, std::plus<uint64_t>());
});
cs::get_row_requests.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, 0, [](const column_family& cf) {
return cf.get_row_cache().stats().hits + cf.get_row_cache().stats().misses;
}, std::plus<int64_t>());
return map_reduce_cf(ctx, uint64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().hits.count() + cf.get_row_cache().stats().misses.count();
}, std::plus<uint64_t>());
});
cs::get_row_hit_rate.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, ratio_holder(), [](const column_family& cf) {
return ratio_holder(cf.get_row_cache().stats().hits + cf.get_row_cache().stats().misses,
cf.get_row_cache().stats().hits);
return ratio_holder(cf.get_row_cache().stats().hits.count() + cf.get_row_cache().stats().misses.count(),
cf.get_row_cache().stats().hits.count());
}, std::plus<ratio_holder>());
});
cs::get_row_hits_moving_avrage.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf_raw(ctx, utils::rate_moving_average(), [](const column_family& cf) {
return cf.get_row_cache().stats().hits.rate();
}, std::plus<utils::rate_moving_average>()).then([](const utils::rate_moving_average& m) {
return make_ready_future<json::json_return_type>(meter_to_json(m));
});
});
cs::get_row_requests_moving_avrage.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf_raw(ctx, utils::rate_moving_average(), [](const column_family& cf) {
return cf.get_row_cache().stats().hits.rate() + cf.get_row_cache().stats().misses.rate();
}, std::plus<utils::rate_moving_average>()).then([](const utils::rate_moving_average& m) {
return make_ready_future<json::json_return_type>(meter_to_json(m));
});
});
cs::get_row_size.set(r, [&ctx] (std::unique_ptr<request> req) {
// In origin row size is the weighted size.
// We currently do not support weights, so we use num entries instead

View File

@@ -25,10 +25,14 @@
#include "core/scollectd_api.hh"
#include "endian.h"
#include <boost/range/irange.hpp>
#include <regex>
namespace api {
using namespace scollectd;
using namespace httpd;
using namespace json;
namespace cd = httpd::collectd_json;
static auto transformer(const std::vector<collectd_value>& values) {
@@ -49,6 +53,14 @@ static auto transformer(const std::vector<collectd_value>& values) {
return collected_value;
}
static const char* str_to_regex(const sstring& v) {
if (v != "") {
return v.c_str();
}
return ".*";
}
void set_collectd(http_context& ctx, routes& r) {
cd::get_collectd.set(r, [&ctx](std::unique_ptr<request> req) {
@@ -72,7 +84,7 @@ void set_collectd(http_context& ctx, routes& r) {
});
cd::get_collectd_items.set(r, [](const_req req) {
std::vector<cd::type_instance_id> res;
std::vector<cd::collectd_metric_status> res;
auto ids = scollectd::get_collectd_ids();
for (auto i: ids) {
cd::type_instance_id id;
@@ -80,10 +92,44 @@ void set_collectd(http_context& ctx, routes& r) {
id.plugin_instance = i.plugin_instance();
id.type = i.type();
id.type_instance = i.type_instance();
res.push_back(id);
cd::collectd_metric_status it;
it.id = id;
it.enable = scollectd::is_enabled(i);
res.push_back(it);
}
return res;
});
cd::enable_collectd.set(r, [](std::unique_ptr<request> req) -> future<json::json_return_type> {
std::regex plugin(req->param["pluginid"].c_str());
std::regex instance(str_to_regex(req->get_query_param("instance")));
std::regex type(str_to_regex(req->get_query_param("type")));
std::regex type_instance(str_to_regex(req->get_query_param("type_instance")));
bool enable = strcasecmp(req->get_query_param("enable").c_str(), "true") == 0;
return smp::invoke_on_all([enable, plugin, instance, type, type_instance]() {
for (auto id: scollectd::get_collectd_ids()) {
if (std::regex_match(std::string(id.plugin()), plugin) &&
std::regex_match(std::string(id.plugin_instance()), instance) &&
std::regex_match(std::string(id.type()), type) &&
std::regex_match(std::string(id.type_instance()), type_instance)) {
scollectd::enable(id, enable);
}
}
}).then([] {
return json::json_return_type(json_void());
});
});
cd::enable_all_collectd.set(r, [](std::unique_ptr<request> req) -> future<json::json_return_type> {
bool enable = strcasecmp(req->get_query_param("enable").c_str(), "true") == 0;
return smp::invoke_on_all([enable] {
for (auto id: scollectd::get_collectd_ids()) {
scollectd::enable(id, enable);
}
}).then([] {
return json::json_return_type(json_void());
});
});
}
}

View File

@@ -77,14 +77,14 @@ future<json::json_return_type> get_cf_stats(http_context& ctx,
}
static future<json::json_return_type> get_cf_stats_count(http_context& ctx, const sstring& name,
utils::ihistogram column_family::stats::*f) {
utils::timed_rate_moving_average_and_histogram column_family::stats::*f) {
return map_reduce_cf(ctx, name, int64_t(0), [f](const column_family& cf) {
return (cf.get_stats().*f).count;
return (cf.get_stats().*f).hist.count;
}, std::plus<int64_t>());
}
static future<json::json_return_type> get_cf_stats_sum(http_context& ctx, const sstring& name,
utils::ihistogram column_family::stats::*f) {
utils::timed_rate_moving_average_and_histogram column_family::stats::*f) {
auto uuid = get_uuid(name, ctx.db.local());
return ctx.db.map_reduce0([uuid, f](database& db) {
// Histogram information is a sample of the actual load
@@ -92,7 +92,7 @@ static future<json::json_return_type> get_cf_stats_sum(http_context& ctx, const
// with count. The information is gathered in nanoseconds,
// but reported in microseconds
column_family& cf = db.find_column_family(uuid);
return ((cf.get_stats().*f).count/1000.0) * (cf.get_stats().*f).mean;
return ((cf.get_stats().*f).hist.count/1000.0) * (cf.get_stats().*f).hist.mean;
}, 0.0, std::plus<double>()).then([](double res) {
return make_ready_future<json::json_return_type>((int64_t)res);
});
@@ -100,28 +100,29 @@ static future<json::json_return_type> get_cf_stats_sum(http_context& ctx, const
static future<json::json_return_type> get_cf_stats_count(http_context& ctx,
utils::ihistogram column_family::stats::*f) {
utils::timed_rate_moving_average_and_histogram column_family::stats::*f) {
return map_reduce_cf(ctx, int64_t(0), [f](const column_family& cf) {
return (cf.get_stats().*f).count;
return (cf.get_stats().*f).hist.count;
}, std::plus<int64_t>());
}
static future<json::json_return_type> get_cf_histogram(http_context& ctx, const sstring& name,
utils::ihistogram column_family::stats::*f) {
utils::timed_rate_moving_average_and_histogram column_family::stats::*f) {
utils::UUID uuid = get_uuid(name, ctx.db.local());
return ctx.db.map_reduce0([f, uuid](const database& p) {return p.find_column_family(uuid).get_stats().*f;},
return ctx.db.map_reduce0([f, uuid](const database& p) {
return (p.find_column_family(uuid).get_stats().*f).hist;},
utils::ihistogram(),
add_histogram)
std::plus<utils::ihistogram>())
.then([](const utils::ihistogram& val) {
return make_ready_future<json::json_return_type>(to_json(val));
});
}
static future<json::json_return_type> get_cf_histogram(http_context& ctx, utils::ihistogram column_family::stats::*f) {
static future<json::json_return_type> get_cf_histogram(http_context& ctx, utils::timed_rate_moving_average_and_histogram column_family::stats::*f) {
std::function<utils::ihistogram(const database&)> fun = [f] (const database& db) {
utils::ihistogram res;
for (auto i : db.get_column_families()) {
res = add_histogram(res, i.second->get_stats().*f);
res += (i.second->get_stats().*f).hist;
}
return res;
};
@@ -132,6 +133,33 @@ static future<json::json_return_type> get_cf_histogram(http_context& ctx, utils:
});
}
static future<json::json_return_type> get_cf_rate_and_histogram(http_context& ctx, const sstring& name,
utils::timed_rate_moving_average_and_histogram column_family::stats::*f) {
utils::UUID uuid = get_uuid(name, ctx.db.local());
return ctx.db.map_reduce0([f, uuid](const database& p) {
return (p.find_column_family(uuid).get_stats().*f).rate();},
utils::rate_moving_average_and_histogram(),
std::plus<utils::rate_moving_average_and_histogram>())
.then([](const utils::rate_moving_average_and_histogram& val) {
return make_ready_future<json::json_return_type>(timer_to_json(val));
});
}
static future<json::json_return_type> get_cf_rate_and_histogram(http_context& ctx, utils::timed_rate_moving_average_and_histogram column_family::stats::*f) {
std::function<utils::rate_moving_average_and_histogram(const database&)> fun = [f] (const database& db) {
utils::rate_moving_average_and_histogram res;
for (auto i : db.get_column_families()) {
res += (i.second->get_stats().*f).rate();
}
return res;
};
return ctx.db.map(fun).then([](const std::vector<utils::rate_moving_average_and_histogram> &res) {
std::vector<httpd::utils_json::rate_moving_average_and_histogram> r;
boost::copy(res | boost::adaptors::transformed(timer_to_json), std::back_inserter(r));
return make_ready_future<json::json_return_type>(r);
});
}
static future<json::json_return_type> get_cf_unleveled_sstables(http_context& ctx, const sstring& name) {
return map_reduce_cf(ctx, name, int64_t(0), [](const column_family& cf) {
return cf.get_unleveled_sstables();
@@ -141,7 +169,7 @@ static future<json::json_return_type> get_cf_unleveled_sstables(http_context& ct
static int64_t min_row_size(column_family& cf) {
int64_t res = INT64_MAX;
for (auto i: *cf.get_sstables() ) {
res = std::min(res, i.second->get_stats_metadata().estimated_row_size.min());
res = std::min(res, i->get_stats_metadata().estimated_row_size.min());
}
return (res == INT64_MAX) ? 0 : res;
}
@@ -149,7 +177,7 @@ static int64_t min_row_size(column_family& cf) {
static int64_t max_row_size(column_family& cf) {
int64_t res = 0;
for (auto i: *cf.get_sstables() ) {
res = std::max(i.second->get_stats_metadata().estimated_row_size.max(), res);
res = std::max(i->get_stats_metadata().estimated_row_size.max(), res);
}
return res;
}
@@ -166,13 +194,58 @@ static double update_ratio(double acc, double f, double total) {
static ratio_holder mean_row_size(column_family& cf) {
ratio_holder res;
for (auto i: *cf.get_sstables() ) {
auto c = i.second->get_stats_metadata().estimated_row_size.count();
res.sub += i.second->get_stats_metadata().estimated_row_size.mean() * c;
auto c = i->get_stats_metadata().estimated_row_size.count();
res.sub += i->get_stats_metadata().estimated_row_size.mean() * c;
res.total += c;
}
return res;
}
static std::unordered_map<sstring, uint64_t> merge_maps(std::unordered_map<sstring, uint64_t> a,
const std::unordered_map<sstring, uint64_t>& b) {
a.insert(b.begin(), b.end());
return a;
}
static json::json_return_type sum_map(const std::unordered_map<sstring, uint64_t>& val) {
uint64_t res = 0;
for (auto i : val) {
res += i.second;
}
return res;
}
static future<json::json_return_type> sum_sstable(http_context& ctx, const sstring name, bool total) {
auto uuid = get_uuid(name, ctx.db.local());
return ctx.db.map_reduce0([uuid, total](database& db) {
std::unordered_map<sstring, uint64_t> m;
auto sstables = (total) ? db.find_column_family(uuid).get_sstables_including_compacted_undeleted() :
db.find_column_family(uuid).get_sstables();
for (auto t : *sstables) {
m[t->get_filename()] = t->bytes_on_disk();
}
return m;
}, std::unordered_map<sstring, uint64_t>(), merge_maps).
then([](const std::unordered_map<sstring, uint64_t>& val) {
return sum_map(val);
});
}
static future<json::json_return_type> sum_sstable(http_context& ctx, bool total) {
return map_reduce_cf_raw(ctx, std::unordered_map<sstring, uint64_t>(), [total](column_family& cf) {
std::unordered_map<sstring, uint64_t> m;
auto sstables = (total) ? cf.get_sstables_including_compacted_undeleted() :
cf.get_sstables();
for (auto t : *sstables) {
m[t->get_filename()] = t->bytes_on_disk();
}
return m;
}, merge_maps).then([](const std::unordered_map<sstring, uint64_t>& val) {
return sum_map(val);
});
}
template <typename T>
class sum_ratio {
uint64_t _n = 0;
@@ -194,7 +267,7 @@ public:
static double get_compression_ratio(column_family& cf) {
sum_ratio<double> result;
for (auto i : *cf.get_sstables()) {
auto compression_ratio = i.second->get_compression_ratio();
auto compression_ratio = i->get_compression_ratio();
if (compression_ratio != sstables::metadata_collector::NO_COMPRESSION_RATIO) {
result(compression_ratio);
}
@@ -202,6 +275,14 @@ static double get_compression_ratio(column_family& cf) {
return std::move(result).get();
}
static std::vector<uint64_t> concat_sstable_count_per_level(std::vector<uint64_t> a, std::vector<uint64_t>&& b) {
a.resize(std::max(a.size(), b.size()), 0UL);
for (auto i = 0U; i < b.size(); i++) {
a[i] += b[i];
}
return a;
}
void set_column_family(http_context& ctx, routes& r) {
cf::get_column_family_name.set(r, [&ctx] (const_req req){
vector<sstring> res;
@@ -325,7 +406,7 @@ void set_column_family(http_context& ctx, routes& r) {
return map_reduce_cf(ctx, req->param["name"], sstables::estimated_histogram(0), [](column_family& cf) {
sstables::estimated_histogram res(0);
for (auto i: *cf.get_sstables() ) {
res.merge(i.second->get_stats_metadata().estimated_row_size);
res.merge(i->get_stats_metadata().estimated_row_size);
}
return res;
},
@@ -336,7 +417,7 @@ void set_column_family(http_context& ctx, routes& r) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
uint64_t res = 0;
for (auto i: *cf.get_sstables() ) {
res += i.second->get_stats_metadata().estimated_row_size.count();
res += i->get_stats_metadata().estimated_row_size.count();
}
return res;
},
@@ -347,7 +428,7 @@ void set_column_family(http_context& ctx, routes& r) {
return map_reduce_cf(ctx, req->param["name"], sstables::estimated_histogram(0), [](column_family& cf) {
sstables::estimated_histogram res(0);
for (auto i: *cf.get_sstables() ) {
res.merge(i.second->get_stats_metadata().estimated_column_count);
res.merge(i->get_stats_metadata().estimated_column_count);
}
return res;
},
@@ -384,10 +465,14 @@ void set_column_family(http_context& ctx, routes& r) {
return get_cf_stats_count(ctx, &column_family::stats::writes);
});
cf::get_read_latency_histogram.set(r, [&ctx] (std::unique_ptr<request> req) {
cf::get_read_latency_histogram_depricated.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_histogram(ctx, req->param["name"], &column_family::stats::reads);
});
cf::get_read_latency_histogram.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_rate_and_histogram(ctx, req->param["name"], &column_family::stats::reads);
});
cf::get_read_latency.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_stats_sum(ctx,req->param["name"] ,&column_family::stats::reads);
});
@@ -396,24 +481,40 @@ void set_column_family(http_context& ctx, routes& r) {
return get_cf_stats_sum(ctx, req->param["name"] ,&column_family::stats::writes);
});
cf::get_all_read_latency_histogram.set(r, [&ctx] (std::unique_ptr<request> req) {
cf::get_all_read_latency_histogram_depricated.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_histogram(ctx, &column_family::stats::writes);
});
cf::get_write_latency_histogram.set(r, [&ctx] (std::unique_ptr<request> req) {
cf::get_all_read_latency_histogram.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_rate_and_histogram(ctx, &column_family::stats::writes);
});
cf::get_write_latency_histogram_depricated.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_histogram(ctx, req->param["name"], &column_family::stats::writes);
});
cf::get_all_write_latency_histogram.set(r, [&ctx] (std::unique_ptr<request> req) {
cf::get_write_latency_histogram.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_rate_and_histogram(ctx, req->param["name"], &column_family::stats::writes);
});
cf::get_all_write_latency_histogram_depricated.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_histogram(ctx, &column_family::stats::writes);
});
cf::get_all_write_latency_histogram.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_rate_and_histogram(ctx, &column_family::stats::writes);
});
cf::get_pending_compactions.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_stats(ctx, req->param["name"], &column_family::stats::pending_compactions);
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return cf.get_compaction_strategy().estimated_pending_compactions(cf);
}, std::plus<int64_t>());
});
cf::get_all_pending_compactions.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_stats(ctx, &column_family::stats::pending_compactions);
return map_reduce_cf(ctx, int64_t(0), [](column_family& cf) {
return cf.get_compaction_strategy().estimated_pending_compactions(cf);
}, std::plus<int64_t>());
});
cf::get_live_ss_table_count.set(r, [&ctx] (std::unique_ptr<request> req) {
@@ -429,19 +530,19 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_live_disk_space_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_stats(ctx, req->param["name"], &column_family::stats::live_disk_space_used);
return sum_sstable(ctx, req->param["name"], false);
});
cf::get_all_live_disk_space_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_stats(ctx, &column_family::stats::live_disk_space_used);
return sum_sstable(ctx, false);
});
cf::get_total_disk_space_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_stats(ctx, req->param["name"], &column_family::stats::total_disk_space_used);
return sum_sstable(ctx, req->param["name"], true);
});
cf::get_all_total_disk_space_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cf_stats(ctx, &column_family::stats::total_disk_space_used);
return sum_sstable(ctx, true);
});
cf::get_min_row_size.set(r, [&ctx] (std::unique_ptr<request> req) {
@@ -471,7 +572,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_bloom_filter_false_positives.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return s + sst.second->filter_get_false_positive();
return s + sst->filter_get_false_positive();
});
}, std::plus<uint64_t>());
});
@@ -479,7 +580,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_all_bloom_filter_false_positives.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return s + sst.second->filter_get_false_positive();
return s + sst->filter_get_false_positive();
});
}, std::plus<uint64_t>());
});
@@ -487,7 +588,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_recent_bloom_filter_false_positives.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return s + sst.second->filter_get_recent_false_positive();
return s + sst->filter_get_recent_false_positive();
});
}, std::plus<uint64_t>());
});
@@ -495,7 +596,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_all_recent_bloom_filter_false_positives.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return s + sst.second->filter_get_recent_false_positive();
return s + sst->filter_get_recent_false_positive();
});
}, std::plus<uint64_t>());
});
@@ -503,8 +604,8 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_bloom_filter_false_ratio.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], double(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), double(0), [](double s, auto& sst) {
double f = sst.second->filter_get_false_positive();
return update_ratio(s, f, f + sst.second->filter_get_true_positive());
double f = sst->filter_get_false_positive();
return update_ratio(s, f, f + sst->filter_get_true_positive());
});
}, std::plus<double>());
});
@@ -512,8 +613,8 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_all_bloom_filter_false_ratio.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, double(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), double(0), [](double s, auto& sst) {
double f = sst.second->filter_get_false_positive();
return update_ratio(s, f, f + sst.second->filter_get_true_positive());
double f = sst->filter_get_false_positive();
return update_ratio(s, f, f + sst->filter_get_true_positive());
});
}, std::plus<double>());
});
@@ -521,8 +622,8 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_recent_bloom_filter_false_ratio.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], double(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), double(0), [](double s, auto& sst) {
double f = sst.second->filter_get_recent_false_positive();
return update_ratio(s, f, f + sst.second->filter_get_recent_true_positive());
double f = sst->filter_get_recent_false_positive();
return update_ratio(s, f, f + sst->filter_get_recent_true_positive());
});
}, std::plus<double>());
});
@@ -530,8 +631,8 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_all_recent_bloom_filter_false_ratio.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, double(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), double(0), [](double s, auto& sst) {
double f = sst.second->filter_get_recent_false_positive();
return update_ratio(s, f, f + sst.second->filter_get_recent_true_positive());
double f = sst->filter_get_recent_false_positive();
return update_ratio(s, f, f + sst->filter_get_recent_true_positive());
});
}, std::plus<double>());
});
@@ -539,7 +640,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_bloom_filter_disk_space_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->filter_size();
return sst->filter_size();
});
}, std::plus<uint64_t>());
});
@@ -547,7 +648,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_all_bloom_filter_disk_space_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->filter_size();
return sst->filter_size();
});
}, std::plus<uint64_t>());
});
@@ -555,7 +656,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_bloom_filter_off_heap_memory_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->filter_memory_size();
return sst->filter_memory_size();
});
}, std::plus<uint64_t>());
});
@@ -563,7 +664,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_all_bloom_filter_off_heap_memory_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->filter_memory_size();
return sst->filter_memory_size();
});
}, std::plus<uint64_t>());
});
@@ -571,7 +672,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_index_summary_off_heap_memory_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->get_summary().memory_footprint();
return sst->get_summary().memory_footprint();
});
}, std::plus<uint64_t>());
});
@@ -579,7 +680,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_all_index_summary_off_heap_memory_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->get_summary().memory_footprint();
return sst->get_summary().memory_footprint();
});
}, std::plus<uint64_t>());
});
@@ -652,27 +753,35 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_row_cache_hit.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().hits;
}, std::plus<int64_t>());
return map_reduce_cf_raw(ctx, req->param["name"], utils::rate_moving_average(), [](const column_family& cf) {
return cf.get_row_cache().stats().hits.rate();
}, std::plus<utils::rate_moving_average>()).then([](const utils::rate_moving_average& m) {
return make_ready_future<json::json_return_type>(meter_to_json(m));
});
});
cf::get_all_row_cache_hit.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, int64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().hits;
}, std::plus<int64_t>());
return map_reduce_cf_raw(ctx, utils::rate_moving_average(), [](const column_family& cf) {
return cf.get_row_cache().stats().hits.rate();
}, std::plus<utils::rate_moving_average>()).then([](const utils::rate_moving_average& m) {
return make_ready_future<json::json_return_type>(meter_to_json(m));
});
});
cf::get_row_cache_miss.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().misses;
}, std::plus<int64_t>());
return map_reduce_cf_raw(ctx, req->param["name"], utils::rate_moving_average(), [](const column_family& cf) {
return cf.get_row_cache().stats().misses.rate();
}, std::plus<utils::rate_moving_average>()).then([](const utils::rate_moving_average& m) {
return make_ready_future<json::json_return_type>(meter_to_json(m));
});
});
cf::get_all_row_cache_miss.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, int64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().misses;
}, std::plus<int64_t>());
return map_reduce_cf_raw(ctx, utils::rate_moving_average(), [](const column_family& cf) {
return cf.get_row_cache().stats().misses.rate();
}, std::plus<utils::rate_moving_average>()).then([](const utils::rate_moving_average& m) {
return make_ready_future<json::json_return_type>(meter_to_json(m));
});
});
@@ -799,12 +908,11 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_sstable_count_per_level.set(r, [&ctx](std::unique_ptr<request> req) {
// TBD
// FIXME
// This is a workaround, until there will be an API to return the count
// per level, we return an empty array
vector<uint64_t> res;
return make_ready_future<json::json_return_type>(res);
return map_reduce_cf_raw(ctx, req->param["name"], std::vector<uint64_t>(), [](const column_family& cf) {
return cf.sstable_count_per_level();
}, concat_sstable_count_per_level).then([](const std::vector<uint64_t>& res) {
return make_ready_future<json::json_return_type>(res);
});
});
}
}

View File

@@ -34,31 +34,44 @@ future<> foreach_column_family(http_context& ctx, const sstring& name, std::func
template<class Mapper, class I, class Reducer>
future<json::json_return_type> map_reduce_cf(http_context& ctx, const sstring& name, I init,
future<I> map_reduce_cf_raw(http_context& ctx, const sstring& name, I init,
Mapper mapper, Reducer reducer) {
auto uuid = get_uuid(name, ctx.db.local());
return ctx.db.map_reduce0([mapper, uuid](database& db) {
return mapper(db.find_column_family(uuid));
}, init, reducer).then([](const I& res) {
}, init, reducer);
}
template<class Mapper, class I, class Reducer>
future<json::json_return_type> map_reduce_cf(http_context& ctx, const sstring& name, I init,
Mapper mapper, Reducer reducer) {
return map_reduce_cf_raw(ctx, name, init, mapper, reducer).then([](const I& res) {
return make_ready_future<json::json_return_type>(res);
});
}
template<class Mapper, class I, class Reducer, class Result>
future<json::json_return_type> map_reduce_cf(http_context& ctx, const sstring& name, I init,
future<I> map_reduce_cf_raw(http_context& ctx, const sstring& name, I init,
Mapper mapper, Reducer reducer, Result result) {
auto uuid = get_uuid(name, ctx.db.local());
return ctx.db.map_reduce0([mapper, uuid](database& db) {
return mapper(db.find_column_family(uuid));
}, init, reducer).then([result](const I& res) mutable {
}, init, reducer);
}
template<class Mapper, class I, class Reducer, class Result>
future<json::json_return_type> map_reduce_cf(http_context& ctx, const sstring& name, I init,
Mapper mapper, Reducer reducer, Result result) {
return map_reduce_cf_raw(ctx, name, init, mapper, reducer, result).then([result](const I& res) mutable {
result = res;
return make_ready_future<json::json_return_type>(result);
});
}
template<class Mapper, class I, class Reducer>
future<json::json_return_type> map_reduce_cf(http_context& ctx, I init,
future<I> map_reduce_cf_raw(http_context& ctx, I init,
Mapper mapper, Reducer reducer) {
return ctx.db.map_reduce0([mapper, init, reducer](database& db) {
auto res = init;
@@ -66,10 +79,18 @@ future<json::json_return_type> map_reduce_cf(http_context& ctx, I init,
res = reducer(res, mapper(*i.second.get()));
}
return res;
}, init, reducer).then([](const I& res) {
}, init, reducer);
}
template<class Mapper, class I, class Reducer>
future<json::json_return_type> map_reduce_cf(http_context& ctx, I init,
Mapper mapper, Reducer reducer) {
return map_reduce_cf_raw(ctx, init, mapper, reducer).then([](const I& res) {
return make_ready_future<json::json_return_type>(res);
});
}
future<json::json_return_type> get_cf_stats(http_context& ctx, const sstring& name,
int64_t column_family::stats::*f);

View File

@@ -22,6 +22,7 @@
#include "compaction_manager.hh"
#include "api/api-doc/compaction_manager.json.hh"
#include "db/system_keyspace.hh"
#include "column_family.hh"
namespace api {
@@ -78,7 +79,9 @@ void set_compaction_manager(http_context& ctx, routes& r) {
});
cm::get_pending_tasks.set(r, [&ctx] (std::unique_ptr<request> req) {
return get_cm_stats(ctx, &compaction_manager::stats::pending_tasks);
return map_reduce_cf(ctx, int64_t(0), [](column_family& cf) {
return cf.get_compaction_strategy().estimated_pending_compactions(cf);
}, std::plus<int64_t>());
});
cm::get_completed_tasks.set(r, [&ctx] (std::unique_ptr<request> req) {

View File

@@ -33,6 +33,25 @@ namespace sp = httpd::storage_proxy_json;
using proxy = service::storage_proxy;
using namespace json;
static future<utils::rate_moving_average> sum_timed_rate(distributed<proxy>& d, utils::timed_rate_moving_average proxy::stats::*f) {
return d.map_reduce0([f](const proxy& p) {return (p.get_stats().*f).rate();}, utils::rate_moving_average(),
std::plus<utils::rate_moving_average>());
}
static future<json::json_return_type> sum_timed_rate_as_obj(distributed<proxy>& d, utils::timed_rate_moving_average proxy::stats::*f) {
return sum_timed_rate(d, f).then([](const utils::rate_moving_average& val) {
httpd::utils_json::rate_moving_average m;
m = val;
return make_ready_future<json::json_return_type>(m);
});
}
static future<json::json_return_type> sum_timed_rate_as_long(distributed<proxy>& d, utils::timed_rate_moving_average proxy::stats::*f) {
return sum_timed_rate(d, f).then([](const utils::rate_moving_average& val) {
return make_ready_future<json::json_return_type>(val.count);
});
}
static future<json::json_return_type> sum_estimated_histogram(http_context& ctx, sstables::estimated_histogram proxy::stats::*f) {
return ctx.sp.map_reduce0([f](const proxy& p) {return p.get_stats().*f;}, sstables::estimated_histogram(),
sstables::merge).then([](const sstables::estimated_histogram& val) {
@@ -42,8 +61,8 @@ static future<json::json_return_type> sum_estimated_histogram(http_context& ctx
});
}
static future<json::json_return_type> total_latency(http_context& ctx, utils::ihistogram proxy::stats::*f) {
return ctx.sp.map_reduce0([f](const proxy& p) {return (p.get_stats().*f).mean * (p.get_stats().*f).count;}, 0.0,
static future<json::json_return_type> total_latency(http_context& ctx, utils::timed_rate_moving_average_and_histogram proxy::stats::*f) {
return ctx.sp.map_reduce0([f](const proxy& p) {return (p.get_stats().*f).hist.mean * (p.get_stats().*f).hist.count;}, 0.0,
std::plus<double>()).then([](double val) {
int64_t res = val;
return make_ready_future<json::json_return_type>(res);
@@ -291,41 +310,77 @@ void set_storage_proxy(http_context& ctx, routes& r) {
});
sp::get_read_metrics_timeouts.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_stats(ctx.sp, &proxy::stats::read_timeouts);
return sum_timed_rate_as_long(ctx.sp, &proxy::stats::read_timeouts);
});
sp::get_read_metrics_unavailables.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_stats(ctx.sp, &proxy::stats::read_unavailables);
return sum_timed_rate_as_long(ctx.sp, &proxy::stats::read_unavailables);
});
sp::get_range_metrics_timeouts.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_stats(ctx.sp, &proxy::stats::range_slice_timeouts);
return sum_timed_rate_as_long(ctx.sp, &proxy::stats::range_slice_timeouts);
});
sp::get_range_metrics_unavailables.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_stats(ctx.sp, &proxy::stats::range_slice_unavailables);
return sum_timed_rate_as_long(ctx.sp, &proxy::stats::range_slice_unavailables);
});
sp::get_write_metrics_timeouts.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_stats(ctx.sp, &proxy::stats::write_timeouts);
return sum_timed_rate_as_long(ctx.sp, &proxy::stats::write_timeouts);
});
sp::get_write_metrics_unavailables.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_stats(ctx.sp, &proxy::stats::write_unavailables);
return sum_timed_rate_as_long(ctx.sp, &proxy::stats::write_unavailables);
});
sp::get_range_metrics_latency_histogram.set(r, [&ctx](std::unique_ptr<request> req) {
sp::get_read_metrics_timeouts_rates.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_timed_rate_as_obj(ctx.sp, &proxy::stats::read_timeouts);
});
sp::get_read_metrics_unavailables_rates.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_timed_rate_as_obj(ctx.sp, &proxy::stats::read_unavailables);
});
sp::get_range_metrics_timeouts_rates.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_timed_rate_as_obj(ctx.sp, &proxy::stats::range_slice_timeouts);
});
sp::get_range_metrics_unavailables_rates.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_timed_rate_as_obj(ctx.sp, &proxy::stats::range_slice_unavailables);
});
sp::get_write_metrics_timeouts_rates.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_timed_rate_as_obj(ctx.sp, &proxy::stats::write_timeouts);
});
sp::get_write_metrics_unavailables_rates.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_timed_rate_as_obj(ctx.sp, &proxy::stats::write_unavailables);
});
sp::get_range_metrics_latency_histogram_depricated.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_histogram_stats(ctx.sp, &proxy::stats::range);
});
-sp::get_write_metrics_latency_histogram.set(r, [&ctx](std::unique_ptr<request> req) {
+sp::get_write_metrics_latency_histogram_depricated.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_histogram_stats(ctx.sp, &proxy::stats::write);
});
-sp::get_read_metrics_latency_histogram.set(r, [&ctx](std::unique_ptr<request> req) {
+sp::get_read_metrics_latency_histogram_depricated.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_histogram_stats(ctx.sp, &proxy::stats::read);
});
+sp::get_range_metrics_latency_histogram.set(r, [&ctx](std::unique_ptr<request> req) {
+return sum_timer_stats(ctx.sp, &proxy::stats::range);
+});
+sp::get_write_metrics_latency_histogram.set(r, [&ctx](std::unique_ptr<request> req) {
+return sum_timer_stats(ctx.sp, &proxy::stats::write);
+});
+sp::get_read_metrics_latency_histogram.set(r, [&ctx](std::unique_ptr<request> req) {
+return sum_timer_stats(ctx.sp, &proxy::stats::read);
+});
sp::get_read_estimated_histogram.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_estimated_histogram(ctx, &proxy::stats::estimated_read);
});
@@ -342,7 +397,7 @@ void set_storage_proxy(http_context& ctx, routes& r) {
});
sp::get_range_estimated_histogram.set(r, [&ctx](std::unique_ptr<request> req) {
-return sum_histogram_stats(ctx.sp, &proxy::stats::read);
+return sum_timer_stats(ctx.sp, &proxy::stats::read);
});
sp::get_range_latency.set(r, [&ctx](std::unique_ptr<request> req) {


@@ -31,6 +31,7 @@
#include "locator/snitch_base.hh"
#include "column_family.hh"
#include "log.hh"
#include "release.hh"
namespace api {
@@ -121,6 +122,9 @@ void set_storage_service(http_context& ctx, routes& r) {
return service::get_local_storage_service().get_release_version();
});
ss::get_scylla_release_version.set(r, [](const_req req) {
return scylla_version();
});
ss::get_schema_version.set(r, [](const_req req) {
return service::get_local_storage_service().get_schema_version();
});
@@ -659,16 +663,22 @@ void set_storage_service(http_context& ctx, routes& r) {
});
ss::set_trace_probability.set(r, [](std::unique_ptr<request> req) {
-//TBD
-unimplemented();
+auto probability = req->get_query_param("probability");
-return make_ready_future<json::json_return_type>(json_void());
+try {
+double real_prob = std::stod(probability.c_str());
+return tracing::tracing::tracing_instance().invoke_on_all([real_prob] (auto& local_tracing) {
+local_tracing.set_trace_probability(real_prob);
+}).then([] {
+return make_ready_future<json::json_return_type>(json_void());
+});
+} catch (...) {
+throw httpd::bad_param_exception(sprint("Bad format of a probability value: \"%s\"", probability.c_str()));
+}
});
ss::get_trace_probability.set(r, [](std::unique_ptr<request> req) {
-//TBD
-unimplemented();
-return make_ready_future<json::json_return_type>(0);
+return make_ready_future<json::json_return_type>(tracing::tracing::get_local_tracing_instance().get_trace_probability());
});
ss::enable_auto_compaction.set(r, [&ctx](std::unique_ptr<request> req) {


@@ -63,5 +63,8 @@ public:
::feed_hash(as_collection_mutation(), h, def.type);
}
}
size_t memory_usage() const {
return _data.memory_usage();
}
friend std::ostream& operator<<(std::ostream&, const atomic_cell_or_collection&);
};


@@ -47,7 +47,7 @@
#include "authorizer.hh"
#include "database.hh"
#include "cql3/query_processor.hh"
-#include "cql3/statements/cf_statement.hh"
+#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/statements/create_table_statement.hh"
#include "db/config.hh"
#include "service/migration_manager.hh"
@@ -348,8 +348,8 @@ future<> auth::auth::setup_table(const sstring& name, const sstring& cql) {
return make_ready_future();
}
-::shared_ptr<cql3::statements::cf_statement> parsed = static_pointer_cast<
-cql3::statements::cf_statement>(cql3::query_processor::parse_statement(cql));
+::shared_ptr<cql3::statements::raw::cf_statement> parsed = static_pointer_cast<
+cql3::statements::raw::cf_statement>(cql3::query_processor::parse_statement(cql));
parsed->prepare_keyspace(AUTH_KS);
::shared_ptr<cql3::statements::create_table_statement> statement =
static_pointer_cast<cql3::statements::create_table_statement>(


@@ -47,11 +47,8 @@
const sstring auth::data_resource::ROOT_NAME("data");
auth::data_resource::data_resource(level l, const sstring& ks, const sstring& cf)
-: _ks(ks), _cf(cf)
+: _level(l), _ks(ks), _cf(cf)
{
-if (l != get_level()) {
-throw std::invalid_argument("level/keyspace/column mismatch");
-}
}
auth::data_resource::data_resource()
@@ -67,14 +64,7 @@ auth::data_resource::data_resource(const sstring& ks, const sstring& cf)
{}
auth::data_resource::level auth::data_resource::get_level() const {
-if (!_cf.empty()) {
-assert(!_ks.empty());
-return level::COLUMN_FAMILY;
-}
-if (!_ks.empty()) {
-return level::KEYSPACE;
-}
-return level::ROOT;
+return _level;
}
auth::data_resource auth::data_resource::from_name(


@@ -56,6 +56,7 @@ private:
static const sstring ROOT_NAME;
level _level;
sstring _ks;
sstring _cf;


@@ -218,12 +218,12 @@ future<::shared_ptr<auth::authenticated_user> > auth::password_authenticator::au
// obsolete prepared statements pretty quickly.
// Rely on query processing caching statements instead, and let's assume
// that a map lookup string->statement is not going to cost us much.
-auto& qp = cql3::get_local_query_processor();
-return qp.process(
-sprint("SELECT %s FROM %s.%s WHERE %s = ?", SALTED_HASH,
-auth::AUTH_KS, CREDENTIALS_CF, USER_NAME),
-consistency_for_user(username), { username }, true).then_wrapped(
-[=](future<::shared_ptr<cql3::untyped_result_set>> f) {
+return futurize_apply([this, username, password] {
+auto& qp = cql3::get_local_query_processor();
+return qp.process(sprint("SELECT %s FROM %s.%s WHERE %s = ?", SALTED_HASH,
+auth::AUTH_KS, CREDENTIALS_CF, USER_NAME),
+consistency_for_user(username), {username}, true);
+}).then_wrapped([=](future<::shared_ptr<cql3::untyped_result_set>> f) {
try {
auto res = f.get0();
if (res->empty() || !checkpw(password, res->one().get_as<sstring>(SALTED_HASH))) {
@@ -234,6 +234,8 @@ future<::shared_ptr<auth::authenticated_user> > auth::password_authenticator::au
std::throw_with_nested(exceptions::authentication_exception("Could not verify password"));
} catch (exceptions::request_execution_exception& e) {
std::throw_with_nested(exceptions::authentication_exception(e.what()));
} catch (...) {
std::throw_with_nested(exceptions::authentication_exception("authentication failed"));
}
});
}


@@ -40,6 +40,7 @@
*/
#include <unordered_map>
#include <boost/algorithm/string.hpp>
#include "permission.hh"
const auth::permission_set auth::permissions::ALL_DATA =
@@ -75,7 +76,9 @@ const sstring& auth::permissions::to_string(permission p) {
}
auth::permission auth::permissions::from_string(const sstring& s) {
-return permission_names.at(s);
+sstring upper(s);
+boost::to_upper(upper);
+return permission_names.at(upper);
}
std::unordered_set<sstring> auth::permissions::to_strings(const permission_set& set) {


@@ -28,7 +28,11 @@ class checked_file_impl : public file_impl {
public:
checked_file_impl(disk_error_signal_type& s, file f)
-: _signal(s) , _file(f) {}
+: _signal(s) , _file(f) {
+_memory_dma_alignment = f.memory_dma_alignment();
+_disk_read_dma_alignment = f.disk_read_dma_alignment();
+_disk_write_dma_alignment = f.disk_write_dma_alignment();
+}
virtual future<size_t> write_dma(uint64_t pos, const void* buffer, size_t len, const io_priority_class& pc) override {
return do_io_check(_signal, [&] {


@@ -0,0 +1,127 @@
/*
* Copyright (C) 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "keys.hh"
#include "schema.hh"
#include "range.hh"
/**
* Represents the kind of bound in a range tombstone.
*/
enum class bound_kind : uint8_t {
excl_end = 0,
incl_start = 1,
// values 2 to 5 are reserved for forward Origin compatibility
incl_end = 6,
excl_start = 7,
};
std::ostream& operator<<(std::ostream& out, const bound_kind k);
bound_kind invert_kind(bound_kind k);
int32_t weight(bound_kind k);
static inline bound_kind flip_bound_kind(bound_kind bk)
{
switch (bk) {
case bound_kind::excl_end: return bound_kind::excl_start;
case bound_kind::incl_end: return bound_kind::incl_start;
case bound_kind::excl_start: return bound_kind::excl_end;
case bound_kind::incl_start: return bound_kind::incl_end;
}
abort();
}
class bound_view {
const static thread_local clustering_key empty_prefix;
public:
const clustering_key_prefix& prefix;
bound_kind kind;
bound_view(const clustering_key_prefix& prefix, bound_kind kind)
: prefix(prefix)
, kind(kind)
{ }
struct compare {
// To make it assignable and to avoid taking a schema_ptr, we
// wrap the schema reference.
std::reference_wrapper<const schema> _s;
compare(const schema& s) : _s(s)
{ }
bool operator()(const clustering_key_prefix& p1, int32_t w1, const clustering_key_prefix& p2, int32_t w2) const {
auto type = _s.get().clustering_key_prefix_type();
auto res = prefix_equality_tri_compare(type->types().begin(),
type->begin(p1), type->end(p1),
type->begin(p2), type->end(p2),
tri_compare);
if (res) {
return res < 0;
}
auto d1 = p1.size(_s);
auto d2 = p2.size(_s);
if (d1 == d2) {
return w1 < w2;
}
return d1 < d2 ? w1 <= 0 : w2 > 0;
}
bool operator()(const bound_view b, const clustering_key_prefix& p) const {
return operator()(b.prefix, weight(b.kind), p, 0);
}
bool operator()(const clustering_key_prefix& p, const bound_view b) const {
return operator()(p, 0, b.prefix, weight(b.kind));
}
bool operator()(const bound_view b1, const bound_view b2) const {
return operator()(b1.prefix, weight(b1.kind), b2.prefix, weight(b2.kind));
}
};
bool equal(const schema& s, const bound_view other) const {
return kind == other.kind && prefix.equal(s, other.prefix);
}
bool adjacent(const schema& s, const bound_view other) const {
return invert_kind(other.kind) == kind && prefix.equal(s, other.prefix);
}
static bound_view bottom() {
return {empty_prefix, bound_kind::incl_start};
}
static bound_view top() {
return {empty_prefix, bound_kind::incl_end};
}
/*
template<template<typename> typename T, typename U>
concept bool Range() {
return requires (T<U> range) {
{ range.start() } -> stdx::optional<U>;
{ range.end() } -> stdx::optional<U>;
};
};*/
template<template<typename> typename Range>
static std::pair<bound_view, bound_view> from_range(const Range<clustering_key_prefix>& range) {
return {
range.start() ? bound_view(range.start()->value(), range.start()->is_inclusive() ? bound_kind::incl_start : bound_kind::excl_start) : bottom(),
range.end() ? bound_view(range.end()->value(), range.end()->is_inclusive() ? bound_kind::incl_end : bound_kind::excl_end) : top(),
};
}
friend std::ostream& operator<<(std::ostream& out, const bound_view& b) {
return out << "{bound: prefix=" << b.prefix << ", kind=" << b.kind << "}";
}
};

clustering_key_filter.cc (new file, 138 lines)

@@ -0,0 +1,138 @@
/*
* Copyright (C) 2016 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "clustering_key_filter.hh"
#include "keys.hh"
#include "query-request.hh"
#include "range.hh"
namespace query {
const clustering_row_ranges&
clustering_key_filtering_context::get_ranges(const partition_key& key) const {
static thread_local clustering_row_ranges full_range = {{}};
return _factory ? _factory->get_ranges(key) : full_range;
}
clustering_key_filtering_context clustering_key_filtering_context::create_no_filtering() {
return clustering_key_filtering_context{};
}
const clustering_key_filtering_context no_clustering_key_filtering =
clustering_key_filtering_context::create_no_filtering();
class stateless_clustering_key_filter_factory : public clustering_key_filter_factory {
clustering_key_filter _filter;
clustering_row_ranges _ranges;
public:
stateless_clustering_key_filter_factory(clustering_row_ranges&& ranges,
clustering_key_filter&& filter)
: _filter(std::move(filter)), _ranges(std::move(ranges)) {}
virtual clustering_key_filter get_filter(const partition_key& key) override {
return _filter;
}
virtual clustering_key_filter get_filter_for_sorted(const partition_key& key) override {
return _filter;
}
virtual const clustering_row_ranges& get_ranges(const partition_key& key) override {
return _ranges;
}
virtual bool want_static_columns(const partition_key& key) override {
return true;
}
};
class partition_slice_clustering_key_filter_factory : public clustering_key_filter_factory {
schema_ptr _schema;
const partition_slice& _slice;
clustering_key_prefix::prefix_equal_tri_compare _cmp;
clustering_row_ranges _ck_ranges;
public:
partition_slice_clustering_key_filter_factory(schema_ptr s, const partition_slice& slice)
: _schema(std::move(s)), _slice(slice), _cmp(*_schema) {}
virtual clustering_key_filter get_filter(const partition_key& key) override {
const clustering_row_ranges& ranges = _slice.row_ranges(*_schema, key);
return [this, &ranges] (const clustering_key& key) {
return std::any_of(std::begin(ranges), std::end(ranges),
[this, &key] (const clustering_range& r) { return r.contains(key, _cmp); });
};
}
virtual clustering_key_filter get_filter_for_sorted(const partition_key& key) override {
const clustering_row_ranges& ranges = _slice.row_ranges(*_schema, key);
return [this, &ranges] (const clustering_key& key) {
return std::any_of(std::begin(ranges), std::end(ranges),
[this, &key] (const clustering_range& r) { return r.contains(key, _cmp); });
};
}
virtual const clustering_row_ranges& get_ranges(const partition_key& key) override {
if (_slice.options.contains(query::partition_slice::option::reversed)) {
_ck_ranges = _slice.row_ranges(*_schema, key);
std::reverse(_ck_ranges.begin(), _ck_ranges.end());
return _ck_ranges;
}
return _slice.row_ranges(*_schema, key);
}
virtual bool want_static_columns(const partition_key& key) override {
return true;
}
};
static const shared_ptr<clustering_key_filter_factory>
create_partition_slice_filter(schema_ptr s, const partition_slice& slice) {
return ::make_shared<partition_slice_clustering_key_filter_factory>(std::move(s), slice);
}
const clustering_key_filtering_context
clustering_key_filtering_context::create(schema_ptr schema, const partition_slice& slice) {
static thread_local clustering_key_filtering_context accept_all = clustering_key_filtering_context(
::make_shared<stateless_clustering_key_filter_factory>(clustering_row_ranges{{}},
[](const clustering_key&) { return true; }));
static thread_local clustering_key_filtering_context reject_all = clustering_key_filtering_context(
::make_shared<stateless_clustering_key_filter_factory>(clustering_row_ranges{},
[](const clustering_key&) { return false; }));
if (slice.get_specific_ranges()) {
return clustering_key_filtering_context(create_partition_slice_filter(schema, slice));
}
const clustering_row_ranges& ranges = slice.default_row_ranges();
if (ranges.empty()) {
return reject_all;
}
if (ranges.size() == 1 && ranges[0].is_full()) {
return accept_all;
}
return clustering_key_filtering_context(create_partition_slice_filter(schema, slice));
}
}

clustering_key_filter.hh (new file, 82 lines)

@@ -0,0 +1,82 @@
/*
* Copyright (C) 2016 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <functional>
#include <vector>
#include "core/shared_ptr.hh"
#include "database_fwd.hh"
#include "schema.hh"
template<typename T> class range;
namespace query {
class partition_slice;
// A predicate that tells if a clustering key should be accepted.
using clustering_key_filter = std::function<bool(const clustering_key&)>;
// A factory for clustering key filter which can be reused for multiple clustering keys.
class clustering_key_filter_factory {
public:
// Create a clustering key filter that can be used for multiple clustering keys with no restrictions.
virtual clustering_key_filter get_filter(const partition_key&) = 0;
// Create a clustering key filter that can be used for multiple clustering keys but they have to be sorted.
virtual clustering_key_filter get_filter_for_sorted(const partition_key&) = 0;
virtual const std::vector<range<clustering_key_prefix>>& get_ranges(const partition_key&) = 0;
// Whether we want to get the static row, in addition to the desired clustering rows
virtual bool want_static_columns(const partition_key&) = 0;
virtual ~clustering_key_filter_factory() = default;
};
class clustering_key_filtering_context {
private:
shared_ptr<clustering_key_filter_factory> _factory;
clustering_key_filtering_context() {};
clustering_key_filtering_context(shared_ptr<clustering_key_filter_factory> factory) : _factory(factory) {}
public:
// Create a clustering key filter that can be used for multiple clustering keys with no restrictions.
clustering_key_filter get_filter(const partition_key& key) const {
return _factory ? _factory->get_filter(key) : [] (const clustering_key&) { return true; };
}
// Create a clustering key filter that can be used for multiple clustering keys but they have to be sorted.
clustering_key_filter get_filter_for_sorted(const partition_key& key) const {
return _factory ? _factory->get_filter_for_sorted(key) : [] (const clustering_key&) { return true; };
}
const std::vector<range<clustering_key_prefix>>& get_ranges(const partition_key& key) const;
bool want_static_columns(const partition_key& key) const {
return _factory ? _factory->want_static_columns(key) : true;
}
static const clustering_key_filtering_context create(schema_ptr, const partition_slice&);
static clustering_key_filtering_context create_no_filtering();
};
extern const clustering_key_filtering_context no_clustering_key_filtering;
}


@@ -22,6 +22,8 @@
#pragma once
class column_family;
class schema;
using schema_ptr = lw_shared_ptr<const schema>;
namespace sstables {
@@ -30,11 +32,12 @@ enum class compaction_strategy_type {
major,
size_tiered,
leveled,
-// FIXME: Add support to DateTiered.
+date_tiered,
};
class compaction_strategy_impl;
class sstable;
class sstable_set;
struct compaction_descriptor;
class compaction_strategy {
@@ -51,6 +54,16 @@ public:
// Return a list of sstables to be compacted after applying the strategy.
compaction_descriptor get_sstables_for_compaction(column_family& cfs, std::vector<lw_shared_ptr<sstable>> candidates);
// Some strategies may look at the compacted and resulting sstables to
// get some useful information for subsequent compactions.
void notify_completion(schema_ptr schema, const std::vector<lw_shared_ptr<sstable>>& removed, const std::vector<lw_shared_ptr<sstable>>& added);
// Return if parallel compaction is allowed by strategy.
bool parallel_compaction() const;
// An estimation of number of compaction for strategy to be satisfied.
int64_t estimated_pending_compactions(column_family& cf) const;
static sstring name(compaction_strategy_type type) {
switch (type) {
case compaction_strategy_type::null:
@@ -61,6 +74,8 @@ public:
return "SizeTieredCompactionStrategy";
case compaction_strategy_type::leveled:
return "LeveledCompactionStrategy";
case compaction_strategy_type::date_tiered:
return "DateTieredCompactionStrategy";
default:
throw std::runtime_error("Invalid Compaction Strategy");
}
@@ -77,6 +92,8 @@ public:
return compaction_strategy_type::size_tiered;
} else if (short_name == "LeveledCompactionStrategy") {
return compaction_strategy_type::leveled;
} else if (short_name == "DateTieredCompactionStrategy") {
return compaction_strategy_type::date_tiered;
} else {
throw exceptions::configuration_exception(sprint("Unable to find compaction strategy class '%s'", name));
}
@@ -87,6 +104,8 @@ public:
sstring name() const {
return name(type());
}
sstable_set make_sstable_set(schema_ptr schema) const;
};
// Creates a compaction_strategy object from one of the strategies available.


@@ -0,0 +1,64 @@
/*
* Copyright (C) 2016 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "query-request.hh"
#include <experimental/optional>
// Wraps ring_position so it is compatible with old-style C++: default constructor,
// stateless comparators, yada yada
class compatible_ring_position {
const schema* _schema = nullptr;
// optional to supply a default constructor, no more
std::experimental::optional<dht::ring_position> _rp;
public:
compatible_ring_position() noexcept = default;
compatible_ring_position(const schema& s, const dht::ring_position& rp)
: _schema(&s), _rp(rp) {
}
compatible_ring_position(const schema& s, dht::ring_position&& rp)
: _schema(&s), _rp(std::move(rp)) {
}
friend int tri_compare(const compatible_ring_position& x, const compatible_ring_position& y) {
return x._rp->tri_compare(*x._schema, *y._rp);
}
friend bool operator<(const compatible_ring_position& x, const compatible_ring_position& y) {
return tri_compare(x, y) < 0;
}
friend bool operator<=(const compatible_ring_position& x, const compatible_ring_position& y) {
return tri_compare(x, y) <= 0;
}
friend bool operator>(const compatible_ring_position& x, const compatible_ring_position& y) {
return tri_compare(x, y) > 0;
}
friend bool operator>=(const compatible_ring_position& x, const compatible_ring_position& y) {
return tri_compare(x, y) >= 0;
}
friend bool operator==(const compatible_ring_position& x, const compatible_ring_position& y) {
return tri_compare(x, y) == 0;
}
friend bool operator!=(const compatible_ring_position& x, const compatible_ring_position& y) {
return tri_compare(x, y) != 0;
}
};


@@ -21,7 +21,10 @@
#pragma once
#include <boost/range/algorithm/copy.hpp>
#include <boost/range/adaptor/transformed.hpp>
#include "compound.hh"
#include "schema.hh"
//
// This header provides adaptors between the representation used by our compound_type<>
@@ -180,3 +183,348 @@ bytes to_legacy(CompoundType& type, bytes_view packed) {
std::copy(lv.begin(), lv.end(), legacy_form.begin());
return legacy_form;
}
// Represents a value serialized according to Origin's CompositeType.
// If is_compound is true, then the value is one or more components encoded as:
//
// <representation> ::= ( <component> )+
// <component> ::= <length> <value> <EOC>
// <length> ::= <uint16_t>
// <EOC> ::= <uint8_t>
//
// If false, then it encodes a single value, without a prefix length or a suffix EOC.
class composite final {
bytes _bytes;
bool _is_compound;
public:
composite(bytes&& b, bool is_compound)
: _bytes(std::move(b))
, _is_compound(is_compound)
{ }
composite(bytes&& b)
: _bytes(std::move(b))
, _is_compound(true)
{ }
composite()
: _bytes()
, _is_compound(true)
{ }
using size_type = uint16_t;
using eoc_type = int8_t;
/*
* The 'end-of-component' byte should always be 0 for an actual column name.
* However, it can be set to 1 for query bounds. This allows querying for the
* equivalent of 'give me the full range'. That is, if a slice query is:
* start = <3><"foo".getBytes()><0>
* end = <3><"foo".getBytes()><1>
* then we'll return *all* the columns whose first component is "foo".
* If a component's 'end-of-component' is != 0, there must not be any
* following component. The end-of-component can also be -1 to allow a
* non-inclusive query. For instance:
* end = <3><"foo".getBytes()><-1>
* allows querying for everything that is smaller than <3><"foo".getBytes()>,
* but not <3><"foo".getBytes()> itself.
*/
enum class eoc : eoc_type {
start = -1,
none = 0,
end = 1
};
using component = std::pair<bytes, eoc>;
using component_view = std::pair<bytes_view, eoc>;
private:
template<typename Value, typename = std::enable_if_t<!std::is_same<const data_value, std::decay_t<Value>>::value>>
static size_t size(Value& val) {
return val.size();
}
static size_t size(const data_value& val) {
return val.serialized_size();
}
template<typename Value, typename = std::enable_if_t<!std::is_same<data_value, std::decay_t<Value>>::value>>
static void write_value(Value&& val, bytes::iterator& out) {
out = std::copy(val.begin(), val.end(), out);
}
static void write_value(const data_value& val, bytes::iterator& out) {
val.serialize(out);
}
template<typename RangeOfSerializedComponents>
static void serialize_value(RangeOfSerializedComponents&& values, bytes::iterator& out, bool is_compound) {
if (!is_compound) {
auto it = values.begin();
write_value(std::forward<decltype(*it)>(*it), out);
return;
}
for (auto&& val : values) {
write<size_type>(out, static_cast<size_type>(size(val)));
write_value(std::forward<decltype(val)>(val), out);
// Range tombstones are not keys. For collections, only frozen
// values can be keys. Therefore, for as long as it is safe to
// assume that this code will be used to create keys, it is safe
// to assume the trailing byte is always zero.
write<eoc_type>(out, eoc_type(eoc::none));
}
}
template <typename RangeOfSerializedComponents>
static size_t serialized_size(RangeOfSerializedComponents&& values, bool is_compound) {
size_t len = 0;
auto it = values.begin();
if (it != values.end()) {
// CQL3 uses a specific prefix (0xFFFF) to encode "static columns"
// (CASSANDRA-6561). This does mean the maximum size of the first component of a
// composite is 65534, not 65535 (or we wouldn't be able to detect if the first 2
// bytes is the static prefix or not).
auto value_size = size(*it);
if (value_size > static_cast<size_type>(std::numeric_limits<size_type>::max() - uint8_t(is_compound))) {
throw std::runtime_error(sprint("First component size too large: %d > %d", value_size, std::numeric_limits<size_type>::max() - is_compound));
}
if (!is_compound) {
return value_size;
}
len += sizeof(size_type) + value_size + sizeof(eoc_type);
++it;
}
for ( ; it != values.end(); ++it) {
auto value_size = size(*it);
if (value_size > std::numeric_limits<size_type>::max()) {
throw std::runtime_error(sprint("Component size too large: %d > %d", value_size, std::numeric_limits<size_type>::max()));
}
len += sizeof(size_type) + value_size + sizeof(eoc_type);
}
return len;
}
public:
template <typename Describer>
auto describe_type(Describer f) const {
return f(const_cast<bytes&>(_bytes));
}
template<typename RangeOfSerializedComponents>
static bytes serialize_value(RangeOfSerializedComponents&& values, bool is_compound = true) {
auto size = serialized_size(values, is_compound);
bytes b(bytes::initialized_later(), size);
auto i = b.begin();
serialize_value(std::forward<decltype(values)>(values), i, is_compound);
return b;
}
class iterator : public std::iterator<std::input_iterator_tag, const component_view> {
bytes_view _v;
component_view _current;
private:
eoc to_eoc(int8_t eoc_byte) {
return eoc_byte == 0 ? eoc::none : (eoc_byte < 0 ? eoc::start : eoc::end);
}
void read_current() {
size_type len;
{
if (_v.empty()) {
_v = bytes_view(nullptr, 0);
return;
}
len = read_simple<size_type>(_v);
if (_v.size() < len) {
throw marshal_exception();
}
}
auto value = bytes_view(_v.begin(), len);
_v.remove_prefix(len);
_current = component_view(std::move(value), to_eoc(read_simple<eoc_type>(_v)));
}
public:
struct end_iterator_tag {};
iterator(const bytes_view& v, bool is_compound, bool is_static)
: _v(v) {
if (is_static) {
_v.remove_prefix(2);
}
if (is_compound) {
read_current();
} else {
_current = component_view(_v, eoc::none);
_v.remove_prefix(_v.size());
}
}
iterator(end_iterator_tag) : _v(nullptr, 0) {}
iterator& operator++() {
read_current();
return *this;
}
iterator operator++(int) {
iterator i(*this);
++(*this);
return i;
}
const value_type& operator*() const { return _current; }
const value_type* operator->() const { return &_current; }
bool operator!=(const iterator& i) const { return _v.begin() != i._v.begin(); }
bool operator==(const iterator& i) const { return _v.begin() == i._v.begin(); }
};
iterator begin() const {
return iterator(_bytes, _is_compound, is_static());
}
iterator end() const {
return iterator(iterator::end_iterator_tag());
}
boost::iterator_range<iterator> components() const & {
return { begin(), end() };
}
auto values() const & {
return components() | boost::adaptors::transformed([](auto&& c) { return c.first; });
}
std::vector<component> components() const && {
std::vector<component> result;
std::transform(begin(), end(), std::back_inserter(result), [](auto&& p) {
return component(bytes(p.first.begin(), p.first.end()), p.second);
});
return result;
}
std::vector<bytes> values() const && {
std::vector<bytes> result;
boost::copy(components() | boost::adaptors::transformed([](auto&& c) { return to_bytes(c.first); }), std::back_inserter(result));
return result;
}
const bytes& get_bytes() const {
return _bytes;
}
size_t size() const {
return _bytes.size();
}
bool empty() const {
return _bytes.empty();
}
static bool is_static(bytes_view bytes, bool is_compound) {
return is_compound && bytes.size() > 2 && (bytes[0] & bytes[1] & 0xff) == 0xff;
}
bool is_static() const {
return is_static(_bytes, _is_compound);
}
bool is_compound() const {
return _is_compound;
}
// The following factory functions assume this composite is a compound value.
template <typename ClusteringElement>
static composite from_clustering_element(const schema& s, const ClusteringElement& ce) {
return serialize_value(ce.components(s));
}
static composite from_exploded(const std::vector<bytes_view>& v, eoc marker = eoc::none) {
if (v.size() == 0) {
return bytes(size_t(1), bytes::value_type(marker));
}
auto b = serialize_value(v);
b.back() = eoc_type(marker);
return composite(std::move(b));
}
static composite static_prefix(const schema& s) {
static bytes static_marker(size_t(2), bytes::value_type(0xff));
std::vector<bytes_view> sv(s.clustering_key_size());
return static_marker + serialize_value(sv);
}
explicit operator bytes_view() const {
return _bytes;
}
template <typename Component>
friend inline std::ostream& operator<<(std::ostream& os, const std::pair<Component, eoc>& c) {
return os << "{value=" << c.first << "; eoc=" << sprint("0x%02x", eoc_type(c.second) & 0xff) << "}";
}
};
class composite_view final {
bytes_view _bytes;
bool _is_compound;
public:
composite_view(bytes_view b, bool is_compound = true)
: _bytes(b)
, _is_compound(is_compound)
{ }
composite_view(const composite& c)
: composite_view(static_cast<bytes_view>(c), c.is_compound())
{ }
composite_view()
: _bytes(nullptr, 0)
, _is_compound(true)
{ }
std::vector<bytes> explode() const {
if (!_is_compound) {
return { to_bytes(_bytes) };
}
std::vector<bytes> ret;
for (auto it = begin(), e = end(); it != e; ) {
ret.push_back(to_bytes(it->first));
auto marker = it->second;
++it;
if (it != e && marker != composite::eoc::none) {
throw runtime_exception(sprint("non-zero component divider found (%s) mid", sprint("0x%02x", composite::eoc_type(marker) & 0xff)));
}
}
return ret;
}
composite::iterator begin() const {
return composite::iterator(_bytes, _is_compound, is_static());
}
composite::iterator end() const {
return composite::iterator(composite::iterator::end_iterator_tag());
}
boost::iterator_range<composite::iterator> components() const {
return { begin(), end() };
}
auto values() const {
return components() | boost::adaptors::transformed([](auto&& c) { return c.first; });
}
size_t size() const {
return _bytes.size();
}
bool empty() const {
return _bytes.empty();
}
bool is_static() const {
return composite::is_static(_bytes, _is_compound);
}
explicit operator bytes_view() const {
return _bytes;
}
bool operator==(const composite_view& k) const { return k._bytes == _bytes && k._is_compound == _is_compound; }
bool operator!=(const composite_view& k) const { return !(k == *this); }
};
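
The layout that composite::iterator walks above is the Cassandra compound-composite encoding: each component is a 16-bit big-endian length, the value bytes, then a one-byte end-of-component (eoc) marker — the marker that explode() checks against composite::eoc::none. A standalone decoder sketch under that assumption (std::string stands in for bytes; not the classes from this file):

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

// Decode a serialized compound composite into (value, eoc) pairs.
// Per component: uint16 big-endian length, value bytes, one eoc byte.
std::vector<std::pair<std::string, int8_t>>
explode_compound(const std::string& buf) {
    std::vector<std::pair<std::string, int8_t>> out;
    size_t i = 0;
    while (i < buf.size()) {
        if (i + 2 > buf.size()) {
            throw std::runtime_error("truncated component length");
        }
        uint16_t len = (uint16_t(uint8_t(buf[i])) << 8) | uint8_t(buf[i + 1]);
        i += 2;
        if (i + len + 1 > buf.size()) {
            throw std::runtime_error("truncated component");
        }
        out.emplace_back(buf.substr(i, len), int8_t(buf[i + len]));
        i += len + 1;
    }
    return out;
}
```

A non-zero eoc byte before the last component is what makes composite_view::explode() throw above.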


@@ -32,7 +32,7 @@ enum class compressor {
class compression_parameters {
public:
static constexpr int32_t DEFAULT_CHUNK_LENGTH = 64 * 1024;
static constexpr int32_t DEFAULT_CHUNK_LENGTH = 4 * 1024;
static constexpr double DEFAULT_CRC_CHECK_CHANCE = 1.0;
static constexpr auto SSTABLE_COMPRESSION = "sstable_compression";
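
For context on the chunk-length default changed in the hunk above: with chunked sstable compression, a point read must decompress the entire chunk containing the requested row, so the chunk length bounds read amplification. A back-of-envelope sketch (row sizes here are arbitrary examples, not values from this code):

```cpp
#include <cstdint>

// Read amplification of a point read under chunked sstable compression:
// serving one row means decompressing the whole chunk that contains it,
// so smaller chunks cost less per point read (at some compression-ratio cost).
double read_amplification(uint32_t chunk_len, uint32_t row_len) {
    return double(chunk_len) / double(row_len);
}
```

For a hypothetical 200-byte row, a 64 KiB chunk decompresses roughly 328x the requested data, a 4 KiB chunk roughly 20x.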

conf/housekeeping.cfg Normal file

@@ -0,0 +1,2 @@
[housekeeping]
check-version: True


@@ -784,7 +784,7 @@ commitlog_total_space_in_mb: -1
# can be: all - all traffic is compressed
# dc - traffic between different datacenters is compressed
# none - nothing is compressed.
# internode_compression: all
# internode_compression: none
# Enable or disable tcp_nodelay for inter-dc communication.
# Disabling it will result in larger (but fewer) network packets being sent,
@@ -805,3 +805,11 @@ commitlog_total_space_in_mb: -1
# true: relaxed environment checks; performance and reliability may degrade.
#
# developer_mode: false
# Idle-time background processing
#
# Scylla can perform certain jobs in the background while the system is otherwise idle,
# freeing processor resources when there is other work to be done.
#
# defragment_memory_on_idle: true


@@ -162,6 +162,7 @@ modes = {
scylla_tests = [
'tests/mutation_test',
'tests/streamed_mutation_test',
'tests/schema_registry_test',
'tests/canonical_mutation_test',
'tests/range_test',
@@ -216,6 +217,9 @@ scylla_tests = [
'tests/dynamic_bitset_test',
'tests/auth_test',
'tests/idl_test',
'tests/range_tombstone_list_test',
'tests/anchorless_list_test',
'tests/database_test',
]
apps = [
@@ -278,6 +282,8 @@ scylla_core = (['database.cc',
'schema_registry.cc',
'bytes.cc',
'mutation.cc',
'streamed_mutation.cc',
'partition_version.cc',
'row_cache.cc',
'canonical_mutation.cc',
'frozen_mutation.cc',
@@ -293,16 +299,15 @@ scylla_core = (['database.cc',
'mutation_query.cc',
'key_reader.cc',
'keys.cc',
'clustering_key_filter.cc',
'sstables/sstables.cc',
'sstables/compress.cc',
'sstables/row.cc',
'sstables/key.cc',
'sstables/partition.cc',
'sstables/filter.cc',
'sstables/compaction.cc',
'sstables/compaction_strategy.cc',
'sstables/compaction_manager.cc',
'log.cc',
'transport/event.cc',
'transport/event_notifier.cc',
'transport/server.cc',
@@ -351,6 +356,7 @@ scylla_core = (['database.cc',
'cql3/statements/grant_statement.cc',
'cql3/statements/revoke_statement.cc',
'cql3/statements/alter_type_statement.cc',
'cql3/statements/alter_keyspace_statement.cc',
'cql3/update_parameters.cc',
'cql3/ut_name.cc',
'cql3/user_options.cc',
@@ -369,6 +375,7 @@ scylla_core = (['database.cc',
'cql3/operator.cc',
'cql3/relation.cc',
'cql3/column_identifier.cc',
'cql3/column_specification.cc',
'cql3/constants.cc',
'cql3/query_processor.cc',
'cql3/query_options.cc',
@@ -384,6 +391,7 @@ scylla_core = (['database.cc',
'cql3/selection/selection.cc',
'cql3/selection/selector.cc',
'cql3/restrictions/statement_restrictions.cc',
'cql3/result_set.cc',
'db/consistency_level.cc',
'db/system_keyspace.cc',
'db/schema_tables.cc',
@@ -470,6 +478,12 @@ scylla_core = (['database.cc',
'auth/data_resource.cc',
'auth/password_authenticator.cc',
'auth/permission.cc',
'tracing/tracing.cc',
'tracing/trace_keyspace_helper.cc',
'tracing/trace_state.cc',
'range_tombstone.cc',
'range_tombstone_list.cc',
'db/size_estimates_recorder.cc'
]
+ [Antlr3Grammar('cql3/Cql.g')]
+ [Thrift('interface/cassandra.thrift', 'Cassandra')]
@@ -529,6 +543,7 @@ idls = ['idl/gossip_digest.idl.hh',
'idl/query.idl.hh',
'idl/idl_test.idl.hh',
'idl/commitlog.idl.hh',
'idl/tracing.idl.hh',
]
scylla_tests_dependencies = scylla_core + api + idls + [
@@ -551,8 +566,6 @@ tests_not_using_seastar_test_framework = set([
'tests/keys_test',
'tests/partitioner_test',
'tests/map_difference_test',
'tests/frozen_mutation_test',
'tests/canonical_mutation_test',
'tests/perf/perf_mutation',
'tests/lsa_async_eviction_test',
'tests/lsa_sync_eviction_test',
@@ -573,6 +586,8 @@ tests_not_using_seastar_test_framework = set([
'tests/managed_vector_test',
'tests/dynamic_bitset_test',
'tests/idl_test',
'tests/range_tombstone_list_test',
'tests/anchorless_list_test',
])
for t in tests_not_using_seastar_test_framework:
@@ -589,7 +604,8 @@ deps['tests/sstable_test'] += ['tests/sstable_datafile_test.cc']
deps['tests/bytes_ostream_test'] = ['tests/bytes_ostream_test.cc']
deps['tests/UUID_test'] = ['utils/UUID_gen.cc', 'tests/UUID_test.cc']
deps['tests/murmur_hash_test'] = ['bytes.cc', 'utils/murmur_hash.cc', 'tests/murmur_hash_test.cc']
deps['tests/allocation_strategy_test'] = ['tests/allocation_strategy_test.cc', 'utils/logalloc.cc', 'log.cc', 'utils/dynamic_bitset.cc']
deps['tests/allocation_strategy_test'] = ['tests/allocation_strategy_test.cc', 'utils/logalloc.cc', 'utils/dynamic_bitset.cc']
deps['tests/anchorless_list_test'] = ['tests/anchorless_list_test.cc']
warnings = [
'-Wno-mismatched-tags', # clang-only


@@ -35,7 +35,7 @@ class converting_mutation_partition_applier : public mutation_partition_visitor
deletable_row* _current_row;
private:
static bool is_compatible(const column_definition& new_def, const data_type& old_type, column_kind kind) {
return new_def.kind == kind && new_def.type->is_value_compatible_with(*old_type);
return ::is_compatible(new_def.kind, kind) && new_def.type->is_value_compatible_with(*old_type);
}
void accept_cell(row& dst, column_kind kind, const column_definition& new_def, const data_type& old_type, atomic_cell_view cell) {
if (is_compatible(new_def, old_type, kind) && cell.timestamp() > new_def.dropped_at()) {
@@ -90,8 +90,8 @@ public:
}
}
virtual void accept_row_tombstone(clustering_key_prefix_view prefix, tombstone t) override {
_p.apply_row_tombstone(_p_schema, prefix, t);
virtual void accept_row_tombstone(const range_tombstone& rt) override {
_p.apply_row_tombstone(_p_schema, rt);
}
virtual void accept_row(clustering_key_view key, tombstone deleted_at, const row_marker& rm) override {


@@ -32,6 +32,9 @@ options {
@parser::includes {
#include "cql3/selection/writetime_or_ttl.hh"
#include "cql3/statements/raw/parsed_statement.hh"
#include "cql3/statements/raw/select_statement.hh"
#include "cql3/statements/alter_keyspace_statement.hh"
#include "cql3/statements/alter_table_statement.hh"
#include "cql3/statements/create_keyspace_statement.hh"
#include "cql3/statements/drop_keyspace_statement.hh"
@@ -43,12 +46,12 @@ options {
#include "cql3/statements/property_definitions.hh"
#include "cql3/statements/drop_table_statement.hh"
#include "cql3/statements/truncate_statement.hh"
#include "cql3/statements/select_statement.hh"
#include "cql3/statements/update_statement.hh"
#include "cql3/statements/delete_statement.hh"
#include "cql3/statements/raw/update_statement.hh"
#include "cql3/statements/raw/insert_statement.hh"
#include "cql3/statements/raw/delete_statement.hh"
#include "cql3/statements/index_prop_defs.hh"
#include "cql3/statements/use_statement.hh"
#include "cql3/statements/batch_statement.hh"
#include "cql3/statements/raw/use_statement.hh"
#include "cql3/statements/raw/batch_statement.hh"
#include "cql3/statements/create_user_statement.hh"
#include "cql3/statements/alter_user_statement.hh"
#include "cql3/statements/drop_user_statement.hh"
@@ -294,11 +297,11 @@ struct uninitialized {
/** STATEMENTS **/
query returns [shared_ptr<parsed_statement> stmnt]
query returns [shared_ptr<raw::parsed_statement> stmnt]
: st=cqlStatement (';')* EOF { $stmnt = st; }
;
cqlStatement returns [shared_ptr<parsed_statement> stmt]
cqlStatement returns [shared_ptr<raw::parsed_statement> stmt]
@after{ if (stmt) { stmt->set_bound_variables(_bind_variables); } }
: st1= selectStatement { $stmt = st1; }
| st2= insertStatement { $stmt = st2; }
@@ -316,9 +319,7 @@ cqlStatement returns [shared_ptr<parsed_statement> stmt]
| st13=dropIndexStatement { $stmt = st13; }
#endif
| st14=alterTableStatement { $stmt = st14; }
#if 0
| st15=alterKeyspaceStatement { $stmt = st15; }
#endif
| st16=grantStatement { $stmt = st16; }
| st17=revokeStatement { $stmt = st17; }
| st18=listPermissionsStatement { $stmt = st18; }
@@ -344,8 +345,8 @@ cqlStatement returns [shared_ptr<parsed_statement> stmt]
/*
* USE <KEYSPACE>;
*/
useStatement returns [::shared_ptr<use_statement> stmt]
: K_USE ks=keyspaceName { $stmt = ::make_shared<use_statement>(ks); }
useStatement returns [::shared_ptr<raw::use_statement> stmt]
: K_USE ks=keyspaceName { $stmt = ::make_shared<raw::use_statement>(ks); }
;
/**
@@ -354,11 +355,11 @@ useStatement returns [::shared_ptr<use_statement> stmt]
* WHERE KEY = "key1" AND COL > 1 AND COL < 100
* LIMIT <NUMBER>;
*/
selectStatement returns [shared_ptr<select_statement::raw_statement> expr]
selectStatement returns [shared_ptr<raw::select_statement> expr]
@init {
bool is_distinct = false;
::shared_ptr<cql3::term::raw> limit;
select_statement::parameters::orderings_type orderings;
raw::select_statement::parameters::orderings_type orderings;
bool allow_filtering = false;
}
: K_SELECT ( ( K_DISTINCT { is_distinct = true; } )?
@@ -371,8 +372,8 @@ selectStatement returns [shared_ptr<select_statement::raw_statement> expr]
( K_LIMIT rows=intValue { limit = rows; } )?
( K_ALLOW K_FILTERING { allow_filtering = true; } )?
{
auto params = ::make_shared<select_statement::parameters>(std::move(orderings), is_distinct, allow_filtering);
$expr = ::make_shared<select_statement::raw_statement>(std::move(cf), std::move(params),
auto params = ::make_shared<raw::select_statement::parameters>(std::move(orderings), is_distinct, allow_filtering);
$expr = ::make_shared<raw::select_statement>(std::move(cf), std::move(params),
std::move(sclause), std::move(wclause), std::move(limit));
}
;
@@ -426,7 +427,7 @@ whereClause returns [std::vector<cql3::relation_ptr> clause]
: relation[$clause] (K_AND relation[$clause])*
;
orderByClause[select_statement::parameters::orderings_type& orderings]
orderByClause[raw::select_statement::parameters::orderings_type& orderings]
@init{
bool reversed = false;
}
@@ -439,7 +440,7 @@ orderByClause[select_statement::parameters::orderings_type& orderings]
* USING TIMESTAMP <long>;
*
*/
insertStatement returns [::shared_ptr<update_statement::parsed_insert> expr]
insertStatement returns [::shared_ptr<raw::insert_statement> expr]
@init {
auto attrs = ::make_shared<cql3::attributes::raw>();
std::vector<::shared_ptr<cql3::column_identifier::raw>> column_names;
@@ -454,7 +455,7 @@ insertStatement returns [::shared_ptr<update_statement::parsed_insert> expr]
( K_IF K_NOT K_EXISTS { if_not_exists = true; } )?
( usingClause[attrs] )?
{
$expr = ::make_shared<update_statement::parsed_insert>(std::move(cf),
$expr = ::make_shared<raw::insert_statement>(std::move(cf),
std::move(attrs),
std::move(column_names),
std::move(values),
@@ -477,7 +478,7 @@ usingClauseObjective[::shared_ptr<cql3::attributes::raw> attrs]
* SET name1 = value1, name2 = value2
* WHERE key = value;
*/
updateStatement returns [::shared_ptr<update_statement::parsed_update> expr]
updateStatement returns [::shared_ptr<raw::update_statement> expr]
@init {
auto attrs = ::make_shared<cql3::attributes::raw>();
std::vector<std::pair<::shared_ptr<cql3::column_identifier::raw>, ::shared_ptr<cql3::operation::raw_update>>> operations;
@@ -488,7 +489,7 @@ updateStatement returns [::shared_ptr<update_statement::parsed_update> expr]
K_WHERE wclause=whereClause
( K_IF conditions=updateConditions )?
{
return ::make_shared<update_statement::parsed_update>(std::move(cf),
return ::make_shared<raw::update_statement>(std::move(cf),
std::move(attrs),
std::move(operations),
std::move(wclause),
@@ -507,7 +508,7 @@ updateConditions returns [conditions_type conditions]
* WHERE KEY = keyname
[IF (EXISTS | name = value, ...)];
*/
deleteStatement returns [::shared_ptr<delete_statement::parsed> expr]
deleteStatement returns [::shared_ptr<raw::delete_statement> expr]
@init {
auto attrs = ::make_shared<cql3::attributes::raw>();
std::vector<::shared_ptr<cql3::operation::raw_deletion>> column_deletions;
@@ -519,7 +520,7 @@ deleteStatement returns [::shared_ptr<delete_statement::parsed> expr]
K_WHERE wclause=whereClause
( K_IF ( K_EXISTS { if_exists = true; } | conditions=updateConditions ))?
{
return ::make_shared<delete_statement::parsed>(cf,
return ::make_shared<raw::delete_statement>(cf,
std::move(attrs),
std::move(column_deletions),
std::move(wclause),
@@ -566,11 +567,11 @@ usingClauseDelete[::shared_ptr<cql3::attributes::raw> attrs]
* ...
* APPLY BATCH
*/
batchStatement returns [shared_ptr<cql3::statements::batch_statement::parsed> expr]
batchStatement returns [shared_ptr<cql3::statements::raw::batch_statement> expr]
@init {
using btype = cql3::statements::batch_statement::type;
using btype = cql3::statements::raw::batch_statement::type;
btype type = btype::LOGGED;
std::vector<shared_ptr<cql3::statements::modification_statement::parsed>> statements;
std::vector<shared_ptr<cql3::statements::raw::modification_statement>> statements;
auto attrs = make_shared<cql3::attributes::raw>();
}
: K_BEGIN
@@ -579,11 +580,11 @@ batchStatement returns [shared_ptr<cql3::statements::batch_statement::parsed> ex
( s=batchStatementObjective ';'? { statements.push_back(std::move(s)); } )*
K_APPLY K_BATCH
{
$expr = ::make_shared<cql3::statements::batch_statement::parsed>(type, std::move(attrs), std::move(statements));
$expr = ::make_shared<cql3::statements::raw::batch_statement>(type, std::move(attrs), std::move(statements));
}
;
batchStatementObjective returns [shared_ptr<cql3::statements::modification_statement::parsed> statement]
batchStatementObjective returns [shared_ptr<cql3::statements::raw::modification_statement> statement]
: i=insertStatement { $statement = i; }
| u=updateStatement { $statement = u; }
| d=deleteStatement { $statement = d; }
@@ -809,15 +810,18 @@ dropTriggerStatement returns [DropTriggerStatement expr]
{ $expr = new DropTriggerStatement(cf, name.toString(), ifExists); }
;
#endif
/**
* ALTER KEYSPACE <KS> WITH <property> = <value>;
*/
alterKeyspaceStatement returns [AlterKeyspaceStatement expr]
@init { KSPropDefs attrs = new KSPropDefs(); }
alterKeyspaceStatement returns [shared_ptr<cql3::statements::alter_keyspace_statement> expr]
@init {
auto attrs = make_shared<cql3::statements::ks_prop_defs>();
}
: K_ALTER K_KEYSPACE ks=keyspaceName
K_WITH properties[attrs] { $expr = new AlterKeyspaceStatement(ks, attrs); }
K_WITH properties[attrs] { $expr = make_shared<cql3::statements::alter_keyspace_statement>(ks, attrs); }
;
#endif
/**
* ALTER COLUMN FAMILY <CF> ALTER <column> TYPE <newtype>;


@@ -0,0 +1,56 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2016 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "cql3/column_specification.hh"
namespace cql3 {
bool column_specification::all_in_same_table(const std::vector<::shared_ptr<column_specification>>& names)
{
assert(!names.empty());
auto first = names.front();
return std::all_of(std::next(names.begin()), names.end(), [first] (auto&& spec) {
return spec->ks_name == first->ks_name && spec->cf_name == first->cf_name;
});
}
}


@@ -75,6 +75,8 @@ public:
bool is_reversed_type() const {
return ::dynamic_pointer_cast<const reversed_type_impl>(type) != nullptr;
}
static bool all_in_same_table(const std::vector<::shared_ptr<column_specification>>& names);
};
}


@@ -58,6 +58,9 @@ class result_message;
namespace cql3 {
class metadata;
shared_ptr<const metadata> make_empty_metadata();
class cql_statement {
public:
virtual ~cql_statement()
@@ -102,6 +105,15 @@ public:
virtual bool depends_on_keyspace(const sstring& ks_name) const = 0;
virtual bool depends_on_column_family(const sstring& cf_name) const = 0;
virtual shared_ptr<const metadata> get_result_metadata() const = 0;
};
class cql_statement_no_metadata : public cql_statement {
public:
virtual shared_ptr<const metadata> get_result_metadata() const override {
return make_empty_metadata();
}
};
}


@@ -47,23 +47,23 @@ namespace cql3 {
thread_local const query_options::specific_options query_options::specific_options::DEFAULT{-1, {}, {}, api::missing_timestamp};
thread_local query_options query_options::DEFAULT{db::consistency_level::ONE, std::experimental::nullopt,
{}, false, query_options::specific_options::DEFAULT, cql_serialization_format::latest()};
std::vector<bytes_view_opt>(), false, query_options::specific_options::DEFAULT, cql_serialization_format::latest()};
query_options::query_options(db::consistency_level consistency,
std::experimental::optional<std::vector<sstring_view>> names,
std::vector<bytes_opt> values,
std::vector<bytes_view_opt> value_views,
bool skip_metadata,
specific_options options,
cql_serialization_format sf)
: _consistency(consistency)
, _names(std::move(names))
, _values(std::move(values))
, _value_views(std::move(value_views))
, _value_views()
, _skip_metadata(skip_metadata)
, _options(std::move(options))
, _cql_serialization_format(sf)
{
fill_value_views();
}
query_options::query_options(db::consistency_level consistency,
@@ -72,15 +72,13 @@ query_options::query_options(db::consistency_level consistency,
bool skip_metadata,
specific_options options,
cql_serialization_format sf)
: query_options(
consistency,
std::move(names),
{},
std::move(value_views),
skip_metadata,
std::move(options),
sf
)
: _consistency(consistency)
, _names(std::move(names))
, _values()
, _value_views(std::move(value_views))
, _skip_metadata(skip_metadata)
, _options(std::move(options))
, _cql_serialization_format(sf)
{
}
@@ -100,19 +98,11 @@ query_options::query_options(db::consistency_level cl, std::vector<bytes_opt> va
cl,
{},
std::move(values),
{},
false,
query_options::specific_options::DEFAULT,
cql_serialization_format::latest()
)
{
for (auto&& value : _values) {
if (value) {
_value_views.emplace_back(bytes_view{*value});
} else {
_value_views.emplace_back(std::experimental::nullopt);
}
}
}
query_options::query_options(std::vector<bytes_opt> values)
@@ -214,6 +204,18 @@ void query_options::prepare(const std::vector<::shared_ptr<column_specification>
}
}
_values = std::move(ordered_values);
fill_value_views();
}
void query_options::fill_value_views()
{
for (auto&& value : _values) {
if (value) {
_value_views.emplace_back(bytes_view{*value});
} else {
_value_views.emplace_back(std::experimental::nullopt);
}
}
}
}
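
The refactor above centralizes view creation in fill_value_views(), so _value_views is always rebuilt from the owned _values vector after values are moved or reordered and never dangles. A minimal sketch of the same owned-storage-plus-views pattern, with std::optional/std::string_view standing in for bytes_opt/bytes_view_opt:

```cpp
#include <optional>
#include <string>
#include <string_view>
#include <vector>

// Owned values plus a parallel vector of views into them. The views are
// (re)built only after the owned vector is in its final position, mirroring
// query_options::fill_value_views() above.
struct options {
    std::vector<std::optional<std::string>> values;
    std::vector<std::optional<std::string_view>> value_views;

    void fill_value_views() {
        value_views.clear();
        for (auto&& v : values) {
            if (v) {
                value_views.emplace_back(std::string_view{*v});
            } else {
                value_views.emplace_back(std::nullopt);
            }
        }
    }
};
```

The same discipline appears in prepare(): _values is assigned first, then fill_value_views() runs, so the views index into storage that outlives them.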


@@ -83,7 +83,6 @@ public:
explicit query_options(db::consistency_level consistency,
std::experimental::optional<std::vector<sstring_view>> names,
std::vector<bytes_opt> values,
std::vector<bytes_view_opt> value_views,
bool skip_metadata,
specific_options options,
cql_serialization_format sf);
@@ -132,6 +131,8 @@ public:
const specific_options& get_specific_options() const;
const query_options& for_statement(size_t i) const;
void prepare(const std::vector<::shared_ptr<column_specification>>& specs);
private:
void fill_value_views();
};
}


@@ -58,7 +58,7 @@ logging::logger log("query_processor");
distributed<query_processor> _the_query_processor;
const sstring query_processor::CQL_VERSION = "3.2.0";
const sstring query_processor::CQL_VERSION = "3.2.1";
class query_processor::internal_state {
service::query_state _qs;
@@ -115,6 +115,7 @@ future<::shared_ptr<result_message>>
query_processor::process(const sstring_view& query_string, service::query_state& query_state, query_options& options)
{
log.trace("process: \"{}\"", query_string);
tracing::trace(query_state.get_trace_state(), "Parsing a statement");
auto p = get_statement(query_string, query_state.get_client_state());
options.prepare(p->bound_names);
auto cql_statement = p->statement;
@@ -127,6 +128,7 @@ query_processor::process(const sstring_view& query_string, service::query_state&
if (!queryState.getClientState().isInternal)
metrics.regularStatementsExecuted.inc();
#endif
tracing::trace(query_state.get_trace_state(), "Processing a statement");
return process_statement(std::move(cql_statement), query_state, options);
}
@@ -144,7 +146,7 @@ query_processor::process_statement(::shared_ptr<cql_statement> statement, servic
statement->validate(_proxy, client_state);
future<::shared_ptr<transport::messages::result_message>> fut = make_ready_future<::shared_ptr<transport::messages::result_message>>();
if (client_state._is_internal) {
if (client_state.is_internal()) {
fut = statement->execute_internal(_proxy, query_state, options);
} else {
fut = statement->execute(_proxy, query_state, options);
@@ -187,25 +189,25 @@ query_processor::prepare(const std::experimental::string_view& query_string, con
query_processor::get_stored_prepared_statement(const std::experimental::string_view& query_string, const sstring& keyspace, bool for_thrift)
{
if (for_thrift) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
#if 0
Integer thriftStatementId = computeThriftId(queryString, keyspace);
ParsedStatement.Prepared existing = thriftPreparedStatements.get(thriftStatementId);
return existing == null ? null : ResultMessage.Prepared.forThrift(thriftStatementId, existing.boundNames);
#endif
auto statement_id = compute_thrift_id(query_string, keyspace);
auto it = _thrift_prepared_statements.find(statement_id);
if (it == _thrift_prepared_statements.end()) {
return ::shared_ptr<result_message::prepared>();
}
return ::make_shared<result_message::prepared::thrift>(statement_id, it->second);
} else {
auto statement_id = compute_id(query_string, keyspace);
auto it = _prepared_statements.find(statement_id);
if (it == _prepared_statements.end()) {
return ::shared_ptr<result_message::prepared>();
}
return ::make_shared<result_message::prepared>(statement_id, it->second);
return ::make_shared<result_message::prepared::cql>(statement_id, it->second);
}
}
future<::shared_ptr<transport::messages::result_message::prepared>>
query_processor::store_prepared_statement(const std::experimental::string_view& query_string, const sstring& keyspace,
::shared_ptr<statements::parsed_statement::prepared> prepared, bool for_thrift)
::shared_ptr<statements::prepared_statement> prepared, bool for_thrift)
{
#if 0
// Concatenate the current keyspace so we don't mix prepared statements between keyspace (#5352).
@@ -217,26 +219,20 @@ query_processor::store_prepared_statement(const std::experimental::string_view&
statementSize,
MAX_CACHE_PREPARED_MEMORY));
#endif
prepared->raw_cql_statement = query_string.data();
if (for_thrift) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
#if 0
Integer statementId = computeThriftId(queryString, keyspace);
thriftPreparedStatements.put(statementId, prepared);
return ResultMessage.Prepared.forThrift(statementId, prepared.boundNames);
#endif
auto statement_id = compute_thrift_id(query_string, keyspace);
_thrift_prepared_statements.emplace(statement_id, prepared);
auto msg = ::make_shared<result_message::prepared::thrift>(statement_id, prepared);
return make_ready_future<::shared_ptr<result_message::prepared>>(std::move(msg));
} else {
auto statement_id = compute_id(query_string, keyspace);
_prepared_statements.emplace(statement_id, prepared);
auto msg = ::make_shared<result_message::prepared>(statement_id, prepared);
auto msg = ::make_shared<result_message::prepared::cql>(statement_id, prepared);
return make_ready_future<::shared_ptr<result_message::prepared>>(std::move(msg));
}
}
void query_processor::invalidate_prepared_statement(bytes statement_id)
{
_prepared_statements.erase(statement_id);
}
static bytes md5_calculate(const std::experimental::string_view& s)
{
constexpr size_t size = CryptoPP::Weak1::MD5::DIGESTSIZE;
@@ -246,26 +242,35 @@ static bytes md5_calculate(const std::experimental::string_view& s)
return std::move(bytes{reinterpret_cast<const int8_t*>(digest), size});
}
bytes query_processor::compute_id(const std::experimental::string_view& query_string, const sstring& keyspace)
{
sstring to_hash;
if (!keyspace.empty()) {
to_hash += keyspace;
}
to_hash += query_string.to_string();
return md5_calculate(to_hash);
static sstring hash_target(const std::experimental::string_view& query_string, const sstring& keyspace) {
return keyspace + query_string.to_string();
}
::shared_ptr<parsed_statement::prepared>
bytes query_processor::compute_id(const std::experimental::string_view& query_string, const sstring& keyspace)
{
return md5_calculate(hash_target(query_string, keyspace));
}
int32_t query_processor::compute_thrift_id(const std::experimental::string_view& query_string, const sstring& keyspace)
{
auto target = hash_target(query_string, keyspace);
uint32_t h = 0;
for (auto&& c : target) {
h = 31*h + c;
}
return static_cast<int32_t>(h);
}
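
The loop above is the Java String.hashCode() recurrence (h = 31*h + c, truncated to 32 bits), so the thrift statement id matches what a JVM client would compute for the same keyspace-prefixed query string. A standalone sketch, assuming ASCII input so that char signedness and UTF-16 code units don't matter:

```cpp
#include <cstdint>
#include <string>

// Java String.hashCode()-style 31-multiplier hash, as in compute_thrift_id()
// above. For ASCII input this matches the JVM result; non-ASCII text would
// need Java's UTF-16 code units rather than raw bytes.
int32_t thrift_id(const std::string& target) {
    uint32_t h = 0;
    for (unsigned char c : target) {
        h = 31 * h + c;  // unsigned arithmetic wraps mod 2^32, like Java int
    }
    return static_cast<int32_t>(h);
}
```

For example, thrift_id("abc") is 96354, the same value Java's "abc".hashCode() returns.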
::shared_ptr<prepared_statement>
query_processor::get_statement(const sstring_view& query, const service::client_state& client_state)
{
#if 0
Tracing.trace("Parsing {}", queryStr);
#endif
::shared_ptr<parsed_statement> statement = parse_statement(query);
::shared_ptr<raw::parsed_statement> statement = parse_statement(query);
// Set keyspace for statement that require login
auto cf_stmt = dynamic_pointer_cast<cf_statement>(statement);
auto cf_stmt = dynamic_pointer_cast<raw::cf_statement>(statement);
if (cf_stmt) {
cf_stmt->prepare_keyspace(client_state);
}
@@ -276,7 +281,7 @@ query_processor::get_statement(const sstring_view& query, const service::client_
return statement->prepare(_db.local());
}
::shared_ptr<parsed_statement>
::shared_ptr<raw::parsed_statement>
query_processor::parse_statement(const sstring_view& query)
{
try {
@@ -309,7 +314,7 @@ query_processor::parse_statement(const sstring_view& query)
}
query_options query_processor::make_internal_options(
::shared_ptr<statements::parsed_statement::prepared> p,
::shared_ptr<statements::prepared_statement> p,
const std::initializer_list<data_value>& values,
db::consistency_level cl) {
if (p->bound_names.size() != values.size()) {
@@ -330,7 +335,7 @@ query_options query_processor::make_internal_options(
return query_options(cl, bound_values);
}
::shared_ptr<statements::parsed_statement::prepared> query_processor::prepare_internal(
::shared_ptr<statements::prepared_statement> query_processor::prepare_internal(
const sstring& query_string) {
auto& p = _internal_statements[query_string];
if (p == nullptr) {
@@ -352,7 +357,7 @@ future<::shared_ptr<untyped_result_set>> query_processor::execute_internal(
}
future<::shared_ptr<untyped_result_set>> query_processor::execute_internal(
::shared_ptr<statements::parsed_statement::prepared> p,
::shared_ptr<statements::prepared_statement> p,
const std::initializer_list<data_value>& values) {
auto opts = make_internal_options(p, values);
return do_with(std::move(opts),
@@ -376,7 +381,7 @@ future<::shared_ptr<untyped_result_set>> query_processor::process(
}
future<::shared_ptr<untyped_result_set>> query_processor::process(
::shared_ptr<statements::parsed_statement::prepared> p,
::shared_ptr<statements::prepared_statement> p,
db::consistency_level cl, const std::initializer_list<data_value>& values)
{
auto opts = make_internal_options(p, values, cl);
@@ -475,17 +480,9 @@ void query_processor::migration_subscriber::on_drop_aggregate(const sstring& ks_
void query_processor::migration_subscriber::remove_invalid_prepared_statements(sstring ks_name, std::experimental::optional<sstring> cf_name)
{
std::vector<bytes> invalid;
for (auto& kv : _qp->_prepared_statements) {
auto id = kv.first;
auto stmt = kv.second;
if (should_invalidate(ks_name, cf_name, stmt->statement)) {
invalid.emplace_back(id);
}
}
for (auto& id : invalid) {
_qp->invalidate_prepared_statement(id);
}
_qp->invalidate_prepared_statements([&] (::shared_ptr<cql_statement> stmt) {
return this->should_invalidate(ks_name, cf_name, stmt);
});
}
bool query_processor::migration_subscriber::should_invalidate(sstring ks_name, std::experimental::optional<sstring> cf_name, ::shared_ptr<cql_statement> statement)


@@ -47,11 +47,13 @@
#include "core/shared_ptr.hh"
#include "exceptions/exceptions.hh"
#include "cql3/query_options.hh"
#include "cql3/statements/cf_statement.hh"
#include "cql3/statements/raw/parsed_statement.hh"
#include "cql3/statements/raw/cf_statement.hh"
#include "service/migration_manager.hh"
#include "service/query_state.hh"
#include "log.hh"
#include "core/distributed.hh"
#include "statements/prepared_statement.hh"
#include "transport/messages/result_message.hh"
#include "untyped_result_set.hh"
@@ -118,10 +120,10 @@ private:
};
#endif
std::unordered_map<bytes, ::shared_ptr<statements::parsed_statement::prepared>> _prepared_statements;
std::unordered_map<sstring, ::shared_ptr<statements::parsed_statement::prepared>> _internal_statements;
std::unordered_map<bytes, ::shared_ptr<statements::prepared_statement>> _prepared_statements;
std::unordered_map<int32_t, ::shared_ptr<statements::prepared_statement>> _thrift_prepared_statements;
std::unordered_map<sstring, ::shared_ptr<statements::prepared_statement>> _internal_statements;
#if 0
private static final ConcurrentLinkedHashMap<Integer, ParsedStatement.Prepared> thriftPreparedStatements;
// A map for prepared statements used internally (which we don't want to mix with user statement, in particular we don't
// bother with expiration on those.
@@ -211,20 +213,22 @@ private:
}
#endif
public:
::shared_ptr<statements::parsed_statement::prepared> get_prepared(const bytes& id) {
::shared_ptr<statements::prepared_statement> get_prepared(const bytes& id) {
auto it = _prepared_statements.find(id);
if (it == _prepared_statements.end()) {
return ::shared_ptr<statements::parsed_statement::prepared>{};
return ::shared_ptr<statements::prepared_statement>{};
}
return it->second;
}
#if 0
public ParsedStatement.Prepared getPreparedForThrift(Integer id)
{
return thriftPreparedStatements.get(id);
::shared_ptr<statements::prepared_statement> get_prepared_for_thrift(int32_t id) {
auto it = _thrift_prepared_statements.find(id);
if (it == _thrift_prepared_statements.end()) {
return ::shared_ptr<statements::prepared_statement>{};
}
return it->second;
}
#if 0
public static void validateKey(ByteBuffer key) throws InvalidRequestException
{
if (key == null || key.remaining() == 0)
@@ -328,23 +332,23 @@ public:
}
#endif
private:
query_options make_internal_options(::shared_ptr<statements::parsed_statement::prepared>, const std::initializer_list<data_value>&, db::consistency_level = db::consistency_level::ONE);
query_options make_internal_options(::shared_ptr<statements::prepared_statement>, const std::initializer_list<data_value>&, db::consistency_level = db::consistency_level::ONE);
public:
future<::shared_ptr<untyped_result_set>> execute_internal(
const sstring& query_string,
const std::initializer_list<data_value>& = { });
::shared_ptr<statements::parsed_statement::prepared> prepare_internal(const sstring& query);
::shared_ptr<statements::prepared_statement> prepare_internal(const sstring& query);
future<::shared_ptr<untyped_result_set>> execute_internal(
::shared_ptr<statements::parsed_statement::prepared>,
::shared_ptr<statements::prepared_statement>,
const std::initializer_list<data_value>& = { });
future<::shared_ptr<untyped_result_set>> process(
const sstring& query_string,
db::consistency_level, const std::initializer_list<data_value>& = { }, bool cache = false);
future<::shared_ptr<untyped_result_set>> process(
::shared_ptr<statements::parsed_statement::prepared>,
::shared_ptr<statements::prepared_statement>,
db::consistency_level, const std::initializer_list<data_value>& = { });
/*
@@ -429,22 +433,35 @@ public:
prepare(const std::experimental::string_view& query_string, const service::client_state& client_state, bool for_thrift);
static bytes compute_id(const std::experimental::string_view& query_string, const sstring& keyspace);
static int32_t compute_thrift_id(const std::experimental::string_view& query_string, const sstring& keyspace);
#if 0
private static Integer computeThriftId(String queryString, String keyspace)
{
String toHash = keyspace == null ? queryString : keyspace + queryString;
return toHash.hashCode();
}
#endif
private:
::shared_ptr<transport::messages::result_message::prepared>
get_stored_prepared_statement(const std::experimental::string_view& query_string, const sstring& keyspace, bool for_thrift);
future<::shared_ptr<transport::messages::result_message::prepared>>
store_prepared_statement(const std::experimental::string_view& query_string, const sstring& keyspace, ::shared_ptr<statements::parsed_statement::prepared> prepared, bool for_thrift);
store_prepared_statement(const std::experimental::string_view& query_string, const sstring& keyspace, ::shared_ptr<statements::prepared_statement> prepared, bool for_thrift);
void invalidate_prepared_statement(bytes statement_id);
// Erases the statements for which filter returns true.
template <typename Pred>
void invalidate_prepared_statements(Pred filter) {
static_assert(std::is_same<bool, std::result_of_t<Pred(::shared_ptr<cql_statement>)>>::value,
"bad Pred signature");
for (auto it = _prepared_statements.begin(); it != _prepared_statements.end(); ) {
if (filter(it->second->statement)) {
it = _prepared_statements.erase(it);
} else {
++it;
}
}
for (auto it = _thrift_prepared_statements.begin(); it != _thrift_prepared_statements.end(); ) {
if (filter(it->second->statement)) {
it = _thrift_prepared_statements.erase(it);
} else {
++it;
}
}
}
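The template above relies on the standard erase-while-iterating idiom for `std::unordered_map`: `erase(it)` returns the iterator following the erased element, so the loop stays valid without extra bookkeeping. A minimal standalone sketch of that idiom (illustrative only, not Scylla code):

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

// Erase every entry whose mapped value satisfies the predicate, mirroring the
// loop structure of invalidate_prepared_statements above. erase() returns the
// iterator to the next element, so it is only advanced manually when nothing
// was removed.
template <typename Map, typename Pred>
std::size_t erase_if_value(Map& m, Pred filter) {
    std::size_t erased = 0;
    for (auto it = m.begin(); it != m.end(); ) {
        if (filter(it->second)) {
            it = m.erase(it);  // iterator to the element after the erased one
            ++erased;
        } else {
            ++it;
        }
    }
    return erased;
}
```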
#if 0
public ResultMessage processPrepared(CQLStatement statement, QueryState queryState, QueryOptions options)
@@ -475,9 +492,9 @@ public:
future<::shared_ptr<transport::messages::result_message>> process_batch(::shared_ptr<statements::batch_statement>,
service::query_state& query_state, query_options& options);
::shared_ptr<statements::parsed_statement::prepared> get_statement(const std::experimental::string_view& query,
::shared_ptr<statements::prepared_statement> get_statement(const std::experimental::string_view& query,
const service::client_state& client_state);
static ::shared_ptr<statements::parsed_statement> parse_statement(const std::experimental::string_view& query);
static ::shared_ptr<statements::raw::parsed_statement> parse_statement(const std::experimental::string_view& query);
#if 0
private static long measure(Object key)


@@ -394,7 +394,11 @@ public:
return bounds_range_type::bound(prefix, is_inclusive(b));
};
auto range = bounds_range_type(read_bound(statements::bound::START), read_bound(statements::bound::END));
return { range };
auto bounds = bound_view::from_range(range);
if (bound_view::compare(*_schema)(bounds.second, bounds.first)) {
return { };
}
return { std::move(range) };
}
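The fix above rejects inverted ranges: if the end bound sorts before the start bound, no row can match, so an empty set of ranges is returned instead of one inverted range. A toy analogue with plain integers in place of `bound_view`/clustering bounds (an illustration of the check, not the Scylla types):

```cpp
#include <utility>
#include <vector>

using int_range = std::pair<int, int>;  // [start, end], both inclusive

// Analogue of the bound_view::compare(...) guard: an inverted range matches
// nothing, so return no ranges at all rather than a nonsensical one.
std::vector<int_range> bounds_ranges(int start, int end) {
    if (end < start) {
        return {};
    }
    return {int_range(start, end)};
}
```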
#if 0
@Override


@@ -46,6 +46,8 @@
#include "cartesian_product.hh"
#include "cql3/restrictions/primary_key_restrictions.hh"
#include "cql3/restrictions/single_column_restrictions.hh"
#include <boost/range/adaptor/transformed.hpp>
#include <boost/range/adaptor/filtered.hpp>
namespace cql3 {
@@ -340,7 +342,7 @@ single_column_primary_key_restrictions<partition_key>::bounds_ranges(const query
if (!r.is_singular()) {
throw exceptions::invalid_request_exception("Range queries on partition key values not supported.");
}
ranges.emplace_back(std::move(r).transform<query::ring_position>(
ranges.emplace_back(std::move(r).transform(
[this] (partition_key&& k) -> query::ring_position {
auto token = dht::global_partitioner().get_token(*_schema, k);
return { std::move(token), std::move(k) };
@@ -352,7 +354,14 @@ single_column_primary_key_restrictions<partition_key>::bounds_ranges(const query
template<>
std::vector<query::clustering_range>
single_column_primary_key_restrictions<clustering_key_prefix>::bounds_ranges(const query_options& options) const {
auto bounds = compute_bounds(options);
auto wrapping_bounds = compute_bounds(options);
auto bounds = boost::copy_range<query::clustering_row_ranges>(wrapping_bounds
| boost::adaptors::filtered([&](auto&& r) {
auto bounds = bound_view::from_range(r);
return !bound_view::compare(*_schema)(bounds.second, bounds.first);
})
| boost::adaptors::transformed([&](auto&& r) { return query::clustering_range(std::move(r));
}));
auto less_cmp = clustering_key_prefix::less_compare(*_schema);
std::sort(bounds.begin(), bounds.end(), [&] (query::clustering_range& x, query::clustering_range& y) {
if (!x.start() && !y.start()) {

cql3/result_set.cc Normal file

@@ -0,0 +1,191 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "cql3/result_set.hh"
namespace cql3 {
metadata::metadata(std::vector<::shared_ptr<column_specification>> names_)
: metadata(flag_enum_set(), std::move(names_), names_.size(), {})
{ }
metadata::metadata(flag_enum_set flags, std::vector<::shared_ptr<column_specification>> names_, uint32_t column_count,
::shared_ptr<const service::pager::paging_state> paging_state)
: _flags(flags)
, names(std::move(names_))
, _column_count(column_count)
, _paging_state(std::move(paging_state))
{ }
// The maximum number of values that the ResultSet can hold. This can be bigger than columnCount due to CASSANDRA-4911
uint32_t metadata::value_count() const {
return _flags.contains<flag::NO_METADATA>() ? _column_count : names.size();
}
void metadata::add_non_serialized_column(::shared_ptr<column_specification> name) {
// See comment above. Because columnCount doesn't account for the newly added name, it
// won't be serialized.
names.emplace_back(std::move(name));
}
bool metadata::all_in_same_cf() const {
if (_flags.contains<flag::NO_METADATA>()) {
return false;
}
return column_specification::all_in_same_table(names);
}
void metadata::set_has_more_pages(::shared_ptr<const service::pager::paging_state> paging_state) {
if (!paging_state) {
return;
}
_flags.set<flag::HAS_MORE_PAGES>();
_paging_state = std::move(paging_state);
}
void metadata::set_skip_metadata() {
_flags.set<flag::NO_METADATA>();
}
metadata::flag_enum_set metadata::flags() const {
return _flags;
}
uint32_t metadata::column_count() const {
return _column_count;
}
::shared_ptr<const service::pager::paging_state> metadata::paging_state() const {
return _paging_state;
}
const std::vector<::shared_ptr<column_specification>>& metadata::get_names() const {
return names;
}
prepared_metadata::prepared_metadata(const std::vector<::shared_ptr<column_specification>>& names,
const std::vector<uint16_t>& partition_key_bind_indices)
: _names{names}
, _partition_key_bind_indices{partition_key_bind_indices}
{
if (!names.empty() && column_specification::all_in_same_table(_names)) {
_flags.set<flag::GLOBAL_TABLES_SPEC>();
}
}
prepared_metadata::flag_enum_set prepared_metadata::flags() const {
return _flags;
}
const std::vector<::shared_ptr<column_specification>>& prepared_metadata::names() const {
return _names;
}
const std::vector<uint16_t>& prepared_metadata::partition_key_bind_indices() const {
return _partition_key_bind_indices;
}
result_set::result_set(std::vector<::shared_ptr<column_specification>> metadata_)
: _metadata(::make_shared<metadata>(std::move(metadata_)))
{ }
result_set::result_set(::shared_ptr<metadata> metadata)
: _metadata(std::move(metadata))
{ }
size_t result_set::size() const {
return _rows.size();
}
bool result_set::empty() const {
return _rows.empty();
}
void result_set::add_row(std::vector<bytes_opt> row) {
assert(row.size() == _metadata->value_count());
_rows.emplace_back(std::move(row));
}
void result_set::add_column_value(bytes_opt value) {
if (_rows.empty() || _rows.back().size() == _metadata->value_count()) {
std::vector<bytes_opt> row;
row.reserve(_metadata->value_count());
_rows.emplace_back(std::move(row));
}
_rows.back().emplace_back(std::move(value));
}
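`add_column_value` fills rows column by column: whenever the current row already holds `value_count()` cells, a fresh row is opened before the value is appended. A toy model of that behavior (simplified types, not the Scylla ones):

```cpp
#include <cstddef>
#include <deque>
#include <optional>
#include <string>
#include <vector>

// Toy result set: values arrive one column at a time, and a new row is
// started whenever the previous row is full, as in result_set::add_column_value.
struct toy_result_set {
    std::size_t columns;  // stand-in for metadata->value_count()
    std::deque<std::vector<std::optional<std::string>>> rows;

    void add_column_value(std::optional<std::string> value) {
        if (rows.empty() || rows.back().size() == columns) {
            rows.emplace_back();           // start a fresh row
            rows.back().reserve(columns);  // avoid reallocation while filling
        }
        rows.back().emplace_back(std::move(value));
    }
};
```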
void result_set::reverse() {
std::reverse(_rows.begin(), _rows.end());
}
void result_set::trim(size_t limit) {
if (_rows.size() > limit) {
_rows.resize(limit);
}
}
metadata& result_set::get_metadata() {
return *_metadata;
}
const metadata& result_set::get_metadata() const {
return *_metadata;
}
const std::deque<std::vector<bytes_opt>>& result_set::rows() const {
return _rows;
}
shared_ptr<const cql3::metadata>
make_empty_metadata() {
static thread_local shared_ptr<const metadata> empty_metadata_cache = [] {
auto result = ::make_shared<metadata>(std::vector<::shared_ptr<cql3::column_specification>>{});
result->set_skip_metadata();
return result;
}();
return empty_metadata_cache;
}
}


@@ -50,15 +50,11 @@
namespace cql3 {
class metadata {
#if 0
public static final CBCodec<Metadata> codec = new Codec();
public static final Metadata EMPTY = new Metadata(EnumSet.of(Flag.NO_METADATA), null, 0, null);
#endif
public:
enum class flag : uint8_t {
GLOBAL_TABLES_SPEC,
HAS_MORE_PAGES,
NO_METADATA
GLOBAL_TABLES_SPEC = 0,
HAS_MORE_PAGES = 1,
NO_METADATA = 2,
};
using flag_enum = super_enum<flag,
@@ -83,434 +79,90 @@ private:
::shared_ptr<const service::pager::paging_state> _paging_state;
public:
metadata(std::vector<::shared_ptr<column_specification>> names_)
: metadata(flag_enum_set(), std::move(names_), names_.size(), {})
{ }
metadata(std::vector<::shared_ptr<column_specification>> names_);
metadata(flag_enum_set flags, std::vector<::shared_ptr<column_specification>> names_, uint32_t column_count,
::shared_ptr<const service::pager::paging_state> paging_state)
: _flags(flags)
, names(std::move(names_))
, _column_count(column_count)
, _paging_state(std::move(paging_state))
{ }
::shared_ptr<const service::pager::paging_state> paging_state);
// The maximum number of values that the ResultSet can hold. This can be bigger than columnCount due to CASSANDRA-4911
uint32_t value_count() {
return _flags.contains<flag::NO_METADATA>() ? _column_count : names.size();
}
uint32_t value_count() const;
void add_non_serialized_column(::shared_ptr<column_specification> name) {
// See comment above. Because columnCount doesn't account for the newly added name, it
// won't be serialized.
names.emplace_back(std::move(name));
}
void add_non_serialized_column(::shared_ptr<column_specification> name);
private:
bool all_in_same_cf() const {
if (_flags.contains<flag::NO_METADATA>()) {
return false;
}
assert(!names.empty());
auto first = names.front();
return std::all_of(std::next(names.begin()), names.end(), [first] (auto&& spec) {
return spec->ks_name == first->ks_name && spec->cf_name == first->cf_name;
});
}
bool all_in_same_cf() const;
public:
void set_has_more_pages(::shared_ptr<const service::pager::paging_state> paging_state) {
if (!paging_state) {
return;
}
void set_has_more_pages(::shared_ptr<const service::pager::paging_state> paging_state);
_flags.set<flag::HAS_MORE_PAGES>();
_paging_state = std::move(paging_state);
}
void set_skip_metadata();
void set_skip_metadata() {
_flags.set<flag::NO_METADATA>();
}
flag_enum_set flags() const;
flag_enum_set flags() const {
return _flags;
}
uint32_t column_count() const;
uint32_t column_count() const {
return _column_count;
}
auto paging_state() const {
return _paging_state;
}
auto const& get_names() const {
return names;
}
#if 0
@Override
public String toString()
{
StringBuilder sb = new StringBuilder();
if (names == null)
{
sb.append("[").append(columnCount).append(" columns]");
}
else
{
for (ColumnSpecification name : names)
{
sb.append("[").append(name.name);
sb.append("(").append(name.ksName).append(", ").append(name.cfName).append(")");
sb.append(", ").append(name.type).append("]");
}
}
if (flags.contains(Flag.HAS_MORE_PAGES))
sb.append(" (to be continued)");
return sb.toString();
}
private static class Codec implements CBCodec<Metadata>
{
public Metadata decode(ByteBuf body, int version)
{
// flags & column count
int iflags = body.readInt();
int columnCount = body.readInt();
EnumSet<Flag> flags = Flag.deserialize(iflags);
PagingState state = null;
if (flags.contains(Flag.HAS_MORE_PAGES))
state = PagingState.deserialize(CBUtil.readValue(body));
if (flags.contains(Flag.NO_METADATA))
return new Metadata(flags, null, columnCount, state);
boolean globalTablesSpec = flags.contains(Flag.GLOBAL_TABLES_SPEC);
String globalKsName = null;
String globalCfName = null;
if (globalTablesSpec)
{
globalKsName = CBUtil.readString(body);
globalCfName = CBUtil.readString(body);
}
// metadata (names/types)
List<ColumnSpecification> names = new ArrayList<ColumnSpecification>(columnCount);
for (int i = 0; i < columnCount; i++)
{
String ksName = globalTablesSpec ? globalKsName : CBUtil.readString(body);
String cfName = globalTablesSpec ? globalCfName : CBUtil.readString(body);
ColumnIdentifier colName = new ColumnIdentifier(CBUtil.readString(body), true);
AbstractType type = DataType.toType(DataType.codec.decodeOne(body, version));
names.add(new ColumnSpecification(ksName, cfName, colName, type));
}
return new Metadata(flags, names, names.size(), state);
}
public void encode(Metadata m, ByteBuf dest, int version)
{
boolean noMetadata = m.flags.contains(Flag.NO_METADATA);
boolean globalTablesSpec = m.flags.contains(Flag.GLOBAL_TABLES_SPEC);
boolean hasMorePages = m.flags.contains(Flag.HAS_MORE_PAGES);
assert version > 1 || (!m.flags.contains(Flag.HAS_MORE_PAGES) && !noMetadata): "version = " + version + ", flags = " + m.flags;
dest.writeInt(Flag.serialize(m.flags));
dest.writeInt(m.columnCount);
if (hasMorePages)
CBUtil.writeValue(m.pagingState.serialize(), dest);
if (!noMetadata)
{
if (globalTablesSpec)
{
CBUtil.writeString(m.names.get(0).ksName, dest);
CBUtil.writeString(m.names.get(0).cfName, dest);
}
for (int i = 0; i < m.columnCount; i++)
{
ColumnSpecification name = m.names.get(i);
if (!globalTablesSpec)
{
CBUtil.writeString(name.ksName, dest);
CBUtil.writeString(name.cfName, dest);
}
CBUtil.writeString(name.name.toString(), dest);
DataType.codec.writeOne(DataType.fromType(name.type, version), dest, version);
}
}
}
public int encodedSize(Metadata m, int version)
{
boolean noMetadata = m.flags.contains(Flag.NO_METADATA);
boolean globalTablesSpec = m.flags.contains(Flag.GLOBAL_TABLES_SPEC);
boolean hasMorePages = m.flags.contains(Flag.HAS_MORE_PAGES);
int size = 8;
if (hasMorePages)
size += CBUtil.sizeOfValue(m.pagingState.serialize());
if (!noMetadata)
{
if (globalTablesSpec)
{
size += CBUtil.sizeOfString(m.names.get(0).ksName);
size += CBUtil.sizeOfString(m.names.get(0).cfName);
}
for (int i = 0; i < m.columnCount; i++)
{
ColumnSpecification name = m.names.get(i);
if (!globalTablesSpec)
{
size += CBUtil.sizeOfString(name.ksName);
size += CBUtil.sizeOfString(name.cfName);
}
size += CBUtil.sizeOfString(name.name.toString());
size += DataType.codec.oneSerializedSize(DataType.fromType(name.type, version), version);
}
}
return size;
}
}
#endif
::shared_ptr<const service::pager::paging_state> paging_state() const;
const std::vector<::shared_ptr<column_specification>>& get_names() const;
};
inline ::shared_ptr<cql3::metadata> make_empty_metadata()
{
auto result = ::make_shared<cql3::metadata>(std::vector<::shared_ptr<cql3::column_specification>>{});
result->set_skip_metadata();
return result;
}
::shared_ptr<const cql3::metadata> make_empty_metadata();
class prepared_metadata {
public:
enum class flag : uint8_t {
GLOBAL_TABLES_SPEC = 0,
};
using flag_enum = super_enum<flag,
flag::GLOBAL_TABLES_SPEC>;
using flag_enum_set = enum_set<flag_enum>;
private:
flag_enum_set _flags;
std::vector<::shared_ptr<column_specification>> _names;
std::vector<uint16_t> _partition_key_bind_indices;
public:
prepared_metadata(const std::vector<::shared_ptr<column_specification>>& names,
const std::vector<uint16_t>& partition_key_bind_indices);
flag_enum_set flags() const;
const std::vector<::shared_ptr<column_specification>>& names() const;
const std::vector<uint16_t>& partition_key_bind_indices() const;
};
class result_set {
#if 0
private static final ColumnIdentifier COUNT_COLUMN = new ColumnIdentifier("count", false);
#endif
public:
::shared_ptr<metadata> _metadata;
std::deque<std::vector<bytes_opt>> _rows;
public:
result_set(std::vector<::shared_ptr<column_specification>> metadata_)
: _metadata(::make_shared<metadata>(std::move(metadata_)))
{ }
result_set(std::vector<::shared_ptr<column_specification>> metadata_);
result_set(::shared_ptr<metadata> metadata)
: _metadata(std::move(metadata))
{ }
result_set(::shared_ptr<metadata> metadata);
size_t size() const {
return _rows.size();
}
size_t size() const;
bool empty() const {
return _rows.empty();
}
bool empty() const;
void add_row(std::vector<bytes_opt> row) {
assert(row.size() == _metadata->value_count());
_rows.emplace_back(std::move(row));
}
void add_row(std::vector<bytes_opt> row);
void add_column_value(bytes_opt value) {
if (_rows.empty() || _rows.back().size() == _metadata->value_count()) {
std::vector<bytes_opt> row;
row.reserve(_metadata->value_count());
_rows.emplace_back(std::move(row));
}
void add_column_value(bytes_opt value);
_rows.back().emplace_back(std::move(value));
}
void reverse();
void reverse() {
std::reverse(_rows.begin(), _rows.end());
}
void trim(size_t limit) {
if (_rows.size() > limit) {
_rows.resize(limit);
}
}
void trim(size_t limit);
template<typename RowComparator>
void sort(RowComparator&& cmp) {
std::sort(_rows.begin(), _rows.end(), std::forward<RowComparator>(cmp));
}
metadata& get_metadata() {
return *_metadata;
}
metadata& get_metadata();
const metadata& get_metadata() const {
return *_metadata;
}
const metadata& get_metadata() const;
// Returns a range of rows. A row is a range of bytes_opt.
auto const& rows() const {
return _rows;
}
#if 0
public CqlResult toThriftResult()
{
assert metadata.names != null;
String UTF8 = "UTF8Type";
CqlMetadata schema = new CqlMetadata(new HashMap<ByteBuffer, String>(),
new HashMap<ByteBuffer, String>(),
// The 2 following ones shouldn't be needed in CQL3
UTF8, UTF8);
for (int i = 0; i < metadata.columnCount; i++)
{
ColumnSpecification spec = metadata.names.get(i);
ByteBuffer colName = ByteBufferUtil.bytes(spec.name.toString());
schema.name_types.put(colName, UTF8);
AbstractType<?> normalizedType = spec.type instanceof ReversedType ? ((ReversedType)spec.type).baseType : spec.type;
schema.value_types.put(colName, normalizedType.toString());
}
List<CqlRow> cqlRows = new ArrayList<CqlRow>(rows.size());
for (List<ByteBuffer> row : rows)
{
List<Column> thriftCols = new ArrayList<Column>(metadata.columnCount);
for (int i = 0; i < metadata.columnCount; i++)
{
Column col = new Column(ByteBufferUtil.bytes(metadata.names.get(i).name.toString()));
col.setValue(row.get(i));
thriftCols.add(col);
}
// The key of CqlRow shouldn't be needed in CQL3
cqlRows.add(new CqlRow(ByteBufferUtil.EMPTY_BYTE_BUFFER, thriftCols));
}
CqlResult res = new CqlResult(CqlResultType.ROWS);
res.setRows(cqlRows).setSchema(schema);
return res;
}
@Override
public String toString()
{
try
{
StringBuilder sb = new StringBuilder();
sb.append(metadata).append('\n');
for (List<ByteBuffer> row : rows)
{
for (int i = 0; i < row.size(); i++)
{
ByteBuffer v = row.get(i);
if (v == null)
{
sb.append(" | null");
}
else
{
sb.append(" | ");
if (metadata.flags.contains(Flag.NO_METADATA))
sb.append("0x").append(ByteBufferUtil.bytesToHex(v));
else
sb.append(metadata.names.get(i).type.getString(v));
}
}
sb.append('\n');
}
sb.append("---");
return sb.toString();
}
catch (Exception e)
{
throw new RuntimeException(e);
}
}
public static class Codec implements CBCodec<ResultSet>
{
/*
* Format:
* - metadata
* - rows count (4 bytes)
* - rows
*/
public ResultSet decode(ByteBuf body, int version)
{
Metadata m = Metadata.codec.decode(body, version);
int rowCount = body.readInt();
ResultSet rs = new ResultSet(m, new ArrayList<List<ByteBuffer>>(rowCount));
// rows
int totalValues = rowCount * m.columnCount;
for (int i = 0; i < totalValues; i++)
rs.addColumnValue(CBUtil.readValue(body));
return rs;
}
public void encode(ResultSet rs, ByteBuf dest, int version)
{
Metadata.codec.encode(rs.metadata, dest, version);
dest.writeInt(rs.rows.size());
for (List<ByteBuffer> row : rs.rows)
{
// Note that we only want to serialize the first columnCount values, even if the row
// has more: see comment on Metadata.names field.
for (int i = 0; i < rs.metadata.columnCount; i++)
CBUtil.writeValue(row.get(i), dest);
}
}
public int encodedSize(ResultSet rs, int version)
{
int size = Metadata.codec.encodedSize(rs.metadata, version) + 4;
for (List<ByteBuffer> row : rs.rows)
{
for (int i = 0; i < rs.metadata.columnCount; i++)
size += CBUtil.sizeOfValue(row.get(i));
}
return size;
}
}
public static enum Flag
{
// The order of that enum matters!!
GLOBAL_TABLES_SPEC,
HAS_MORE_PAGES,
NO_METADATA;
public static EnumSet<Flag> deserialize(int flags)
{
EnumSet<Flag> set = EnumSet.noneOf(Flag.class);
Flag[] values = Flag.values();
for (int n = 0; n < values.length; n++)
{
if ((flags & (1 << n)) != 0)
set.add(values[n]);
}
return set;
}
public static int serialize(EnumSet<Flag> flags)
{
int i = 0;
for (Flag flag : flags)
i |= 1 << flag.ordinal();
return i;
}
}
#endif
const std::deque<std::vector<bytes_opt>>& rows() const;
};
}
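The header above pins the `flag` enumerators to explicit values (0, 1, 2) because, as the Java `Flag.serialize`/`deserialize` pair kept in the `#if 0` block shows, the wire format encodes each flag as bit `1 << value`, so the ordering is part of the protocol. A minimal sketch of that bit-flag encoding (standalone illustration, not the `enum_set` machinery):

```cpp
#include <cstdint>
#include <initializer_list>

// Flags with explicit values: each enumerator is a bit position on the wire,
// matching the metadata::flag enum in the header above.
enum class flag : uint8_t { GLOBAL_TABLES_SPEC = 0, HAS_MORE_PAGES = 1, NO_METADATA = 2 };

// Pack a set of flags into an int, as Flag.serialize does in the Java code.
inline int serialize_flags(std::initializer_list<flag> flags) {
    int bits = 0;
    for (flag f : flags) {
        bits |= 1 << static_cast<int>(f);
    }
    return bits;
}

// Test a single flag in a packed value, as Flag.deserialize does per bit.
inline bool has_flag(int bits, flag f) {
    return bits & (1 << static_cast<int>(f));
}
```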


@@ -232,7 +232,7 @@ uint32_t selection::add_column_for_ordering(const column_definition& c) {
raw_selector::to_selectables(raw_selectors, schema), db, schema, defs);
auto metadata = collect_metadata(schema, raw_selectors, *factories);
if (processes_selection(raw_selectors)) {
if (processes_selection(raw_selectors) || raw_selectors.size() != defs.size()) {
return ::make_shared<selection_with_processing>(schema, std::move(defs), std::move(metadata), std::move(factories));
} else {
return ::make_shared<simple_selection>(schema, std::move(defs), std::move(metadata), false);


@@ -161,7 +161,7 @@ public:
return std::find(_columns.begin(), _columns.end(), &def) != _columns.end();
}
::shared_ptr<metadata> get_result_metadata() const {
::shared_ptr<const metadata> get_result_metadata() const {
return _metadata;
}


@@ -0,0 +1,102 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "alter_keyspace_statement.hh"
#include "service/migration_manager.hh"
#include "db/system_keyspace.hh"
#include "database.hh"
cql3::statements::alter_keyspace_statement::alter_keyspace_statement(sstring name, ::shared_ptr<ks_prop_defs> attrs)
: _name(name)
, _attrs(std::move(attrs))
{}
const sstring& cql3::statements::alter_keyspace_statement::keyspace() const {
return _name;
}
future<> cql3::statements::alter_keyspace_statement::check_access(const service::client_state& state) {
return state.has_keyspace_access(_name, auth::permission::ALTER);
}
void cql3::statements::alter_keyspace_statement::validate(distributed<service::storage_proxy>& proxy, const service::client_state& state) {
try {
service::get_local_storage_proxy().get_db().local().find_keyspace(_name); // throws on failure
auto tmp = _name;
std::transform(tmp.begin(), tmp.end(), tmp.begin(), ::tolower);
if (tmp == db::system_keyspace::NAME) {
throw exceptions::invalid_request_exception("Cannot alter system keyspace");
}
_attrs->validate();
if (!bool(_attrs->get_replication_strategy_class()) && !_attrs->get_replication_options().empty()) {
throw exceptions::configuration_exception("Missing replication strategy class");
}
#if 0
// The strategy is validated through KSMetaData.validate() in announceKeyspaceUpdate below.
// However, for backward compatibility with thrift, this doesn't validate unexpected options yet,
// so doing proper validation here.
AbstractReplicationStrategy.validateReplicationStrategy(name,
AbstractReplicationStrategy.getClass(attrs.getReplicationStrategyClass()),
StorageService.instance.getTokenMetadata(),
DatabaseDescriptor.getEndpointSnitch(),
attrs.getReplicationOptions());
#endif
} catch (no_such_keyspace& e) {
std::throw_with_nested(exceptions::invalid_request_exception("Unknown keyspace " + _name));
}
}
future<bool> cql3::statements::alter_keyspace_statement::announce_migration(distributed<service::storage_proxy>& proxy, bool is_local_only) {
auto old_ksm = service::get_local_storage_proxy().get_db().local().find_keyspace(_name).metadata();
return service::get_local_migration_manager().announce_keyspace_update(_attrs->as_ks_metadata_update(old_ksm), is_local_only).then([] {
return true;
});
}
shared_ptr<transport::event::schema_change> cql3::statements::alter_keyspace_statement::change_event() {
return make_shared<transport::event::schema_change>(
transport::event::schema_change::change_type::UPDATED,
keyspace());
}


@@ -41,80 +41,29 @@
#pragma once
#include <memory>
#include "cql3/statements/schema_altering_statement.hh"
#include "cql3/statements/ks_prop_defs.hh"
#include <memory>
namespace cql3 {
namespace statements {
class alter_keyspace_statement : public schema_altering_statement {
sstring _name;
std::unique_ptr<ks_prop_defs> _attrs;
::shared_ptr<ks_prop_defs> _attrs;
public:
alter_keyspace_statement(sstring name, std::unique_ptr<ks_prop_defs>&& attrs)
: _name{name}
, _attrs{std::move(attrs)}
{ }
alter_keyspace_statement(sstring name, ::shared_ptr<ks_prop_defs> attrs);
virtual const sstring& keyspace() const override {
return _name;
}
const sstring& keyspace() const override;
#if 0
public void checkAccess(ClientState state) throws UnauthorizedException, InvalidRequestException
{
state.hasKeyspaceAccess(name, Permission.ALTER);
}
public void validate(ClientState state) throws RequestValidationException
{
KSMetaData ksm = Schema.instance.getKSMetaData(name);
if (ksm == null)
throw new InvalidRequestException("Unknown keyspace " + name);
if (ksm.name.equalsIgnoreCase(SystemKeyspace.NAME))
throw new InvalidRequestException("Cannot alter system keyspace");
attrs.validate();
if (attrs.getReplicationStrategyClass() == null && !attrs.getReplicationOptions().isEmpty())
{
throw new ConfigurationException("Missing replication strategy class");
}
else if (attrs.getReplicationStrategyClass() != null)
{
// The strategy is validated through KSMetaData.validate() in announceKeyspaceUpdate below.
// However, for backward compatibility with thrift, this doesn't validate unexpected options yet,
// so doing proper validation here.
AbstractReplicationStrategy.validateReplicationStrategy(name,
AbstractReplicationStrategy.getClass(attrs.getReplicationStrategyClass()),
StorageService.instance.getTokenMetadata(),
DatabaseDescriptor.getEndpointSnitch(),
attrs.getReplicationOptions());
}
}
public boolean announceMigration(boolean isLocalOnly) throws RequestValidationException
{
KSMetaData ksm = Schema.instance.getKSMetaData(name);
// In the (very) unlikely case the keyspace was dropped since validate()
if (ksm == null)
throw new InvalidRequestException("Unknown keyspace " + name);
MigrationManager.announceKeyspaceUpdate(attrs.asKSMetadataUpdate(ksm), isLocalOnly);
return true;
}
public Event.SchemaChange changeEvent()
{
return new Event.SchemaChange(Event.SchemaChange.Change.UPDATED, keyspace());
}
#endif
future<> check_access(const service::client_state& state) override;
void validate(distributed<service::storage_proxy>& proxy, const service::client_state& state) override;
future<bool> announce_migration(distributed<service::storage_proxy>& proxy, bool is_local_only) override;
shared_ptr<transport::event::schema_change> change_event() override;
};
}
}


@@ -46,9 +46,9 @@ uint32_t cql3::statements::authentication_statement::get_bound_terms() {
return 0;
}
::shared_ptr<cql3::statements::parsed_statement::prepared> cql3::statements::authentication_statement::prepare(
::shared_ptr<cql3::statements::prepared_statement> cql3::statements::authentication_statement::prepare(
database& db) {
return ::make_shared<parsed_statement::prepared>(this->shared_from_this());
return ::make_shared<prepared>(this->shared_from_this());
}
bool cql3::statements::authentication_statement::uses_function(


@@ -41,15 +41,16 @@
#pragma once
#include "parsed_statement.hh"
#include "cql3/cql_statement.hh"
#include "prepared_statement.hh"
#include "raw/parsed_statement.hh"
#include "transport/messages_fwd.hh"
namespace cql3 {
namespace statements {
class authentication_statement : public parsed_statement, public cql_statement, public ::enable_shared_from_this<authentication_statement> {
class authentication_statement : public raw::parsed_statement, public cql_statement_no_metadata, public ::enable_shared_from_this<authentication_statement> {
public:
uint32_t get_bound_terms() override;


@@ -46,7 +46,7 @@ uint32_t cql3::statements::authorization_statement::get_bound_terms() {
return 0;
}
::shared_ptr<cql3::statements::parsed_statement::prepared> cql3::statements::authorization_statement::prepare(
::shared_ptr<cql3::statements::prepared_statement> cql3::statements::authorization_statement::prepare(
database& db) {
return ::make_shared<parsed_statement::prepared>(this->shared_from_this());
}


@@ -41,8 +41,9 @@
#pragma once
#include "parsed_statement.hh"
#include "cql3/cql_statement.hh"
#include "prepared_statement.hh"
#include "raw/parsed_statement.hh"
#include "transport/messages_fwd.hh"
namespace auth {
@@ -53,7 +54,7 @@ namespace cql3 {
namespace statements {
class authorization_statement : public parsed_statement, public cql_statement, public ::enable_shared_from_this<authorization_statement> {
class authorization_statement : public raw::parsed_statement, public cql_statement_no_metadata, public ::enable_shared_from_this<authorization_statement> {
public:
uint32_t get_bound_terms() override;


@@ -38,6 +38,7 @@
*/
#include "batch_statement.hh"
#include "raw/batch_statement.hh"
#include "db/config.hh"
namespace cql3 {
@@ -68,7 +69,7 @@ void batch_statement::verify_batch_size(const std::vector<mutation>& mutations)
void accept_static_cell(column_id, collection_mutation_view v) override {
size += v.data.size();
}
void accept_row_tombstone(clustering_key_prefix_view, tombstone) override {}
void accept_row_tombstone(const range_tombstone&) override {}
void accept_row(clustering_key_view, tombstone, const row_marker&) override {}
void accept_row_cell(column_id, atomic_cell_view v) override {
size += v.value().size();
@@ -100,6 +101,30 @@ void batch_statement::verify_batch_size(const std::vector<mutation>& mutations)
}
}
namespace raw {
shared_ptr<prepared_statement>
batch_statement::prepare(database& db) {
auto&& bound_names = get_bound_variables();
std::vector<shared_ptr<cql3::statements::modification_statement>> statements;
for (auto&& parsed : _parsed_statements) {
statements.push_back(parsed->prepare(db, bound_names));
}
auto&& prep_attrs = _attrs->prepare(db, "[batch]", "[batch]");
prep_attrs->collect_marker_specification(bound_names);
cql3::statements::batch_statement batch_statement_(bound_names->size(), _type, std::move(statements), std::move(prep_attrs));
batch_statement_.validate();
return ::make_shared<prepared>(make_shared(std::move(batch_statement_)),
bound_names->get_specifications());
}
}
}
}


@@ -39,6 +39,8 @@
#include "cql3/cql_statement.hh"
#include "modification_statement.hh"
#include "raw/modification_statement.hh"
#include "raw/batch_statement.hh"
#include "service/storage_proxy.hh"
#include "transport/messages/result_message.hh"
#include "timestamp.hh"
@@ -60,12 +62,10 @@ namespace statements {
* A <code>BATCH</code> statement parsed from a CQL query.
*
*/
class batch_statement : public cql_statement {
class batch_statement : public cql_statement_no_metadata {
static logging::logger _logger;
public:
enum class type {
LOGGED, UNLOGGED, COUNTER
};
using type = raw::batch_statement::type;
private:
int _bound_terms;
public:
@@ -168,16 +168,16 @@ public:
return _statements;
}
private:
future<std::vector<mutation>> get_mutations(distributed<service::storage_proxy>& storage, const query_options& options, bool local, api::timestamp_type now) {
future<std::vector<mutation>> get_mutations(distributed<service::storage_proxy>& storage, const query_options& options, bool local, api::timestamp_type now, tracing::trace_state_ptr trace_state) {
// Do not process in parallel because operations like list append/prepend depend on execution order.
return do_with(std::vector<mutation>(), [this, &storage, &options, now, local] (auto&& result) {
return do_with(std::vector<mutation>(), [this, &storage, &options, now, local, trace_state] (auto&& result) {
return do_for_each(boost::make_counting_iterator<size_t>(0),
boost::make_counting_iterator<size_t>(_statements.size()),
[this, &storage, &options, now, local, &result] (size_t i) {
[this, &storage, &options, now, local, &result, trace_state] (size_t i) {
auto&& statement = _statements[i];
auto&& statement_options = options.for_statement(i);
auto timestamp = _attrs->get_timestamp(now, statement_options);
return statement->get_mutations(storage, statement_options, local, timestamp).then([&result] (auto&& more) {
return statement->get_mutations(storage, statement_options, local, timestamp, trace_state).then([&result] (auto&& more) {
std::move(more.begin(), more.end(), std::back_inserter(result));
});
}).then([&result] {
@@ -213,8 +213,8 @@ private:
return execute_with_conditions(storage, options, query_state);
}
return get_mutations(storage, options, local, now).then([this, &storage, &options] (std::vector<mutation> ms) {
return execute_without_conditions(storage, std::move(ms), options.get_consistency());
return get_mutations(storage, options, local, now, query_state.get_trace_state()).then([this, &storage, &options, tr_state = query_state.get_trace_state()] (std::vector<mutation> ms) mutable {
return execute_without_conditions(storage, std::move(ms), options.get_consistency(), std::move(tr_state));
}).then([] {
return make_ready_future<shared_ptr<transport::messages::result_message>>(
make_shared<transport::messages::result_message::void_message>());
@@ -224,7 +224,8 @@ private:
future<> execute_without_conditions(
distributed<service::storage_proxy>& storage,
std::vector<mutation> mutations,
db::consistency_level cl) {
db::consistency_level cl,
tracing::trace_state_ptr tr_state) {
// FIXME: do we need to do this?
#if 0
// Extract each collection of cfs from it's IMutation and then lazily concatenate all of them into a single Iterable.
@@ -239,7 +240,7 @@ private:
verify_batch_size(mutations);
bool mutate_atomic = _type == type::LOGGED && mutations.size() > 1;
return storage.local().mutate_with_triggers(std::move(mutations), cl, mutate_atomic);
return storage.local().mutate_with_triggers(std::move(mutations), cl, mutate_atomic, std::move(tr_state));
}
future<shared_ptr<transport::messages::result_message>> execute_with_conditions(
@@ -317,46 +318,6 @@ public:
return sprint("BatchStatement(type=%s, statements=%s)", _type, join(", ", _statements));
}
#endif
class parsed : public cf_statement {
type _type;
shared_ptr<attributes::raw> _attrs;
std::vector<shared_ptr<modification_statement::parsed>> _parsed_statements;
public:
parsed(
type type_,
shared_ptr<attributes::raw> attrs,
std::vector<shared_ptr<modification_statement::parsed>> parsed_statements)
: cf_statement(nullptr)
, _type(type_)
, _attrs(std::move(attrs))
, _parsed_statements(std::move(parsed_statements)) {
}
virtual void prepare_keyspace(const service::client_state& state) override {
for (auto&& s : _parsed_statements) {
s->prepare_keyspace(state);
}
}
virtual shared_ptr<parsed_statement::prepared> prepare(database& db) override {
auto&& bound_names = get_bound_variables();
std::vector<shared_ptr<modification_statement>> statements;
for (auto&& parsed : _parsed_statements) {
statements.push_back(parsed->prepare(db, bound_names));
}
auto&& prep_attrs = _attrs->prepare(db, "[batch]", "[batch]");
prep_attrs->collect_marker_specification(bound_names);
batch_statement batch_statement_(bound_names->size(), _type, std::move(statements), std::move(prep_attrs));
batch_statement_.validate();
return ::make_shared<parsed_statement::prepared>(make_shared(std::move(batch_statement_)),
bound_names->get_specifications());
}
};
};
}


@@ -92,9 +92,9 @@ void cf_prop_defs::validate() {
throw exceptions::configuration_exception(sstring("Missing sub-option '") + COMPACTION_STRATEGY_CLASS_KEY + "' for the '" + KW_COMPACTION + "' option.");
}
_compaction_strategy_class = sstables::compaction_strategy::type(strategy->second);
#if 0
compactionOptions.remove(COMPACTION_STRATEGY_CLASS_KEY);
remove_from_map_if_exists(KW_COMPACTION, COMPACTION_STRATEGY_CLASS_KEY);
#if 0
CFMetaData.validateCompactionOptions(compactionStrategyClass, compactionOptions);
#endif
}


@@ -39,12 +39,15 @@
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "cql3/statements/cf_statement.hh"
#include "raw/cf_statement.hh"
#include "service/client_state.hh"
namespace cql3 {
namespace statements {
namespace raw {
cf_statement::cf_statement(::shared_ptr<cf_name> cf_name)
: _cf_name(std::move(cf_name))
{
@@ -81,3 +84,5 @@ const sstring& cf_statement::column_family() const
}
}
}


@@ -44,7 +44,7 @@
#include "schema_altering_statement.hh"
#include "index_prop_defs.hh"
#include "index_target.hh"
#include "cf_statement.hh"
#include "raw/cf_statement.hh"
#include "cql3/index_name.hh"
#include "cql3/cql3_type.hh"


@@ -47,6 +47,7 @@
#include <boost/range/algorithm/adjacent_find.hpp>
#include "cql3/statements/create_table_statement.hh"
#include "cql3/statements/prepared_statement.hh"
#include "schema_builder.hh"
@@ -160,7 +161,7 @@ create_table_statement::raw_statement::raw_statement(::shared_ptr<cf_name> name,
, _if_not_exists{if_not_exists}
{ }
::shared_ptr<parsed_statement::prepared> create_table_statement::raw_statement::prepare(database& db) {
::shared_ptr<prepared_statement> create_table_statement::raw_statement::prepare(database& db) {
// Column family name
const sstring& cf_name = _cf_name->get_column_family();
std::regex name_regex("\\w+");
@@ -347,7 +348,7 @@ create_table_statement::raw_statement::raw_statement(::shared_ptr<cf_name> name,
}
}
return ::make_shared<parsed_statement::prepared>(stmt);
return ::make_shared<prepared>(stmt);
}
data_type create_table_statement::raw_statement::get_type_and_remove(column_map_type& columns, ::shared_ptr<column_identifier> t)


@@ -43,7 +43,7 @@
#include "cql3/statements/schema_altering_statement.hh"
#include "cql3/statements/cf_prop_defs.hh"
#include "cql3/statements/cf_statement.hh"
#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/cql3_type.hh"
#include "service/migration_manager.hh"
@@ -116,7 +116,7 @@ private:
void add_column_metadata_from_aliases(schema_builder& builder, std::vector<bytes> aliases, const std::vector<data_type>& types, column_kind kind);
};
class create_table_statement::raw_statement : public cf_statement {
class create_table_statement::raw_statement : public raw::cf_statement {
private:
using defs_type = std::unordered_map<::shared_ptr<column_identifier>,
::shared_ptr<cql3_type::raw>,


@@ -40,6 +40,7 @@
*/
#include "delete_statement.hh"
#include "raw/delete_statement.hh"
namespace cql3 {
@@ -76,11 +77,13 @@ void delete_statement::add_update_for_key(mutation& m, const exploded_clustering
}
}
::shared_ptr<modification_statement>
delete_statement::parsed::prepare_internal(database& db, schema_ptr schema, ::shared_ptr<variable_specifications> bound_names,
std::unique_ptr<attributes> attrs) {
namespace raw {
auto stmt = ::make_shared<delete_statement>(statement_type::DELETE, bound_names->size(), schema, std::move(attrs));
::shared_ptr<cql3::statements::modification_statement>
delete_statement::prepare_internal(database& db, schema_ptr schema, ::shared_ptr<variable_specifications> bound_names,
std::unique_ptr<attributes> attrs) {
using statement_type = cql3::statements::modification_statement::statement_type;
auto stmt = ::make_shared<cql3::statements::delete_statement>(statement_type::DELETE, bound_names->size(), schema, std::move(attrs));
for (auto&& deletion : _deletions) {
auto&& id = deletion->affected_column()->prepare_column_identifier(schema);
@@ -104,13 +107,13 @@ delete_statement::parsed::prepare_internal(database& db, schema_ptr schema, ::sh
return stmt;
}
delete_statement::parsed::parsed(::shared_ptr<cf_name> name,
delete_statement::delete_statement(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<operation::raw_deletion>> deletions,
std::vector<::shared_ptr<relation>> where_clause,
conditions_vector conditions,
bool if_exists)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, if_exists)
: raw::modification_statement(std::move(name), std::move(attrs), std::move(conditions), false, if_exists)
, _deletions(std::move(deletions))
, _where_clause(std::move(where_clause))
{ }
@@ -118,3 +121,5 @@ delete_statement::parsed::parsed(::shared_ptr<cf_name> name,
}
}
}


@@ -42,6 +42,7 @@
#pragma once
#include "cql3/statements/modification_statement.hh"
#include "cql3/statements/raw/modification_statement.hh"
#include "cql3/attributes.hh"
#include "cql3/operation.hh"
#include "database_fwd.hh"
@@ -79,22 +80,6 @@ public:
}
#endif
class parsed : public modification_statement::parsed {
private:
std::vector<::shared_ptr<operation::raw_deletion>> _deletions;
std::vector<::shared_ptr<relation>> _where_clause;
public:
parsed(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<operation::raw_deletion>> deletions,
std::vector<::shared_ptr<relation>> where_clause,
conditions_vector conditions,
bool if_exists);
protected:
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs);
};
};
}


@@ -79,6 +79,18 @@ lw_shared_ptr<keyspace_metadata> ks_prop_defs::as_ks_metadata(sstring ks_name) {
return keyspace_metadata::new_keyspace(ks_name, get_replication_strategy_class().value(), options, get_boolean(KW_DURABLE_WRITES, true));
}
lw_shared_ptr<keyspace_metadata> ks_prop_defs::as_ks_metadata_update(lw_shared_ptr<keyspace_metadata> old) {
auto options = get_replication_options();
options.erase(REPLICATION_STRATEGY_CLASS_KEY);
auto sc = get_replication_strategy_class();
if (!sc) {
sc = old->strategy_name();
options = old->strategy_options();
}
return keyspace_metadata::new_keyspace(old->name(), *sc, options, get_boolean(KW_DURABLE_WRITES, true));
}
}
}


@@ -66,6 +66,7 @@ public:
std::map<sstring, sstring> get_replication_options() const;
std::experimental::optional<sstring> get_replication_strategy_class() const;
lw_shared_ptr<keyspace_metadata> as_ks_metadata(sstring ks_name);
lw_shared_ptr<keyspace_metadata> as_ks_metadata_update(lw_shared_ptr<keyspace_metadata> old);
#if 0
public KSMetaData asKSMetadataUpdate(KSMetaData old) throws RequestValidationException


@@ -40,6 +40,8 @@
*/
#include "cql3/statements/modification_statement.hh"
#include "cql3/statements/raw/modification_statement.hh"
#include "cql3/statements/prepared_statement.hh"
#include "cql3/restrictions/single_column_restriction.hh"
#include "cql3/single_column_relation.hh"
#include "validation.hh"
@@ -111,11 +113,11 @@ uint32_t modification_statement::get_bound_terms() {
return _bound_terms;
}
sstring modification_statement::keyspace() const {
const sstring& modification_statement::keyspace() const {
return s->ks_name();
}
sstring modification_statement::column_family() const {
const sstring& modification_statement::column_family() const {
return s->cf_name();
}
@@ -146,10 +148,10 @@ future<> modification_statement::check_access(const service::client_state& state
}
future<std::vector<mutation>>
modification_statement::get_mutations(distributed<service::storage_proxy>& proxy, const query_options& options, bool local, int64_t now) {
modification_statement::get_mutations(distributed<service::storage_proxy>& proxy, const query_options& options, bool local, int64_t now, tracing::trace_state_ptr trace_state) {
auto keys = make_lw_shared(build_partition_keys(options));
auto prefix = make_lw_shared(create_exploded_clustering_prefix(options));
return make_update_parameters(proxy, keys, prefix, options, local, now).then(
return make_update_parameters(proxy, keys, prefix, options, local, now, std::move(trace_state)).then(
[this, keys, prefix, now] (auto params_ptr) {
std::vector<mutation> mutations;
mutations.reserve(keys->size());
@@ -169,8 +171,9 @@ modification_statement::make_update_parameters(
lw_shared_ptr<exploded_clustering_prefix> prefix,
const query_options& options,
bool local,
int64_t now) {
return read_required_rows(proxy, std::move(keys), std::move(prefix), local, options.get_consistency()).then(
int64_t now,
tracing::trace_state_ptr trace_state) {
return read_required_rows(proxy, std::move(keys), std::move(prefix), local, options.get_consistency(), std::move(trace_state)).then(
[this, &options, now] (auto rows) {
return make_ready_future<std::unique_ptr<update_parameters>>(
std::make_unique<update_parameters>(s, options,
@@ -253,7 +256,8 @@ modification_statement::read_required_rows(
lw_shared_ptr<std::vector<partition_key>> keys,
lw_shared_ptr<exploded_clustering_prefix> prefix,
bool local,
db::consistency_level cl) {
db::consistency_level cl,
tracing::trace_state_ptr trace_state) {
if (!requires_read()) {
return make_ready_future<update_parameters::prefetched_rows_type>(
update_parameters::prefetched_rows_type{});
@@ -289,7 +293,7 @@ modification_statement::read_required_rows(
}
query::read_command cmd(s->id(), s->version(), ps, std::numeric_limits<uint32_t>::max());
// FIXME: ignoring "local"
return proxy.local().query(s, make_lw_shared(std::move(cmd)), std::move(pr), cl).then([this, ps] (auto result) {
return proxy.local().query(s, make_lw_shared(std::move(cmd)), std::move(pr), cl, std::move(trace_state)).then([this, ps] (auto result) {
return query::result_view::do_with(*result, [&] (query::result_view v) {
auto prefetched_rows = update_parameters::prefetched_rows_type({update_parameters::prefetch_data(s)});
v.consume(ps, prefetch_data_builder(s, prefetched_rows.value(), ps));
@@ -462,11 +466,12 @@ modification_statement::execute_without_condition(distributed<service::storage_p
db::validate_for_write(s->ks_name(), cl);
}
return get_mutations(proxy, options, false, options.get_timestamp(qs)).then([cl, &proxy] (auto mutations) {
return get_mutations(proxy, options, false, options.get_timestamp(qs), qs.get_trace_state()).then([cl, &proxy, &qs] (auto mutations) {
if (mutations.empty()) {
return now();
}
return proxy.local().mutate_with_triggers(std::move(mutations), cl, false);
return proxy.local().mutate_with_triggers(std::move(mutations), cl, false, qs.get_trace_state());
});
}
@@ -503,7 +508,7 @@ modification_statement::execute_internal(distributed<service::storage_proxy>& pr
if (has_conditions()) {
throw exceptions::unsupported_operation_exception();
}
return get_mutations(proxy, options, true, options.get_timestamp(qs)).then(
return get_mutations(proxy, options, true, options.get_timestamp(qs), qs.get_trace_state()).then(
[&proxy] (auto mutations) {
return proxy.local().mutate_locally(std::move(mutations));
}).then(
@@ -560,21 +565,23 @@ modification_statement::process_where_clause(database& db, std::vector<relation_
}
}
::shared_ptr<parsed_statement::prepared>
modification_statement::parsed::prepare(database& db) {
namespace raw {
::shared_ptr<prepared_statement>
modification_statement::modification_statement::prepare(database& db) {
auto bound_names = get_bound_variables();
auto statement = prepare(db, bound_names);
return ::make_shared<parsed_statement::prepared>(std::move(statement), *bound_names);
return ::make_shared<prepared>(std::move(statement), *bound_names);
}
::shared_ptr<modification_statement>
modification_statement::parsed::prepare(database& db, ::shared_ptr<variable_specifications> bound_names) {
::shared_ptr<cql3::statements::modification_statement>
modification_statement::prepare(database& db, ::shared_ptr<variable_specifications> bound_names) {
schema_ptr schema = validation::validate_column_family(db, keyspace(), column_family());
auto prepared_attributes = _attrs->prepare(db, keyspace(), column_family());
prepared_attributes->collect_marker_specification(bound_names);
::shared_ptr<modification_statement> stmt = prepare_internal(db, schema, bound_names, std::move(prepared_attributes));
::shared_ptr<cql3::statements::modification_statement> stmt = prepare_internal(db, schema, bound_names, std::move(prepared_attributes));
if (_if_not_exists || _if_exists || !_conditions.empty()) {
if (stmt->is_counter()) {
@@ -616,6 +623,8 @@ modification_statement::parsed::prepare(database& db, ::shared_ptr<variable_spec
return stmt;
}
}
void
modification_statement::validate(distributed<service::storage_proxy>&, const service::client_state& state) {
if (has_conditions() && attrs->is_timestamp_set()) {
@@ -688,7 +697,9 @@ void modification_statement::validate_where_clause_for_conditions() {
// no-op by default
}
modification_statement::parsed::parsed(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists)
namespace raw {
modification_statement::modification_statement(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists)
: cf_statement{std::move(name)}
, _attrs{std::move(attrs)}
, _conditions{std::move(conditions)}
@@ -699,3 +710,5 @@ modification_statement::parsed::parsed(::shared_ptr<cf_name> name, ::shared_ptr<
}
}
}


@@ -42,7 +42,7 @@
#pragma once
#include "cql3/restrictions/restriction.hh"
#include "cql3/statements/cf_statement.hh"
#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/column_identifier.hh"
#include "cql3/update_parameters.hh"
#include "cql3/column_condition.hh"
@@ -66,10 +66,13 @@ namespace cql3 {
namespace statements {
namespace raw { class modification_statement; }
/*
* Abstract parent class of individual modifications, i.e. INSERT, UPDATE and DELETE.
*/
class modification_statement : public cql_statement {
class modification_statement : public cql_statement_no_metadata {
private:
static thread_local const ::shared_ptr<column_identifier> CAS_RESULT_COLUMN;
@@ -117,9 +120,9 @@ public:
virtual uint32_t get_bound_terms() override;
virtual sstring keyspace() const;
virtual const sstring& keyspace() const;
virtual sstring column_family() const;
virtual const sstring& column_family() const;
virtual bool is_counter() const;
@@ -184,7 +187,8 @@ protected:
lw_shared_ptr<std::vector<partition_key>> keys,
lw_shared_ptr<exploded_clustering_prefix> prefix,
bool local,
db::consistency_level cl);
db::consistency_level cl,
tracing::trace_state_ptr trace_state);
public:
bool has_conditions();
@@ -327,7 +331,7 @@ public:
* @return vector of the mutations
* @throws invalid_request_exception on invalid requests
*/
future<std::vector<mutation>> get_mutations(distributed<service::storage_proxy>& proxy, const query_options& options, bool local, int64_t now);
future<std::vector<mutation>> get_mutations(distributed<service::storage_proxy>& proxy, const query_options& options, bool local, int64_t now, tracing::trace_state_ptr trace_state);
public:
future<std::unique_ptr<update_parameters>> make_update_parameters(
@@ -336,7 +340,8 @@ public:
lw_shared_ptr<exploded_clustering_prefix> prefix,
const query_options& options,
bool local,
int64_t now);
int64_t now,
tracing::trace_state_ptr trace_state);
protected:
/**
@@ -345,27 +350,7 @@ protected:
* @throws InvalidRequestException
*/
virtual void validate_where_clause_for_conditions();
public:
class parsed : public cf_statement {
public:
using conditions_vector = std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<column_condition::raw>>>;
protected:
const ::shared_ptr<attributes::raw> _attrs;
const std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<column_condition::raw>>> _conditions;
private:
const bool _if_not_exists;
const bool _if_exists;
protected:
parsed(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists);
public:
virtual ::shared_ptr<parsed_statement::prepared> prepare(database& db) override;
::shared_ptr<modification_statement> prepare(database& db, ::shared_ptr<variable_specifications> bound_names);;
protected:
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs) = 0;
};
friend class raw::modification_statement;
};
std::ostream& operator<<(std::ostream& out, modification_statement::statement_type t);


@@ -39,12 +39,16 @@
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "cql3/statements/parsed_statement.hh"
#include "raw/parsed_statement.hh"
#include "prepared_statement.hh"
namespace cql3 {
namespace statements {
namespace raw {
parsed_statement::~parsed_statement()
{ }
@@ -61,21 +65,23 @@ bool parsed_statement::uses_function(const sstring& ks_name, const sstring& func
return false;
}
parsed_statement::prepared::prepared(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_)
}
prepared_statement::prepared_statement(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_)
: statement(std::move(statement_))
, bound_names(std::move(bound_names_))
{ }
parsed_statement::prepared::prepared(::shared_ptr<cql_statement> statement_, const variable_specifications& names)
: prepared(statement_, names.get_specifications())
prepared_statement::prepared_statement(::shared_ptr<cql_statement> statement_, const variable_specifications& names)
: prepared_statement(statement_, names.get_specifications())
{ }
parsed_statement::prepared::prepared(::shared_ptr<cql_statement> statement_, variable_specifications&& names)
: prepared(statement_, std::move(names).get_specifications())
prepared_statement::prepared_statement(::shared_ptr<cql_statement> statement_, variable_specifications&& names)
: prepared_statement(statement_, std::move(names).get_specifications())
{ }
parsed_statement::prepared::prepared(::shared_ptr<cql_statement>&& statement_)
: prepared(statement_, std::vector<::shared_ptr<column_specification>>())
prepared_statement::prepared_statement(::shared_ptr<cql_statement>&& statement_)
: prepared_statement(statement_, std::vector<::shared_ptr<column_specification>>())
{ }
}


@@ -0,0 +1,75 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2016 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "cql3/variable_specifications.hh"
#include "cql3/column_specification.hh"
#include "cql3/column_identifier.hh"
#include "cql3/cql_statement.hh"
#include "core/shared_ptr.hh"
#include <experimental/optional>
#include <vector>
namespace cql3 {
namespace statements {
class prepared_statement {
public:
sstring raw_cql_statement;
const ::shared_ptr<cql_statement> statement;
const std::vector<::shared_ptr<column_specification>> bound_names;
prepared_statement(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_);
prepared_statement(::shared_ptr<cql_statement> statement_, const variable_specifications& names);
prepared_statement(::shared_ptr<cql_statement> statement_, variable_specifications&& names);
prepared_statement(::shared_ptr<cql_statement>&& statement_);
};
}
}


@@ -181,6 +181,21 @@ long property_definitions::to_long(sstring key, std::experimental::optional<sstr
}
}
void property_definitions::remove_from_map_if_exists(const sstring& name, const sstring& key)
{
auto it = _properties.find(name);
if (it == _properties.end()) {
return;
}
try {
auto map = boost::any_cast<std::map<sstring, sstring>>(it->second);
map.erase(key);
_properties[name] = map;
} catch (const boost::bad_any_cast& e) {
throw exceptions::syntax_exception(sprint("Invalid value for property '%s'. It should be a map.", name));
}
}
}
}


@@ -79,6 +79,7 @@ protected:
std::experimental::optional<std::map<sstring, sstring>> get_map(const sstring& name) const;
void remove_from_map_if_exists(const sstring& name, const sstring& key);
public:
bool has_property(const sstring& name) const;


@@ -0,0 +1,93 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Modified by ScyllaDB
* Copyright (C) 2015 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "cql3/cql_statement.hh"
#include "modification_statement.hh"
#include "service/storage_proxy.hh"
#include "transport/messages/result_message.hh"
#include "timestamp.hh"
#include "log.hh"
#include "to_string.hh"
#include <boost/algorithm/cxx11/any_of.hpp>
#include <boost/algorithm/cxx11/all_of.hpp>
#include <boost/range/adaptor/transformed.hpp>
#include <boost/range/adaptor/uniqued.hpp>
#include <boost/iterator/counting_iterator.hpp>
namespace cql3 {
namespace statements {
namespace raw {
class batch_statement : public raw::cf_statement {
public:
enum class type {
LOGGED, UNLOGGED, COUNTER
};
private:
type _type;
shared_ptr<attributes::raw> _attrs;
std::vector<shared_ptr<raw::modification_statement>> _parsed_statements;
public:
batch_statement(
type type_,
shared_ptr<attributes::raw> attrs,
std::vector<shared_ptr<raw::modification_statement>> parsed_statements)
: cf_statement(nullptr)
, _type(type_)
, _attrs(std::move(attrs))
, _parsed_statements(std::move(parsed_statements)) {
}
virtual void prepare_keyspace(const service::client_state& state) override {
for (auto&& s : _parsed_statements) {
s->prepare_keyspace(state);
}
}
virtual shared_ptr<prepared> prepare(database& db) override;
};
}
}
}

View File

@@ -41,15 +41,20 @@
#pragma once
#include "cql3/statements/parsed_statement.hh"
#include "cql3/cf_name.hh"
#include <experimental/optional>
#include "parsed_statement.hh"
namespace service { class client_state; }
namespace cql3 {
namespace statements {
namespace raw {
/**
* Abstract class for statements that apply on a given column family.
*/
@@ -72,3 +77,5 @@ public:
}
}
}

View File

@@ -0,0 +1,76 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "cql3/statements/modification_statement.hh"
#include "cql3/statements/raw/modification_statement.hh"
#include "cql3/attributes.hh"
#include "cql3/operation.hh"
#include "database_fwd.hh"
namespace cql3 {
namespace statements {
namespace raw {
class delete_statement : public modification_statement {
private:
std::vector<::shared_ptr<operation::raw_deletion>> _deletions;
std::vector<::shared_ptr<relation>> _where_clause;
public:
delete_statement(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<operation::raw_deletion>> deletions,
std::vector<::shared_ptr<relation>> where_clause,
conditions_vector conditions,
bool if_exists);
protected:
virtual ::shared_ptr<cql3::statements::modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs);
};
}
}
}

View File

@@ -0,0 +1,88 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "cql3/statements/modification_statement.hh"
#include "cql3/statements/raw/modification_statement.hh"
#include "cql3/column_identifier.hh"
#include "cql3/term.hh"
#include "database_fwd.hh"
#include <vector>
#include "unimplemented.hh"
namespace cql3 {
namespace statements {
namespace raw {
class insert_statement : public raw::modification_statement {
private:
const std::vector<::shared_ptr<column_identifier::raw>> _column_names;
const std::vector<::shared_ptr<term::raw>> _column_values;
public:
/**
 * A parsed <code>INSERT</code> statement.
 *
 * @param name column family being operated on
 * @param attrs additional attributes for the statement (timestamp, time-to-live)
 * @param column_names list of column names
 * @param column_values list of column values (corresponds to column_names)
 * @param if_not_exists whether IF NOT EXISTS was specified
 */
insert_statement(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<column_identifier::raw>> column_names,
std::vector<::shared_ptr<term::raw>> column_values,
bool if_not_exists);
virtual ::shared_ptr<cql3::statements::modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs) override;
};
}
}
}

View File

@@ -0,0 +1,97 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "cql3/restrictions/restriction.hh"
#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/column_identifier.hh"
#include "cql3/update_parameters.hh"
#include "cql3/column_condition.hh"
#include "cql3/cql_statement.hh"
#include "cql3/attributes.hh"
#include "cql3/operation.hh"
#include "cql3/relation.hh"
#include "db/consistency_level.hh"
#include "core/shared_ptr.hh"
#include "core/future-util.hh"
#include "unimplemented.hh"
#include "validation.hh"
#include "service/storage_proxy.hh"
#include <memory>
namespace cql3 {
namespace statements {
class modification_statement;
namespace raw {
class modification_statement : public cf_statement {
public:
using conditions_vector = std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<column_condition::raw>>>;
protected:
const ::shared_ptr<attributes::raw> _attrs;
const std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<column_condition::raw>>> _conditions;
private:
const bool _if_not_exists;
const bool _if_exists;
protected:
modification_statement(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists);
public:
virtual ::shared_ptr<prepared> prepare(database& db) override;
::shared_ptr<cql3::statements::modification_statement> prepare(database& db, ::shared_ptr<variable_specifications> bound_names);
protected:
virtual ::shared_ptr<cql3::statements::modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs) = 0;
};
}
}
}

View File

@@ -17,7 +17,7 @@
*/
/*
* Copyright (C) 2014 ScyllaDB
* Copyright (C) 2016 ScyllaDB
*
* Modified by ScyllaDB
*/
@@ -44,9 +44,8 @@
#include "cql3/variable_specifications.hh"
#include "cql3/column_specification.hh"
#include "cql3/column_identifier.hh"
#include "cql3/cql_statement.hh"
#include "core/shared_ptr.hh"
#include <seastar/core/shared_ptr.hh>
#include <experimental/optional>
#include <vector>
@@ -55,31 +54,22 @@ namespace cql3 {
namespace statements {
class prepared_statement;
namespace raw {
class parsed_statement {
private:
::shared_ptr<variable_specifications> _variables;
public:
using prepared = statements::prepared_statement;
virtual ~parsed_statement();
shared_ptr<variable_specifications> get_bound_variables();
void set_bound_variables(const std::vector<::shared_ptr<column_identifier>>& bound_names);
class prepared {
public:
const ::shared_ptr<cql_statement> statement;
const std::vector<::shared_ptr<column_specification>> bound_names;
prepared(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_);
prepared(::shared_ptr<cql_statement> statement_, const variable_specifications& names);
prepared(::shared_ptr<cql_statement> statement_, variable_specifications&& names);
prepared(::shared_ptr<cql_statement>&& statement_);
};
virtual ::shared_ptr<prepared> prepare(database& db) = 0;
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const;
@@ -88,3 +78,5 @@ public:
}
}
}

View File

@@ -0,0 +1,153 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/cql_statement.hh"
#include "cql3/selection/selection.hh"
#include "cql3/selection/raw_selector.hh"
#include "cql3/restrictions/statement_restrictions.hh"
#include "cql3/result_set.hh"
#include "exceptions/unrecognized_entity_exception.hh"
#include "service/client_state.hh"
#include "core/shared_ptr.hh"
#include "core/distributed.hh"
#include "validation.hh"
namespace cql3 {
namespace statements {
namespace raw {
/**
* Encapsulates a completely parsed SELECT query, including the target
* column family, expression, result count, and ordering clause.
*
*/
class select_statement : public cf_statement
{
public:
class parameters final {
public:
using orderings_type = std::vector<std::pair<shared_ptr<column_identifier::raw>, bool>>;
private:
const orderings_type _orderings;
const bool _is_distinct;
const bool _allow_filtering;
public:
parameters();
parameters(orderings_type orderings,
bool is_distinct,
bool allow_filtering);
bool is_distinct();
bool allow_filtering();
orderings_type const& orderings();
};
template<typename T>
using compare_fn = std::function<bool(const T&, const T&)>;
using result_row_type = std::vector<bytes_opt>;
using ordering_comparator_type = compare_fn<result_row_type>;
private:
::shared_ptr<parameters> _parameters;
std::vector<::shared_ptr<selection::raw_selector>> _select_clause;
std::vector<::shared_ptr<relation>> _where_clause;
::shared_ptr<term::raw> _limit;
public:
select_statement(::shared_ptr<cf_name> cf_name,
::shared_ptr<parameters> parameters,
std::vector<::shared_ptr<selection::raw_selector>> select_clause,
std::vector<::shared_ptr<relation>> where_clause,
::shared_ptr<term::raw> limit);
virtual ::shared_ptr<prepared> prepare(database& db) override;
private:
::shared_ptr<restrictions::statement_restrictions> prepare_restrictions(
database& db,
schema_ptr schema,
::shared_ptr<variable_specifications> bound_names,
::shared_ptr<selection::selection> selection);
/** Returns a ::shared_ptr<term> for the limit or null if no limit is set */
::shared_ptr<term> prepare_limit(database& db, ::shared_ptr<variable_specifications> bound_names);
static void verify_ordering_is_allowed(::shared_ptr<restrictions::statement_restrictions> restrictions);
static void validate_distinct_selection(schema_ptr schema,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions);
void handle_unrecognized_ordering_column(::shared_ptr<column_identifier> column);
select_statement::ordering_comparator_type get_ordering_comparator(schema_ptr schema,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions);
bool is_reversed(schema_ptr schema);
/** If ALLOW FILTERING was not specified, this verifies that it is not needed */
void check_needs_filtering(::shared_ptr<restrictions::statement_restrictions> restrictions);
bool contains_alias(::shared_ptr<column_identifier> name);
::shared_ptr<column_specification> limit_receiver();
#if 0
public:
virtual sstring to_string() override {
return sstring("raw_statement(")
+ "name=" + cf_name->to_string()
+ ", selectClause=" + to_string(_select_clause)
+ ", whereClause=" + to_string(_where_clause)
+ ", isDistinct=" + to_string(_parameters->is_distinct())
+ ")";
}
#endif
};
}
}
}

View File

@@ -0,0 +1,91 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2015 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "cql3/statements/modification_statement.hh"
#include "cql3/statements/raw/modification_statement.hh"
#include "cql3/column_identifier.hh"
#include "cql3/term.hh"
#include "database_fwd.hh"
#include <vector>
#include "unimplemented.hh"
namespace cql3 {
namespace statements {
class update_statement;
namespace raw {
class update_statement : public raw::modification_statement {
private:
// Provided for an UPDATE
std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<operation::raw_update>>> _updates;
std::vector<relation_ptr> _where_clause;
public:
/**
 * Creates a new UPDATE statement from a column family name, attributes,
 * column operations, and where clause.
 *
 * @param name column family being operated on
 * @param attrs additional attributes for the statement (timestamp, time-to-live)
 * @param updates the column operations to perform
 * @param where_clause the where clause
 * @param conditions the IF conditions, if any
 */
update_statement(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<operation::raw_update>>> updates,
std::vector<relation_ptr> where_clause,
conditions_vector conditions);
protected:
virtual ::shared_ptr<cql3::statements::modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs);
};
}
}
}

View File

@@ -0,0 +1,68 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2014 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "cql3/statements/raw/parsed_statement.hh"
namespace cql3 {
namespace statements {
class prepared_statement;
namespace raw {
class use_statement : public parsed_statement {
private:
const sstring _keyspace;
public:
use_statement(sstring keyspace);
virtual ::shared_ptr<prepared> prepare(database& db) override;
};
}
}
}

View File

@@ -86,9 +86,9 @@ void schema_altering_statement::prepare_keyspace(const service::client_state& st
}
}
::shared_ptr<parsed_statement::prepared> schema_altering_statement::prepare(database& db)
::shared_ptr<prepared_statement> schema_altering_statement::prepare(database& db)
{
return ::make_shared<parsed_statement::prepared>(this->shared_from_this());
return ::make_shared<prepared>(this->shared_from_this());
}
future<::shared_ptr<messages::result_message>>

View File

@@ -44,7 +44,7 @@
#include "transport/messages_fwd.hh"
#include "transport/event.hh"
#include "cql3/statements/cf_statement.hh"
#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/cql_statement.hh"
#include "core/shared_ptr.hh"
@@ -60,7 +60,7 @@ namespace messages = transport::messages;
/**
* Abstract class for statements that alter the schema.
*/
class schema_altering_statement : public cf_statement, public cql_statement, public ::enable_shared_from_this<schema_altering_statement> {
class schema_altering_statement : public raw::cf_statement, public cql_statement_no_metadata, public ::enable_shared_from_this<schema_altering_statement> {
private:
const bool _is_column_family_level;

View File

@@ -40,6 +40,7 @@
*/
#include "cql3/statements/select_statement.hh"
#include "cql3/statements/raw/select_statement.hh"
#include "transport/messages/result_message.hh"
#include "cql3/selection/selection.hh"
@@ -60,8 +61,8 @@ select_statement::parameters::parameters()
{ }
select_statement::parameters::parameters(orderings_type orderings,
bool is_distinct,
bool allow_filtering)
bool is_distinct,
bool allow_filtering)
: _orderings{std::move(orderings)}
, _is_distinct{is_distinct}
, _allow_filtering{allow_filtering}
@@ -80,21 +81,21 @@ select_statement::parameters::orderings_type const& select_statement::parameters
}
select_statement::select_statement(schema_ptr schema,
uint32_t bound_terms,
::shared_ptr<parameters> parameters,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions,
bool is_reversed,
ordering_comparator_type ordering_comparator,
::shared_ptr<term> limit)
: _schema(schema)
, _bound_terms(bound_terms)
, _parameters(std::move(parameters))
, _selection(std::move(selection))
, _restrictions(std::move(restrictions))
, _is_reversed(is_reversed)
, _limit(std::move(limit))
, _ordering_comparator(std::move(ordering_comparator))
uint32_t bound_terms,
::shared_ptr<parameters> parameters,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions,
bool is_reversed,
ordering_comparator_type ordering_comparator,
::shared_ptr<term> limit)
: _schema(schema)
, _bound_terms(bound_terms)
, _parameters(std::move(parameters))
, _selection(std::move(selection))
, _restrictions(std::move(restrictions))
, _is_reversed(is_reversed)
, _limit(std::move(limit))
, _ordering_comparator(std::move(ordering_comparator))
{
_opts = _selection->get_query_options();
}
@@ -117,7 +118,7 @@ select_statement::for_selection(schema_ptr schema, ::shared_ptr<selection::selec
::shared_ptr<term>{});
}
::shared_ptr<cql3::metadata> select_statement::get_result_metadata() const {
::shared_ptr<const cql3::metadata> select_statement::get_result_metadata() const {
// FIXME: COUNT needs special result metadata handling.
return _selection->get_result_metadata();
}
@@ -151,7 +152,8 @@ const sstring& select_statement::column_family() const {
}
query::partition_slice
select_statement::make_partition_slice(const query_options& options) {
select_statement::make_partition_slice(const query_options& options)
{
std::vector<column_id> static_columns;
std::vector<column_id> regular_columns;
@@ -212,7 +214,10 @@ bool select_statement::needs_post_query_ordering() const {
}
future<shared_ptr<transport::messages::result_message>>
select_statement::execute(distributed<service::storage_proxy>& proxy, service::query_state& state, const query_options& options) {
select_statement::execute(distributed<service::storage_proxy>& proxy,
service::query_state& state,
const query_options& options)
{
auto cl = options.get_consistency();
validate_for_read(_schema->ks_name(), cl);
@@ -221,7 +226,7 @@ select_statement::execute(distributed<service::storage_proxy>& proxy, service::q
auto now = db_clock::now();
auto command = ::make_lw_shared<query::read_command>(_schema->id(), _schema->version(),
make_partition_slice(options), limit, to_gc_clock(now));
make_partition_slice(options), limit, to_gc_clock(now), tracing::make_trace_info(state.get_trace_state()), query::max_partitions, options.get_timestamp(state));
int32_t page_size = options.get_page_size();
@@ -281,9 +286,13 @@ select_statement::execute(distributed<service::storage_proxy>& proxy, service::q
}
future<shared_ptr<transport::messages::result_message>>
select_statement::execute(distributed<service::storage_proxy>& proxy, lw_shared_ptr<query::read_command> cmd, std::vector<query::partition_range>&& partition_ranges,
service::query_state& state, const query_options& options, db_clock::time_point now) {
select_statement::execute(distributed<service::storage_proxy>& proxy,
lw_shared_ptr<query::read_command> cmd,
std::vector<query::partition_range>&& partition_ranges,
service::query_state& state,
const query_options& options,
db_clock::time_point now)
{
// If this is a query with IN on partition key, ORDER BY clause and LIMIT
// is specified we need to get "limit" rows from each partition since there
// is no way to tell which of these rows belong to the query result before
@@ -294,26 +303,28 @@ select_statement::execute(distributed<service::storage_proxy>& proxy, lw_shared_
return map_reduce(prs.begin(), prs.end(), [this, &proxy, &state, &options, cmd] (auto pr) {
std::vector<query::partition_range> prange { pr };
auto command = ::make_lw_shared<query::read_command>(*cmd);
return proxy.local().query(_schema, command, std::move(prange), options.get_consistency());
return proxy.local().query(_schema, command, std::move(prange), options.get_consistency(), state.get_trace_state());
}, std::move(merger));
}).then([this, &options, now, cmd] (auto result) {
return this->process_results(std::move(result), cmd, options, now);
});
} else {
return proxy.local().query(_schema, cmd, std::move(partition_ranges), options.get_consistency())
return proxy.local().query(_schema, cmd, std::move(partition_ranges), options.get_consistency(), state.get_trace_state())
.then([this, &options, now, cmd] (auto result) {
return this->process_results(std::move(result), cmd, options, now);
});
}
}
future<::shared_ptr<transport::messages::result_message>>
select_statement::execute_internal(distributed<service::storage_proxy>& proxy, service::query_state& state, const query_options& options) {
select_statement::execute_internal(distributed<service::storage_proxy>& proxy,
service::query_state& state,
const query_options& options)
{
int32_t limit = get_limit(options);
auto now = db_clock::now();
auto command = ::make_lw_shared<query::read_command>(_schema->id(), _schema->version(),
make_partition_slice(options), limit);
make_partition_slice(options), limit, to_gc_clock(now), std::experimental::nullopt, query::max_partitions, options.get_timestamp(state));
auto partition_ranges = _restrictions->get_partition_key_ranges(options);
if (needs_post_query_ordering() && _limit) {
@@ -322,23 +333,24 @@ select_statement::execute_internal(distributed<service::storage_proxy>& proxy, s
return map_reduce(prs.begin(), prs.end(), [this, &proxy, &state, command] (auto pr) {
std::vector<query::partition_range> prange { pr };
auto cmd = ::make_lw_shared<query::read_command>(*command);
return proxy.local().query(_schema, cmd, std::move(prange), db::consistency_level::ONE);
return proxy.local().query(_schema, cmd, std::move(prange), db::consistency_level::ONE, state.get_trace_state());
}, std::move(merger));
}).then([command, this, &options, now] (auto result) {
return this->process_results(std::move(result), command, options, now);
}).finally([command] { });
} else {
return proxy.local().query(_schema, command, std::move(partition_ranges), db::consistency_level::ONE).then([command, this, &options, now] (auto result) {
return proxy.local().query(_schema, command, std::move(partition_ranges), db::consistency_level::ONE, state.get_trace_state()).then([command, this, &options, now] (auto result) {
return this->process_results(std::move(result), command, options, now);
}).finally([command] {});
}
}
shared_ptr<transport::messages::result_message> select_statement::process_results(
foreign_ptr<lw_shared_ptr<query::result>> results,
lw_shared_ptr<query::read_command> cmd, const query_options& options,
db_clock::time_point now) {
shared_ptr<transport::messages::result_message>
select_statement::process_results(foreign_ptr<lw_shared_ptr<query::result>> results,
lw_shared_ptr<query::read_command> cmd,
const query_options& options,
db_clock::time_point now)
{
cql3::selection::result_set_builder builder(*_selection, now,
options.get_cql_serialization_format());
query::result_view::consume(*results, cmd->slice,
@@ -356,11 +368,13 @@ shared_ptr<transport::messages::result_message> select_statement::process_result
return ::make_shared<transport::messages::result_message::rows>(std::move(rs));
}
select_statement::raw_statement::raw_statement(::shared_ptr<cf_name> cf_name,
::shared_ptr<parameters> parameters,
std::vector<::shared_ptr<selection::raw_selector>> select_clause,
std::vector<::shared_ptr<relation>> where_clause,
::shared_ptr<term::raw> limit)
namespace raw {
select_statement::select_statement(::shared_ptr<cf_name> cf_name,
::shared_ptr<parameters> parameters,
std::vector<::shared_ptr<selection::raw_selector>> select_clause,
std::vector<::shared_ptr<relation>> where_clause,
::shared_ptr<term::raw> limit)
: cf_statement(std::move(cf_name))
, _parameters(std::move(parameters))
, _select_clause(std::move(select_clause))
@@ -368,8 +382,7 @@ select_statement::raw_statement::raw_statement(::shared_ptr<cf_name> cf_name,
, _limit(std::move(limit))
{ }
::shared_ptr<parsed_statement::prepared>
select_statement::raw_statement::prepare(database& db) {
::shared_ptr<prepared_statement> select_statement::prepare(database& db) {
schema_ptr schema = validation::validate_column_family(db, keyspace(), column_family());
auto bound_names = get_bound_variables();
@@ -394,7 +407,7 @@ select_statement::raw_statement::prepare(database& db) {
check_needs_filtering(restrictions);
auto stmt = ::make_shared<select_statement>(schema,
auto stmt = ::make_shared<cql3::statements::select_statement>(schema,
bound_names->size(),
_parameters,
std::move(selection),
@@ -403,13 +416,14 @@ select_statement::raw_statement::prepare(database& db) {
std::move(ordering_comparator),
prepare_limit(db, bound_names));
return ::make_shared<parsed_statement::prepared>(std::move(stmt), std::move(*bound_names));
return ::make_shared<prepared>(std::move(stmt), std::move(*bound_names));
}
::shared_ptr<restrictions::statement_restrictions>
select_statement::raw_statement::prepare_restrictions(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names,
::shared_ptr<selection::selection> selection)
select_statement::prepare_restrictions(database& db,
schema_ptr schema,
::shared_ptr<variable_specifications> bound_names,
::shared_ptr<selection::selection> selection)
{
try {
return ::make_shared<restrictions::statement_restrictions>(db, schema, std::move(_where_clause), bound_names,
@@ -424,7 +438,8 @@ select_statement::raw_statement::prepare_restrictions(database& db, schema_ptr s
/** Returns a ::shared_ptr<term> for the limit or null if no limit is set */
::shared_ptr<term>
select_statement::raw_statement::prepare_limit(database& db, ::shared_ptr<variable_specifications> bound_names) {
select_statement::prepare_limit(database& db, ::shared_ptr<variable_specifications> bound_names)
{
if (!_limit) {
return {};
}
@@ -434,8 +449,7 @@ select_statement::raw_statement::prepare_limit(database& db, ::shared_ptr<variab
return prep_limit;
}
void select_statement::raw_statement::verify_ordering_is_allowed(
::shared_ptr<restrictions::statement_restrictions> restrictions)
void select_statement::verify_ordering_is_allowed(::shared_ptr<restrictions::statement_restrictions> restrictions)
{
if (restrictions->uses_secondary_indexing()) {
throw exceptions::invalid_request_exception("ORDER BY with 2ndary indexes is not supported.");
@@ -445,9 +459,9 @@ void select_statement::raw_statement::verify_ordering_is_allowed(
}
}
void select_statement::raw_statement::validate_distinct_selection(schema_ptr schema,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions)
void select_statement::validate_distinct_selection(schema_ptr schema,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions)
{
for (auto&& def : selection->get_columns()) {
if (!def->is_partition_key() && !def->is_static()) {
@@ -471,8 +485,7 @@ void select_statement::raw_statement::validate_distinct_selection(schema_ptr sch
}
}
void select_statement::raw_statement::handle_unrecognized_ordering_column(
::shared_ptr<column_identifier> column)
void select_statement::handle_unrecognized_ordering_column(::shared_ptr<column_identifier> column)
{
if (contains_alias(column)) {
throw exceptions::invalid_request_exception(sprint("Aliases are not allowed in order by clause ('%s')", *column));
@@ -481,9 +494,9 @@ void select_statement::raw_statement::handle_unrecognized_ordering_column(
}
select_statement::ordering_comparator_type
select_statement::raw_statement::get_ordering_comparator(schema_ptr schema,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions)
select_statement::get_ordering_comparator(schema_ptr schema,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions)
{
if (!restrictions->key_is_in_relation()) {
return {};
@@ -530,8 +543,7 @@ select_statement::raw_statement::get_ordering_comparator(schema_ptr schema,
};
}
bool select_statement::raw_statement::is_reversed(schema_ptr schema) {
bool select_statement::is_reversed(schema_ptr schema) {
assert(_parameters->orderings().size() > 0);
parameters::orderings_type::size_type i = 0;
bool is_reversed_ = false;
@@ -576,8 +588,7 @@ bool select_statement::raw_statement::is_reversed(schema_ptr schema) {
}
/** If ALLOW FILTERING was not specified, this verifies that it is not needed */
void select_statement::raw_statement::check_needs_filtering(
::shared_ptr<restrictions::statement_restrictions> restrictions)
void select_statement::check_needs_filtering(::shared_ptr<restrictions::statement_restrictions> restrictions)
{
// non-key-range non-indexed queries cannot involve filtering underneath
if (!_parameters->allow_filtering() && (restrictions->is_key_range() || restrictions->uses_secondary_indexing())) {
@@ -593,16 +604,19 @@ void select_statement::raw_statement::check_needs_filtering(
}
}
bool select_statement::raw_statement::contains_alias(::shared_ptr<column_identifier> name) {
bool select_statement::contains_alias(::shared_ptr<column_identifier> name) {
return std::any_of(_select_clause.begin(), _select_clause.end(), [name] (auto raw) {
return raw->alias && *name == *raw->alias;
});
}
::shared_ptr<column_specification> select_statement::raw_statement::limit_receiver() {
::shared_ptr<column_specification> select_statement::limit_receiver() {
return ::make_shared<column_specification>(keyspace(), column_family(), ::make_shared<column_identifier>("[limit]", true),
int32_type);
}
}
}
}


@@ -41,7 +41,8 @@
#pragma once
#include "cql3/statements/cf_statement.hh"
#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/statements/raw/select_statement.hh"
#include "cql3/cql_statement.hh"
#include "cql3/selection/selection.hh"
#include "cql3/selection/raw_selector.hh"
@@ -64,22 +65,7 @@ namespace statements {
*/
class select_statement : public cql_statement {
public:
class parameters final {
public:
using orderings_type = std::vector<std::pair<shared_ptr<column_identifier::raw>, bool>>;
private:
const orderings_type _orderings;
const bool _is_distinct;
const bool _allow_filtering;
public:
parameters();
parameters(orderings_type orderings,
bool is_distinct,
bool allow_filtering);
bool is_distinct();
bool allow_filtering();
orderings_type const& orderings();
};
using parameters = raw::select_statement::parameters;
private:
static constexpr int DEFAULT_COUNT_PAGE_SIZE = 10000;
static thread_local const ::shared_ptr<parameters> _default_parameters;
@@ -92,10 +78,10 @@ private:
::shared_ptr<term> _limit;
template<typename T>
using compare_fn = std::function<bool(const T&, const T&)>;
using compare_fn = raw::select_statement::compare_fn<T>;
using result_row_type = std::vector<bytes_opt>;
using ordering_comparator_type = compare_fn<result_row_type>;
using result_row_type = raw::select_statement::result_row_type;
using ordering_comparator_type = raw::select_statement::ordering_comparator_type;
/**
* The comparator used to orders results when multiple keys are selected (using IN).
@@ -121,7 +107,7 @@ public:
static ::shared_ptr<select_statement> for_selection(
schema_ptr schema, ::shared_ptr<selection::selection> selection);
::shared_ptr<cql3::metadata> get_result_metadata() const;
virtual ::shared_ptr<const cql3::metadata> get_result_metadata() const override;
virtual uint32_t get_bound_terms() override;
virtual future<> check_access(const service::client_state& state) override;
virtual void validate(distributed<service::storage_proxy>&, const service::client_state& state) override;
@@ -430,69 +416,6 @@ private:
result.add(row.getColumn(def.name));
}
#endif
public:
class raw_statement;
};
class select_statement::raw_statement : public cf_statement
{
private:
::shared_ptr<parameters> _parameters;
std::vector<::shared_ptr<selection::raw_selector>> _select_clause;
std::vector<::shared_ptr<relation>> _where_clause;
::shared_ptr<term::raw> _limit;
public:
raw_statement(::shared_ptr<cf_name> cf_name,
::shared_ptr<parameters> parameters,
std::vector<::shared_ptr<selection::raw_selector>> select_clause,
std::vector<::shared_ptr<relation>> where_clause,
::shared_ptr<term::raw> limit);
virtual ::shared_ptr<prepared> prepare(database& db) override;
private:
::shared_ptr<restrictions::statement_restrictions> prepare_restrictions(
database& db,
schema_ptr schema,
::shared_ptr<variable_specifications> bound_names,
::shared_ptr<selection::selection> selection);
/** Returns a ::shared_ptr<term> for the limit or null if no limit is set */
::shared_ptr<term> prepare_limit(database& db, ::shared_ptr<variable_specifications> bound_names);
static void verify_ordering_is_allowed(::shared_ptr<restrictions::statement_restrictions> restrictions);
static void validate_distinct_selection(schema_ptr schema,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions);
void handle_unrecognized_ordering_column(::shared_ptr<column_identifier> column);
select_statement::ordering_comparator_type get_ordering_comparator(schema_ptr schema,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions);
bool is_reversed(schema_ptr schema);
/** If ALLOW FILTERING was not specified, this verifies that it is not needed */
void check_needs_filtering(::shared_ptr<restrictions::statement_restrictions> restrictions);
bool contains_alias(::shared_ptr<column_identifier> name);
::shared_ptr<column_specification> limit_receiver();
#if 0
public:
virtual sstring to_string() override {
return sstring("raw_statement(")
+ "name=" + cf_name->to_string()
+ ", selectClause=" + to_string(_select_clause)
+ ", whereClause=" + to_string(_where_clause)
+ ", isDistinct=" + to_string(_parameters->is_distinct())
+ ")";
}
};
#endif
};
}


@@ -40,6 +40,7 @@
*/
#include "cql3/statements/truncate_statement.hh"
#include "cql3/statements/prepared_statement.hh"
#include "cql3/cql_statement.hh"
#include <experimental/optional>
@@ -58,9 +59,9 @@ uint32_t truncate_statement::get_bound_terms()
return 0;
}
::shared_ptr<parsed_statement::prepared> truncate_statement::prepare(database& db)
::shared_ptr<prepared_statement> truncate_statement::prepare(database& db)
{
return ::make_shared<parsed_statement::prepared>(this->shared_from_this());
return ::make_shared<prepared>(this->shared_from_this());
}
bool truncate_statement::uses_function(const sstring& ks_name, const sstring& function_name) const


@@ -41,7 +41,7 @@
#pragma once
#include "cql3/statements/cf_statement.hh"
#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/cql_statement.hh"
#include <experimental/optional>
@@ -50,7 +50,7 @@ namespace cql3 {
namespace statements {
class truncate_statement : public cf_statement, public cql_statement, public ::enable_shared_from_this<truncate_statement> {
class truncate_statement : public raw::cf_statement, public cql_statement_no_metadata, public ::enable_shared_from_this<truncate_statement> {
public:
truncate_statement(::shared_ptr<cf_name> name);


@@ -40,6 +40,8 @@
*/
#include "update_statement.hh"
#include "raw/update_statement.hh"
#include "raw/insert_statement.hh"
#include "unimplemented.hh"
#include "cql3/operation_impl.hh"
@@ -108,21 +110,24 @@ void update_statement::add_update_for_key(mutation& m, const exploded_clustering
#endif
}
update_statement::parsed_insert::parsed_insert(::shared_ptr<cf_name> name,
namespace raw {
insert_statement::insert_statement( ::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<column_identifier::raw>> column_names,
std::vector<::shared_ptr<term::raw>> column_values,
bool if_not_exists)
: modification_statement::parsed{std::move(name), std::move(attrs), conditions_vector{}, if_not_exists, false}
: raw::modification_statement{std::move(name), std::move(attrs), conditions_vector{}, if_not_exists, false}
, _column_names{std::move(column_names)}
, _column_values{std::move(column_values)}
{ }
::shared_ptr<modification_statement>
update_statement::parsed_insert::prepare_internal(database& db, schema_ptr schema,
::shared_ptr<cql3::statements::modification_statement>
insert_statement::prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs)
{
auto stmt = ::make_shared<update_statement>(statement_type::INSERT, bound_names->size(), schema, std::move(attrs));
using statement_type = cql3::statements::modification_statement::statement_type;
auto stmt = ::make_shared<cql3::statements::update_statement>(statement_type::INSERT, bound_names->size(), schema, std::move(attrs));
// Created from an INSERT
if (stmt->is_counter()) {
@@ -164,21 +169,22 @@ update_statement::parsed_insert::prepare_internal(database& db, schema_ptr schem
return stmt;
}
update_statement::parsed_update::parsed_update(::shared_ptr<cf_name> name,
update_statement::update_statement( ::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<operation::raw_update>>> updates,
std::vector<relation_ptr> where_clause,
conditions_vector conditions)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, false)
: raw::modification_statement(std::move(name), std::move(attrs), std::move(conditions), false, false)
, _updates(std::move(updates))
, _where_clause(std::move(where_clause))
{ }
::shared_ptr<modification_statement>
update_statement::parsed_update::prepare_internal(database& db, schema_ptr schema,
::shared_ptr<cql3::statements::modification_statement>
update_statement::prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs)
{
auto stmt = ::make_shared<update_statement>(statement_type::UPDATE, bound_names->size(), schema, std::move(attrs));
using statement_type = cql3::statements::modification_statement::statement_type;
auto stmt = ::make_shared<cql3::statements::update_statement>(statement_type::UPDATE, bound_names->size(), schema, std::move(attrs));
for (auto&& entry : _updates) {
auto id = entry.first->prepare_column_identifier(schema);
@@ -203,3 +209,5 @@ update_statement::parsed_update::prepare_internal(database& db, schema_ptr schem
}
}
}


@@ -42,6 +42,7 @@
#pragma once
#include "cql3/statements/modification_statement.hh"
#include "cql3/statements/raw/modification_statement.hh"
#include "cql3/column_identifier.hh"
#include "cql3/term.hh"
@@ -69,55 +70,6 @@ private:
virtual bool require_full_clustering_key() const override;
virtual void add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) override;
public:
class parsed_insert : public modification_statement::parsed {
private:
const std::vector<::shared_ptr<column_identifier::raw>> _column_names;
const std::vector<::shared_ptr<term::raw>> _column_values;
public:
/**
* A parsed <code>INSERT</code> statement.
*
* @param name column family being operated on
* @param columnNames list of column names
* @param columnValues list of column values (corresponds to names)
* @param attrs additional attributes for statement (CL, timestamp, timeToLive)
*/
parsed_insert(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<column_identifier::raw>> column_names,
std::vector<::shared_ptr<term::raw>> column_values,
bool if_not_exists);
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs) override;
};
class parsed_update : public modification_statement::parsed {
private:
// Provided for an UPDATE
std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<operation::raw_update>>> _updates;
std::vector<relation_ptr> _where_clause;
public:
/**
* Creates a new UpdateStatement from a column family name, columns map, consistency
* level, and key term.
*
* @param name column family being operated on
* @param attrs additional attributes for statement (timestamp, timeToLive)
* @param updates a map of column operations to perform
* @param whereClause the where clause
*/
parsed_update(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<operation::raw_update>>> updates,
std::vector<relation_ptr> where_clause,
conditions_vector conditions);
protected:
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs);
};
};
}


@@ -40,6 +40,7 @@
*/
#include "cql3/statements/use_statement.hh"
#include "cql3/statements/raw/use_statement.hh"
#include "transport/messages/result_message.hh"
@@ -57,14 +58,23 @@ uint32_t use_statement::get_bound_terms()
return 0;
}
::shared_ptr<parsed_statement::prepared> use_statement::prepare(database& db)
namespace raw {
use_statement::use_statement(sstring keyspace)
: _keyspace(keyspace)
{
return ::make_shared<parsed_statement::prepared>(this->shared_from_this());
}
::shared_ptr<prepared_statement> use_statement::prepare(database& db)
{
return ::make_shared<prepared>(make_shared<cql3::statements::use_statement>(_keyspace));
}
}
bool use_statement::uses_function(const sstring& ks_name, const sstring& function_name) const
{
return parsed_statement::uses_function(ks_name, function_name);
return false;
}
bool use_statement::depends_on_keyspace(const sstring& ks_name) const


@@ -41,15 +41,16 @@
#pragma once
#include "cql3/statements/parsed_statement.hh"
#include "transport/messages_fwd.hh"
#include "cql3/cql_statement.hh"
#include "cql3/statements/raw/parsed_statement.hh"
#include "prepared_statement.hh"
namespace cql3 {
namespace statements {
class use_statement : public parsed_statement, public cql_statement, public ::enable_shared_from_this<use_statement> {
class use_statement : public cql_statement_no_metadata {
private:
const sstring _keyspace;
@@ -58,8 +59,6 @@ public:
virtual uint32_t get_bound_terms() override;
virtual ::shared_ptr<prepared> prepare(database& db) override;
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const override;
virtual bool depends_on_keyspace(const sstring& ks_name) const override;


@@ -103,7 +103,7 @@ public:
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const = 0;
virtual sstring to_string() const {
return sprint("term@%p", this);
return sprint("term@%p", static_cast<const void*>(this));
}
friend std::ostream& operator<<(std::ostream& out, const term& t) {

File diff suppressed because it is too large


@@ -69,6 +69,7 @@
#include "utils/histogram.hh"
#include "sstables/estimated_histogram.hh"
#include "sstables/compaction.hh"
#include "sstables/sstable_set.hh"
#include "key_reader.hh"
#include <seastar/core/rwlock.hh>
#include <seastar/core/shared_future.hh>
@@ -98,35 +99,97 @@ void make(database& db, bool durable, bool volatile_testing_only);
}
}
class throttle_state {
size_t _max_space;
logalloc::region_group& _region_group;
throttle_state* _parent;
class replay_position_reordered_exception : public std::exception {};
circular_buffer<promise<>> _throttled_requests;
timer<> _throttling_timer{[this] { unthrottle(); }};
void unthrottle();
bool should_throttle() const {
if (_region_group.memory_used() > _max_space) {
return true;
}
if (_parent) {
return _parent->should_throttle();
}
return false;
}
using shared_memtable = lw_shared_ptr<memtable>;
class memtable_list;
class dirty_memory_manager: public logalloc::region_group_reclaimer {
// We need a separate boolean, because from the LSA point of view, pressure may still be
// mounting, in which case the pressure flag could be set back on if we force it off.
bool _db_shutdown_requested = false;
database* _db;
logalloc::region_group _region_group;
// We would like to serialize the flushing of memtables. While flushing many memtables
// simultaneously can sustain high levels of throughput, the memory is not freed until the
// memtable is totally gone. That means that if we have throttled requests, they will stay
// throttled for a long time. Even when we have virtual dirty, that only provides a rough
// estimate, and we can't release requests that early.
//
// Ideally, we'd allow one memtable flush per shard (or per database object), and write-behind
// would take care of the rest. But that still has issues, so we'll limit parallelism to some
// number (4), that we will hopefully reduce to 1 when write behind works.
//
// When streaming is going on, we'll separate half of that for the streaming code, which
// effectively increases the total to 6. That is a bit ugly and a bit redundant with the I/O
// Scheduler, but it's the easiest way not to hurt the common case (no streaming) and will have
// to do for the moment. Hopefully we can set both to 1 soon (with write behind)
//
// FIXME: enable write behind and set both to 1. Right now we will take advantage of the fact
// that memtables and streaming will use different specialized classes here and set them as
// default values here.
size_t _concurrency;
semaphore _flush_serializer;
seastar::gate _waiting_flush_gate;
std::vector<shared_memtable> _pending_flushes;
void maybe_do_active_flush();
protected:
virtual memtable_list& get_memtable_list(column_family& cf) = 0;
virtual void start_reclaiming() override;
public:
throttle_state(size_t max_space, logalloc::region_group& region, throttle_state* parent = nullptr)
: _max_space(max_space)
, _region_group(region)
, _parent(parent)
{}
future<> shutdown();
future<> throttle();
dirty_memory_manager(database* db, size_t threshold, size_t concurrency)
: logalloc::region_group_reclaimer(threshold)
, _db(db)
, _region_group(*this)
, _concurrency(concurrency)
, _flush_serializer(concurrency) {}
dirty_memory_manager(database* db, dirty_memory_manager *parent, size_t threshold, size_t concurrency)
: logalloc::region_group_reclaimer(threshold)
, _db(db)
, _region_group(&parent->_region_group, *this)
, _concurrency(concurrency)
, _flush_serializer(concurrency) {}
logalloc::region_group& region_group() {
return _region_group;
}
const logalloc::region_group& region_group() const {
return _region_group;
}
template <typename Func>
future<> serialize_flush(Func&& func) {
return seastar::with_gate(_waiting_flush_gate, [this, func] () mutable {
return with_semaphore(_flush_serializer, 1, func).finally([this] {
maybe_do_active_flush();
});
});
}
};
class streaming_dirty_memory_manager: public dirty_memory_manager {
virtual memtable_list& get_memtable_list(column_family& cf) override;
public:
streaming_dirty_memory_manager(database& db, dirty_memory_manager *parent, size_t threshold) : dirty_memory_manager(&db, parent, threshold, 2) {}
};
class replay_position_reordered_exception : public std::exception {};
class memtable_dirty_memory_manager: public dirty_memory_manager {
virtual memtable_list& get_memtable_list(column_family& cf) override;
public:
memtable_dirty_memory_manager(database& db, dirty_memory_manager* parent, size_t threshold) : dirty_memory_manager(&db, parent, threshold, 4) {}
// This constructor will be called for the system tables (no parent). Its flushes are usually driven by us
// and not the user, and tend to be small in size. So we'll allow only two slots.
memtable_dirty_memory_manager(database& db, size_t threshold) : dirty_memory_manager(&db, threshold, 2) {}
memtable_dirty_memory_manager() : dirty_memory_manager(nullptr, std::numeric_limits<size_t>::max(), 4) {}
};
extern thread_local memtable_dirty_memory_manager default_dirty_memory_manager;
// We could just add all memtables, regardless of types, to a single list, and
// then filter them out when we read them. Here's why I have chosen not to do
@@ -147,19 +210,21 @@ class replay_position_reordered_exception : public std::exception {};
// If we are going to have different methods, better have different instances
// of a common class.
class memtable_list {
using shared_memtable = lw_shared_ptr<memtable>;
public:
enum class flush_behavior { delayed, immediate };
private:
std::vector<shared_memtable> _memtables;
std::function<future<> ()> _seal_fn;
std::function<future<> (flush_behavior)> _seal_fn;
std::function<schema_ptr()> _current_schema;
size_t _max_memtable_size;
logalloc::region_group* _dirty_memory_region_group;
dirty_memory_manager* _dirty_memory_manager;
public:
memtable_list(std::function<future<> ()> seal_fn, std::function<schema_ptr()> cs, size_t max_memtable_size, logalloc::region_group* region_group)
memtable_list(std::function<future<> (flush_behavior)> seal_fn, std::function<schema_ptr()> cs, size_t max_memtable_size, dirty_memory_manager* dirty_memory_manager)
: _memtables({})
, _seal_fn(seal_fn)
, _current_schema(cs)
, _max_memtable_size(max_memtable_size)
, _dirty_memory_region_group(region_group) {
, _dirty_memory_manager(dirty_memory_manager) {
add_memtable();
}
@@ -179,8 +244,8 @@ public:
return _memtables.size();
}
future<> seal_active_memtable() {
return _seal_fn();
future<> seal_active_memtable(flush_behavior behavior) {
return _seal_fn(behavior);
}
auto begin() noexcept {
@@ -215,12 +280,12 @@ public:
if (should_flush()) {
// FIXME: if sparse, do some in-memory compaction first
// FIXME: maybe merge with other in-memory memtables
_seal_fn();
seal_active_memtable(flush_behavior::immediate);
}
}
private:
lw_shared_ptr<memtable> new_memtable() {
return make_lw_shared<memtable>(_current_schema(), _dirty_memory_region_group);
return make_lw_shared<memtable>(_current_schema(), &(_dirty_memory_manager->region_group()));
}
};
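The rotation logic in memtable_list can be summarized in a few lines: writes land in the newest memtable, and once it crosses _max_memtable_size the list seals it (via the injected seal function) and starts a fresh one. A toy sketch of that behavior, with illustrative names rather than ScyllaDB's real types:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

struct toy_memtable {
    std::size_t bytes = 0;
};

// Simplified memtable_list: always keeps one active memtable, seals and
// rotates when the active one exceeds the size threshold.
class toy_memtable_list {
    std::vector<toy_memtable> _memtables;
    std::function<void()> _seal_fn;
    std::size_t _max_memtable_size;
public:
    toy_memtable_list(std::function<void()> seal, std::size_t max_size)
        : _seal_fn(std::move(seal)), _max_memtable_size(max_size) {
        _memtables.push_back({});   // the initial active memtable
    }
    toy_memtable& active() { return _memtables.back(); }
    std::size_t size() const { return _memtables.size(); }
    void apply(std::size_t bytes) {
        active().bytes += bytes;
        if (active().bytes >= _max_memtable_size) {
            _seal_fn();             // flush_behavior::immediate in the real code
            _memtables.push_back({});   // rotate to a fresh memtable
        }
    }
};
```

In the real class the sealed memtable stays in the vector until its flush completes and it is erased, which is why size() can exceed one under write load.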
@@ -247,10 +312,12 @@ public:
bool enable_incremental_backups = false;
size_t max_memtable_size = 5'000'000;
size_t max_streaming_memtable_size = 5'000'000;
logalloc::region_group* dirty_memory_region_group = nullptr;
logalloc::region_group* streaming_dirty_memory_region_group = nullptr;
::dirty_memory_manager* dirty_memory_manager = &default_dirty_memory_manager;
::dirty_memory_manager* streaming_dirty_memory_manager = &default_dirty_memory_manager;
restricted_mutation_reader_config read_concurrency_config;
restricted_mutation_reader_config streaming_read_concurrency_config;
::cf_stats* cf_stats = nullptr;
uint64_t max_cached_partition_size_in_bytes;
};
struct no_commitlog {};
struct stats {
@@ -263,13 +330,13 @@ public:
int64_t live_sstable_count = 0;
/** Estimated number of compactions pending for this column family */
int64_t pending_compactions = 0;
utils::ihistogram reads{256};
utils::ihistogram writes{256};
utils::timed_rate_moving_average_and_histogram reads{256};
utils::timed_rate_moving_average_and_histogram writes{256};
sstables::estimated_histogram estimated_read;
sstables::estimated_histogram estimated_write;
sstables::estimated_histogram estimated_sstable_per_read;
utils::ihistogram tombstone_scanned;
utils::ihistogram live_scanned;
utils::timed_rate_moving_average_and_histogram tombstone_scanned;
utils::timed_rate_moving_average_and_histogram live_scanned;
};
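The switch from ihistogram to timed_rate_moving_average_and_histogram tracks rates smoothed by an exponentially weighted moving average, and the "fix inverted parameters" commit in this range concerns exactly the weighting of that average. A minimal sketch of the update rule (alpha is illustrative, not Scylla's constant); note that swapping the two weights biases the average toward stale data, which is the inverted-parameters class of bug:

```cpp
#include <cassert>
#include <cmath>

// Exponentially weighted moving average of a sampled rate:
//   rate = alpha * sample + (1 - alpha) * rate
// with alpha the weight of the newest sample, 0 < alpha <= 1.
struct ewma_rate {
    double alpha;
    double rate = 0.0;
    bool initialized = false;

    void update(double sample) {
        if (!initialized) {
            rate = sample;          // seed with the first sample
            initialized = true;
        } else {
            rate = alpha * sample + (1.0 - alpha) * rate;
        }
    }
};
```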
struct snapshot_details {
@@ -280,6 +347,7 @@ private:
schema_ptr _schema;
config _config;
stats _stats;
lw_shared_ptr<memtable_list> _memtables;
// In older incarnations, we simply committed the mutations to memtables.
@@ -300,12 +368,37 @@ private:
// memory throttling mechanism, guaranteeing we will not overload the
// server.
lw_shared_ptr<memtable_list> _streaming_memtables;
utils::phased_barrier _streaming_flush_phaser;
friend class memtable_dirty_memory_manager;
friend class streaming_dirty_memory_manager;
// If mutations are fragmented during streaming the sstables cannot be made
// visible immediately after memtable flush, because that could cause
// readers to see only a part of a partition thus violating isolation
// guarantees.
// Mutations that are sent in fragments are kept separately in per-streaming
// plan memtables and the resulting sstables are not made visible until
// the streaming is complete.
struct streaming_memtable_big {
lw_shared_ptr<memtable_list> memtables;
std::vector<sstables::shared_sstable> sstables;
seastar::gate flush_in_progress;
};
std::unordered_map<utils::UUID, lw_shared_ptr<streaming_memtable_big>> _streaming_memtables_big;
future<> flush_streaming_big_mutations(utils::UUID plan_id);
void apply_streaming_big_mutation(schema_ptr m_schema, utils::UUID plan_id, const frozen_mutation& m);
future<> seal_active_streaming_memtable_big(streaming_memtable_big& smb);
lw_shared_ptr<memtable_list> make_memory_only_memtable_list();
lw_shared_ptr<memtable_list> make_memtable_list();
lw_shared_ptr<memtable_list> make_streaming_memtable_list();
lw_shared_ptr<memtable_list> make_streaming_memtable_big_list(streaming_memtable_big& smb);
sstables::compaction_strategy _compaction_strategy;
// generation -> sstable. Ordered by key so we can easily get the most recent.
lw_shared_ptr<sstable_list> _sstables;
lw_shared_ptr<sstables::sstable_set> _sstables;
// sstables that have been compacted (so don't look up in query) but
// have not been deleted yet, so must not GC any tombstones in other sstables
// that may delete data in these sstables:
@@ -326,10 +419,7 @@ private:
db::replay_position _highest_flushed_rp;
// Provided by the database that owns this commitlog
db::commitlog* _commitlog;
sstables::compaction_strategy _compaction_strategy;
compaction_manager& _compaction_manager;
// Whether or not a cf is queued by its compaction manager.
bool _compaction_manager_queued = false;
int _compaction_disabled = 0;
class memtable_flush_queue;
std::unique_ptr<memtable_flush_queue> _flush_queue;
@@ -349,7 +439,7 @@ private:
lw_shared_ptr<memtable> new_memtable();
lw_shared_ptr<memtable> new_streaming_memtable();
future<stop_iteration> try_flush_memtable_to_sstable(lw_shared_ptr<memtable> memt);
future<> update_cache(memtable&, lw_shared_ptr<sstable_list> old_sstables);
future<> update_cache(memtable&, sstables::shared_sstable exclude_sstable);
struct merge_comparator;
// update the sstable generation, making sure that new sstables don't overwrite this one.
@@ -376,11 +466,14 @@ private:
// Caller needs to ensure that column_family remains live (FIXME: relax this).
// The 'range' parameter must be live as long as the reader is used.
// Mutations returned by the reader will all have given schema.
mutation_reader make_sstable_reader(schema_ptr schema, const query::partition_range& range, const io_priority_class& pc) const;
mutation_reader make_sstable_reader(schema_ptr schema,
const query::partition_range& range,
query::clustering_key_filtering_context ck_filtering,
const io_priority_class& pc) const;
mutation_source sstables_as_mutation_source();
key_source sstables_as_key_source() const;
partition_presence_checker make_partition_presence_checker(lw_shared_ptr<sstable_list> old_sstables);
partition_presence_checker make_partition_presence_checker(sstables::shared_sstable exclude_sstable);
std::chrono::steady_clock::time_point _sstable_writes_disabled_at;
void do_trigger_compaction();
public:
@@ -414,6 +507,7 @@ public:
// will be scheduled under the priority class given by pc.
mutation_reader make_reader(schema_ptr schema,
const query::partition_range& range = query::full_partition_range,
const query::clustering_key_filtering_context& ck_filtering = query::no_clustering_key_filtering,
const io_priority_class& pc = default_priority_class()) const;
mutation_source as_mutation_source() const;
@@ -434,9 +528,13 @@ public:
}
logalloc::occupancy_stats occupancy() const;
private:
column_family(schema_ptr schema, config cfg, db::commitlog* cl, compaction_manager&);
public:
column_family(schema_ptr schema, config cfg, db::commitlog& cl, compaction_manager&);
column_family(schema_ptr schema, config cfg, no_commitlog, compaction_manager&);
column_family(schema_ptr schema, config cfg, db::commitlog& cl, compaction_manager& cm)
: column_family(schema, std::move(cfg), &cl, cm) {}
column_family(schema_ptr schema, config cfg, no_commitlog, compaction_manager& cm)
: column_family(schema, std::move(cfg), nullptr, cm) {}
column_family(column_family&&) = delete; // 'this' is being captured during construction
~column_family();
const schema_ptr& schema() const { return _schema; }
@@ -449,7 +547,7 @@ public:
// The mutation is always upgraded to current schema.
void apply(const frozen_mutation& m, const schema_ptr& m_schema, const db::replay_position& = db::replay_position());
void apply(const mutation& m, const db::replay_position& = db::replay_position());
void apply_streaming_mutation(schema_ptr, const frozen_mutation&);
void apply_streaming_mutation(schema_ptr, utils::UUID plan_id, const frozen_mutation&, bool fragmented);
// Returns at most "cmd.limit" rows
future<lw_shared_ptr<query::result>> query(schema_ptr,
@@ -462,7 +560,8 @@ public:
future<> stop();
future<> flush();
future<> flush(const db::replay_position&);
future<> flush_streaming_mutations(std::vector<query::partition_range> ranges = std::vector<query::partition_range>{});
future<> flush_streaming_mutations(utils::UUID plan_id, std::vector<query::partition_range> ranges = std::vector<query::partition_range>{});
future<> fail_streaming_mutations(utils::UUID plan_id);
future<> clear(); // discards memtable(s) without flushing them to disk.
future<db::replay_position> discard_sstables(db_clock::time_point);
@@ -473,10 +572,14 @@ public:
future<int64_t> disable_sstable_write() {
_sstable_writes_disabled_at = std::chrono::steady_clock::now();
return _sstables_lock.write_lock().then([this] {
if (_sstables->empty()) {
if (_sstables->all()->empty()) {
return make_ready_future<int64_t>(0);
}
return make_ready_future<int64_t>((*_sstables->rbegin()).first);
int64_t max = 0;
for (auto&& s : *_sstables->all()) {
max = std::max(max, s->generation());
}
return make_ready_future<int64_t>(max);
});
}
@@ -490,6 +593,17 @@ public:
return std::chrono::steady_clock::now() - _sstable_writes_disabled_at;
}
// This function will iterate through upload directory in column family,
// and will do the following for each sstable found:
// 1) Mutate sstable level to 0.
// 2) Create hard links to its components in column family dir.
// 3) Remove all of its components in upload directory.
// At the end, it's expected that upload dir is empty and all of its
// previous content was moved to column family dir.
//
// Return a vector containing descriptor of sstables to be loaded.
future<std::vector<sstables::entry_descriptor>> flush_upload_dir();
// Make sure the generation numbers are sequential, starting from "start".
// Generations before "start" are left untouched.
//
@@ -536,9 +650,11 @@ public:
_config.enable_incremental_backups = val;
}
lw_shared_ptr<sstable_list> get_sstables();
lw_shared_ptr<sstable_list> get_sstables_including_compacted_undeleted();
size_t sstables_count();
lw_shared_ptr<sstable_list> get_sstables() const;
lw_shared_ptr<sstable_list> get_sstables_including_compacted_undeleted() const;
std::vector<sstables::shared_sstable> select_sstables(const query::partition_range& range) const;
size_t sstables_count() const;
std::vector<uint64_t> sstable_count_per_level() const;
int64_t get_unleveled_sstables() const;
void start_compaction();
@@ -553,10 +669,6 @@ public:
return _compaction_strategy;
}
bool compaction_manager_queued() const;
void set_compaction_manager_queued(bool compaction_manager_queued);
bool pending_compactions() const;
const stats& get_stats() const {
return _stats;
}
@@ -569,9 +681,7 @@ public:
Result run_with_compaction_disabled(Func && func) {
++_compaction_disabled;
return _compaction_manager.remove(this).then(std::forward<Func>(func)).finally([this] {
// #934. The pending counter is actually a great indicator into whether we
// actually need to trigger a compaction again.
if (--_compaction_disabled == 0 && _stats.pending_compactions > 0) {
if (--_compaction_disabled == 0) {
// we're turning it on again, use function that does not increment
// the counter further.
do_trigger_compaction();
@@ -587,7 +697,7 @@ private:
// But it is possible to synchronously wait for the seal to complete by
// waiting on this future. This is useful in situations where we want to
// synchronously flush data to disk.
future<> seal_active_memtable();
future<> seal_active_memtable(memtable_list::flush_behavior behavior = memtable_list::flush_behavior::delayed);
// I am assuming here that the repair process will potentially send ranges containing
// few mutations, definitely not enough to fill a memtable. It wants to know whether or
@@ -610,9 +720,19 @@ private:
// repair can now choose whatever strategy - small or big ranges - it wants, resting assured
// that the incoming memtables will be coalesced together.
shared_promise<> _waiting_streaming_flushes;
timer<> _delayed_streaming_flush{[this] { seal_active_streaming_memtable(); }};
future<> seal_active_streaming_memtable();
timer<> _delayed_streaming_flush{[this] { seal_active_streaming_memtable_immediate(); }};
future<> seal_active_streaming_memtable_delayed();
future<> seal_active_streaming_memtable_immediate();
future<> seal_active_streaming_memtable(memtable_list::flush_behavior behavior) {
if (behavior == memtable_list::flush_behavior::delayed) {
return seal_active_streaming_memtable_delayed();
} else if (behavior == memtable_list::flush_behavior::immediate) {
return seal_active_streaming_memtable_immediate();
} else {
// Impossible
assert(0);
}
}
// filter manifest.json files out
static bool manifest_json_filter(const sstring& fname);
@@ -707,8 +827,8 @@ public:
const lw_shared_ptr<user_types_metadata>& user_types() const {
return _user_types;
}
void add_column_family(const schema_ptr& s) {
_cf_meta_data.emplace(s->cf_name(), s);
void add_or_update_column_family(const schema_ptr& s) {
_cf_meta_data[s->cf_name()] = s;
}
void remove_column_family(const schema_ptr& s) {
_cf_meta_data.erase(s->cf_name());
@@ -733,9 +853,10 @@ public:
bool enable_incremental_backups = false;
size_t max_memtable_size = 5'000'000;
size_t max_streaming_memtable_size = 5'000'000;
logalloc::region_group* dirty_memory_region_group = nullptr;
logalloc::region_group* streaming_dirty_memory_region_group = nullptr;
::dirty_memory_manager* dirty_memory_manager = &default_dirty_memory_manager;
::dirty_memory_manager* streaming_dirty_memory_manager = &default_dirty_memory_manager;
restricted_mutation_reader_config read_concurrency_config;
restricted_mutation_reader_config streaming_read_concurrency_config;
::cf_stats* cf_stats = nullptr;
};
private:
@@ -747,16 +868,30 @@ public:
: _metadata(std::move(metadata))
, _config(std::move(cfg))
{}
const lw_shared_ptr<keyspace_metadata>& metadata() const {
void update_from(lw_shared_ptr<keyspace_metadata>);
/** Note: returned by shared pointer value, since the metadata is
* semi-volatile. I.e. we could do alter keyspace at any time, and
* boom, it is replaced.
*/
lw_shared_ptr<keyspace_metadata> metadata() const {
return _metadata;
}
void create_replication_strategy(const std::map<sstring, sstring>& options);
/**
* This should not really be returned by reference, since the replication
* strategy is also volatile in that it could be replaced at "any" time.
* However, all current uses at least are "instantaneous", i.e. do not
* carry it across a continuation. So it is sort of the same for now, but
* should eventually be refactored.
*/
locator::abstract_replication_strategy& get_replication_strategy();
const locator::abstract_replication_strategy& get_replication_strategy() const;
column_family::config make_column_family_config(const schema& s) const;
column_family::config make_column_family_config(const schema& s, const db::config& db_config) const;
future<> make_directory_for_column_family(const sstring& name, utils::UUID uuid);
void add_column_family(const schema_ptr& s) {
_metadata->add_column_family(s);
void add_or_update_column_family(const schema_ptr& s) {
_metadata->add_or_update_column_family(s);
}
void add_user_type(const user_type ut) {
_metadata->add_user_type(ut);
@@ -779,7 +914,7 @@ public:
const sstring& datadir() const {
return _config.datadir;
}
private:
sstring column_family_directory(const sstring& name, utils::UUID uuid) const;
};
@@ -811,9 +946,12 @@ class database {
lw_shared_ptr<db_stats> _stats;
logalloc::region_group _dirty_memory_region_group;
logalloc::region_group _streaming_dirty_memory_region_group;
std::unique_ptr<db::config> _cfg;
size_t _memtable_total_space = 500 << 20;
size_t _streaming_memtable_total_space = 500 << 20;
memtable_dirty_memory_manager _system_dirty_memory_manager;
memtable_dirty_memory_manager _dirty_memory_manager;
streaming_dirty_memory_manager _streaming_dirty_memory_manager;
semaphore _read_concurrency_sem{max_concurrent_reads()};
restricted_mutation_reader_config _read_concurrency_config;
semaphore _system_read_concurrency_sem{max_system_concurrent_reads()};
@@ -823,9 +961,6 @@ class database {
std::unordered_map<utils::UUID, lw_shared_ptr<column_family>> _column_families;
std::unordered_map<std::pair<sstring, sstring>, utils::UUID, utils::tuple_hash> _ks_cf_to_uuid;
std::unique_ptr<db::commitlog> _commitlog;
std::unique_ptr<db::config> _cfg;
size_t _memtable_total_space = 500 << 20;
size_t _streaming_memtable_total_space = 500 << 20;
utils::UUID _version;
// compaction_manager object is referenced by all column families of a database.
compaction_manager _compaction_manager;
@@ -833,7 +968,7 @@ class database {
bool _enable_incremental_backups = false;
future<> init_commitlog();
future<> apply_in_memory(const frozen_mutation& m, const schema_ptr& m_schema, const db::replay_position&);
future<> apply_in_memory(const frozen_mutation& m, schema_ptr m_schema, db::replay_position);
future<> populate(sstring datadir);
future<> populate_keyspace(sstring datadir, sstring ks_name);
@@ -845,9 +980,6 @@ private:
friend void db::system_keyspace::make(database& db, bool durable, bool volatile_testing_only);
void setup_collectd();
throttle_state _memtables_throttler;
throttle_state _streaming_throttler;
future<> do_apply(schema_ptr, const frozen_mutation&);
public:
static utils::UUID empty_version;
@@ -894,7 +1026,7 @@ public:
keyspace& find_keyspace(const sstring& name);
const keyspace& find_keyspace(const sstring& name) const;
bool has_keyspace(const sstring& name) const;
void update_keyspace(const sstring& name);
future<> update_keyspace(const sstring& name);
void drop_keyspace(const sstring& name);
const auto& keyspaces() const { return _keyspaces; }
std::vector<sstring> get_non_system_keyspaces() const;
@@ -916,7 +1048,7 @@ public:
future<lw_shared_ptr<query::result>> query(schema_ptr, const query::read_command& cmd, query::result_request request, const std::vector<query::partition_range>& ranges);
future<reconcilable_result> query_mutations(schema_ptr, const query::read_command& cmd, const query::partition_range& range);
future<> apply(schema_ptr, const frozen_mutation&);
future<> apply_streaming_mutation(schema_ptr, const frozen_mutation&);
future<> apply_streaming_mutation(schema_ptr, utils::UUID plan_id, const frozen_mutation&, bool fragmented);
keyspace::config make_keyspace_config(const keyspace_metadata& ksm);
const sstring& get_snitch_name() const;
future<> clear_snapshot(sstring tag, std::vector<sstring> keyspace_names);
@@ -938,6 +1070,8 @@ public:
return _column_families;
}
std::vector<lw_shared_ptr<column_family>> get_non_system_column_families() const;
const std::unordered_map<std::pair<sstring, sstring>, utils::UUID, utils::tuple_hash>&
get_column_families_mapping() const {
return _ks_cf_to_uuid;
@@ -960,7 +1094,7 @@ public:
future<> drop_column_family(const sstring& ks_name, const sstring& cf_name, timestamp_func);
const logalloc::region_group& dirty_memory_region_group() const {
return _dirty_memory_region_group;
return _dirty_memory_manager.region_group();
}
std::unordered_set<sstring> get_initial_tokens();


@@ -257,9 +257,26 @@ future<> db::batchlog_manager::replay_all_failed_batches() {
// FIXME: verify that the above is reasonably true.
return limiter->reserve(size).then([this, mutations = std::move(mutations), id] {
_stats.write_attempts += mutations.size();
return _qp.proxy().local().mutate(mutations, db::consistency_level::ANY);
// #1222 - change cl level to ALL, emulating origin's behaviour of sending/hinting
// to all natural end points.
// Note however that origin uses hints here, and actually allows for this
// send to partially or wholly fail in actually sending stuff. Since we don't
// have hints (yet), send with CL=ALL, and hope we can re-do this soon.
// See below, we use retry on write failure.
return _qp.proxy().local().mutate(mutations, db::consistency_level::ALL, nullptr);
});
}).then([this, id] {
}).then_wrapped([this, id](future<> batch_result) {
try {
batch_result.get();
} catch (no_such_keyspace& ex) {
// should probably ignore and drop the batch
} catch (...) {
// timeout, overload etc.
// Do _not_ remove the batch, assuming we got a node write error.
// Since we don't have hints (which origin is satisfied with),
// we have to resort to keeping this batch to next lap.
return make_ready_future<>();
}
// delete batch
auto schema = _qp.db().local().find_schema(system_keyspace::NAME, system_keyspace::BATCHLOG);
auto key = partition_key::from_singular(*schema, id);


@@ -63,6 +63,7 @@
#include "utils/data_input.hh"
#include "utils/crc.hh"
#include "utils/runtime.hh"
#include "utils/flush_queue.hh"
#include "log.hh"
#include "commitlog_entry.hh"
#include "service/priority_manager.hh"
@@ -194,7 +195,6 @@ public:
stats totals;
future<> begin_write() {
_gate.enter();
++totals.pending_writes; // redundant, given semaphore. but easier to read
if (totals.pending_writes >= cfg.max_active_writes) {
++totals.write_limit_exceeded;
@@ -205,11 +205,9 @@ public:
void end_write() {
_write_semaphore.signal();
--totals.pending_writes;
_gate.leave();
}
future<> begin_flush() {
_gate.enter();
++totals.pending_flushes;
if (totals.pending_flushes >= cfg.max_active_flushes) {
++totals.flush_limit_exceeded;
@@ -220,11 +218,10 @@ public:
void end_flush() {
_flush_semaphore.signal();
--totals.pending_flushes;
_gate.leave();
}
bool should_wait_for_write() const {
return _write_semaphore.waiters() > 0 || _flush_semaphore.waiters() > 0;
return cfg.mode == sync_mode::BATCH || _write_semaphore.waiters() > 0 || _flush_semaphore.waiters() > 0;
}
segment_manager(config c)
@@ -276,7 +273,7 @@ public:
future<sseg_ptr> allocate_segment(bool active);
future<> clear();
future<> sync_all_segments();
future<> sync_all_segments(bool shutdown = false);
future<> shutdown();
scollectd::registrations create_counters();
@@ -391,18 +388,21 @@ class db::commitlog::segment: public enable_lw_shared_from_this<segment> {
uint64_t _buf_pos = 0;
bool _closed = false;
size_t _needed_size = 0;
using buffer_type = segment_manager::buffer_type;
using sseg_ptr = segment_manager::sseg_ptr;
using clock_type = segment_manager::clock_type;
using time_point = segment_manager::time_point;
buffer_type _buffer;
rwlock _dwrite; // used as a barrier between write & flush
std::unordered_map<cf_id_type, position_type> _cf_dirty;
time_point _sync_time;
seastar::gate _gate;
uint64_t _write_waiters = 0;
semaphore _queue;
utils::flush_queue<replay_position> _pending_ops;
uint64_t _num_allocs = 0;
std::unordered_set<table_schema_version> _known_schema_versions;
@@ -413,9 +413,7 @@ class db::commitlog::segment: public enable_lw_shared_from_this<segment> {
// This is maintaining the semantics of only using the write-lock
// as a gate for flushing, i.e. once we've begun a flush for position X
// we are ok with writes to positions > X
return _segment_manager->begin_flush().then(std::bind(&rwlock::write_lock, &_dwrite)).finally([this] {
_dwrite.write_unlock();
});
return _segment_manager->begin_flush();
}
void end_flush() {
@@ -426,11 +424,10 @@ class db::commitlog::segment: public enable_lw_shared_from_this<segment> {
// This is maintaining the semantics of only using the write-lock
// as a gate for flushing, i.e. once we've begun a flush for position X
// we are ok with writes to positions > X
return _segment_manager->begin_write().then(std::bind(&rwlock::read_lock, &_dwrite));
return _segment_manager->begin_write();
}
void end_write() {
_dwrite.read_unlock();
_segment_manager->end_write();
}
@@ -456,7 +453,7 @@ public:
segment(::shared_ptr<segment_manager> m, const descriptor& d, file && f, bool active)
: _segment_manager(std::move(m)), _desc(std::move(d)), _file(std::move(f)),
_file_name(_segment_manager->cfg.commit_log_location + "/" + _desc.filename()), _sync_time(
clock_type::now()), _queue(0)
clock_type::now())
{
++_segment_manager->totals.segments_created;
logger.debug("Created new {} segment {}", active ? "active" : "reserve", *this);
@@ -513,21 +510,30 @@ public:
_sync_time = clock_type::now();
}
// See class comment for info
future<sseg_ptr> sync() {
future<sseg_ptr> sync(bool shutdown = false) {
/**
* If we are shutting down, we first
* close the allocation gate, thus no new
* data can be appended. Then we just issue a
* flush, which will wait for any queued ops
* to complete as well. Then we close the ops
* queue, just to be sure.
*/
if (shutdown) {
auto me = shared_from_this();
return _gate.close().then([me] {
return me->sync().finally([me] {
// When we get here, nothing should add ops,
// and we should have waited out all pending.
return me->_pending_ops.close();
});
});
}
// Note: this is not a marker for when sync was finished.
// It is when it was initiated
reset_sync_time();
if (position() <= _flush_pos) {
logger.trace("Sync not needed {}: ({} / {})", *this, position(), _flush_pos);
return make_ready_future<sseg_ptr>(shared_from_this());
}
return cycle().then([](sseg_ptr seg) {
return seg->flush();
});
}
future<> shutdown() {
return _gate.close();
return cycle(true);
}
// See class comment for info
future<sseg_ptr> flush(uint64_t pos = 0) {
@@ -536,46 +542,56 @@ public:
if (pos == 0) {
pos = _file_pos;
}
if (pos != 0 && pos <= _flush_pos) {
logger.trace("{} already synced! ({} < {})", *this, pos, _flush_pos);
return make_ready_future<sseg_ptr>(std::move(me));
}
logger.trace("Syncing {} {} -> {}", *this, _flush_pos, pos);
// Make sure all disk writes are done.
// This is not 100% necessary, we really only need the ones below our flush pos,
// but since we pretty much assume that task ordering will make this the case anyway...
return begin_flush().then(
[this, me, pos]() mutable {
pos = std::max(pos, _file_pos);
if (pos <= _flush_pos) {
logger.trace("{} already synced! ({} < {})", *this, pos, _flush_pos);
return make_ready_future<sseg_ptr>(std::move(me));
}
return _file.flush().then_wrapped([this, pos, me](future<> f) {
try {
f.get();
// TODO: retry/ignore/fail/stop - optional behaviour in origin.
// we fast-fail the whole commit.
_flush_pos = std::max(pos, _flush_pos);
++_segment_manager->totals.flush_count;
logger.trace("{} synced to {}", *this, _flush_pos);
return make_ready_future<sseg_ptr>(std::move(me));
} catch (...) {
logger.error("Failed to flush commits to disk: {}", std::current_exception());
throw;
}
});
}).finally([this, me] {
end_flush();
logger.trace("Syncing {} {} -> {}", *this, _flush_pos, pos);
// Only run the flush when all write ops at lower rp:s
// have completed.
replay_position rp(_desc.id, position_type(pos));
// Run like this to ensure flush ordering, and making flushes "waitable"
return _pending_ops.run_with_ordered_post_op(rp, [] { return make_ready_future<>(); }, [this, pos, me, rp] {
assert(_pending_ops.has_operation(rp));
return do_flush(pos);
});
}
future<sseg_ptr> do_flush(uint64_t pos) {
auto me = shared_from_this();
return begin_flush().then([this, pos]() {
if (pos <= _flush_pos) {
logger.trace("{} already synced! ({} < {})", *this, pos, _flush_pos);
return make_ready_future<>();
}
return _file.flush().then_wrapped([this, pos](future<> f) {
try {
f.get();
// TODO: retry/ignore/fail/stop - optional behaviour in origin.
// we fast-fail the whole commit.
_flush_pos = std::max(pos, _flush_pos);
++_segment_manager->totals.flush_count;
logger.trace("{} synced to {}", *this, _flush_pos);
} catch (...) {
logger.error("Failed to flush commits to disk: {}", std::current_exception());
throw;
}
});
}).finally([this] {
end_flush();
}).then([me] {
return make_ready_future<sseg_ptr>(me);
});
}
/**
* Allocate a new buffer
*/
void new_buffer(size_t s) {
assert(_buffer.empty());
s += _needed_size;
_needed_size = 0;
auto overhead = segment_overhead_size;
if (_file_pos == 0) {
overhead += descriptor_header_size;
@@ -604,25 +620,32 @@ public:
_segment_manager->totals.total_size += k;
}
bool buffer_is_empty() const {
return _buf_pos <= segment_overhead_size
|| (_file_pos == 0 && _buf_pos <= (segment_overhead_size + descriptor_header_size));
}
/**
* Send any buffer contents to disk and get a new tmp buffer
*/
// See class comment for info
future<sseg_ptr> cycle() {
future<sseg_ptr> cycle(bool flush_after = false) {
if (_buffer.empty()) {
return flush_after ? flush() : make_ready_future<sseg_ptr>(shared_from_this());
}
auto size = clear_buffer_slack();
auto buf = std::move(_buffer);
auto off = _file_pos;
auto top = off + size;
auto num = _num_allocs;
_file_pos += size;
_file_pos = top;
_buf_pos = 0;
_num_allocs = 0;
auto me = shared_from_this();
assert(!me.owned());
if (size == 0) {
return make_ready_future<sseg_ptr>(std::move(me));
}
auto * p = buf.get_write();
assert(std::count(p, p + 2 * sizeof(uint32_t), 0) == 2 * sizeof(uint32_t));
@@ -654,41 +677,50 @@ public:
forget_schema_versions();
// acquire read lock
return begin_write().then([this, size, off, buf = std::move(buf)]() mutable {
auto written = make_lw_shared<size_t>(0);
auto p = buf.get();
return repeat([this, size, off, written, p]() mutable {
auto&& priority_class = service::get_local_commitlog_priority();
return _file.dma_write(off + *written, p + *written, size - *written, priority_class).then_wrapped([this, size, written](future<size_t>&& f) {
try {
auto bytes = std::get<0>(f.get());
*written += bytes;
_segment_manager->totals.bytes_written += bytes;
_segment_manager->totals.total_size_on_disk += bytes;
++_segment_manager->totals.cycle_count;
if (*written == size) {
return make_ready_future<stop_iteration>(stop_iteration::yes);
replay_position rp(_desc.id, position_type(off));
logger.trace("Writing {} entries, {} k in {} -> {}", num, size, off, off + size);
// The write will be allowed to start now, but flush (below) must wait for not only this,
// but all previous write/flush pairs.
return _pending_ops.run_with_ordered_post_op(rp, [this, size, off, buf = std::move(buf)]() mutable {
// This could "block", if we have too many pending writes.
return begin_write().then([this, size, off, buf = std::move(buf)]() mutable {
auto written = make_lw_shared<size_t>(0);
auto p = buf.get();
return repeat([this, size, off, written, p]() mutable {
auto&& priority_class = service::get_local_commitlog_priority();
return _file.dma_write(off + *written, p + *written, size - *written, priority_class).then_wrapped([this, size, written](future<size_t>&& f) {
try {
auto bytes = std::get<0>(f.get());
*written += bytes;
_segment_manager->totals.bytes_written += bytes;
_segment_manager->totals.total_size_on_disk += bytes;
++_segment_manager->totals.cycle_count;
if (*written == size) {
return make_ready_future<stop_iteration>(stop_iteration::yes);
}
// gah, partial write. should always get here with dma chunk sized
// "bytes", but let's make sure...
logger.debug("Partial write {}: {}/{} bytes", *this, *written, size);
*written = align_down(*written, alignment);
return make_ready_future<stop_iteration>(stop_iteration::no);
// TODO: retry/ignore/fail/stop - optional behaviour in origin.
// we fast-fail the whole commit.
} catch (...) {
logger.error("Failed to persist commits to disk for {}: {}", *this, std::current_exception());
throw;
}
// gah, partial write. should always get here with dma chunk sized
// "bytes", but let's make sure...
logger.debug("Partial write {}: {}/{} bytes", *this, *written, size);
*written = align_down(*written, alignment);
return make_ready_future<stop_iteration>(stop_iteration::no);
// TODO: retry/ignore/fail/stop - optional behaviour in origin.
// we fast-fail the whole commit.
} catch (...) {
logger.error("Failed to persist commits to disk for {}: {}", *this, std::current_exception());
throw;
}
});
}).finally([this, buf = std::move(buf)]() mutable {
_segment_manager->release_buffer(std::move(buf));
});
}).finally([this, buf = std::move(buf)]() mutable {
_segment_manager->release_buffer(std::move(buf));
}).finally([this]() {
end_write(); // release
});
}).then([me] {
return make_ready_future<sseg_ptr>(std::move(me));
}).finally([me, this]() {
end_write(); // release
}, [me, flush_after, top, rp] { // lambda instead of bind, so we keep "me" alive.
assert(me->_pending_ops.has_operation(rp));
return flush_after ? me->do_flush(top) : make_ready_future<sseg_ptr>(me);
});
}
@@ -697,9 +729,7 @@ public:
++_write_waiters;
logger.trace("Too many pending writes. Must wait.");
return f.finally([this] {
if (--_write_waiters == 0) {
_queue.signal(_queue.waiters());
}
--_write_waiters;
});
}
return make_ready_future<sseg_ptr>(shared_from_this());
@@ -719,19 +749,52 @@ public:
* buffer memory usage might grow...
*/
bool must_wait_for_alloc() {
return _write_waiters > 0;
// Note: write_waiters is decremented _after_ both semaphores and
// flush queue might be cleared. So we should not look only at it.
// But we still don't want to look at "should_wait_for_write" directly,
// since that is "global" and includes other segments, and we want to
// know if _this_ segment has blocking write ops pending.
// So we also check that the flush queue is non-empty.
return _write_waiters > 0 && !_pending_ops.empty();
}
future<sseg_ptr> wait_for_alloc() {
auto me = shared_from_this();
++_segment_manager->totals.pending_allocations;
logger.trace("Previous allocation is blocking. Must wait.");
return _queue.wait().then([me] { // TODO: do we need a finally?
return _pending_ops.wait_for_pending().then([me] { // TODO: do we need a finally?
--me->_segment_manager->totals.pending_allocations;
return make_ready_future<sseg_ptr>(me);
});
}
future<sseg_ptr> batch_cycle() {
/**
* For batch mode we force a write "immediately".
* However, we first wait for all previous writes/flushes
* to complete.
*
* This has the benefit of allowing several allocations to
* queue up in a single buffer.
*/
auto me = shared_from_this();
auto fp = _file_pos;
return _pending_ops.wait_for_pending().then([me = std::move(me), fp] {
if (fp != me->_file_pos) {
// some other request already wrote this buffer.
// If so, wait for the operation at our intended file offset
// to finish, then we know the flush is complete and we
// are in accord.
// (Note: wait_for_pending(rp) waits for the operation _at_ rp, and those before it.)
replay_position rp(me->_desc.id, position_type(fp));
return me->_pending_ops.wait_for_pending(rp).then([me, fp] {
assert(me->_flush_pos > fp);
return make_ready_future<sseg_ptr>(me);
});
}
return me->sync();
});
}
/**
* Add a "mutation" to the segment.
*/
@@ -758,7 +821,16 @@ public:
} else if (_buffer.empty()) {
new_buffer(s);
} else if (s > (_buffer.size() - _buf_pos)) { // enough data?
op = maybe_wait_for_write(cycle());
_needed_size += s; // hint to next new_buffer, in case we are not first.
if (_segment_manager->cfg.mode == sync_mode::BATCH) {
// TODO: this could cause starvation if we're really unlucky.
// If we run batch mode and find that we do not fit in a non-empty
// buffer, we must force a cycle and wait for it (to keep flush order)
// This will most likely cause parallel writes, and consecutive flushes.
op = cycle(true);
} else {
op = maybe_wait_for_write(cycle());
}
}
if (op) {
@@ -793,11 +865,12 @@ public:
out.write(crc.checksum());
++_segment_manager->totals.allocation_count;
++_num_allocs;
_gate.leave();
if (_segment_manager->cfg.mode == sync_mode::BATCH) {
return sync().then([rp](sseg_ptr) {
return batch_cycle().then([rp](auto s) {
return make_ready_future<replay_position>(rp);
});
}
@@ -1123,18 +1196,22 @@ void db::commitlog::segment_manager::discard_completed_segments(
discard_unused_segments();
}
std::ostream& db::operator<<(std::ostream& out, const db::commitlog::segment& s) {
namespace db {
std::ostream& operator<<(std::ostream& out, const db::commitlog::segment& s) {
return out << s._desc.filename();
}
std::ostream& db::operator<<(std::ostream& out, const db::commitlog::segment::cf_mark& m) {
std::ostream& operator<<(std::ostream& out, const db::commitlog::segment::cf_mark& m) {
return out << (m.s._cf_dirty | boost::adaptors::map_keys);
}
std::ostream& db::operator<<(std::ostream& out, const db::replay_position& p) {
std::ostream& operator<<(std::ostream& out, const db::replay_position& p) {
return out << "{" << p.shard_id() << ", " << p.base_id() << ", " << p.pos << "}";
}
}
void db::commitlog::segment_manager::discard_unused_segments() {
logger.trace("Checking for unused segments ({} active)", _segments.size());
@@ -1157,10 +1234,10 @@ void db::commitlog::segment_manager::discard_unused_segments() {
}
}
future<> db::commitlog::segment_manager::sync_all_segments() {
future<> db::commitlog::segment_manager::sync_all_segments(bool shutdown) {
logger.debug("Issuing sync for all segments");
return parallel_for_each(_segments, [this](sseg_ptr s) {
return s->sync().then([](sseg_ptr s) {
return parallel_for_each(_segments, [this, shutdown](sseg_ptr s) {
return s->sync(shutdown).then([](sseg_ptr s) {
logger.debug("Synced segment {}", *s);
});
});
@@ -1170,11 +1247,9 @@ future<> db::commitlog::segment_manager::shutdown() {
if (!_shutdown) {
_shutdown = true; // no re-arm, no create new segments.
_timer.cancel(); // no more timer calls
return parallel_for_each(_segments, [this](sseg_ptr s) {
return s->shutdown(); // close each segment (no more alloc)
}).then(std::bind(&segment_manager::sync_all_segments, this)).then([this] { // flush all
return _gate.close(); // wait for any pending ops
});
// Now first wait for periodic task to finish, then sync and close all
// segments, flushing out any remaining data.
return _gate.close().then(std::bind(&segment_manager::sync_all_segments, this, true));
}
return make_ready_future<>();
}


@@ -114,22 +114,22 @@ future<> db::commitlog_replayer::impl::init() {
auto& pp = _rpm[p1.first][p2.first];
pp = std::max(pp, p2.second);
auto& min = _min_pos[p1.first];
min = (min == replay_position()) ? p2.second : std::min(p2.second, min);
auto i = _min_pos.find(p1.first);
if (i == _min_pos.end() || p2.second < i->second) {
_min_pos[p1.first] = p2.second;
}
}
}
}, [this](cql3::query_processor& qp) {
return do_with(shard_rpm_map{}, [this, &qp](shard_rpm_map& map) {
return parallel_for_each(qp.db().local().get_column_families(), [&map, &qp](auto& cfp) {
auto uuid = cfp.first;
for (auto& sst : *cfp.second->get_sstables() | boost::adaptors::map_values) {
for (auto& sst : *cfp.second->get_sstables()) {
try {
auto p = sst->get_stats_metadata().position;
logger.trace("sstable {} -> rp {}", sst->get_filename(), p);
if (p != replay_position()) {
auto& pp = map[p.shard_id()][uuid];
pp = std::max(pp, p);
}
auto& pp = map[p.shard_id()][uuid];
pp = std::max(pp, p);
} catch (...) {
logger.warn("Could not read sstable metadata {}", std::current_exception());
}
@@ -151,6 +151,25 @@ future<> db::commitlog_replayer::impl::init() {
});
});
}).finally([this] {
// bugfix: the above map-reduce will _not_ detect if sstables
// are _missing_ from a CF. And because of re-sharding, we can't
// just insert initial zeros into the maps, because we don't know
// how many shards there were last time.
// However, this only affects the global min pos, since
// for each CF, the worst that happens is that we have a missing
// entry -> empty replay_pos == min value. But calculating the
// global min pos will be off, since we will only base it on
// existing sstables-per-shard.
// So, go through all CFs and check: if a shard mapping does not
// have data for a CF, assume we must set the global pos to zero.
for (auto&p : _qp.local().db().local().get_column_families()) {
for (auto&p1 : _rpm) { // for each shard
if (!p1.second.count(p.first)) {
_min_pos[p1.first] = replay_position();
}
}
}
for (auto&p : _min_pos) {
logger.debug("minimum position for shard {}: {}", p.first, p.second);
}


@@ -47,27 +47,28 @@ db::config::config()
 namespace bpo = boost::program_options;
 namespace db {
 // Special "validator" for boost::program_options to allow reading options
 // into an unordered_map<string, string> (we have in config.hh a bunch of
 // those). This validator allows the parameter of each option to look like
 // 'key=value'. It also allows multiple occurrences of this option to add
 // multiple entries into the map. "String" can be any type which can be
 // converted from std::string, e.g., sstring.
-template<typename String>
 static void validate(boost::any& out, const std::vector<std::string>& in,
-        std::unordered_map<String, String>*, int) {
+        db::string_map*, int) {
     if (out.empty()) {
-        out = boost::any(std::unordered_map<String, String>());
+        out = boost::any(db::string_map());
     }
-    auto* p = boost::any_cast<std::unordered_map<String, String>>(&out);
+    auto* p = boost::any_cast<db::string_map>(&out);
     for (const auto& s : in) {
         auto i = s.find_first_of('=');
         if (i == std::string::npos) {
             throw boost::program_options::invalid_option_value(s);
         }
-        (*p)[String(s.substr(0, i))] = String(s.substr(i+1));
+        (*p)[sstring(s.substr(0, i))] = sstring(s.substr(i+1));
     }
 }
}
namespace YAML {
/*
@@ -114,26 +115,27 @@ struct convert<db::config::string_list> {
}
};
-template<typename K, typename V>
-struct convert<std::unordered_map<K, V>> {
-    static Node encode(const std::unordered_map<K, V>& rhs) {
+template<>
+struct convert<db::string_map> {
+    static Node encode(const db::string_map& rhs) {
         Node node(NodeType::Map);
         for (auto& p : rhs) {
             node.force_insert(p.first, p.second);
         }
         return node;
     }
-    static bool decode(const Node& node, std::unordered_map<K, V>& rhs) {
+    static bool decode(const Node& node, db::string_map& rhs) {
         if (!node.IsMap()) {
             return false;
         }
         rhs.clear();
         for (auto& n : node) {
-            rhs[n.first.as<K>()] = n.second.as<V>();
+            rhs[n.first.as<sstring>()] = n.second.as<sstring>();
         }
         return true;
     }
 };
template<>
struct convert<db::config::seed_provider_type> {
static Node encode(const db::config::seed_provider_type& rhs) {
@@ -167,6 +169,7 @@ struct convert<db::config::seed_provider_type> {
}
namespace db {
template<typename... Args>
std::basic_ostream<Args...> & operator<<(std::basic_ostream<Args...> & os, const db::config::string_map & map) {
int n = 0;
@@ -198,6 +201,7 @@ std::basic_istream<Args...> & operator>>(std::basic_istream<Args...> & is, db::c
return is;
}
}
/*
* Helper type to do compile time exclusion of Unused/invalid options from

Some files were not shown because too many files have changed in this diff Show More